
Category Archives: verification

While researching the recent P!=NP paper and the activity surrounding it, I found that claims of P!=NP or the inverse are being made all the time. One of the researchers posted a list of rules they use to quickly filter out bogus claims. Among these were some amusing ones (the paper was not created using LaTeX) and some quite simple and powerful ones (the proof does not exclude known counterexamples). Deolalikar’s paper managed to pass most of these rules, which is one of the reasons it was taken seriously.

I recently read a blog post about verification with some claims that I find quite dubious. This gave me the idea of posting some “rules” for reading papers/articles/blog posts on verification that I think are useful in judging the validity of these claims. Note also that I have addressed most of these rules in more detail in previous posts.

  1. The most frequent claims I see are those involving the coming “verification bottleneck”. The claim is invariably that verification is 70% of the effort today (an oft-quoted, but unproven, statistic) and that tomorrow, because chips will have twice as many transistors, it will be much higher.

    What seems to get lost in these claims is that the same claim could have been made two years ago for the current generation, and the effort for the previous generation was still 70% (the 70% number has been around for quite a long time). In fact, going back to the dawn of the RTL era, when there was a dramatic shift in methodology, one could have made the same claim for the last ten generations of chips. This is usually ignored, and no reason is given for why the 11th generation is somehow profoundly different from the last ten such that we need to buy these new tools (which is, inevitably, the other shoe dropping in these articles).

    In short, be skeptical of claims of future bottlenecks that ignore the fact that the jump from this generation to the next is no different from the jump over the previous ten generations.

  2. Be skeptical of claims that use the term “verification bottleneck”. Do a Google search on “verification bottleneck” and you will find that the results are either research papers or vendors trying to sell a new product.
  3. Be skeptical of claims using math. You will sometimes see something like: verifying N gates today requires 2^N effort, and tomorrow there will be twice as many gates, so it will take 2^(2N) effort. Therefore, effort is increasing exponentially, so buy our tool, which greatly increases capacity. This is basically a variant of rule 1, but, in general, if you see math, be skeptical.
  4. It is common (and almost a condition of acceptance) that verification research papers conclude with something like “…and a bug was found that was missed by simulation” or words to that effect. Remember, bugs are easy. The fact that an algorithm found new bugs does not mean that this algorithm is better than any existing algorithm.

    The best way to read verification papers is to ignore the fact that bugs may have been found. If you still find value in the paper under the assumption that it found no bugs, then it probably does have some value.

  5. The last rule comes from the premise that bugs are easy, or, as I like to call it, the “pick axe and shovel” argument. This usually arises in relation to some claim about better results (more bugs) being found on some, usually large-scale, verification effort. The better results are then used to justify the proposed methodology as superior to other methodologies.

    The issue here is that the project usually involved a large number of well-coordinated, smart people. The counter-argument is: I could take the same smart people and, using nothing more than pick axes and shovels, find lots of bugs. So how does this prove that your methodology is better?

What prompted me to write this post was this post by Harry Foster. It is posted on Mentor’s website, so Harry is obviously pitching the company party line. I won’t bother critiquing the post here. I suggest reading Harry’s post after reviewing the rules above. Let me know what you find.

I was listening to a talk by Daniel Kroening, a software verification researcher at Oxford University, who was explaining the certification process for safety-critical software. He mentioned that one of the requirements is that all test cases must be verified on the actual hardware. In the case of avionics software, that means one test flight for every test case. Since test flights are expensive, optimizing the number of test cases required to cover everything is extremely important.

This is one of those cases where you get a dot, connect it with other seemingly random dots, and see that you have a line. In this case, what I realized is that the difficulty in verification is not really finding bugs (bugs are easy, right?), but how efficiently we find these bugs. Recently I posted on how constrained random testing is essentially a (hard) optimization problem. I also posted on the best verification methodology being to combine orthogonal methodologies in order to optimize bug-finding productivity. The criticality of optimizing safety-critical test cases was another data point that led me to this realization.

This is reflected in the fact that many of the most successful verification tools introduced over the last twenty years have succeeded by optimizing verification productivity. As we all know, faster simulation really does very little to improve the quality of a design, but it helps enormously in improving verification productivity. Hardware verification languages are probably the second most important development in verification in the last twenty years. But again, they don’t improve quality, they simply improve productivity in developing verification environments.

This is not to say that there have not been tools that improve quality. Formal proof clearly improves quality when it can be applied, although semi-formal verification, which focuses on bug hunting, is more of a productivity increase. In-circuit emulation allows you to find bugs that could not be found in simulation because the design runs against the real hardware. However, emulation used simply as faster simulation is really just a productivity increase.

Is verification optimization related to the well-discussed verification bottleneck (you know, the old saw about verification consuming 70% of the effort)? Verification became a bottleneck when the methodology changed from being done predominantly post-silicon to being done predominantly pre-silicon. Many people saw the resulting dramatic increase in verification effort as being correlated with increased design size and complexity. If this were true, then verification would consume 98% of the effort today, since this switch occurred twenty years ago and there have been many generations of products since then. Since relative verification effort has not changed significantly over the last twenty years, I think it is safe to say that verification effort is roughly constant with increased design size and that differences in relative effort reflect differences in methodology more than anything else.

The real question is: will verification optimization become more important in the future as designs become larger and will that result in relative verification effort rising? If there is no change to design methodology, we would expect verification optimization effort to remain constant.  If high-level synthesis allows us to move up the abstraction ladder, this should improve the ability to optimize verification. In short, there does not seem to be a looming crisis in overall verification optimization.

However, if we look at the software side, we see that software content on hardware platforms is growing rapidly, which is putting enormous pressure on the ability to verify this software. Effectively, we have managed to forestall the hardware verification optimization crisis by moving it to software.

When I started Nusym eight years ago, it was my feeling that formal verification would succeed only when it didn’t look like formal verification.  Nusym was founded with the vision of using formal techniques under the covers of a standard simulation environment to extract value from the wealth of information provided by the simulation environment.

Talking about this strategy in a response to Olivier Coudert’s blog post on formal verification, I wrote:

I think this use model will continue to grow, while formal verification, the product, will continue to wither. I predict that in 10-15 years there will be no formal verification products, but most, if not all, verification solutions at that time will incorporate formal technologies under the covers.

At this year’s DAC, however, it was apparent that this trend is happening now, rather than 10-15 years from now. There were a number of new verification startups and products announced this year, including:

  • Avery Insight – X propagation and DFT
  • Jasper ActiveDesign – design exploration and debug
  • NextOp – assertion synthesis
  • Vennsa – automated debugging

Notice that none of these products claims to be a formal verification product. But look who created these products: formal verification Ph.D.s. Do you doubt that these products rely heavily on formal verification techniques under the covers?

The only exception to this, ironically, appears to be my new employer, Jasper, whose JasperGold formal verification product has seen huge growth over the last year. They have found that the recipe for success in formal verification is to buffer its complexity with methodology and services. Also, Jasper’s latest product, ActiveDesign, a tool that derives simulation traces for a specified design target, is not specifically intended to be a verification tool, except, of course, that it uses the same formal verification technology under the covers as JasperGold.

Now, there are still traditional formal verification products out there. Cadence IFV, Mentor 0-in, and Synopsys Magellan are still around. I may have missed it, but I didn’t see these tools making any big impact at DAC this year. However, I suppose we must recognize that they still exist and that it may just take 10-15 years for them to die off.

Constrained random testing is the predominant method used to verify chips today. It has two advantages over directed testing. First, randomness allows testing things not explicitly thought of by the verification engineer or designer. Second, automated random testing is more productive than manual directed testing in terms of number of test vectors produced as a function of human effort put in.

There are several variants on the basic concept of random testing. Unconstrained random testing generates random values for all input variables without any consideration of legality. Constrained random testing is used when the legal set of input values is a subset of the entire input space. In the last dozen years, hardware verification languages (HVLs) have enabled the use of constraint-based random testing. Constraint-based random testing uses static constraints and constraint solvers to do constrained random testing.

Constraint-based random testing improves productivity in writing constrained random testbenches because it allows writing tests at a higher level of abstraction. Static constraints specify the legal input space, but not how to randomize values; this is left up to the solver. As designs/protocols have become more complex, constraint-based random testing’s productivity gain has increased. As a result, there is a proliferation of large, complex constraint sets in today’s testbenches.
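To make the contrast concrete, here is a rough sketch in Python rather than an HVL (the packet fields and ranges are invented for illustration): the imperative randomizer encodes how to generate legal values, while the declarative version just states what is legal and leaves the how to a solver, here a naive rejection sampler.

    import random

    # Imperative style: the test writer spells out *how* to randomize.
    # Knowledge of the legal space is baked into the procedure.
    def randomize_packet_imperative():
        length = random.randint(64, 1518)             # made-up legal sizes
        kind = "jumbo" if length > 1000 else "normal"
        return {"length": length, "kind": kind}

    # Declarative style: the test writer states *what* is legal and lets
    # a solver pick values.  Here the "solver" is naive rejection sampling.
    def legal(pkt):
        return (64 <= pkt["length"] <= 1518 and
                pkt["kind"] == ("jumbo" if pkt["length"] > 1000 else "normal"))

    def randomize_packet_declarative():
        while True:
            pkt = {"length": random.randint(0, 2000),
                   "kind": random.choice(["normal", "jumbo"])}
            if legal(pkt):                            # constraint check
                return pkt

    print(randomize_packet_imperative())
    print(randomize_packet_declarative())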

Increasingly, however, solving these large, complex constraint sets has become a bottleneck in simulation due to the large amount of time spent in the solver. Consequently, a lot of time is commonly spent debugging and trying to work around these solving bottlenecks. In the worst case, it is necessary to discard the static constraints and rewrite the randomizer using imperative code, negating the productivity gains of using static constraints.

The fundamental cause of this problem is that uniformly randomizing across a constrained input space is an NP-hard problem. As we have seen before, NP-hard problems are optimization problems; uniformity of randomization is the optimization goal. For example, suppose we have a simple constraint: 0 <= X <= 10. If we randomize X a number of times and each time it returns the value 0, this meets the constraint, but is not very uniform. Randomization that produces the values 0 through 10 with equal probability is optimal. Because NP-hard problems have the additional burden of requiring an optimal value, not just a satisfying value, they are generally harder to solve than NP-complete problems.
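Continuing the toy example, the difference between a randomizer that merely satisfies the constraint and one that randomizes uniformly can be seen in a few lines of Python (a sketch, not any particular HVL’s solver):

    import random
    from collections import Counter

    LOW, HIGH = 0, 10          # the constraint from the text: 0 <= X <= 10

    def degenerate_solver():
        # Always returns a satisfying value, but with a terrible distribution.
        return LOW

    def uniform_solver():
        # Satisfies the same constraint with a uniform distribution.
        return random.randint(LOW, HIGH)

    for name, solver in [("degenerate", degenerate_solver),
                         ("uniform", uniform_solver)]:
        counts = Counter(solver() for _ in range(11000))
        print(name, dict(sorted(counts.items())))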

What are solutions to this problem?

The obvious solution is to build better solver/randomizers. Companies that produce HVLs already spend a large amount of effort trying to improve their solvers. If it were that easy, it would have been done already.

Barring better solvers, the only other choice is to change methodologies. One change is to ditch static constraints and go back to writing imperative code with the ensuing reduction in productivity. Not a pleasant thought to contemplate.

An alternative to this is to keep the static constraints but reduce the complexity from NP-hard to NP-complete. This means giving up the uniform distribution in exchange for potentially very poor, but still legal, randomization. This is feasible considering that the real goal of random testing is to find bugs and improve coverage; we assume a uniform distribution is required to do this. However, if the randomizer knew precisely what values to generate in order to find a bug or hit a coverage point, it could just generate those values, which may or may not be uniform. If it were possible to do this, randomization would go from being an NP-hard problem to being an NP-complete problem. Technologies such as Nusym’s path tracing technology allow this.

One thing I think would help to implement this strategy would be to provide an open solver API in today’s HVLs. This would allow plugging in different solvers for different circumstances and developing custom solvers on a case-by-case basis rather than being forced to have a one-size-fits-all solver.
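No such API exists today, so the following is a purely hypothetical sketch (in Python) of the shape a plug-in solver interface might take: a generic rejection-sampling solver next to a custom solver that exploits knowledge of one particular constraint set (8-byte-aligned addresses, an invented example).

    import random
    from abc import ABC, abstractmethod

    class RandomizationSolver(ABC):
        """Hypothetical plug-in interface: given variable ranges and a
        constraint predicate, return one satisfying assignment."""
        @abstractmethod
        def solve(self, ranges, constraint):
            ...

    class RejectionSolver(RandomizationSolver):
        """General purpose but potentially slow: sample until legal."""
        def solve(self, ranges, constraint):
            while True:
                assignment = {v: random.randint(lo, hi)
                              for v, (lo, hi) in ranges.items()}
                if constraint(assignment):
                    return assignment

    class AlignedAddressSolver(RandomizationSolver):
        """Case-by-case solver that constructs a legal value directly."""
        def solve(self, ranges, constraint):
            lo, hi = ranges["addr"]
            return {"addr": random.randrange(lo, hi + 1, 8)}   # 8-byte aligned

    # The testbench picks a solver per constraint set instead of relying on
    # a single one-size-fits-all solver.
    ranges = {"addr": (0, 0xFFF8)}
    aligned = lambda a: a["addr"] % 8 == 0
    print(RejectionSolver().solve(ranges, aligned))
    print(AlignedAddressSolver().solve(ranges, aligned))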

I recently received a Call for Papers email for a workshop entitled “First Hardware Verification Workshop”. The workshop description states:

…recently we have seen a steady dwindling in the number of publications that specifically target hardware. The purpose of this workshop is to rekindle the interest in and enthusiasm for hardware verification.

Almost simultaneously, Olivier Coudert wrote a thought-provoking post entitled “Has formal verification technology stalled?” He was prompted to ask this question by the fact that the number of formal verification submissions to the Design Automation Conference (DAC) has dwindled to almost nothing. He wonders if this indicates that innovation has stalled.

I think several factors may be at work here.

One factor may be simple economics. The economic downturn has undoubtedly had an impact on the number of formal verification papers written over the last two years. Both Synopsys and Cadence have had their research labs, a staple of DAC papers every year, decimated.

Researchers may be forced to attend fewer conferences than before. If you attended, say, both DAC and FMCAD before, you may now be forced to choose only one. As a researcher, I would probably choose the one where I would be most likely to meet more formal verification researchers, since this is the prime motivation for attending conferences. In bad economic times, this is going to be a focused conference such as FMCAD rather than a broader EDA-wide conference such as DAC or ICCAD (which has also seen dramatically reduced numbers of formal verification papers).

But, a more fundamental trend that I see is a change in the character of formal verification papers published at all conferences.

A formal verification research project generally consists of three parts:

  1. A formulation of an abstraction of the problem being solved.
  2. A translation algorithm from the problem domain to a solving problem.
  3. A solving algorithm.

In the early days of formal verification (late 80s and early 90s), papers would often address all three of these areas. For example, you might see something like “Verification of Cache Coherence Protocols Using BDD-based Model Checking”. The paper would describe an abstraction of a processor cache plus some properties that need to be checked (1), specify a detailed algorithm for how this is translated into a model checking problem (2), and then present some novel model checking optimizations (3). As research progressed, each of these areas became specialized such that papers would often focus on one aspect while only cursorily addressing the others.
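As a toy illustration of the three parts (using an invented two-request arbiter rather than a cache coherence protocol, with brute-force enumeration standing in for a BDD or SAT engine):

    from itertools import product

    # (1) Formulation: a toy "design" with request bits r0, r1 and an arbiter
    #     that grants g0 = r0 and g1 = r1 and not r0.  Property: never both grants.
    # (2) Translation: encode "both grants asserted" as a propositional formula
    #     over the inputs; a satisfying assignment is a counterexample.
    def counterexample_formula(r0, r1):
        g0 = r0
        g1 = r1 and not r0
        return g0 and g1            # property violation

    # (3) Solving: brute-force enumeration stands in for a model checker here.
    cex = [assign for assign in product([False, True], repeat=2)
           if counterexample_formula(*assign)]
    print("property holds" if not cex else f"counterexample: {cex[0]}")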

In the late 90s, model checking using SAT became the state-of-the-art. There were a number of papers detailing translations into SAT, a lot of papers on SAT algorithms tailored for model checking, but relatively little on problem formulation since SAT-based model checking was largely a drop-in replacement for BDD-based model checking.

However, within a few years, these papers started dwindling. First, translation has essentially become a solved problem. It is not necessarily straightforward, but it is no longer a research problem because today’s powerful SAT solvers can often compensate for weak translation.

Second, papers on SAT optimization, a staple of DAC in the early 2000s, have all but disappeared from formal verification conferences. Does this mean SAT solving technology has stagnated? No, far from it. Because of the advent of SAT competitions, almost all SAT papers now go to dedicated SAT conferences, which are still quite active. A quite large source of EDA formal verification papers has essentially disappeared and gone back to its AI community roots.

What is left, then, is mostly papers from the first area: problem formulation. We get many papers on how to verify this or that type of circuit using a particular abstraction and set of properties. The papers talk about the abstractions and properties and then say “and then we translate this into a SAT problem, give it to a SAT solver, and here are the results using miniSAT” or something along those lines. SAT solvers are sufficiently robust that there is no need to worry about optimizing them or even investigating the effects of different solving strategies.

The researchers writing these types of papers don’t have the connections to the EDA industry that were prevalent in the early days of formal verification. Therefore, there are fewer research papers being published in general EDA conferences such as DAC and ICCAD.

At the same time, I notice that the user track at DAC last year had many formal verification papers. Maybe all that research has finally translated into real tools. That would definitely be a sign of maturity.

In an article in a recent issue of Computer entitled “Really Rethinking Formal Methods”, David Parnas questions the current direction of formal methods research. His basic claim is that (stop me if this sounds familiar) formal methods have too low an ROI and that researchers, rather than proclaiming the successes, need to recognize this and adjust their direction. As he so eloquently puts it:

if [formal methods] were ready, their use would be widespread

I haven’t spent a lot of time trying to figure out whether his prescriptions make sense, but one thing stood out to me. He talks about a gap between software development and older engineering disciplines. This is not a new insight. As far back as the 1960s, the “software crisis” was a concern as the first large, complex software systems being built started experiencing acute schedule and quality problems. This was attributed to the fact that programming was a new profession and did not have the rigor or level of professionalism of engineering disciplines that had been around for much longer. Some of the criticisms heard were:

  • programmers are not required to have any degree, far less an engineering degree.
  • programmers are not required to be certified.
  • traditional engineering emphasizes using tried and true techniques, while programmers often invent new solutions for every problem.
  • traditional engineering often follows a rigorous design process, while programming allows hacking.

These explanations are often used as the excuse when software (usually Microsoft software) is found to have obvious and annoying bugs. But is this really the truth? Let’s look at an example of traditional engineering to see if this holds up.

Bridge building is a technology that is thousands of years old. There are still Roman bridges built two thousand years ago that are in use today. Bridges are designed by civil engineers who are required to be degreed, certified engineers. Bridge design follows a very rigorous process and is done very conservatively using tried and true principles. Given that humanity has been designing bridges for thousands of years, you would think we would have gotten it right by now.

You would be wrong.

Even today, bridges are built with design flaws that result in accidents and loss of life. One could argue that, even so, the incidence of design flaws is far lower in bridges than in software. But this is not really an apples-to-apples comparison. The consequences of a bug in, say, a web browser are far less severe than those of a design flaw in a bridge. In non-safety-critical software, economics is a more important factor in determining the level of quality. The fact is, most of the time, getting a product out before the competition does is economically more important than producing a quality product.

However, there are safety-critical software systems, such as airplanes, medical therapy machines, and spacecraft. It is fair to compare these systems to bridges in terms of catastrophic defect rates. Let’s look at one area in particular: commercial aircraft. All commercial aircraft designed in the last 20 years rely heavily on software and, in fact, would be impossible to fly if massive software failures were to occur. Over the past 20 years, there have been roughly 50 incidents of computer-related malfunctions, but the number of fatal accidents directly attributed to software design faults is maybe two or three. This is about the same rate as that of fatal bridge accidents attributable to design faults. This seems to indicate that the gap between software design and traditional engineering is not so real.

The basic question seems to boil down to: are bridges complex systems? I define a complex system as one that has bugs in it when shipped. It is clear that bridges still have this characteristic and, therefore, must be considered complex systems from a design standpoint. The intriguing question is: given that they are complex systems, do they obey the laws of designing complex systems? I believe they do, and I will illustrate this by comparing two bugs, one a bridge design fault and the other a well-known software bug.

The London Millennium Footbridge was completed in 2000 as part of the millennium celebration. It was closed two days after it opened due to excessive sway when large numbers of people crossed the bridge. It took two years and millions of pounds to fix. The bridge design used the latest design techniques, including software simulation to verify the design. Sway is a normal characteristic of bridges; however, the designers failed to anticipate how people walking on the bridge would interact with the sway in a way that magnified it. The root cause of this problem is that, while the simulation model was probably sufficiently accurate, the environment, in this case people walking on the bridge, was not.

This is a very common syndrome in designing complex hardware systems. You simulate the chip thoroughly and then when you power it up in the lab, it doesn’t work in the real environment. I describe an example of this exact scenario in this post.

In conclusion, it does seem that bridges obey the laws of designing complex systems. The bad news is that the catastrophic failure rate of safety-critical software is of roughly the same magnitude as that of bridges. This means that we cannot expect significant improvements in the quality of software over the next thousand years or so. On the plus side, we no longer need to buy the excuse that software development is not as rigorous as “traditional” disciplines such as building bridges.

The computational complexity class, NP-hard, is at the core of a number of problems we encounter on a daily basis, from loading the dishwasher (how do I get all these pots to fit?) to packing a car for a vacation, to putting together a child’s train tracks.

If we look at these things, they have several things in common. First, they each involve a potentially large number of parts (pots, luggage, pieces of track) that need to be put together in some way. Second, we want to meet some objective, such as fitting all the dishes in the dishwasher. Third, there are a large number of constraints that must be met. In the case of loading the dishwasher, no two dishes can be put in the same place; there are N^2 constraints just to specify this, among many others. A fourth characteristic is that we may get close to an optimal solution, but find it difficult and not obvious how to get to a better one (just how are we going to fit that last pot in the dishwasher?). Furthermore, getting from a near-optimal solution to an optimal one may involve a complete rearrangement of all the pieces.

One way to solve problems like packing a dishwasher is to view it as a truth table. Each dish can be put in one of, say, 100 slots, in, say, one of ten different orientations. This results in 1000 combinations per dish, requiring 10 bits. If there are 40 dishes, 400 bits are required to represent all possible configurations of dishes in the dishwasher. The resulting truth table is vast. Each entry in the table indicates how much space is left in the dishwasher if dishes are put in according to the configuration of that entry; a negative number indicates an infeasible solution. There will be many invalid configurations in which two or more dishes occupy the same location. We give all of these entries a large negative number.
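Here is a scaled-down sketch of this encoding in Python (3 dishes, 4 slots, 2 orientations, made-up sizes), just to show the shape of the table; the full 40-dish version above would have 2^400 entries.

    from itertools import product

    # Scaled-down encoding: each dish has 4*2 = 8 possible placements (3 bits),
    # so the whole "truth table" has 8**3 = 512 rows.  All sizes are invented.
    SLOTS, ORIENTATIONS, DISHES, CAPACITY = 4, 2, 3, 13

    def space_used(dish, orientation):
        return [5, 4, 6][dish] - orientation     # orientation 1 packs tighter

    def score(config):
        """Space left for this configuration; large negative if infeasible."""
        slots = [p // ORIENTATIONS for p in config]
        if len(set(slots)) < DISHES:             # two dishes share a slot
            return -1000
        return CAPACITY - sum(space_used(d, p % ORIENTATIONS)
                              for d, p in enumerate(config))

    table = {cfg: score(cfg)
             for cfg in product(range(SLOTS * ORIENTATIONS), repeat=DISHES)}
    feasible = sum(1 for s in table.values() if s >= 0)
    print(len(table), "entries,", feasible, "feasible, best space left =",
          max(table.values()))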

The resulting table describes a landscape that is mostly flat with hills sparsely scattered throughout. We can also imagine that this landscape is an ocean in which negative values are under water and positive values represent islands in the ocean. The goal is to find the highest island in the ocean. We start in some random location in the ocean and start searching. We may find an island quickly, but it may not be the highest one. Given the vastness of the ocean, it is understandable why it can take a very long time to find a solution.

But, wait a minute. What about polynomial algorithms like sorting? A truth table can be constructed for these also. For example, to sort 256 elements, we can create an 8-bit variable for each element to describe the position of that element in the sorted list. The value of each entry would indicate the number of correctly sorted elements for that configuration. The complete table would again be vast, roughly 2000 bits wide (256 x 8 = 2048), with huge numbers of infeasible solutions in which two or more elements occupy the same slot in the list, and only one satisfying solution. Yet we know finding a solution is easy. Why is this?

The ocean corresponding to the sorting problem is highly regular. If we are put down at an arbitrary point in the ocean, we can immediately determine where to go just by examining the current truth table entry (point in the ocean). Knowing the structure, we may be able to determine from this that we need to go, say, northeast for 1000 miles. We may have to do this some number of (but polynomially many) times before getting to the solution, but we are guaranteed to get there. Structure in a problem allows us to eliminate large parts of the search space efficiently.
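To make the contrast concrete, here is a small sketch (my own toy, in Python): finding each element’s position by brute-force search over the position-assignment table versus computing it directly from the structure of the problem.

    from itertools import permutations

    data = [42, 7, 19, 3]

    def arrange(values, positions):
        out = [None] * len(values)
        for value, p in zip(values, positions):
            out[p] = value
        return out

    # Searching the "truth table": try every assignment of positions until one
    # yields a sorted list.  There are N! rows to wade through.
    brute = next(p for p in permutations(range(len(data)))
                 if arrange(data, p) == sorted(data))

    # Exploiting structure: an element's position is simply the number of
    # smaller elements, computable directly in polynomial time.
    structured = tuple(sum(other < x for other in data) for x in data)

    print(brute, structured)      # both are (3, 1, 2, 0)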

In contrast, for an NP-hard problem, there is no guarantee of structure. Furthermore, as we are sailing around this ocean, we are doing so in a thick fog such that we can only see what is immediately around us. We could sail right by an island and not even know it. Given this, it is easy to see that it could take an exponential amount of time to find a solution.

But then, how do we account for the fact that NP-hard problems are often tractable in practice? The answer is that there usually is some amount of structure in most problems. We can use heuristics to look for certain patterns, and if we find these patterns, this gives guidance similar to the sorting example above. The problem is that different designs have different patterns and there is no one heuristic that works in all cases. Tools that deal with NP-hard problems usually use many heuristics. The trouble is that the more heuristics there are, the slower the search: at each step, each of the heuristics needs to be invoked until a pattern match is found. In the worst case, no pattern match will be found, meaning the search still takes exponential time, and it is even slower due to the overhead of invoking the heuristics at each step.

I hope this gives some intuition into NP-hard problems. In future posts I will talk about even harder classes of problem.

Problems that we use computers to solve can be divided into three basic classes:

  • easy to solve
  • hard to solve
  • impossible to solve

Pretty much all the algorithms we use day-to-day fall into the easy class. Text editing, web browsing, searching, even problems such as weather prediction, which run on supercomputers, are examples of easy to solve problems. These problems are solvable with algorithms that are called polynomial-time (P). This means that, given a problem size of N, an algorithm exists to solve the problem in time that is proportional to a fixed power of N, written as O(N^k). For example, sorting a list of N words is solvable in O(N^2) time.

The class of hard-to-solve problems pretty much includes all problems involved in the creation and verification of a design. Synthesis, place-and-route, and formal verification all fall into this class. You have probably heard the term NP-complete. NP-complete is just one class of many in the hard-to-solve category. The class of NP-complete problems is the easiest of the hard classes, which means that these types of problems can often be solved reasonably well. The hardest of the hard-to-solve classes that you are likely to encounter is the PSPACE-complete class. Problems in this class are, for all intents and purposes, intractable. We will look at these two classes, plus another one that occurs frequently in design problems: the NP-hard class.

Both the easy and hard-to-solve classes are at least theoretically possible to solve. The last group, the impossible-to-solve group, consists of the set of uncomputable problems. These problems are mostly of theoretical interest, which makes sense, since nobody is going to make any money trying to solve problems that cannot be solved.

One note about complexity classes: I refer to the complexity of problem classes, not algorithms. For example, sorting is a problem, shell sort is an algorithm that solves the sorting problem.  So, saying sorting is an easy problem means that there exists some algorithm that solves it easily. Saying a problem is hard means there is no algorithm that solves it easily.

Since all design and verification related problems fall into the hard-to-solve category, this is what I will talk about most. However, the boundary between easy-to-solve and hard-to-solve problems gets very blurry when we start to look at problems in the real world.

But, before talking about this, let’s first look at a basic property of these different classes – scaling. Suppose we had an algorithm that computed a result for a problem instance of size N (say, sorting a list of size N) in time T. Now suppose we had a processor that ran ten times faster. How much larger problem size could our algorithm handle in the same amount of time T? The following table shows how N grows for a CPU performance increase of 10X for different complexities:

  • linear O(N) N -> 10N
  • O(N lgN) N ->  5.7N
  • O(N^2) N -> 3.2N
  • O(2^N) N -> N+3.3

What this shows is that capacity gains from increased processor performance diminish with increased problem complexity. For algorithms that have exponential complexity, there is basically no significant gain in capacity from increased processor speed. For example, suppose I had an exponential verification algorithm that could handle 100 variables. Having a 10X faster processor would allow me to process 103 variables in the same amount of time. Considering that Moore’s law implies that the number of variables doubles every year or so, this does not bode well for our ability to keep up.
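For the curious, the numbers in the table above can be reproduced with a few lines of Python. Note that the O(N lgN) factor depends on the starting size; 5.7N corresponds to N around 10, and the factor approaches 10N for very large N.

    import math

    # With a 10x faster CPU, how much larger a problem fits in the same time?
    speedup, N = 10, 10

    linear      = speedup * N               # O(N):    10N
    quadratic   = math.sqrt(speedup) * N    # O(N^2):  sqrt(10)*N, about 3.2N
    exponential = N + math.log2(speedup)    # O(2^N):  N + log2(10), about N+3.3

    nlogn = N                               # O(N lgN): solve numerically
    while nlogn * math.log2(nlogn) < speedup * N * math.log2(N):
        nlogn += 1

    print(f"O(N):     {linear / N:.1f}N")
    print(f"O(N lgN): {nlogn / N:.1f}N")
    print(f"O(N^2):   {quadratic / N:.1f}N")
    print(f"O(2^N):   N + {exponential - N:.1f}")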

But, the fact is that verification does manage to keep up with increased design sizes, albeit with difficulty. So, verification is not always an exponential problem. We will explore the reasons for this in the next post.

There was an interesting article on verification methodology in Chip Design Magazine recently. The author, Carl Ruggiero, works at an IP supplier and, so, doesn’t have any particular agenda to push with respect to verification methodology (unlike most of the articles in this magazine).

He makes the following points:

  • Quality of verification is not correlated to quantity of verification.
  • Directed testing and constraint-based random testing can both be equally successful or unsuccessful.
  • Quality of verification is not correlated to the language chosen.
  • Good verification planning, execution, and tracking are the keys to producing a high-quality (low-bug) design.

The interesting thing about this is that Ruggiero states that these things were surprising to him. If you understand the concepts of Bugs are Easy, none of these things should be surprising. Let’s try to put these statements in the context of the three laws of verification and the orthogonality concept.

We know that putting effort into multiple, orthogonal methods is better than putting all the effort into a single method. This alone can explain why a high verification effort can fail to produce a high-quality design while a lower verification effort succeeds.

We know that directed and random testing are orthogonal methods and both are capable of finding a majority of bugs. We also know that either can appear efficient or inefficient depending on how it is deployed. Thus, it is not surprising that Ruggiero sees different groups having different levels of success with random and directed testing.

I’ll come back to languages in a minute.

His conclusion that good planning, execution, and tracking are the key to producing a high-quality design, however, is counter to the principles of Bugs are Easy, because it is essentially a statement that there is an absolute best methodology. First, I think Ruggiero would take issue with calling good planning a methodology. After all, isn’t good planning essential to any successful endeavor? How could it be a methodology if all methodologies require good planning? Well, it turns out that back in 1996, two researchers from DEC proposed a verification methodology whose central tenet was to not do any planning (Noack and Kantrowitz, DAC, 1996) (sorry, I can’t find an online version to link to). Their reasoning, which should sound familiar, was that no matter what you did at the beginning of verification, you would find bugs, so why bother spending a lot of time planning up front. We used this methodology successfully on the MCU chip that I worked on at HAL.

So, planning is a methodology, not planning is a methodology, and both are therefore subject to scrutiny using the laws of verification. The fact that planning was successful does not mean it is the best methodology; not planning has also proven successful.

Now let’s return to the issue of languages. Ruggiero states that he has seen simple Verilog-based environments produce high-quality designs and complex HVL-based environments produce low-quality designs. He goes on to say:

…a commitment to execute it turned out to be far more important than the tools chosen to implement it…

where, in this context, “tools” refers to languages. This conflation of tools with languages is made more explicit in his concluding paragraph:

…methodology matters far more than tools in delivering working hardware designs. While certain EDA languages can make engineers more productive…

There is a clear assumption in his mind as he equates tools and languages: languages are technology. That is, there is something about advanced languages that makes testbenches written in them more likely to find bugs or hit higher coverage or whatever. While advanced languages certainly are useful and enhance productivity by providing features that you would otherwise have to create manually, nothing about them is inherently smarter or better with respect to finding bugs. It’s like saying it’s better to design in English than in Chinese, or that if you have power steering in your car, you are less likely to get lost than if you have manual steering. The languages-are-technology argument makes no sense, but as Ruggiero’s article shows, this mindset is pervasive.

Ruggiero correctly concludes that languages are not important to final quality, but he misses the more fundamental conclusion: languages are not technology.

I talked previously about comparing methodologies and about how the key to choosing different methodologies is to choose ones that are orthogonal. I also talked about how to decide whether a new method is worth doing or not. In this post, I will compare two methodologies, simulation and formal, in more detail using the concepts developed in those previous posts.

When starting out with simulation, you may write some simple directed tests or the simplest random environment that exercises only the most basic cases. This represents a very over-constrained environment, that is, it tests a very small subset of the legal input space. This is insufficient to find all bugs, so generally you will modify the environment to relax the constraints such that it can test more of the input space. New bugs are found as new parts of the input space are explored.

Generally, the hard part of increasing the input space is refining the checker. For example, it is easy to inject errors on the input side, but extremely difficult to check the results: should the packet be discarded? Which packet? How many packets? Another example is configuration. The initial environment usually exercises only a single configuration. Each configuration bit adds some amount of work to verify correctness and, in a complex system, there are usually many configuration bits and combinations of configuration bits to be tested.

As a consequence, it becomes more and more time consuming to increase the input space that is being exercised, and the closer it gets to the exact legal input space, the harder it becomes to get there. This is one of the main causes of the verification bottleneck. At the same time, there is no way to short-circuit this, because not getting to the point of exercising the complete legal input space means potentially missing easy bugs that could be show stoppers.

This is where assertion-based verification (ABV) comes in. Rather than starting with a very over-constrained environment and then refining it by gradually loosening the constraints, ABV starts with a highly under-constrained environment and, as constraints are added, gradually restricts it toward the exact legal input space.

The problem of reducing the over-approximation inherent in assertions is analogous to that of expanding the under-approximation in simulation. The input space is restricted by adding more constraints, and the closer the input space gets to the legal input space, the harder this becomes. The reasons are similar to simulation, but there is an additional factor with assertions, namely that it is basically impossible to specify high-level behavior using assertions, which means it is not possible to completely verify a design using assertions only. This means that you will never get to the point of the constraints specifying the exact input space; it will always be under-constrained to at least some extent.
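To picture the two directions, here is a toy Python sketch (the two-field input space and all the constraints are invented): the simulation environment starts as a small subset of the legal space and grows toward it, while the assertion-based environment starts as a superset and shrinks toward it.

    from itertools import product

    # Made-up input space: an opcode (0-7) and a length (0-15).
    space = list(product(range(8), range(16)))

    def legal(op, ln):                   # the true legal input space
        return op != 7 and (ln >= 4 or op == 0)

    # Simulation starts over-constrained and is gradually relaxed.
    def sim_v1(op, ln): return op == 0 and ln == 8              # first test
    def sim_v2(op, ln): return op in (0, 1) and 4 <= ln <= 12   # relaxed a bit

    # Assertions start under-constrained and are gradually restricted.
    def fv_v1(op, ln): return True                              # no constraints yet
    def fv_v2(op, ln): return op != 7                           # one constraint added

    def size(pred):
        return sum(pred(op, ln) for op, ln in space)

    print("legal: ", size(legal))
    print("sim:   ", size(sim_v1), "->", size(sim_v2), "(growing toward legal)")
    print("formal:", size(fv_v1), "->", size(fv_v2), "(shrinking toward legal)")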

[Figure: Effort required to constrain simulation and formal to the exact legal input space]

So, why use assertions at all? The fact that assertions approach the problem from the opposite end of the input space spectrum means that assertions are an orthogonal method of verification. They force the user to view the design in a very different way. This has two benefits. First, bugs that are hard to find using simulation may be easy to find using assertions and/or formal. Second, assertions make it easier to debug because the error is generally caught closer to the source of the bug (in fact, I believe this is the prime advantage of using assertions).

What is the downside of using assertions and formal? First, since it is not possible to completely verify using assertions alone, nobody is going to abandon doing random and directed testing and, generally, this will be done first. As we have seen before, whatever is done first will find the most bugs. This means that assertions and formal generally are relegated to finding a small number of bugs. This is OK, since these methods are orthogonal to random and directed testing, as long as it is not too much effort.

Unfortunately, writing assertions is very time consuming and error prone, and, because the environment is under-constrained, it is prone to false failures (spurious bugs) that cause a lot of frustration and wasted effort. Using formal is another large effort on top of writing the assertions in the first place.

Bottom line: assertions and formal are orthogonal methods to simulation and, therefore, useful. But their return on investment is low compared to simulation. If you have the budget, time, and manpower, or verification is a critical problem for you, they are probably worth doing; otherwise, they are probably not.