
When I started Nusym eight years ago, it was my feeling that formal verification would succeed only when it didn’t look like formal verification. Nusym was founded with the vision of using formal techniques under the covers of a standard simulation environment to extract value from the wealth of information that environment provides.

Talking about this strategy in a response to Olivier Coudert’s blog post on formal verification, I wrote:

I think this use model will continue to grow, while formal verification, the product, will continue to wither. I predict that in 10-15 years there will be no formal verification products, but most, if not all, verification solutions at that time will incorporate formal technologies under the covers.

At this year’s DAC, however, it was apparent that this trend is happening now, rather than 10-15 years from now. There were a number of new verification startups and products announced this year, including:

  • Avery Insight – X propagation and DFT
  • Jasper ActiveDesign – design exploration and debug
  • NextOp – assertion synthesis
  • Vennsa – automated debugging

Notice that none of these products claims to be a formal verification product. But look who created these products: formal verification Ph.D.s. Do you doubt that these products rely heavily on formal verification techniques under the covers?

The only exception to this, ironically, appears to be my new employer, Jasper, whose JasperGold formal verification product has seen huge growth over the last year. They have found that the recipe for success in formal verification is to buffer its complexity with methodology and services. Also, Jasper’s latest product, ActiveDesign, a tool that derives simulation traces for a specified design target, is not specifically positioned as a verification tool, though of course it uses the same formal verification technology under the covers as JasperGold.

Now, there are still traditional formal verification products out there. Cadence IFV, Mentor 0-in, and Synopsys Magellan are still around. I may have missed it, but I didn’t see these tools making a big impact at DAC this year. However, I suppose we must recognize that they still exist and that it may just take 10-15 years for them to die off.

I recently received a Call for Papers email for a workshop entitled “First Hardware Verification Workshop”. The workshop description states:

…recently we have seen a steady dwindling in the number of publications that specifically target hardware. The purpose of this workshop is to rekindle the interest in and enthusiasm for hardware verification.

Almost simultaneously, Olivier Coudert wrote a thought-provoking post entitled “Has formal verification technology stalled?” He was prompted to ask this question by the fact that the number of formal verification submissions to the Design Automation Conference (DAC) has dwindled to almost nothing. He wonders if this indicates that innovation has stalled.

I think several factors may be at work here.

One factor may be simple economics. The economic downturn has undoubtedly had an impact on the number of formal verification papers written over the last two years. Both Synopsys and Cadence have had their research labs, a staple of DAC papers every year, decimated.

Researchers may be forced to attend fewer conferences than before. If you attended, say, both DAC and FMCAD before, you may now be forced to choose only one. As a researcher, I would probably choose the one where I would most likely meet more formal verification researchers, since this is the prime motivation for attending conferences. In bad economic times, that means focused conferences such as FMCAD rather than broader EDA-wide conferences such as DAC and ICCAD (which has also seen dramatically reduced numbers of formal verification papers).

But, a more fundamental trend that I see is a change in the character of formal verification papers published at all conferences.

A formal verification research project generally consists of three parts:

  1. A formulation of an abstraction of the problem being solved.
  2. A translation algorithm from the problem domain into a form a solver can consume.
  3. A solving algorithm.

In the early days of formal verification (late 80s and early 90s), papers would often address all three of these areas. For example, you might see something like “Verification of Cache Coherence Protocols Using BDD-based Model Checking”. The paper would describe an abstraction of a processor cache plus some properties that need to be checked (1), specify a detailed algorithm for translating this into a model checking problem (2), and then describe some novel model checking optimizations (3). As research progressed, each of these areas became specialized, such that papers would often focus on one aspect while only cursorily addressing the others.
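To make the three-part structure concrete, here is a toy sketch (my own construction, not any particular paper’s method): a 2-bit counter stands in for the design abstraction (1), bounded unrolling of the transition relation stands in for the translation step (2), and brute-force enumeration stands in for the BDD or SAT engine (3).

```python
from itertools import product

# (1) Problem formulation: a toy 2-bit counter as a transition system,
#     with the safety property "the counter never reaches 3".
STATE_BITS = 2

def init(s):       # initial state: counter == 0
    return s == (0, 0)

def trans(s, t):   # transition: counter increments, wrapping at 2
    val = s[0] * 2 + s[1]
    nxt = (val + 1) % 3
    return t == (nxt // 2, nxt % 2)

def bad(s):        # property violation: counter == 3
    return s == (1, 1)

# (2) Translation: unroll the transition relation k steps into one big
#     constraint over a path s0 -> s1 -> ... -> sk (bounded model checking).
def path_is_counterexample(path):
    return (init(path[0])
            and all(trans(path[i], path[i + 1]) for i in range(len(path) - 1))
            and bad(path[-1]))

# (3) Solving: brute-force enumeration stands in for the BDD or SAT
#     engine a real model checker would use.
def bmc(k):
    states = list(product((0, 1), repeat=STATE_BITS))
    for path in product(states, repeat=k + 1):
        if path_is_counterexample(path):
            return path    # counterexample trace
    return None            # property holds up to depth k

print(bmc(4))  # None: no counterexample within 4 steps
```

An early-90s paper would contribute novelty in all three functions at once; the later specialization described above meant a paper might improve only the solving step, treating (1) and (2) as given.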

In the late 90s, model checking using SAT became the state-of-the-art. There were a number of papers detailing translations into SAT, a lot of papers on SAT algorithms tailored for model checking, but relatively little on problem formulation since SAT-based model checking was largely a drop-in replacement for BDD-based model checking.

However, within a few years, these papers started dwindling. First, translation essentially became a solved problem. It is not necessarily straightforward, but it is no longer a research problem because today’s powerful SAT solvers can often compensate for weak translation.

Second, papers on SAT optimization, a staple of DAC in the early 2000s, have all but disappeared from formal verification conferences. Does this mean SAT solving technology has stagnated? No, far from it. Because of the advent of SAT competitions, almost all SAT papers now go to dedicated SAT conferences, which are still quite active. A large source of EDA formal verification papers has thus essentially disappeared, returning to its roots in the AI community.

What is left, then, is mostly papers from the first area: problem formulation. We get many papers on how to verify this or that type of circuit using a particular abstraction and set of properties. Papers talk about the abstractions and properties and then say “and then we translate this into a SAT problem, give it to a SAT solver, and here are the results using miniSAT,” or something along those lines. SAT solvers are sufficiently robust that there is no need to worry about optimizing them or even investigating the effects of different solving strategies.
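The hand-off these papers describe can be sketched in a few lines (again a toy of my own: a naive DPLL search standing in for a production solver like miniSAT). Clauses use the standard DIMACS signed-integer convention, and `to_dimacs` produces the text format that off-the-shelf SAT solvers actually consume.

```python
# A clause is a list of signed ints (DIMACS convention: 3 means x3, -3 means NOT x3).

def to_dimacs(clauses, num_vars):
    """Serialize clauses into the DIMACS CNF text format SAT solvers read."""
    lines = ["p cnf %d %d" % (num_vars, len(clauses))]
    lines += [" ".join(map(str, clause)) + " 0" for clause in clauses]
    return "\n".join(lines)

def dpll(clauses, model=frozenset()):
    """Naive DPLL search -- a stand-in for a production solver like miniSAT."""
    simplified = []
    for clause in clauses:
        if any(lit in model for lit in clause):
            continue                      # clause already satisfied
        reduced = [lit for lit in clause if -lit not in model]
        if not reduced:
            return None                   # every literal false: conflict
        simplified.append(reduced)
    if not simplified:
        return model                      # all clauses satisfied
    lit = simplified[0][0]                # branch on an unassigned literal
    return dpll(simplified, model | {lit}) or dpll(simplified, model | {-lit})

# (x1 or x2) and (not x1 or x2) and (not x2 or x3)
clauses = [[1, 2], [-1, 2], [-2, 3]]
print(to_dimacs(clauses, 3))
print(dpll(clauses))   # a satisfying model: frozenset({1, 2, 3})
```

The formulation-only papers described above stop at building `clauses`; everything below that line is treated as a black box.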

The researchers writing these types of papers don’t have the connections to the EDA industry that were prevalent in the early days of formal verification. Therefore, there are fewer research papers being published in general EDA conferences such as DAC and ICCAD.

At the same time, I notice that the user track at DAC last year had many formal verification papers. Maybe all that research has finally translated into real tools. That would definitely be a sign of maturity.