I talked previously about comparing methodologies and argued that the key to choosing among different methodologies is to pick ones that are orthogonal. I also talked about how to decide whether a new method is worth doing. In this post, I will compare two methodologies, simulation and formal, in more detail using the concepts developed in those previous posts.

When starting out with simulation, you may write some simple directed tests or the simplest random environment that exercises only the most basic cases. This represents a very over-constrained environment, that is, it tests a very small subset of the legal input space. This is insufficient to find all bugs, so generally you will modify the environment to relax the constraints such that it can test more of the input space. New bugs are found as new parts of the input space are explored.
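
To make this concrete, here is a minimal sketch of how such an environment might evolve. The class and field names are hypothetical; the point is only that the first constraint block deliberately pins the stimulus to a tiny corner of the legal space, and relaxing it is how the exercised input space grows.

    // Hypothetical SystemVerilog stimulus class, for illustration only.
    class packet_item;
      rand bit [7:0] length;
      rand bit [1:0] prio;
      rand bit       inject_error;

      // Initial, over-constrained version: short, error-free, priority-0
      // packets only -- a very small subset of the legal input space.
      constraint basic_c {
        length inside {[4:16]};
        prio         == 0;
        inject_error == 0;
      }
      // Later revisions relax this block (all legal lengths, all priorities,
      // occasional error injection) so that more of the input space is
      // exercised and new bugs are found.
    endclass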

Generally, the hard part of increasing the input space is refining the checker. For example, it is easy to inject errors on the input side, but extremely difficult to check the results – should the packet be discarded? Which packet? How many packets? Another example is configuration. The initial environment usually exercises only a single configuration. Each configuration bit adds some amount of work to verify correctness and, in a complex system, there are usually many configuration bits and combinations of configuration bits to be tested.
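
As a rough illustration of the configuration problem, here is a hypothetical coverage sketch; even three small configuration fields produce a cross of sixteen combinations, and the checker has to know what correct behavior looks like for each one.

    // Hypothetical configuration coverage, for illustration only.
    module cfg_coverage (input logic       clk,
                         input logic [1:0] cfg_mode,
                         input logic       cfg_big_endian,
                         input logic       cfg_flow_ctrl);
      covergroup cfg_cg @(posedge clk);
        coverpoint cfg_mode;                              // 4 values
        coverpoint cfg_big_endian;                        // 2 values
        coverpoint cfg_flow_ctrl;                         // 2 values
        cross cfg_mode, cfg_big_endian, cfg_flow_ctrl;    // 4*2*2 = 16 combinations
      endgroup
      cfg_cg cg = new();
    endmodule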

As a consequence, it becomes more and more time-consuming to increase the input space being exercised, and the closer it gets to the exact legal input space, the harder it becomes to get there. This is one of the main causes of the verification bottleneck. At the same time, there is no way to short-circuit this, because stopping short of exercising the complete input space means potentially missing easy bugs that could be show stoppers.

This is where assertion-based verification (ABV) comes in. Rather than starting with a very over-constrained environment and then refining it by gradually loosening the constraints, ABV starts with a highly under-constrained environment and, as constraints are added, gradually restricts it toward the exact legal input space.
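
A minimal sketch of what this looks like in practice, with hypothetical signal names: the formal environment begins with no assumptions at all (every input sequence is allowed), and assume properties are added one by one, usually in response to spurious failures, to carve the space back down toward legal stimulus.

    // Hypothetical formal environment for a FIFO-like block; in practice this
    // module would be bound to the design so that full/empty come from the DUT.
    module fifo_formal_env (input logic clk, rst_n,
                            input logic push, pop,
                            input logic full, empty);
      // Constraint added after a spurious failure: never push when full.
      assume_no_push_when_full: assume property
        (@(posedge clk) disable iff (!rst_n) full |-> !push);

      // Constraint added later: never pop when empty.
      assume_no_pop_when_empty: assume property
        (@(posedge clk) disable iff (!rst_n) empty |-> !pop);

      // The check itself does not change as the constraints are refined.
      assert_not_full_and_empty: assert property
        (@(posedge clk) disable iff (!rst_n) !(full && empty));
    endmodule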

The problem of reducing the over-approximation inherent in assertions is analogous to expanding the under-approximation in simulation. The input space is restricted by adding more constraints, and the closer it gets to the legal input space, the harder this becomes. The reasons are similar to simulation, but there is an additional factor with assertions: it is basically impossible to specify high-level behavior using assertions, which means it is not possible to completely verify a design using assertions alone. So you will never get to the point of the constraints specifying the exact input space; the environment will always be under-constrained to at least some extent.

[Figure: Effort required to constrain simulation and formal to the exact legal input space]

So, why use assertions at all? The fact that assertions approach the problem from the opposite end of the input space spectrum means that assertions are an orthogonal method of verification. They force the user to view the design in a very different way. This has two benefits. First, bugs that are hard to find using simulation may be easy to find using assertions and/or formal. Second, assertions make it easier to debug because the error is generally caught closer to the source of the bug (in fact, I believe this is the prime advantage of using assertions).
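
A small, hypothetical example of the "caught closer to the source" point: an assertion embedded at the arbiter fires on the exact cycle the grant vector goes bad, rather than the bug surfacing many cycles later as a corrupted packet at the chip output.

    // Hypothetical arbiter check, for illustration only.
    module arb_checks (input logic clk, rst_n,
                       input logic [3:0] grant);
      // Grant must be one-hot or all zeros on every cycle.
      assert_grant_onehot0: assert property
        (@(posedge clk) disable iff (!rst_n) $onehot0(grant));
    endmodule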

What is the downside of using assertions and formal? First, since it is not possible to completely verify using assertions alone, nobody is going to abandon doing random and directed testing and, generally, this will be done first. As we have seen before, whatever is done first will find the most bugs. This means that assertions and formal generally are relegated to finding a small number of bugs. This is OK, since these methods are orthogonal to random and directed testing, as long as it is not too much effort.

Unfortunately, writing assertions is very time consuming, error prone, and, because the environment is under-constrained, prone to false negatives (reported failures that are not real bugs) that cause a lot of frustration and wasted effort. Using formal is another large effort on top of writing the assertions in the first place.

Bottom line: assertions and formal are orthogonal methods to simulation and, therefore, useful. But their return on investment is low compared to simulation. If you have the budget, time, and manpower, or verification is a critical problem for you, they are probably worth doing; otherwise, they are probably not.


2 Comments

  1. Hi Chris,

    I really enjoy reading your posts. Your style, content, and thoroughness are wonderful. This post has really piqued my interest and I thought I would respond with some of my thoughts.

    You seem to be lumping ABV with formal verification. I don’t understand this. Assertions are used in simulation as well, and they are not necessarily needed to do formal verification. In my opinion, they are just another way of writing an error check. For instance, the directed tests you mention need some set of checks to say whether or not the test passed. These checks could be written using a PSL/SVA assertion or in any of the other ways error checks are written.
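
    For example (hypothetical signal and module names), the very same check can be written procedurally or as an SVA assertion; the assertion is just another packaging of the error check:

        // Same rule, two styles: a FIFO occupancy count must never exceed 16.
        module count_checks (input logic clk, rst_n,
                             input logic [4:0] count);
          // Procedural error check:
          always @(posedge clk) begin
            if (rst_n && count > 16)
              $error("FIFO count exceeded depth");
          end

          // Equivalent check written as an SVA assertion:
          assert_count_in_range: assert property
            (@(posedge clk) disable iff (!rst_n) count <= 16);
        endmodule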

    Also, you mention/show that simulation environments start over-constrained and work toward being constrained exactly. You then mention/show that ABV/formal starts under-constrained and works toward being constrained exactly. Again, I don’t think the amount of constraint depends on whether ABV is being used or on whether simulation or formal verification is being done. A simulation environment can and often does start very under-constrained, and a formal environment can and often does start very over-constrained. It just depends on what the user is trying to accomplish. As you mention, early on it may be beneficial to do bug hunting in an over-constrained environment. The bug hunting and over-constraining can be, and is, done in both simulation and formal verification environments.

    Where we definitely do agree is that it is important to have orthogonal methods of verification. However, I believe the reason simulation and formal verification are orthogonal has more to do with how bugs are found and almost nothing to do with how the environments are constructed, i.e. using PSL/SVA for checks and constraints, over-constraining, under-constraining, etc. Simulation is randomly searching for bugs or directly searching for a particular bug, whereas formal verification is exhaustively looking for bugs. Since formal verification is exhaustive, it may find more corner-case bugs and can find proofs. Since simulation is not exhaustive, it may be able to search deeper into the state space (note, forms of formal verification can be non-exhaustive and search deep into the state space as well).

    You also mentioned design configuration bits. This is an example of where formal verification can do something that simulation cannot. Verifying different configurations in simulation often requires separate tests or runs for each configuration setting. With formal verification, all configurations can be verified at the same time.
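
    As a sketch of what I mean (hypothetical names), the configuration bits can simply be left as free, unconstrained inputs in the formal environment, so a single proof covers every setting, where simulation would need a separate run per configuration:

        // Hypothetical formal setup: cfg_mode is a free input, so the proof
        // holds for all 16 configuration values at once.
        module cfg_formal_env (input logic       clk, rst_n,
                               input logic [3:0] cfg_mode,
                               input logic       valid, ready);
          // Keep the configuration stable during a trace.
          assume_cfg_stable: assume property
            (@(posedge clk) disable iff (!rst_n) $stable(cfg_mode));

          // Example property, proven for every configuration simultaneously:
          // valid must hold until it is accepted.
          assert_valid_held: assert property
            (@(posedge clk) disable iff (!rst_n) valid && !ready |=> valid);
        endmodule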

    Another point where you lose me is when you say that “writing assertions is time consuming, error prone, and because it is under-constrained, prone to false negatives.” Of course writing an error check literally consumes time, but as I mentioned before, all your tests need something to check for errors. Using PSL/SVA just gives you another way to write the error checker and gives tools a way to easily find and group those checkers. Whether or not the error checkers one writes are error prone depends on the skill set of the creator. If someone knows how the design is supposed to operate, they can write a good error check regardless of the language they choose to write it in. If, for instance, they have never used SVA to write an error check, they will probably make some mistakes. Similarly, if they have never used the VMM or OVM methods for error capturing, they will probably also make mistakes.

    To the point about false negatives in under-constrained environments, yes it is true that false negatives are possible. However, again under-constrained environments are not specific to simulation or formal verification. And of course there are some positives to verification using under-constrained environments and looking at false negatives. A user who sees a false negative can choose to tighten the constraints to eliminate the false negative or make the design more robust to handle what currently is a false negative. In the process, the user learns a lot about the limitations of the design. However, if this is not a goal/concern or seems too time consuming, the user can constrain the environment, simulation or formal, more tightly up front.

    You mention, “since it is not possible to completely verify using assertions alone, nobody is going to abandon doing random and directed testing and, generally, this will be done first.” I would argue that this may be the wrong approach. Doing formal verification first can have very positive downstream effects. For instance, if the designer of the block is doing formal verification and is using the under-constrained approach to verification, they may end up creating a much more robust design that in the end may be much more reusable. Of course, as I mentioned, the under-constrained approach to verification at the designer’s block level could also be done with simulation. However, the environment one would create to verify the block with simulation may be largely throw-away, whereas a formal verification environment will likely include properties used as input assumptions that can be reused as assertions when the block is connected to the rest of the design. These interface properties are a contract on the interface that ensures that other blocks drive the designer’s block as expected. Also, during the formal verification process many of the checks written may be proven, which can potentially eliminate items from the overall verification plan. The fact that a low-level verification environment can be created that is not entirely throw-away, but instead has many downstream benefits, is a prime reason for doing formal verification early. In fact, I wouldn’t stop at using formal verification at the very low level; it can have the same benefits at higher levels of verification.
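
    A minimal sketch of the contract idea (hypothetical names): the same property is used as an assumption when the block is verified in isolation and re-used as an assertion on that interface once the block is integrated.

        // Hypothetical interface contract property.
        module iface_contract (input logic clk, rst_n,
                               input logic busy, req);
          property p_no_req_while_busy;
            @(posedge clk) disable iff (!rst_n) busy |-> !req;
          endproperty

          // Block-level formal run: constrain the open input.
          assume_no_req_while_busy: assume property (p_no_req_while_busy);

          // At integration, the same property is instantiated as an assertion
          // to check that the neighboring block honors the contract:
          //   assert_no_req_while_busy: assert property (p_no_req_while_busy);
        endmodule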

    In the end, where and how a team applies resources to do verification may be based largely on the skill set of the team. However, a group of smart people can just as easily learn to create either a simulation or a formal verification environment. If I were in charge, I would focus on using formal verification at the low and mid levels of verification and simulation at the higher levels – the higher levels being the level of the design that is delivered either to the customer, if it is an IP block, or to the chip-level verification team, if it is an SOC or other large ASIC. I think this approach meets the requirements you’ve articulated in the past about what makes a good verification methodology: both simulation and formal verification, and subsets of each, are being used, and different people with different views are doing the verification.

    I must close by saying that I am currently an AE for Jasper Design Automation, but I want to stress that the opinions I am expressing mostly come from my previous 10 years of experience doing design and verification (using simulation and formal methods) of processors and chipset ASICs for high-end servers.

    • Ross,

      thanks for the thoughtful response. I’m sorry it’s taken me this long to respond. I have been very busy getting ready for a vacation next week.

      I will try to reply to some of your points.

      “You seem to be lumping ABV with formal verification.”
      For the most part, yes. This is because, historically, they were lumped together. You can’t use formal verification without assertions and, before formal verification, nobody used assertions (at least temporal assertions, which are really what people mean when talking about ABV). 0-in pioneered the use of assertions in design. But, they didn’t start out doing this. They started out with a lint-like model, where they would try to infer properties of the design and then prove them. This failed miserably so they scrambled to find an alternative. They realized that the user would have to provide some sort of input, but that this would meet with a lot of resistance. So they tried to market assertions and ABV as having value in itself and, oh by the way, we have a tool that can leverage those assertions to find bugs. So, while it is true the users do use assertions without formal verification today, this has mostly been due to marketing push rather than pull from users who see a benefit to spending the effort in writing assertions. So, while I don’t explicitly lump the two together, I don’t have a problem with that perception.

      “simulation can start very under-constrained and formal can start very over-constrained”
      It is certainly possible to do it this way and, if it is possible, I am sure some people are doing it that way. But I think this is the exception rather than the rule. It is much more natural to start by writing a directed test to verify 2+2=4 and then generalize from there than to do the opposite.

      “formal is better than simulation for configuration bits”
      I think the difficulty in verifying different combinations of configuration bits is in checking results, not in exercising the combinations. What limits the number of configurations that can be tested is the ability to check each of those combinations. Therefore, I don’t think formal has a significant advantage in this respect.

      “you lose me is when you say that ‘writing assertions is time consuming, …’”
      I think you need to listen to your customers more closely 😉 I have tried using ABV and found it inefficient and error prone, and everyone I have talked to who uses it says it is tedious, difficult, and error prone. I don’t really think there is a lot of argument on this in the industry.

      “doing under-constrained verification has the benefit of making the design more robust”
      Sure, but as a tool supplier, this is not a very compelling business model. First, making designs more robust/re-usable is a nice-to-have feature for most users. Most users are under the gun to get the product out and rarely have time to make it more robust on the off chance the block will be re-used for something else. Second, this advantage only matters if you can apply it consistently across all blocks. But formal’s capacity limits still mean that there are very few blocks it works on. So, if 9 of 10 blocks are too big for formal, do you really care that one can be made more robust/re-usable? Probably not.

      I think the real value of under-constrained verification is doing “super-linting”, or quickly proving simple properties with very little work on the user’s part. For example, with a completely unconstrained environment, if you can show that certain code coverage points can’t be hit, then there must be a problem in the code. This can be done with almost zero effort on the user’s part.
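
      As a sketch (hypothetical names): leave the inputs completely unconstrained and ask the tool whether a piece of code is reachable at all; if a cover property like the one below is proven unreachable even then, the branch is dead code and something is wrong in the RTL.

          // Hypothetical reachability check for "super-linting".
          module reachability_checks (input logic       clk, rst_n,
                                      input logic [1:0] state);
            localparam logic [1:0] ERR = 2'd3;
            // Proven unreachable with no input constraints => dead code.
            cover_err_state: cover property
              (@(posedge clk) disable iff (!rst_n) state == ERR);
          endmodule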

      cheers,

      –chris

