
The obvious solution to the problem of increasing complexity is to raise the level of abstraction at which we design. If we look at the languages used to design software and hardware, the level of abstraction has not increased significantly in over 30 years in software and 20 years in hardware. Why is this if it is such an obvious and compelling solution to the complexity problem?

To understand this, we first must understand why raising the abstraction level increases productivity. In the “Mythical Man-Month”, Brooks concludes that the number of bugs per line of code is constant regardless of what level of abstraction you are working at. Designing at higher levels of abstraction requires less code for the same functionality. Therefore there are fewer bugs per function when coding at a higher level of abstraction, resulting in higher productivity.

So, the goal of abstraction is to reduce the amount of code written to achieve the same level of functionality. The improvement in productivity was dramatic when the move was made from assembly language to the first high-level languages such as FORTRAN and Algol. The procedural language C became the dominant language in the seventies, and it maintained its dominance despite the introduction of languages such as APL and LISP, which did significantly raise the level of abstraction.

Since then, object-oriented programming (OOP) has become the standard programming methodology. But does OOP represent an increase in the level of abstraction? We can answer this by looking at the four types of abstraction. OOP languages have the same level of structural abstraction as non-OOP procedural languages: the fact that OOP allows member functions and private data doesn’t change the level of structural abstraction, it just allows abstraction errors to be detected at compile time. There is no difference in behavioral abstraction. Both methodologies use the same data abstractions, so there is no difference there, and temporal abstraction does not really apply to programming languages. Thus, they are at the same level of abstraction.

Have there been any increases in the level of abstraction in programming that have had a significant impact? Scripting languages such as Perl, in which variables are declared implicitly, are a form of data abstraction since the implicit declaration hides these details. Garbage-collected languages such as Java hide the details of construction/destruction. Templates, such as those found in C++, are another form of data abstraction. All of these innovations have had some effect on improving productivity, but none has had the dramatic impact that the move from assembly to high-level languages had. In summary, there has been no significant increase in the level of abstraction of programming languages in over 30 years of software design.

In hardware design, the story is similar. Initially, IC design entry was done by drawing the masks directly. A significant jump in abstraction level occurred when most designs were entered at the gate-level with tools automating the creation of masks. RTL synthesis was the next major jump in abstraction level. Since then, there have been attempts to increase the abstraction level, primarily by trying to do temporal abstraction, but these have not caught on.

We are left with the question: how have we been able to design much more complex systems with languages whose level of abstraction has not increased in decades?

The answer is reuse. Today, no complex software or hardware design is built from scratch. We use either standard building blocks or IP blocks, whether externally or internally developed. Today, SoCs are designed which consist of a small amount of custom-designed logic that glues together a number of existing IP blocks. The software that runs on these SoCs consists of off-the-shelf operating systems, drivers, and libraries with a few simple applications. Using this methodology, very complex devices can be developed very quickly.

Does this mean that abstraction is not the answer, or is not even relevant? No. When we analyze this, we find that reuse is just another way of raising the abstraction level. An IP block or library is an implementation of some behavior. The block itself is a black box; that is, we don’t care how it is implemented. In abstraction terms, when using this block, we have effectively abstracted away all its structure and only care about its behavior. Thus, reuse is a way of dramatically increasing the level of structural abstraction. This is why the abstraction level of languages has not risen in many years: there has been no need, because reuse has achieved the necessary results.

Reuse is likely to remain the dominant method of raising the abstraction level for the foreseeable future. It is not without its problems, however. First, IP blocks and libraries are generally very inflexible. IP reuse works best when fitting a round peg into a round hole, and the benefit/cost ratio drops off rapidly the worse the fit. If the available IP block doesn’t quite fit what you want, the temptation is to use it anyway. More often than not, it requires less effort to design a round peg from scratch than to try to make a square peg fit, but designers are forced to reuse an ill-fitting block anyway.

The other problem with reuse is verification. From the designer’s viewpoint, an IP/library block is a black box; from a verification standpoint, it is not. First, the IP block is implemented at a low level of abstraction, so when the code is executed for verification purposes, execution occurs at the lowest level of abstraction, making run time a bottleneck. Second, it is usually necessary to exercise all of the code in the IP/library block to ensure correctness, a very time-consuming task because the low level of abstraction means there are many cases to test.

In conclusion, from a design standpoint, reuse is today the best way of achieving increasingly higher levels of abstraction. From a verification standpoint, the issues with reuse fuel a desire for languages that raise the level of abstraction, and this is true even at the block level. Because language decisions are driven primarily by design considerations rather than verification, there has been no incentive to raise the level of abstraction in languages, which is why methodologies such as behavioral synthesis and ESL have not caught on.



  1. Hi Chris,

    Interesting posts. One question though, what about the dual of abstraction – refinement? IMHO, refinement seems to be a more natural approach to turning an idea into a product.

    People do things bottom-up because of the lack of automation that turns an abstraction into reality. And this is exactly why verification is required – are the transistors that I’m piling up really going to make that microwave work as the customer desired?

    Abstraction requires guaranteed refinement. This is the rationale behind the “correct-by-construction” camp. Once the abstraction reaches the product spec level, then we verification guys are history. Just as RTL synthesis has pretty much eliminated gate-level simulation, behavioral synthesis will have a similar impact on today’s Vera/e/SV-based verification. But that day should not come any time soon.


    • I guess my assumption is that refinement is essentially a solved problem. What I mean by that is that there is no issue with building a compiler for any programming language, nor is there much of an issue in creating synthesis tools for hardware. There have been higher-level languages/methodologies with automated refinement that have not caught on. So, the issue is not lack of automated refinement, but something else.

      I believe the issue is that the abstractions that these higher-level languages use do not match how designers think, forcing them to do a mapping from how they think to what the tool/language requires. This mapping lowers productivity and is error prone. I am not saying that a language that works at a higher level of abstraction can’t succeed. I am saying that we haven’t found the right one yet, and the criterion for success will be how well it matches how designers think at a higher level of abstraction.

      Also, if we had some tool that could take in a product spec and magically turn it into a design, I think it would get rid of the design engineers, not the verification engineers. There is no such thing as a perfect spec (you can read some of the posts I have written about this), so there will always be a need to verify that whatever level of specification you are using is the correct specification. On the other hand, if a marketing person could write a product spec and tools could automatically turn it into a design, what would you need design engineers for?
