In the late Sixties, efforts were underway to define the second generation of programming languages. The first generation of languages, FORTRAN, ALGOL, and COBOL, had been in use for almost ten years, and enough experience had built up to allow improved languages to be designed. At the time, IBM was the dominant player in programming languages and was in a position to define a language and have it become the de facto standard. The language it designed, called PL/1, was basically a superset of the existing mainstream languages of the time. It had features to support scientific computing, business computing, and systems programming. IBM made PL/1 its flagship programming language and expected it to become the dominant programming language in the industry.

Meanwhile, at Bell Labs, a programming language called C was being developed. The philosophy behind C was completely opposite to that of PL/1. Instead of being a superset of all existing languages, C was a minimal common subset, and it made it easy to create libraries that made up for the features it lacked relative to other languages. The practice of having a standard library associated with a language started with C. Essentially, C is not a language for writing programs; it is a language for writing libraries, and applications are built by putting libraries together. In fact, it is not possible to write a Hello World program in C without using a library (stdio). Today, all modern programming languages still follow this basic philosophy.
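To see how little the core language does on its own, here is that canonical example; even this trivial program has to pull in the standard I/O library:

```c
#include <stdio.h>   /* even "Hello World" depends on a library */

int main(void)
{
    printf("Hello, World!\n");   /* printf comes from stdio, not the language */
    return 0;
}
```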

The whole Open Source movement is enabled by the philosophy behind C. Imagine if PL/1 had won the language race and the only way to add new standard features was to add them to the language itself. The PL/1 compiler was controlled by IBM, and the language was not designed to be easily compilable. The software industry would not be nearly as advanced today if PL/1 had won the language wars.

Contrast this with how hardware design languages have evolved. The hardware analogs of FORTRAN, ALGOL, and COBOL are Verilog and VHDL. These were the first-generation hardware description languages (HDLs), developed about twenty years ago. Shortly after that, specialized verification languages, such as e and Vera, were introduced, adding features useful for verification. Then along came assertion languages such as PSL and SVA. Today, the hardware design world is migrating to a language called SystemVerilog. Can you guess which path it took? Yup, the PL/1 path. SystemVerilog is a superset of Verilog, Vera, and SVA. The initial release was version 3.1a. That’s right: it took three major revisions to get it right, then someone decided to add some functionality to get version 3.1. Oh, but wait, that still wasn’t enough; we just had to have a few extra features to get to 3.1a.

And it gets worse. In 35 years, there has been exactly one revision of the C language. SystemVerilog is due for a revision next year, after only five years. Verilog has undergone four major revisions in twenty years. The proprietary languages, e and Vera, change practically every quarter.

The cost of all this is that it is much harder to innovate in hardware design than in software design. At Nusym, we spend 90% of our effort (and money) on language support rather than on our core differentiating technology. To illustrate the problem: we have spent dozens of person-years developing Verilog and Vera infrastructure for our tool, while one person spent three months prototyping it in C.

Is there a solution to this problem? Well, there are C-based hardware design languages, such as SystemC. But using C directly as a hardware design language is like trying to fit a square peg into a round hole. Developers and users of SystemC do leverage the ability to easily create libraries, but the goal of that language is to raise the level of abstraction, not to create a minimal hardware design/verification language.

It would be interesting to think about what a real equivalent of C for hardware design/verification would look like. If there is any interest, I can post my ideas on how to do this and others can contribute. I don’t know whether anything would come of it, but it might be interesting to explore.

9 Comments

  1. Amen to that.
    Glad to have found your blog. Very interesting posts.

    • Hi Muhammad,
      Thanks for the comment; glad you found my blog.

      –chris

  2. Hi Chris,
    To extend the analogy a bit, don’t we want to try to skip over the C-equivalent generation and arrive directly at the modern-scripting-language-equivalent generation? (I’m thinking of issues in C — memory and string handling in particular — that have had negative impacts on system design and security, and that are handled more easily in something like Ruby or Perl or Python or PHP.) Or maybe I’m just stretching your analogy past the breaking point….

    (And please note, this isn’t a my-language-is-better-than-your-language argument. Just curious about your thoughts on software language evolution post-C and how that might relate to hardware languages.)

    • Hi Alan,
      Without actually defining what the C-equivalent hardware language would be, it is hard to think about what its problems would be and how to avoid them. All we can look at is the problems that current hardware-oriented languages have.

      I think the biggest problem they have is ill-defined semantics, particularly with respect to concurrency. It is common, and indeed expected, that the same code will give different results on different simulators! Part of the reason for this is that the concurrency model is implicit in all current hardware design/verification languages. One way to fix this would be to make it explicit, which lowers the level of abstraction, but then provide standard concurrency libraries that hide the scheduling details from the user while ensuring consistent results across simulators.
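      As a minimal sketch of the idea, here is what such a library-level scheduler could look like in C; the API and names are hypothetical, purely for illustration:

      ```c
      #include <stdio.h>

      #define MAX_EVENTS 64

      typedef void (*process_fn)(void);

      /* Explicit event queue: events are (time, callback) pairs that always
       * fire in (time, insertion) order, so results are deterministic no
       * matter which simulator hosts this library. */
      static struct { int time; process_fn fn; } queue[MAX_EVENTS];
      static int nevents;

      static void schedule(int time, process_fn fn)
      {
          /* no bounds checking: illustration only */
          queue[nevents].time = time;
          queue[nevents].fn   = fn;
          nevents++;
      }

      static void run(void)
      {
          for (int t = 0; ; t++) {
              int pending = 0;
              for (int i = 0; i < nevents; i++) {
                  if (queue[i].time == t)
                      queue[i].fn();      /* fire events for this time step */
                  else if (queue[i].time > t)
                      pending = 1;        /* more work at a later time */
              }
              if (!pending)
                  break;
          }
      }

      static void drive_clock(void) { printf("clock edge\n"); }
      static void sample_bus(void)  { printf("sample bus\n"); }

      int main(void)
      {
          schedule(0, drive_clock);
          schedule(0, sample_bus);    /* same time step: order is still defined */
          schedule(1, drive_clock);
          run();
          return 0;
      }
      ```

      The point is that the ordering rules live in an ordinary library that anyone can inspect or replace, rather than in each simulator’s private scheduler.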

      I talk a little more about software evolution in another post, which you might want to read.

      regards,

      –chris

  3. Chris,

    Interesting blog. I found it today via the reference in Sanguinetti’s article.

    There’s no doubt that cycle behavior should be predictable and consistent from simulator to simulator. I think there’s another issue with concurrency semantics: whether explicit or implicit, it is all low-level, manually intensive, and error-prone right now. SystemVerilog, SystemC, and C++ do nothing to address this. Concurrency is the source of design complexity and a major source of bugs. Without a better way to express concurrency, a new “abstraction” won’t provide much to the hardware community.

    The only high-level abstraction that I’m aware of for concurrency is atomic transactions, and it is the foundation on which we’ve built our solution.

    Chris, I presume you’re not a target customer, but I think you’d be intrigued by our technology anyway. My email is in your comment registration.

    • Thanks for the comments, George.

      With respect to concurrency, I was really referring to HDL scheduling semantics, which is at a lower level than what you are talking about. I disagree somewhat with your statement that concurrency is the source of design complexity. It is certainly a source, but not the only one. I think the problem with concurrency at the level you are talking about is not so much how it is expressed as that designers just don’t understand it that well. I am planning some posts on this subject in the future.

      –chris

  4. Ah… I was referring to your comment: “The cost of all this is that it is much harder to innovate in hardware design than it is in software design.” I presume what you’re saying is that the semantics of Verilog make it hard to build tool infrastructure easily. I guess I agree with that statement, and with your conclusion that it would be useful to have a higher-level language, a la what C is to software.

    I would be interested in your thoughts on this, as well as on concurrency. I, of course, agree with you that concurrency is not the only problem, but I believe it is a fundamental root cause. That is, without addressing concurrency, you can’t easily raise the level of abstraction for hardware.

    And I’d agree that “designers just don’t understand it that well”… but what is the answer to that? (I assume this is one of the areas for future posts?)

    I don’t believe that manual management of complex concurrency is easily improved, even if you provide better analysis tools to visualize (or at least conceptualize) it. Atomic transactions allow designers to tackle one problem at a time without having to simultaneously attend to and manage all of the concurrent interactions with shared resources. Often in hardware, designers have to think through a monolithic state machine up front for a given micro-architecture; this is time-consuming and error-prone. Most people can keep only a half dozen things in their head at one time. Atomic transactions let you express concurrency as isolated, local effects, so designers don’t have to understand it on a global basis at the outset. Designers can focus on the architecture/micro-architecture and functionality, and the toolset can focus on getting the concurrency right. This lets designers do what they’re good at (architecture) and leaves a simple, automatable problem to the tools.
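    As a rough illustration of the guarded-atomic-transaction idea (the rule names and the fixed scheduling policy below are hypothetical, not any particular product), each transaction can be modeled as a guard plus an action that a scheduler fires atomically:

    ```c
    #include <stdio.h>

    /* A rule is a guard plus an action on shared state. The scheduler
     * fires only rules whose guards hold, one at a time, so each rule
     * can be written as an isolated, local effect. */
    typedef struct {
        int  (*guard)(void);
        void (*action)(void);
        const char *name;
    } rule;

    static int fifo_full;   /* shared resource: a one-deep FIFO */

    static int  can_produce(void) { return !fifo_full; }
    static void do_produce(void)  { fifo_full = 1; }

    static int  can_consume(void) { return fifo_full; }
    static void do_consume(void)  { fifo_full = 0; }

    int main(void)
    {
        rule rules[] = {
            { can_produce, do_produce, "produce" },
            { can_consume, do_consume, "consume" },
        };

        /* Each cycle, fire every ready rule atomically, in a fixed order;
         * the designer never reasons about global interleavings. */
        for (int cycle = 0; cycle < 3; cycle++)
            for (size_t i = 0; i < sizeof rules / sizeof rules[0]; i++)
                if (rules[i].guard()) {
                    rules[i].action();
                    printf("cycle %d: %s\n", cycle, rules[i].name);
                }
        return 0;
    }
    ```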

    • I was really trying to say that current hardware design languages are too complex and have too much functionality. I think the equivalent of C for hardware design would be a lower-level language with much less functionality, not a higher-level one. And concurrency is not the only problem with current languages. For example, modern verification languages require constraint solvers, but the solver implementation is proprietary, even in standard languages such as SystemVerilog. It would be better if the language allowed different solver implementations to be used, and that requires a lower-level language definition.
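      As a sketch of what that lower-level definition might enable, the snippet below hides the solver behind a neutral interface so any implementation can be plugged in as a library; the struct and function names are hypothetical, purely for illustration:

      ```c
      #include <stdio.h>
      #include <stddef.h>

      /* Hypothetical solver-neutral interface: the language would fix only
       * this small contract, and any constraint solver could be plugged in
       * as a library instead of being baked into the simulator. */
      typedef struct solver {
          void *state;                                       /* solver-private data */
          int (*add_constraint)(void *st, const char *expr);
          int (*solve)(void *st, int *vars, size_t nvars);
      } solver;

      /* Toy stand-in solver: ignores constraints and assigns zeros, just to
       * show that callers never depend on a particular implementation. */
      static int toy_add(void *st, const char *expr)
      {
          (void)st; (void)expr;
          return 0;
      }

      static int toy_solve(void *st, int *vars, size_t nvars)
      {
          (void)st;
          for (size_t i = 0; i < nvars; i++)
              vars[i] = 0;
          return 1;   /* report "solved" */
      }

      int main(void)
      {
          solver s = { NULL, toy_add, toy_solve };   /* swap in a real solver here */
          int vars[2];

          s.add_constraint(s.state, "a + b == 4");
          if (s.solve(s.state, vars, 2))
              printf("a=%d b=%d\n", vars[0], vars[1]);
          return 0;
      }
      ```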

      I think if you want to raise the level of abstraction with respect to concurrency, you need to make sure the abstraction you choose matches how designers think; I wrote a post about this. If you don’t succeed at that, you won’t succeed in improving productivity.

  5. Hi Chris,
    A few years back I read a comment from a top executive at Microsoft that went something like:
    “If the auto industry had made advances similar to software’s, we would have a car running for $50.”
    The response from the auto industry was, “Yes, perhaps, but then we would be crashing every now and then.”

    So, not sure if we have data on this, but the complexity of a language is also governed by the problem it’s trying to solve.
    - Is hardware more complex than software?
    - What are the relative tolerance limits for hardware and software applications?
    - How costly is a bug in hardware versus a bug in software?

    thanks,
    ravi


One Trackback/Pingback

  1. […] Somehow, the powers that be decided that my post on languages was worthy of a wider audience. We made it to Deep Chip! Deep Chip is John […]
