One of the projects I worked on some time ago was to improve the efficiency of the physical design process. The goal was to reduce the time required to get from RTL-ready to tapeout from nine months to six months. We looked at all aspects of the flow to find bottlenecks and streamline it. One of the major problems that emerged from this was how little hardware designers understand computational complexity and its impact on the design process.

As an example, one of the issues that the logic designers wanted addressed in the physical design flow was the fact that scan nets were given the same priority as functional nets during place and route. Scan is used for testing only and is not performance critical, while the functional nets are very performance critical. Place-and-route tools generally have no way of knowing which nets are critical and which are not. The result is that the critical nets end up less optimal than they could be.

The proposal to fix this was to do the place and route in two steps. First, a netlist would be created with no scan nets. This netlist would be placed; presumably, the placer would place all registers optimally with respect to the critical functional nets only. This would guarantee that, when the critical nets were routed, they would be optimal. Second, a full netlist including all scan nets would be created, which would be placed and then routed. However, all registers would be “locked” into the positions determined by the first placement run. Since those placements were supposedly optimal, the critical nets should end up routed more optimally.
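
To make the shape of that proposal concrete, here is a minimal sketch in Python. Everything in it is made up for illustration: toy_place is a stand-in for a placer (real synthesis and place-and-route tools expose nothing like this interface), and the only point is to show where the register locations get recorded in the first pass and locked in the second.

```python
from random import Random

# Toy stand-in for a placer: assigns each cell a distinct slot. A real placer
# optimizes wirelength and timing; this only shows the structure of the flow.
def toy_place(cells, fixed=None, seed=0):
    fixed = dict(fixed or {})
    rng = Random(seed)
    free_slots = [s for s in range(len(cells)) if s not in fixed.values()]
    rng.shuffle(free_slots)
    placement = dict(fixed)
    for cell in cells:
        if cell not in placement:
            placement[cell] = free_slots.pop()
    return placement

def proposed_two_pass_flow(functional_cells, scan_cells, registers):
    # Pass 1: place the scan-free netlist and record the register locations.
    pass1 = toy_place(functional_cells)
    locked_registers = {r: pass1[r] for r in registers}

    # Pass 2: place the full netlist (functional plus scan cells) with the
    # registers locked to their pass-1 locations; routing would follow.
    full_netlist = functional_cells + scan_cells
    return toy_place(full_netlist, fixed=locked_registers)
```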

I told them that locking the registers in place for the second run was going to over-constrain the placer and would result in far less than optimal results, just based on understanding the computational complexity of place-and-route. The lead logic designer, who had proposed this scheme, argued that the placement was proven to be optimal, so it shouldn’t matter that the register placements were being locked down. There was a lot of discussion on this, but no amount of argument was going to convince this designer that his scheme wouldn’t work.

As an experiment, we decided to do a trial place-and-route on two different netlists, with only minor differences between them. Neither had scan hooked up (comparing a netlist with scan to one without wouldn’t prove anything). The results were that, in one case, all the registers in one floorplan room ended up in one corner of the room and, in the other, they all ended up in the diagonally opposite corner. When the designer who proposed this scheme saw that, he said, “I didn’t expect that.” I said, “I did.” Needless to say, we abandoned this scheme shortly thereafter.

The fundamental property that this exhibits is non-linearity of response: a small change in the input causes a large change in the output. This property is inherent when using complex algorithms to create and verify designs. I don’t know whether today’s place-and-route tools handle the scan net problem, but even if they do, I still encounter many situations in which there is an expectation of a proportional response to a design change when, in reality, the response is non-linear.
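
Here is a toy illustration of what non-linearity of response looks like, assuming a deliberately simplified 1-D placement problem solved with simulated annealing (my own toy, nothing to do with any real place-and-route tool): two netlists that differ by a single net can land on placements that look nothing alike.

```python
import math
import random

# Toy 1-D placement: assign cells to slots on a line so that connected cells
# end up close together (minimize total net length), via simulated annealing.
def wirelength(placement, nets):
    return sum(abs(placement[a] - placement[b]) for a, b in nets)

def anneal(cells, nets, seed=0, steps=20000):
    rng = random.Random(seed)
    slots = list(range(len(cells)))
    rng.shuffle(slots)
    placement = dict(zip(cells, slots))
    cost = wirelength(placement, nets)
    temp = 10.0
    for _ in range(steps):
        a, b = rng.sample(cells, 2)
        placement[a], placement[b] = placement[b], placement[a]
        new_cost = wirelength(placement, nets)
        if new_cost <= cost or rng.random() < math.exp((cost - new_cost) / temp):
            cost = new_cost                                          # accept the swap
        else:
            placement[a], placement[b] = placement[b], placement[a]  # undo it
        temp *= 0.9995
    return placement, cost

cells = [f"r{i}" for i in range(12)]
ring = [(f"r{i}", f"r{(i + 1) % 12}") for i in range(12)]  # a ring of nets
tweaked = ring + [("r0", "r6")]                            # one extra net

print(anneal(cells, ring))     # the two placements frequently look nothing
print(anneal(cells, tweaked))  # alike, despite a one-net difference in input
```

Because the cost landscape has many near-equivalent optima, the search trajectory diverges as soon as the input changes slightly, which is the same kind of behavior we saw in the floorplan experiment.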

Formal verification, for example, is plagued by this problem, but it can occur even in simulation. You are probably familiar with the experience of spending a lot of time tuning a constrained-random environment to reach a certain level of coverage. Then the designer makes a minor change, coverage suddenly drops, and you have to spend a lot of effort just to get back to where you were. This particular problem is one that we are trying to address at Nusym with our intelligent test generation.
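
As a contrived example of how brittle tuned coverage can be (a toy of my own, not a description of any real constrained-random environment or of Nusym’s technology), consider a tiny “DUT” with four coverage bins and stimulus constraints tuned to hit all of them:

```python
import random

# A contrived "DUT" that sorts an input word into one of four coverage bins.
def dut(x, threshold=200):
    if x < 16:
        return "small"
    if x % 64 == 0:
        return "aligned"
    if x > threshold:
        return "large"
    return "other"

def bins_hit(stimulus, threshold=200):
    return {dut(x, threshold) for x in stimulus}

rng = random.Random(1)
# Stimulus constraints tuned (by trial and error) so that random inputs in
# 0..255 reliably hit all four bins.
stimulus = [rng.randrange(0, 256) for _ in range(1000)]
print(bins_hit(stimulus))                  # typically all four bins

# The designer makes a "minor" change: the threshold moves from 200 to 255.
# The same carefully tuned stimulus can no longer reach the "large" bin at all.
print(bins_hit(stimulus, threshold=255))
```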

The bottom line is that this unpredictable behavior is unavoidable. Yet I still commonly encounter the expectation that design can be made predictable, especially among managers. I plan to publish some posts trying to demystify computational complexity (complexity for dummies 😉 ). I will try to frame it in a way that even managers can understand, which should hopefully lead to better decision making when planning the amount of effort that needs to go into using complex algorithms in design.
