

On August 6, Vinay Deolalikar, a researcher at HP Labs in Palo Alto, released a paper with a proposed proof of the open problem P != NP. Somehow, news of this proof made it to the mainstream press, with headlines proclaiming that the problem was solved. The problem was, the computational complexity community not only had not had time to review the proof, but viewed it with great skepticism.

The significance of this is that proving P != NP is a grand challenge problem. You will hear that if it turned out the other way, that is, P = NP, then all known cryptographic systems would become easily breakable. On the other hand, it would mean that many verification and optimization problems would suddenly become tractable. More troubling, P = NP would also mean that Skynet is just around the corner. Most researchers already believe that P != NP, so there is really no immediate practical effect from producing a valid proof of P != NP.

The P != NP problem has been deemed so important to solve that whoever settles it either way will receive a $1M prize from the Clay Mathematics Institute. Before that happens, though, the proof would need to be generally accepted by the research community as correct, a process that will probably take months, if not years.

An extraordinary thing happened in response to the release of the proof. A number of experts in the field dropped what they were doing and started poring over the proof in earnest. This is no small feat given that the proof is over 100 pages long. Pretty quickly, a consensus was reached that there were a number of serious flaws, although, as far as I can tell, no smoking gun yet. For his part, Deolalikar has not conceded defeat and is working on addressing all the issues that have been found.

The proof and the mathematics involved are beyond my pay grade. But the most interesting part of this episode for me is the sociological issues it brings up. A group of the top experts in the world dropping everything to review a paper in a matter of days is highly unusual. What is unprecedented is that the discussion happened entirely out in the open, in online forums. This is in contrast to the normal peer review process, which, for journals, can stretch to years, and is often done by only a few anonymous reviewers who generally don’t share their discussion outside the editorial board (or conference program committee, as the case may be).

The discussion could be followed by anyone interested. Now, it is not clear that this is a scalable model as is: it is not possible to drop everything at a moment’s notice for every paper that comes along. The next time a claim like this appears, a crying-wolf effect could make the response far more muted. On the positive side, the openness of the process allowed the computational complexity community to come together and get in sync on this problem. Many of the objections raised were “standard” objections. For example, the proof may prove other things that are known to be false. This is something that researchers routinely check for when reviewing a paper.

Researchers have an essentially agreed-upon set of “barriers” that any proof of P != NP must overcome. They also point to a number of results (theorems) that any valid proof would need to address. Any future proof (assuming Deolalikar’s doesn’t work out) would need to address these barriers. Putting all this discussion in the open hopefully means that proposed proofs will be of much higher quality in the future (P != NP (and some P = NP) “proofs” are produced on a regular basis).

The other thing that is clear from this episode is that anyone who is going to receive the $1M prize needs the blessing of the computational complexity community. The most interesting barrier I learned about is a theorem that essentially says “proofs of P != NP are hard because P != NP.” All of the other barriers also imply that any proof must be hard; no one is going to come up with a short, elegant, easy-to-understand proof. This makes it highly unlikely that a proof will come from an outsider. Deolalikar is an outsider, but he managed to meet enough of the community’s requirements to be taken seriously. While researchers acknowledge the contributions of outsiders, the nature of this problem works against them.

In the (likely) event that Deolalikar’s proof ends up being incorrect, I think that years from now, when a proof is finally produced, we will look back at this event and see it as a step forward because of the openness of the discussion. Here’s hoping that it leads to lasting improvements in how scientific review is done. We should be grateful to Deolalikar even if his proof doesn’t work out, and even if it provides no new insight into the P != NP problem.

For further reading:

BBC News (an easy-to-understand article blessed by an expert in the field)

http://rjlipton.wordpress.com/ (where most of the discussion took place)

wiki (gory details. warning: heavy math)


We have seen in previous posts that design and verification problems fall into the class of computationally hard problems. That is, in the worst case, it takes O(2^N) time to optimize or verify a design of size N, where N could represent the number of binary inputs. If we run into this limit, our ability to design large systems is severely limited.
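To make that exponential wall concrete, here is a minimal Python sketch (my own toy example, not from any real flow) that exhaustively verifies an N-input combinational block against a reference model by simulating all 2^N input patterns; every additional input bit doubles the work.

from itertools import product

def reference_model(bits):
    # Toy specification: the parity of the inputs (a stand-in for a real spec).
    return sum(bits) % 2

def implementation(bits):
    # Toy design under test; in practice this would be the actual design.
    result = 0
    for b in bits:
        result ^= b
    return result

def exhaustive_check(n_inputs):
    # Simulate every one of the 2**n_inputs patterns and compare DUT to spec.
    count = 0
    for bits in product((0, 1), repeat=n_inputs):
        assert implementation(bits) == reference_model(bits), bits
        count += 1
    return count

for n in (8, 12, 16, 20):
    print(n, "inputs ->", exhaustive_check(n), "patterns simulated")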

Given these limits, then, how is it possible that we routinely create designs consisting of millions of lines of code or gates? More importantly, how can we continue to design increasingly larger, more complex systems?

One answer is that we can’t. We can’t find all the bugs in complex designs, but we can hope that there are no critical bugs. We can’t perfectly optimize a design, but we can do a good enough job. Even then, exponential complexity should be a brick wall to increasing complexity, but it is not. Somehow, we are able to avoid these limits.

One way to look at this is to consider how human brains are built and how they work when creating complex designs. The jumping-off point for understanding this is Shannon’s counting argument.

Claude Shannon was one of the most important computer scientists of the twentieth century. He is most famous for his work on information theory. Shannon’s counting argument is one of his less well-known results. Shannon was trying to answer the question: what is the minimum number of gates (say, two-input NAND gates) needed to be able to implement any function of N binary inputs? The basic answer is that there must exist functions that require an exponential number of gates to implement. The proof goes by counting the number of possible functions of N inputs, counting the number of functions implementable with T gates, and showing that the two counts can only match if T grows exponentially with N. The argument is as follows:

  • The number of functions of N inputs is 2^(2^N). For example, a truth table for N=4 has 16 entries, so the number of functions of 4 inputs is 2^16 = 65,536.
  • The number of functions of N inputs that can be implemented with T two-input gates is at most (N+T)^(2T). Each gate input can connect to the output of any other gate or to a primary input, for a total of (N+T) possible connections, and there are 2T gate inputs.
  • To be able to implement any possible function, then, we need (N+T)^(2T) >= 2^(2^N).
  • Solving for T shows that T >= 2^(N-d) for some small d (d grows only logarithmically with N), which means T must grow exponentially in N. (The short sketch below computes this lower bound numerically for a few values of N.)
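Here is a small Python sketch of that calculation (my own, using the formulas above): it finds the smallest T satisfying the inequality for a few values of N by comparing logarithms. Note that this exact bound comes out somewhat below the rounded 2^(N-d) figure, but it still grows exponentially with N.

import math

def min_gates_lower_bound(n):
    # Smallest T with (n + T)**(2*T) >= 2**(2**n), found by comparing log2 values.
    # This is only the counting lower bound, not what any construction achieves.
    needed = 2.0 ** n                      # log2 of 2**(2**n)
    t = 1
    while 2 * t * math.log2(n + t) < needed:
        t += 1
    return t

for n in range(2, 13):
    print(f"N={n:2d}  counting lower bound on T = {min_gates_lower_bound(n):4d}   (2**N = {2**n})")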

What is the significance of this? Today’s chips have as many as 1000 pins. What is the minimum number of gates on a chip that would allow us to implement any function of a thousand inputs? Is, say, one million gates sufficient? Off the top of your head, you would probably say no. What about a billion? Our preconceived notion is that this should be sufficient. After all, that is right at the limit of today’s most advanced chips, and it is inconceivable that we couldn’t implement whatever we want with that many gates.

Now let’s look at what the counting argument has to say about this. The total number of possible functions is 2^(2^1000), which is a vast number, roughly 10^(10^300). A billion two-input NAND gates would allow us to implement roughly 10^(10^10) different functions of a thousand inputs. This is also a vast number, but it is infinitesimal compared to the number of possible functions. The fact is, we can implement only a vanishingly small fraction of all possible functions!
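These figures are easy to check with logarithms. A quick Python sketch (my own arithmetic, plugging a thousand inputs and a billion gates into the same formulas as above):

import math

N = 1000     # input pins
T = 10**9    # two-input NAND gates

# log10 of the number of N-input functions, 2**(2**N): roughly 3e300
log10_all_functions = (2.0 ** N) * math.log10(2)

# log10 of the functions reachable with T gates, (N + T)**(2*T): roughly 1.8e10
log10_reachable = 2 * T * math.log10(N + T)

print(f"all functions of 1000 inputs:   about 10**{log10_all_functions:.2e}")
print(f"reachable with a billion gates: about 10**{log10_reachable:.2e}")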

And this picture doesn’t fundamentally change even if we assume we have a trillion gates at our disposal. So how is it that we believe we can implement whatever functionality we need? To answer this, let’s work backward from the number of gates available and determine the maximum number of inputs for which we can implement any arbitrary function. Let’s choose the most complex system we have at our disposal: the human brain.

The human brain contains on the order of a hundred billion neurons (call it 10^11), each with roughly a thousand connections. Assuming any neuron could connect to any other neuron (which is not the case), we get an upper bound on the number of functions that can be implemented: (10^11)^(1000 * 10^11) ≈ 10^(10^15) ≈ 2^(2^52). In the best case, therefore, the human brain could implement any arbitrary function of only about 50 binary inputs.
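The same back-of-the-envelope arithmetic in a short sketch (my own version of the estimate; the neuron count and fan-in are rough assumptions):

import math

neurons = 10**11     # rough neuron count assumed above
fan_in = 1000        # rough connections per neuron

# log2 of neurons ** (fan_in * neurons), the upper bound on implementable functions
log2_count = fan_in * neurons * math.log2(neurons)

# The widest n for which 2**(2**n) still fits under that bound
n = math.log2(log2_count)
print(f"any function of at most about {n:.0f} binary inputs")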

Now we can do a thought experiment. What is the largest size truth table that you can mentally make sense of at one time? Two bits is easy. Three bits not so hard. Four bits is difficult to visualize as a truth table, but using visual aids such as Karnaugh maps makes it possible.

Another test would be to look at the placement of arbitrarily placed objects on a plane. Given a glance, how accurately could you replicate the placement of all the objects? If there were only one object, I could envision determining its placement with an accuracy of 5-6 bits in both the X and Y dimensions. More objects, less accuracy. Beyond about seven objects, it is difficult to even remember exactly how many objects there are. Maybe 10-12 bits of accuracy overall is possible.

The bottom line is that the human brain has a very low capacity for consciously grasping an arbitrary function. Let’s say it’s no more than 10 bits in the best case. From the counting argument’s 2^(N-d) estimate, implementing an arbitrary function of 10 inputs takes on the order of 2^9 = 512 gates. With a million gates on a chip, we could put roughly 2,000 such 10-input functions on the chip.

Now we treat each of these roughly 500-gate blocks as a unit and consider how to hook these 2,000 blocks together to perform some function. This is certainly possible using abstraction. In this case it would probably require two or more levels of abstraction to be able to deal with 2,000 blocks.

Abstraction is the means by which we are able to design very large systems. The need for abstraction is driven by the limits that Shannon’s counting argument places on our brains’ ability to deal with large functions. And it is the structure imposed by multiple layers of abstraction that makes intractable design and verification problems tractable.

[Note: this is another in my series on computational complexity for dummies.]

The field of Artificial Intelligence (AI) got its start in 1950 with the famous Turing test. It was thought at the time that, by the year 2000, computers would possess human-level intelligence. Well, this didn’t come to pass, and human-level intelligence still seems as far off now as it did then.

One metric back then was the game of chess. It was thought that for a machine to beat the best human at chess would require human-type thinking. This metric was met in 1997 when Deep Blue beat Garry Kasparov. However, as the designers of Deep Blue freely admit, Deep Blue won mostly by brute-force search. It had been clear for a while that the chess problem was going to be solved by exploiting computers’ vastly faster speed compared to humans rather than by any form of human-type thinking.

Chess is a particular type of game in which there are two players, with one player moving first, the second making a counter-move, and the players alternating until one player wins or a draw is achieved. The goal is to find a winning strategy. We can formulate this as follows (for player 2 in this case):

for all first moves player 1 makes,
there exists a first move that player 2 makes such that,
for all second moves player 1 makes,
there exists a second move that player 2 makes such that,
...
player 2 wins.

The computational complexity of problems with this structure (alternating “for all” and “there exists” quantifiers) puts them in the class of PSPACE-complete problems. Recall that the NP-hard class is the set of problems that tend to be hard. The NP-complete class is a subset of the NP-hard class, and its members tend to be the easiest problems in the NP-hard class. PSPACE-complete problems are also NP-hard, and are believed to be strictly harder than NP-complete problems. In general, there is no better way to solve them than to enumerate possible solutions until a satisfying one is found. This is why research into chess programs focuses on making search speed as fast as possible, and why it took specialized hardware to beat a human.
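To see the alternation of quantifiers in the smallest possible setting, here is a toy Python sketch (my own illustration, not how real chess engines or model checkers work): it decides whether player 2 has a winning strategy in a tiny take-away game by literally recursing “for all” over player 1’s moves and “there exists” over player 2’s. The cost is an enumeration of the whole game tree, which is exactly what makes this kind of problem so expensive.

def p2_can_force_win(pile, player_to_move):
    # Toy game: players alternately remove 1 or 2 sticks; taking the last stick wins.
    if pile == 0:
        # The player who just moved took the last stick; P2 wins if it is now P1's turn.
        return player_to_move == 1
    moves = [m for m in (1, 2) if m <= pile]
    if player_to_move == 1:
        # "for all" moves of player 1, player 2 must still be able to win
        return all(p2_can_force_win(pile - m, 2) for m in moves)
    else:
        # "there exists" a move of player 2 that preserves the win
        return any(p2_can_force_win(pile - m, 1) for m in moves)

for n in range(1, 10):
    print(n, "sticks:", "player 2 can force a win" if p2_can_force_win(n, 1) else "player 1 wins")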

But, what has this got to do with verification? At the most abstract level, a verification system consists of two parts: the device-under-test (DUT) and the environment it sits in. The DUT doesn’t do anything unless the environment initiates something by injecting a request; the DUT then responds in some way. This is analogous to a two-player game in which the DUT is one player and the environment is the other. Requests and responses correspond to moves and counter-moves. The question then is: what is the analog of the game in verification?

There are two fundamental categories of properties that all verification checks fall into: safety properties and liveness properties. A safety property says “nothing bad will ever happen,” and a liveness property says “something good will eventually happen.” An example of a safety property is that a traffic light should never be green in both directions simultaneously. An example of a liveness property is that, for a given direction, the light should always eventually turn green.

The vast majority of properties that we verify on a day-to-day basis are safety properties. For example, the property that 2+2=4 in an adder is a safety property. It is not possible to prove any type of property using simulation alone. It is possible to demonstrate the existence of safety property violations, but not liveness property violations, using simulation. For example, suppose we simulated the traffic light controller for a million time steps and the light was always red in one direction. Does this constitute a violation of the liveness property? Not conclusively, because it is possible the light would turn green if we simulated one more time step. At the same time, even if we saw the light turn green, that does not mean it won’t get stuck at red at some later point.
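A toy Python sketch (my own illustration, with a made-up trace) shows the asymmetry: a finite simulation trace can conclusively expose a safety violation, while the same trace can never settle a liveness property.

# A finite simulated trace of (north_south, east_west) light colors.
trace = [("green", "red"), ("yellow", "red"), ("red", "red"),
         ("red", "green"), ("red", "yellow"), ("red", "red")] * 3

# Safety: both directions are never green at the same time.
# A single bad cycle in the trace would be a definitive counterexample.
safety_ok = all(not (ns == "green" and ew == "green") for ns, ew in trace)
print("safety holds on this trace" if safety_ok else "safety VIOLATED in this trace")

# Liveness: whenever north-south is red, it eventually turns green.
# A finite trace can never prove this; the best we can report is whether an
# obligation is still open when the trace ends.
pending = trace[-1][0] == "red"
print("liveness obligation still pending at the end of the trace" if pending
      else "no pending obligation, but that is still not a proof")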

It is possible to prove liveness properties using formal verification. How would we formulate the liveness property of the traffic light? One way is to consider it as a game between the cars arriving at the light (the environment) and the traffic light controller. Then we could formulate the problem:

for all possible combinations of cars arriving at step 1,
there exists a state of the traffic light at step 1 such that
for all possible combinations of cars arriving at step 2,
there exists a state of the traffic light at step 2 such that
....
if the traffic light state is red, 
                     then in the next state it will be green.

This has the same structure as a game and so is PSPACE-complete. Since PSPACE-complete problems are solvable to a certain degree, we can prove liveness properties, but generally only on very small designs. The most effective use of this is to verify highly abstracted designs, typically when doing protocol verification.

This deep connection between the formal verification and AI fields goes further than their having similar complexity. The core tool used in formal verification today, SAT solving, emerged from research into AI problems such as planning. The cross-fertilization between these two fields has yielded dramatic improvements in SAT-solving efficiency over the last 15 years.

The computational complexity class NP-hard is at the core of a number of problems we encounter on a daily basis, from loading the dishwasher (how do I get all these pots to fit?) to packing the car for a vacation to putting together a child’s train tracks.

If we look at these problems, they have several things in common. First, they each involve a potentially large number of parts (pots, luggage, pieces of track) that need to be put together in some way. Second, we want to meet some objective, such as fitting all the dishes in the dishwasher. Third, there are a large number of constraints that must be met. In the case of loading the dishwasher, no two dishes can occupy the same place; there are N^2 constraints just to specify this, among many others. A fourth characteristic is that we may get close to an optimal solution but find it difficult, and not obvious how, to get to a better one (just how are we going to fit that last pot in the dishwasher?). Furthermore, getting from a near-optimal solution to an optimal one may involve a complete rearrangement of all the pieces.

One way to look at a problem like packing the dishwasher is to view it as a truth table. Each dish can be put in one of, say, 100 slots, in one of, say, ten different orientations. This gives 1000 combinations per dish, requiring 10 bits. If there are 40 dishes, 400 bits are required to represent all possible configurations of dishes in the dishwasher. The resulting truth table is vast. Each entry in the table indicates how much space is left in the dishwasher if the dishes are put in according to that entry’s configuration; a negative number indicates an infeasible solution. There will be many invalid configurations in which two or more dishes occupy the same location. We give all of these entries a large negative number.
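A toy Python sketch of that encoding (my own; the slot counts, scoring, and random sampling are all made up for illustration) represents a configuration as one (slot, orientation) pair per dish, scores it as described above, and samples random configurations the way a blind search over this table would. Almost every row it lands on turns out to be infeasible.

import random

SLOTS, ORIENTATIONS, DISHES = 100, 10, 40    # ~10 bits per dish, 400 bits in all

def random_configuration():
    # One (slot, orientation) choice per dish: one row of the enormous truth table.
    return [(random.randrange(SLOTS), random.randrange(ORIENTATIONS))
            for _ in range(DISHES)]

def score(config):
    # Space left over if the dishes go in this way; a large negative number
    # marks an infeasible configuration (two dishes in the same slot).
    slots_used = [slot for slot, _ in config]
    if len(slots_used) != len(set(slots_used)):
        return -10**9
    return SLOTS - DISHES

samples = [random_configuration() for _ in range(10_000)]
feasible = sum(score(c) >= 0 for c in samples)
print(f"{feasible} of {len(samples)} random configurations were even feasible")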

The resulting table describes a landscape that is mostly flat, with hills sparsely scattered throughout. We can also imagine this landscape as an ocean in which negative values are under water and positive values are islands. The goal is to find the highest island in the ocean. We start at some random location in the ocean and start searching. We may find an island quickly, but it may not be the highest one. Given the vastness of the ocean, it is understandable why it can take a very long time to find a solution.

But, wait a minute. What about problems with polynomial algorithms, like sorting? A truth table can be constructed for these also. For example, to sort 256 elements, we can create an 8-bit variable for each element describing the position of that element in the sorted list. The value of each entry would indicate how many elements are in their correct sorted position for that configuration. The complete table would be indexed by about 2,000 bits (256 * 8 = 2,048) and would again have vast numbers of infeasible configurations, in which two or more elements occupy the same slot in the list, and only one fully satisfying configuration. Yet we know that finding the solution is easy. Why is this?

The ocean corresponding to the sorting problem is highly regular. If we are put down at an arbitrary point in the ocean, we can immediately determine where to go just by examining the current truth table entry (our point in the ocean). Knowing the structure, we may be able to determine from this that we need to go, say, northeast for 1000 miles. We may have to do this a number of times, but only a polynomial number, and we are guaranteed to reach the solution. Structure in a problem allows us to eliminate large parts of the search space efficiently.
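For the sorting ocean, the guidance available from a purely local look can be sketched in a few lines of Python (my own illustration): whenever the current configuration has an adjacent out-of-order pair, fixing it is a guaranteed-productive move, and only a polynomial number of such moves is ever needed.

def sort_by_local_moves(values):
    # Repeatedly fix any adjacent out-of-order pair: examining the current
    # configuration always reveals a move that makes progress (bubble sort, in effect).
    values = list(values)
    moves = 0
    swapped = True
    while swapped:
        swapped = False
        for i in range(len(values) - 1):
            if values[i] > values[i + 1]:
                values[i], values[i + 1] = values[i + 1], values[i]
                swapped = True
                moves += 1
    return values, moves

result, moves = sort_by_local_moves([5, 3, 8, 1, 9, 2])
print(result, "reached in", moves, "local moves")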

In contrast, for an NP-hard problem, there is no guarantee of structure. Furthermore, as we sail around this ocean, we are doing so in a thick fog, so that we can only see what is immediately around us. We could sail right by an island and not even know it. Given this, it is easy to see why it could take an exponential amount of time to find a solution.

But then, how do we account for the fact that NP-hard problems are often tractable in practice? The answer is that there usually is some amount of structure in most problem instances. We can use heuristics to look for certain patterns, and if we find them, they give guidance similar to the sorting example above. The problem is that different designs have different patterns, and no one heuristic works in all cases. Tools that deal with NP-hard problems usually employ many heuristics. The trouble is that the more heuristics there are, the slower each search step becomes: at each step, each heuristic must be invoked until a pattern match is found. In the worst case, no pattern ever matches, meaning the search still takes exponential time, only slower because of the overhead of invoking the heuristics at each step.

I hope this gives some intuition into NP-hard problems. In future posts I will talk about even harder classes of problem.