P = NP: A Million Dollar Problem

In 2000, the Clay Mathematics Institute offered one million dollars for each of seven important, long-standing math problems. Of these, only one has been solved since (the Poincaré conjecture). I will discuss another one, the P = NP problem. This is arguably one of the most relevant and famous problems in modern computer science. A proof that P = NP could, in principle, help cure cancer, obliterate virtually all existing computer privacy, produce unbeatable play at chess and Tetris, reconstruct the full history of species from DNA, and would most likely create general mayhem on earth.

I stumbled upon this problem while reading The Master Algorithm by Pedro Domingos. He provides a very intuitive description of the problem: ‘P and NP are the two most important classes of problems in computer science. [...] A problem is in P if we can solve it efficiently, and it’s in NP if we can efficiently check its solution. The famous P = NP question is whether every efficiently checkable problem is also efficiently solvable’.

Now if we want to solve the problem and claim the prize money, we need to dig a little deeper than that. As Domingos writes, the two most important classes of problems in computer science are P and NP. We designate any problem that can be solved in polynomial time as P (polynomial), and with NP (non-deterministic polynomial) we mean any problem whose solution can be checked in polynomial time, even if the best known way to solve it takes exponential time or worse. The catch here is that the hardest problems in NP are, at their core, all the same problem: if you find a way to efficiently solve one of them, you solve all of them. This property is called NP-completeness.

To understand what the aforementioned sentences mean, we need to grasp exactly what computer scientists mean when they talk about time. They do not measure the running time of an algorithm in ordinary clock time, but express it as a function of the number of elements in the problem, namely N. Measured this way, time is linear only for very simple problems. For example, let’s say you need to find the highest number in a list (or select for any other condition). Then you need to iterate through the entire list only once. This is a linear problem, in the sense that every element added to the length of the list adds only one extra computation/extra time unit. In Figure 1, it’s the O(n) line.
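To make this concrete, here is a minimal Python sketch of such a linear scan: each extra element in the list costs exactly one extra comparison, so the running time grows as O(n).

```python
def find_max(numbers):
    """Return the largest element by scanning the list once: O(n) time."""
    best = numbers[0]
    for x in numbers[1:]:  # one extra comparison per extra element
        if x > best:
            best = x
    return best

print(find_max([3, 41, 7, 19]))  # prints 41
```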

However, many problems in computer science take polynomial time. This means that the number of computations needed to solve the problem is the number of elements raised to some power (for example N^2 or N^3; O(n^2) in Figure 1). One notable problem in P is that of figuring out exactly what the maximal profit is under multiple resource constraints, called linear programming. For example, a car manufacturer has one factory, a certain number of employees and a certain amount of raw materials. The company can choose between building two car models that each require a different amount of raw materials and labor, and that sell at different retail prices. Within the boundaries of the available resources, you want the production plan with the most profitable, or optimal, outcome; algorithms such as interior-point methods find it in polynomial time, without enumerating the full solution space. For more info, check this page.
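As a hedged sketch of how such a problem is posed in practice, here is the two-car-model example written as a linear program with SciPy. All the numbers (profits, resource usage, stock levels) are made up for illustration:

```python
from scipy.optimize import linprog

# Hypothetical data: model A yields $300 profit per car, model B $500.
# A car of model A uses 1 unit of raw material and 3 hours of labor;
# a car of model B uses 2 units of raw material and 2 hours of labor.
# In stock: 40 units of raw material and 90 hours of labor.
profit = [-300, -500]        # linprog minimizes, so negate to maximize
A_ub = [[1, 2],              # raw material used per car of each model
        [3, 2]]              # labor hours used per car of each model
b_ub = [40, 90]              # available raw material and labor

res = linprog(profit, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None)] * 2)
print(res.x, -res.fun)       # optimal quantities (25, 7.5), profit 11250
```

Note that a plain linear program allows fractional cars; requiring whole numbers turns this into integer programming, which is itself NP-hard.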

However, the hardest problems in NP appear to need far more computing than that to be solved: their solution spaces grow much faster than those of polynomial problems. The official definition of NP is the class of problems that can be solved in polynomial time by a non-deterministic Turing machine (Cook, 2000).

An ‘easier’ way to look at these hard NP problems is this: with known methods, they require an amount of computations that is equal to some number raised to the power N, sometimes even a factorial of N. So for example if N = 20, at least 2^20 (about a million) computations are needed (Figure 1: O(2^n)). This means that the number of computations increases so fast with N that we can only compute exact solutions for very small problems. To illustrate this, I’ll use an example from MIT: if a computer takes a second to run a linear-time algorithm on 100 elements (N computations), that same problem will take ~3 hours if the number of computations equals N^3 (polynomial time), and about 300 quintillion years if it equals 2^N (exponential time). This example should provide some insight into the timescales required for solving hard NP problems with large Ns. As a result, we can currently only solve such problems exactly when they have only a few components.
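The arithmetic behind that MIT example is easy to reproduce. A quick sketch, assuming a machine that performs 100 operations per second (so the linear case takes exactly one second):

```python
# Back-of-the-envelope timings for N = 100 elements at 100 ops/second.
N, rate = 100, 100

for label, ops in [("O(n)", N), ("O(n^3)", N**3), ("O(2^n)", 2**N)]:
    seconds = ops / rate
    years = seconds / (3600 * 24 * 365)
    print(f"{label}: {seconds:.3g} seconds (~{years:.2g} years)")
```

Running this prints one second for O(n), about 10,000 seconds (~3 hours) for O(n^3), and roughly 4 × 10^20 years for O(2^n) — hundreds of quintillions of years.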

However, NP problems are by definition easy to check in polynomial time. For example, computing the optimal solution for a game of Tetris requires a ridiculously large amount of computation, but it is very easy to check whether a given sequence of moves clears the board. Same for Sudoku: solving is difficult, checking a filled-in grid for mistakes is easy.
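To illustrate how cheap checking is, here is a minimal sketch of a Sudoku verifier: it makes a single pass over every row, column, and 3x3 box of a completed 9x9 grid, so its running time is polynomial (in fact, linear) in the size of the grid:

```python
def is_valid_sudoku(grid):
    """Check a completed 9x9 Sudoku: every row, column, and 3x3 box
    must contain the digits 1 through 9 exactly once."""
    digits = set(range(1, 10))
    rows = [list(row) for row in grid]
    cols = [[grid[r][c] for r in range(9)] for c in range(9)]
    boxes = [[grid[r + i][c + j] for i in range(3) for j in range(3)]
             for r in range(0, 9, 3) for c in range(0, 9, 3)]
    return all(set(group) == digits for group in rows + cols + boxes)
```

Producing a valid grid from a handful of clues, by contrast, has no known polynomial-time algorithm for the generalized version of the puzzle.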

Another way of looking at many hard NP problems is that one needs to compute all possible combinations of the included elements to find the optimal solution. This is even worse than 2^N; in this case the number of possible combinations is (N-1)!, which is, in the case of 9 elements, 8 × 7 × 6 × 5 × 4 × 3 × 2 × 1 = 40,320, and in the case of 100 elements, 99! ≈ 9 × 10^155 — far too large to compute on my laptop. The point here is that the time increase for each added element is so large that only very small problems can be solved exactly with current methods.

An example of an NP problem is that of the travelling salesman. Let’s say that the salesman needs to visit a number of cities (N) that all have different distances from each other, and he wants to visit them in the most efficient route possible. In that case, he needs to compute the distance to be travelled for all possible itineraries, of which there are (N-1)!. That is, even for 10 cities, 362,880 possibilities. With 50 cities, that is 49! ≈ 6.08 × 10^62 possibilities.
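A brute-force solver makes the factorial blow-up tangible. This sketch (with a made-up four-city distance matrix) literally tries every ordering, which is fine for 4 cities and hopeless for 50:

```python
from itertools import permutations

def brute_force_tour(distances):
    """Try all (N-1)! orderings of cities 1..N-1, starting and ending at
    city 0, and return the shortest round trip. Only feasible for tiny N."""
    best_tour, best_length = None, float("inf")
    for perm in permutations(range(1, len(distances))):
        tour = (0,) + perm + (0,)
        length = sum(distances[a][b] for a, b in zip(tour, tour[1:]))
        if length < best_length:
            best_tour, best_length = tour, length
    return best_tour, best_length

# Hypothetical symmetric distances between 4 cities
d = [[0, 2, 9, 10],
     [2, 0, 6, 4],
     [9, 6, 0, 3],
     [10, 4, 3, 0]]
print(brute_force_tour(d))  # ((0, 1, 3, 2, 0), 18)
```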

The aforementioned example touches on a related class of problems, the NP-hard problems. These are problems that are at least as hard as the hardest problems in NP, and whose solutions cannot necessarily be checked in polynomial time. For the travelling salesman, this means that even if you were able to have a computer put out a supposedly optimal route, there is no known quick way to verify that no shorter route exists. Another example of an NP-hard problem is chess: if you have a computer find the optimal move to make, how would you even know that the computer got it right?

Here, I’ll recap NP-completeness: even though the travelling salesman problem, Sudoku, cracking encryption and reverse engineering gene sequences seem not even remotely similar, they’re all the same problem at heart. They all consist of finding a particular solution where, with current methods, searching through an astronomical number of candidates is required; and because each of these problems can be translated into any of the others, solving one efficiently means solving them all. To claim the million dollars, you’d need to prove that problems whose best known algorithms take exponential or factorial time can in fact be solved in polynomial time (all relative to N). In terms that we can all understand, this implies that you need to show you can find the perfect combination of elements for a problem without going through all possible combinations.

Now the aforementioned does not mean that scientists have not found useful answers to any of the NP problems in this essay. For example, if you look at a map of the United States, you could probably draw a route between all the cities that comes very close to the optimal solution. This is one way scientists approach NP problems: through heuristics. These are decision rules that do not require computing all possible solutions, but instead approximate the answer. For example, in the travelling salesman problem, one could set the rule that the computer/algorithm always moves to the closest unvisited city. This yields a reasonable route quickly, though usually not the optimal one. However, for things like cracking 50-digit passwords and reconstructing species from gene codes, approximation will not get us very far.
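That nearest-neighbour rule fits in a few lines. A sketch, reusing the hypothetical four-city matrix from above; it runs in O(N^2) time but comes with no guarantee of optimality:

```python
def nearest_neighbour_tour(distances):
    """Greedy heuristic: always travel to the closest unvisited city,
    then return home. O(N^2) time, no optimality guarantee."""
    tour, unvisited = [0], set(range(1, len(distances)))
    while unvisited:
        here = tour[-1]
        nearest = min(unvisited, key=lambda city: distances[here][city])
        tour.append(nearest)
        unvisited.remove(nearest)
    tour.append(0)  # back to the starting city
    length = sum(distances[a][b] for a, b in zip(tour, tour[1:]))
    return tour, length

d = [[0, 2, 9, 10],
     [2, 0, 6, 4],
     [9, 6, 0, 3],
     [10, 4, 3, 0]]
print(nearest_neighbour_tour(d))  # ([0, 1, 3, 2, 0], 18)
```

On this tiny instance the heuristic happens to find the optimal tour; on larger ones it typically does not, which is exactly the trade-off heuristics make.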

It is not considered very likely that mathematicians will ever prove that P = NP. In a 2002 poll, 61 out of 100 surveyed computer scientists thought that P is probably not equal to NP, and 9 thought that it is (6). However, some of those told the researcher that they had just said that to be controversial. Time has so far proven the majority right; no proof has arisen in the 16 years since. Thus, the general consensus is that P is not equal to NP, but no proof has been found for this statement either.

Then why does anyone still look for the solution to P = NP? Firstly, many breakthroughs in computer science come from someone searching for a solution and accidentally finding ways to make algorithms more efficient or powerful. Secondly, looking makes one understand one of the major limitations of computer science and how to deal with it. Thirdly, it provides insight into some major problems in modern society. And lastly, it’s just interesting. So have a go!

I refer to this video for another quite intuitive explanation of P=NP.
