I used to regularly take a shuttle from my university campus back to my house. As the students climbed aboard, we would each dutifully nod to the driver and name our destination. What happened next never failed to amaze me: the driver would squint at his clipboard, review which of the 20 or so intersections had been requested, and work out in two seconds the quickest route to get everyone home.

What floored me was that this was a textbook example of a problem that computers find surprisingly difficult. The task facing my driver – finding the shortest route to hit a bunch of stops – is known as the **Traveling Salesman Problem** (TSP), and it has many practical applications: optimizing UPS delivery routes, connecting electronics components with minimal wiring – the list goes on. At first glance, finding these shortest routes may seem no harder than any other question we ask of our computers. But TSP exemplifies a class of problems that have left computer scientists scratching their heads for decades: the **NP-complete** problems.

That’s not to say we don’t know how to solve NP-complete problems. Take my shuttle driver, for example: theoretically, he could just list every possible order of stops, calculate how far he’d drive for each one, and pick the shortest. It’s a bit crude, but it’s guaranteed to work.

The crushing weight of NP-completeness comes in when you try to scale things up. Say the driver had two stops to make, and then a student requested a third. That new stop could be added before, between, or after the stops in any two-stop route. The driver now has to consider not just one more option, but *three times* as many options as before. And if we added a fourth stop, he'd have four times as many options as that. This is typical of NP-complete problems: there's a perfectly valid brute-force solution, but the time it would take to calculate blows up exponentially as you increase the size of the problem. For just 20 stops, there are 20! – roughly 2.4 quintillion – possible orderings; with a computer that could try one million routes per second, it would take over 75,000 years to check them all.
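Here's what the driver's brute-force strategy looks like as a short Python sketch. The four-intersection distance matrix is invented for illustration, but the factorial blowup in the permutation loop is the real story:

```python
from itertools import permutations

def brute_force_tsp(stops, dist):
    """Try every ordering of stops and return the quickest route.

    `dist[a][b]` is the travel time from a to b; the route always
    starts at the depot (index 0). With n stops, the loop runs n! times.
    """
    best_route, best_time = None, float("inf")
    for route in permutations(stops):
        time = dist[0][route[0]]              # depot to the first stop
        for a, b in zip(route, route[1:]):
            time += dist[a][b]
        if time < best_time:
            best_route, best_time = route, time
    return best_route, best_time

# A tiny made-up network: a depot (0) and three stops.
dist = [
    [0, 2, 9, 10],
    [2, 0, 6, 4],
    [9, 6, 0, 3],
    [10, 4, 3, 0],
]
route, minutes = brute_force_tsp([1, 2, 3], dist)  # checks all 3! = 6 orderings
```

Three stops means six orderings – trivial. Twenty stops means 20! orderings, and no amount of cleverness in this loop will save you.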

Of course, if you claimed you’d found a route that would take, say, under 15 minutes, I could easily check that – I'd just ask you for the route, and I'd calculate the driving time. This is another property of NP-complete problems (a defining property, in fact): given a proposed solution, it's reasonably easy^{1} to check that it's correct. It's getting to that proposed solution that seems to take huge amounts of time.^{2}
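Checking a claimed route, by contrast, is just one pass of addition – no search at all. A minimal sketch, reusing the same kind of invented distance matrix:

```python
def verify_route(route, dist, limit):
    """Return True if the proposed route, starting from the depot
    (index 0), takes at most `limit` minutes. A single pass over
    the stops - time grows only linearly with their number."""
    total = dist[0][route[0]]
    for a, b in zip(route, route[1:]):
        total += dist[a][b]
    return total <= limit

# Same invented network: a depot (0) and three stops.
dist = [
    [0, 2, 9, 10],
    [2, 0, 6, 4],
    [9, 6, 0, 3],
    [10, 4, 3, 0],
]
print(verify_route((1, 3, 2), dist, 15))   # this route takes 9 minutes
print(verify_route((2, 1, 3), dist, 15))   # this one takes 19
```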

Now, my proposed method for TSP was basically the dumbest one possible. Why don't we just try a cleverer algorithm? This is where NP-completeness gets kind of mind-boggling: after decades of research, we *still don't know* whether there's a clever method that doesn't take absurdly long for large routes. We do, however, have some evidence that it's impossible.

To understand where that evidence comes from, imagine you're dying for some tiny crumbs of chocolate (you have weirdly particular tastes), but all you have is raw ingredients, and you have just one hour tonight to cook. Strangely, you can't find any recipes for chocolate crumbs, but you do find a recipe for chocolate fondue. “Aha!” you think. “I’ll just make fondue, let it cool, and chop it up!” In mathematical parlance, you've **reduced** the problem of making chocolate crumbs to the problem of making fondue. Of course, this is probably not the most efficient recipe. But if making fondue takes 45 minutes, and it would only take you 15 minutes to turn it into crumbs, then you're golden! You know that it won't take *more* than an hour, even if you don't find a cleverer recipe.

Unfortunately, the fondue recipe says it takes two hours, so you search again. You find another recipe, this one for chocolate bars. This recipe takes only 60 minutes – joy! – but then you realize that you’ll have to chop up the bars, adding 10 more minutes. You encounter similar problems with recipes for chocolate bricks, chocolate rabbits, even M&M's. After all this effort, you *still* don't know whether you can actually make your dessert in an hour – after all, maybe there's a faster fondue recipe that no one's found yet. But if you keep failing to find a quick method, you have to wonder whether it's possible at all.

This is precisely the situation with NP-completeness. Theoreticians have spent decades seeking their equivalent of chocolate recipes – algorithmic problems. For many of these problems, they've proven that with a little^{3} slicing and dicing, you could turn any solution for that problem into a solution for TSP. In other words, TSP can be **reduced** to any one of these problems – the second crucial property of NP-complete problems. In fact, every NP-complete problem can be reduced to every other one. It's as though you could convert any chocolate dessert into any other. If we could just find a fast^{4} solution to *any* of the problems – any recipe for something chocolatey that takes less than an hour to make and convert – we could solve all of the problems quickly. Yet every known method for *any* of these problems takes exponentially more time on larger problems. Nobody's proven that there isn't a cleverer, faster method, but given that thousands of people working on hundreds of problems have all failed to find one, few people are holding out hope.

Whether NP-complete problems have quick solutions is a massive open question in computer science. There's even a $1 million prize waiting for the first person who either finds a fast technique or proves that none exists. In the meantime, I'll continue to gawk at my occasionally imperfect but nonetheless impressive shuttle driver. We humans are pretty darn clever sometimes.

^{1} The mathematical definition of “reasonably easy” here is that the solution can be checked in “polynomial time.” In other words, if we're trying to solve a problem with *n* inputs (e.g., *n* stops to hit), the amount of time it takes to check the solution must grow no faster than some fixed power of *n* as *n* grows large.

Computer scientists really care about polynomial time, because even if it's a large polynomial (i.e., a large constant of proportionality or a large power of *n*), that's still generally better for sufficiently large problems than any kind of exponential increase.
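As a toy illustration of that claim (the constant 1,000 and the search cutoff here are arbitrary choices of mine): even a deliberately expensive cubic gets overtaken by a plain exponential surprisingly early.

```python
def poly(n):
    return 1000 * n ** 3   # a polynomial with a large constant out front

def expo(n):
    return 2 ** n          # a modest exponential

# Find the first n where the exponential pulls ahead (it stays ahead after).
crossover = next(n for n in range(1, 100) if expo(n) > poly(n))
```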

^{2} Technically, I just pulled a fast one here. I claimed that you can quickly verify a 15-minute route, and that being able to verify the answer is part of what it takes to be NP-complete, but the original problem was to find the *best* possible route. Checking a solution to *that* is obviously harder – how would you know no shorter route exists? It turns out, though, that for the purposes of NP-completeness, these two problems are formally equivalent: if I had a way of solving either one, I could solve the other, albeit with some significant (but still polynomial) extra computation time. The problems **reduce** to each other; see below.

^{3} i.e., the process of extracting a TSP solution from the solution to the other problem takes at most polynomial time.

^{4} Again, “fast” here means polynomial-time.