## A complexity theorist’s (non)apology

Several respected physicists wrote to me privately to say how disappointed they were that Umesh and I would fight shoddy journalism by making a shoddy claim of our own: namely, that the inability of quantum computers to solve NP-complete problems efficiently is an established fact. I took a lot of flak in the comments section over the same issue.

Ladies and gentlemen of the jury, I will answer the unjust charges being leveled against me and my advisor.

But first, let’s review the facts. As I’ve said in pretty much every introductory talk I’ve ever given, *obviously* we can’t yet hope to prove that NP-complete problems are hard for quantum computers, since we haven’t even proved they’re hard for *classical* computers! (Nor, for that matter, do we have any idea how to prove that *if* they’re hard for classical computers then they’re also hard for quantum computers.) These are some of the most profound open problems in mathematics. Solving them could easily take decades or centuries.

I dare say that Umesh and I know this as well as anyone on Earth. And that’s why, even while trying in the space of a few sentences to correct a breathtaking misconception about the nature of the physical world that was being endlessly repeated to millions of people, we *still* took care in what we said.

Here’s Umesh:

> Most egregious is your assertion that quantum computers can solve NP-complete problems in “one shot” by exploring exponentially many solutions at once. This mistaken view was put to rest in the infancy of quantum computation over a decade ago … For unstructured search problems like the NP-complete problems this means that there is no exponential speed up but rather at most a quadratic speed up.

In the above passage, Umesh is talking about an epochal theorem that he and others *did* manage to prove: namely, that quantum computers could not solve NP-complete problems by any “one-shot” method based on exploring exponentially many solutions in parallel. Throw away the structure of an NP-complete problem — consider it just as an abstract space of 2^{n} solutions — and we *know* that quantum computers will give you at most a quadratic speedup over classical ones.
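To put rough numbers on that quadratic gap, here is a toy back-of-the-envelope calculation (my own illustration, not anything from the BBBV paper itself):

```python
import math

def classical_queries(n_bits):
    # Exhaustive search over 2^n candidate solutions: ~2^n oracle queries.
    return 2 ** n_bits

def grover_queries(n_bits):
    # Grover's algorithm needs roughly (pi/4) * sqrt(2^n) oracle queries --
    # quadratically fewer, but still exponential in n.
    return math.ceil((math.pi / 4) * math.sqrt(2 ** n_bits))

for n in (100, 200, 300):
    print(f"n={n}: classical ~{classical_queries(n):.1e}, Grover ~{grover_queries(n):.1e}")
```

Even with the square root, a 300-variable unstructured instance still costs on the order of 10^45 queries — quadratic speedup, but nothing like “one shot.”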

In the thirteen years since this “BBBV theorem” was proved, two interesting things happened:

- Various experts dismissed the theorem as irrelevant, knocking down a straw man, stacking the deck in favor of its conclusion by imposing an utterly-unjustified “black-box” assumption, etc.
- Hundreds of articles appeared, in both the popular press and the arXiv, that directly contradicted the theorem.

It reminds me of how theologians chide Richard Dawkins for refuting only a crude, anthropomorphic, straw-man god instead of a sophisticated Einsteinian one, and then (with an air of self-satisfaction) go off and pray to the crude god.

To be fair, we do have *one* quantum algorithm for NP-complete problems that falls outside the scope of the BBBV theorem: namely, the adiabatic algorithm of Farhi et al. This algorithm can be seen as a quantum version of simulated annealing. Intriguingly, Farhi, Goldstone, and Gutmann gave examples where simulated annealing gets stuck at local optima, whereas the adiabatic algorithm tunnels through to the global optimum. On the other hand, van Dam, Mosca, and Vazirani gave other examples where the adiabatic algorithm *also* gets stuck at local optima, taking exponential time to reach a global optimum.
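For readers who haven’t seen simulated annealing, here is a minimal classical sketch on a toy one-dimensional landscape with a shallow local well and a deeper global one. Everything about it — the landscape, the linear cooling schedule, the parameters — is my own illustration, not taken from Farhi et al. or van Dam et al.:

```python
import math
import random

def energy(x):
    # Toy landscape: shallow well at x=0 (energy 0), deeper well at x=10
    # (energy -5), separated by a barrier of height ~20 near x=5.
    return min(x * x, (x - 10) ** 2 - 5)

def simulated_annealing(energy, x0, steps=20000, t0=2.0, seed=0):
    rng = random.Random(seed)
    x = x0
    for k in range(steps):
        t = t0 * (1 - k / steps) + 1e-9   # linear cooling schedule
        y = x + rng.choice((-1, 1))       # propose a unit move
        d = energy(y) - energy(x)
        # Accept downhill moves always, uphill moves with Boltzmann probability.
        if d <= 0 or rng.random() < math.exp(-d / t):
            x = y
    return x

# Whether the walk ends near 0 (stuck in the local optimum) or near 10
# (escaped to the global one) depends on the barrier height and schedule.
print(simulated_annealing(energy, x0=0))
```

The point of the examples in the text is exactly this sensitivity: one can cook up barriers where thermal hopping stalls but quantum tunneling does not, and vice versa.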

The upshot is that, if a fast quantum algorithm for NP-complete problems existed, then just like a fast *classical* algorithm, it would have to be radically different from anything that’s yet been imagined. Because of this — not to mention the civilization-changing consequences that such an algorithm would have — Umesh and I feel strongly that claims to solve NP-complete problems should never be bandied about lightly. As with perpetual-motion machines or antigravity shields, the burden of proof lies entirely with the would-be inventor. “In case of fire, break glass.” “In case of algorithm, break skepticism.”

It might be objected that, while the *experts* know that this is what Umesh meant, laypeople could easily misinterpret his words — or in other words, that Umesh has pulled a D-Wave of his own. But here’s the crucial difference. Any motivated reader who wanted the real story behind Umesh’s three-sentence caricature could *find* that story in peer-reviewed articles only a Google search away. But with D-Wave, all they’d have to go on is the PR. Simplifying mathematical subtleties is a right you have to earn, by having the cards in case anyone calls your bluff.

So much for Umesh’s letter. Now let’s look at mine:

> Today it is accepted that quantum computers could not solve NP-complete problems in a reasonable amount of time. Indeed, the view of quantum computers as able to “try all possible solutions in parallel,” and then instantly choose the correct one, is fundamentally mistaken.

Notice I didn’t say it was *proved* that quantum computers can’t solve NP-complete problems in reasonable time: I said it was *accepted*. This, I felt, was a difference few people would have trouble understanding. As an example, if biologists said it was *accepted* that the Loch Ness monster doesn’t exist, presumably no one would interpret that as meaning they’d actually *proved* its nonexistence. Indeed, the interesting difference between the two cases is that someday, it *might* actually be possible to prove the nonexistence of the fast quantum algorithm.

Or are we complexity theorists being too dogmatic? Should we concede to a certain subset of our physicist friends that, until an actual proof has been discovered, we have no basis even to *guess* whether P versus NP or NP versus BQP will go one way or the other way? Should we, in other words, hold ourselves to the same lofty standards of uncompromising mathematical rigor that the physicists themselves have always adhered to?

Oh — pardon me. I had momentarily forgotten that we were talking about the headmasters of handwaving, the sultans of sloppiness, the princes of proof-by-example. Indeed, I think it’s fair to say that if physicists had discovered the P versus NP question, they would have *immediately* declared that P≠NP — and they would have hailed this ‘discovery’ of theirs as another remarkable success for physics as a discipline. And everyone else — from other scientists to programmers to journalists to the general public — would have gone right along with it. The task of *proving* P≠NP would have been left as a technical detail, to be filled in by the mathematical hairsplitters — just like the task of proving quark confinement, or the ergodicity of particles in a box, or the existence of Yang-Mills theory, or the perturbative finiteness of string theory.

Clearly, the issue here can’t be the intelligence of physicists, some of whom actually seem reasonably smart. The issue, rather, is their different standard — much closer to the standard of everyday life — for saying that they know something is true. My favorite example in this vein comes from Leonid Levin, who tells me he couldn’t convince Richard Feynman that P versus NP was an open problem at all.

I believe Feynman was onto something, in that *the only reason P versus NP is called an “open problem”* is that we — the theoretical computer scientists and mathematicians — hold ourselves to a different standard of rigor than any other scientists. Were we less cautious, we could easily talk about the hardness of NP-complete problems as one of our great *discoveries*, a discovery for which working out the mathematical underpinnings admittedly remains a challenge for future generations.

Ironically, our higher standard of rigor often gets *turned against us*, when outsiders use it to argue that we’re just guessing, or building castles in the sky, or making conjectures that could all turn out to be wrong. The same charges could obviously be leveled against the central hypotheses of physics or economics or pretty much any other field, but they rarely are — at least not by the same people.

I’m tired of double standards, is all I’m saying.

Comment #1 February 21st, 2007 at 5:29 pm

“Or are we complexity theorists being too dogmatic? Should we concede to a certain subset of our physicist friends that, until an actual proof has been discovered, we have no basis even to guess whether P versus NP or NP versus BQP will go one way or the other way? Should we, in other words, hold ourselves to the same lofty standards of uncompromising mathematical rigor that the physicists themselves have always adhered to?”

Why not compare yourselves to the highly rigorous mathematics community instead?

What are the requirements for a conjecture to be taken seriously in the mathematics community?

Comment #2 February 21st, 2007 at 5:38 pm

Is it really relevant whether there’s two sets of standards? If physicists wore special badges that allowed them to commandeer jet planes and have the last word in every discussion, would it be right to demand the same treatment? There’s certainly a place for criticism of the discursive standards surrounding physics, economics, etc., but for CS theorists the real question is, what are the right standards for us?

To me it’s clear that we should speak to the public in a way that reflects math’s essential rigor. Part of this is by clearly marking the conjectural status of our conjectures. Given the choice between

“it is accepted that quantum computers could not solve NP-complete problems in a reasonable amount of time” and

“there is no convincing evidence that quantum computers could solve…”,

the latter claim is

- as simple and easy to understand,
- no more subjective,
- equally deflationary of D-Wave hype,
- more faithful to the spirit of the discipline.

Incidentally, I also believe speaking in this way is more likely to increase the respect shown to complexity theorists by the scientific public (or rather, minimize the disrespect). We shouldn’t give the impression that TCSers spend most of their time making up their minds that P != NP while being far from a proof; instead we should emphasize the established results, which have value and interest independently of whether P = NP. But complexity theory should represent itself faithfully, even if it is doomed to mockery.

Scott, your expertise is certainly available both in reviewed articles and in this interactive format, so your simplifications can be queried by interested parties. And yes, you’ve earned the right to them, but that doesn’t put your choice of simplifications beyond criticism.

Comment #3 February 21st, 2007 at 5:48 pm

> If physicists wore special badges that allowed them to commandeer jet planes and have the last word in every discussion, would it be right to demand the same treatment?

What — physicists can commandeer planes?! Not only do I demand exactly the same right, I want to be able to commandeer spaceships too!

Seriously, you raise some good points. Maybe it would’ve indeed been better to use your phrasing. I wasn’t arguing that my choice of words was optimal, only that it was legitimate.

Comment #4 February 21st, 2007 at 5:49 pm

“it is accepted that quantum computers could not solve NP-complete problems in a reasonable amount of time” and

“there is no convincing evidence that quantum computers could solve…”,

Speaking as a CS practitioner with some theoretical background, I think the first of these is more accurate and clearer. It *is* accepted that QCs can’t solve NP-complete problems reasonably. The “no convincing evidence” phrasing is very misleading; it sounds like there is some other convincing evidence out there…

I think that this just shows that this is all nit-picking wordplay.

Comment #5 February 21st, 2007 at 6:30 pm

> Why not compare yourselves to the highly rigorous mathematics community instead?

A physicist, a computer scientist, and a mathematician are all riding on a train which passes several fields of brown cows. So the physicist says, “That settles it, all cows are brown!” The computer scientist replies, “No, what we really know is that these cows are brown, although I sometimes assume that all cows are brown.” Finally the mathematician says, “Correction! What we really know is that these cows are brown on one side. All else is conjecture.”

Comment #6 February 21st, 2007 at 6:42 pm

I think theoretical computer scientists should stress the *possibility* that NP-complete problems can be solved in polynomial time. That way, people will keep trying and failing, and maybe the analogy with perpetual-motion machines won’t seem so fanciful after all…

Comment #7 February 21st, 2007 at 8:18 pm

So are you going to do anything about the tagline at the very top of your blog, then?

BTW, following links around your blog is always an adventure. I went to the “Speaking Truth to Parallelism” category to “Reasons to believe II: quantum edition” to “Ten Reasons To Believe P!=NP,” from which I found the fascinating papers “NP-complete problems and physical reality” as well as the one on the possible independence of P vs. NP. Now I feel very tempted to put off my computability theory homework and read these… the independence of P vs. NP especially has been something that’s been floating around in my mind for some time. Must… resist…! Homework first, learning later! 😛

Comment #8 February 21st, 2007 at 8:27 pm

I would be interested to hear your thoughts on the following question, which seems relevant. How strong is the evidence that NP is not contained in BQP? To me the evidence appears much weaker than the evidence that P != NP. For example, let’s look at your ten reasons for believing P != NP and see how well they apply to NP not contained in BQP.

1. The obvious argument is a lot weaker since quantum algorithms have been under development for far less time. Also one could potentially argue that the counterintuitive nature of quantum mechanics might make it difficult to discover even relatively simple quantum algorithms.

2. There is no empirical argument since quantum computers don’t yet exist.

3. The Bayesian argument that lower bounds are harder than algorithms basically applies, although as far as I know there is no known quantum analogue of the fact that proofs of P not equal to NP must be nonrelativizing.

4. Multiple surprises: It seems much harder to come up with surprising implications of NP contained in BQP than of NP = P, although perhaps I just don’t know enough quantum complexity theory.

5. It doesn’t seem like a hierarchy argument applies since BQP is not known to be contained in NP. (The strongest containment I am aware of is that BQP is contained in P^#P, but I may be slightly out of date.)

6. ehh… I’ll skip this one, since I don’t understand it. (How could there be generic exponential speedups for Turing machines which don’t immediately imply P = NP?)

7. As for known lower bounds, we have the sqrt(N) lower bound for search. However, this just proves that quantum brute force search (i.e. Grover’s algorithm) can’t solve NP-complete problems in polynomial time. This is something, but I think it would be an overstatement to say that this evidence would make all reasonable people accept that NP is not contained in BQP.

8/9. The self-referential argument and philosophical arguments are nullified by the fact that quantum computers do not yet exist. Actually, I somewhat object to these arguments even for P != NP, since P = NP would only make it easy to prove that P = NP if we knew the algorithm for solving NP-complete problems. Similarly, P = NP would only make it as easy to create great works as to recognize them if we knew the algorithm. I suppose one could argue that if P = NP then we should have evolved to exploit this fact, but there are plenty of useful algorithms that our brains are too feeble to effectively run, and it does not seem evolutionarily easy to vastly improve the symbol-crunching capacity of brains.

10. I assume the pessimism argument was meant as a joke and needn’t be addressed.

Comment #9 February 21st, 2007 at 9:36 pm

Speaking of long-standing computational complexity challenges, here’s one that I personally wouldn’t be terribly surprised to find quantum computers—indeed, the usual type of adiabatic quantum computation—solving efficiently but classical computers failing.

My reasons for lack of surprise (or vague optimism, if you prefer) are some vague hand-waving intuitions about physicists’ approaches to quantum spin glasses… but that’s a topic for a different day. (Shameless plug: for example, I’m giving a short talk on related-but-less-ambitious topics at the APS March Meeting on Thursday, March 8 entitled Adiabatic Quantum Computation and the Theory of Quantum Phase Transitions.) Today, I’m much more curious about hearing what all your intuitions would be.

The challenge is from Karp and is at least 30 years old: namely, the average-case complexity of MAX CLIQUE on Erdos-Renyi random graphs, that is, graphs where each pair of vertices has some probability p of being edge-connected. Thanks to Erdos and Renyi it’s known that the max clique of such a graph with n vertices has size 2 log n – O(log log n) with high probability (where the base of the logarithm is 1/p).

It’s also known that there are many, many more “maximal cliques” (i.e., cliques to which you can’t add any more vertices and still have a clique) that have size just log n – O(log log n)—that is, half the size of the max clique. It’s not hard to find such a maximal clique (e.g., the simple greedy algorithm works in linear time). However, there’s no known classical algorithm that can find, in average polynomial time, anything larger than these log n sized maximal cliques, let alone the max clique. More precisely, here’s Karp’s now 30-year-old challenge:

> For any \epsilon > 0, is there an algorithm that can find cliques of size (1 + \epsilon) log n in polynomial time on average over the set of Erdos-Renyi graphs of size n?

So, would any of you be shocked if quantum computers could do this and classical computers could not? On a related note, would any of you think meeting Karp’s challenge would really grant godlike powers of combinatorial optimization to us mere mortals… or are random graphs somehow significantly different from practically interesting instances?
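As a concrete illustration of the “simple greedy algorithm” mentioned above, here is a sketch (the tie-breaking rule and parameters are arbitrary choices of mine, not from Karp’s statement of the problem):

```python
import random

def erdos_renyi(n, p, seed=0):
    # Sample G(n, p): each pair of vertices is joined independently with prob. p.
    rng = random.Random(seed)
    adj = [set() for _ in range(n)]
    for i in range(n):
        for j in range(i + 1, n):
            if rng.random() < p:
                adj[i].add(j)
                adj[j].add(i)
    return adj

def greedy_maximal_clique(adj):
    # Repeatedly add any vertex adjacent to everything chosen so far.
    # The result is a maximal clique, but on G(n, 1/2) it typically has
    # size ~log2(n) -- about half the ~2*log2(n) size of the true max clique.
    clique, candidates = [], set(range(len(adj)))
    while candidates:
        v = min(candidates)   # any tie-breaking rule gives the same size bound
        clique.append(v)
        candidates = {u for u in candidates if u in adj[v]}
    return clique

print(len(greedy_maximal_clique(erdos_renyi(200, 0.5, seed=1))))
```

Karp’s challenge is precisely to beat this by any factor (1 + \epsilon), on average, in polynomial time.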

Comment #10 February 21st, 2007 at 9:45 pm

> 5. It doesn’t seem like a hierarchy argument applies since BQP is not known to be contained in NP. (The strongest containment I am aware of is that BQP is contained in P^#P, but I may be slightly out of date.)

Let me refer you to my complexity zoology pages for a summary of current knowledge. BQP is contained in AWPP, which is vaguely smaller than PP.

> 7. As for known lower bounds, we have the sqrt(N) lower bound for search. However, this just proves that quantum brute force search (i.e. Grover’s algorithm) can’t solve NP-complete problems in polynomial time. This is something, but I think it would be an overstatement to say that this evidence would make all reasonable people accept that NP is not contained in BQP.

I have to say that you are a great devil’s advocate for this question. Up to a point you are right: there really is rather less evidence that BQP does not contain NP. And yet, what I really believe is that there are hundreds of tons of evidence that P != NP, while for NP not contained in BQP, there is merely a ton of evidence. You are making me grope for a good argument.

Part of it is the closure property that you might expect from any realistic complexity class. BQP^BQP, that is, BQP with BQP subroutines, equals BQP. On the other hand, NP^NP is the next rung in the hierarchy and is thought to be much bigger than NP. Although the relation between BQP and NP, whatever it may be, is already known not to relativize, you might expect it to extend to natural oracles that are contained within BQP. If you iterate this argument, it would suggest that all of PH, not just NP, is contained in BQP. Of course this is not a rigorous inference, and I even suspect, to the contrary, that there is a (highly artificial) oracle that puts NP in BQP but still separates PH from BQP. Even so, this picture does make me skeptical that NP is in BQP.

Another remark is that quantum circuits have many of the same combinatorial properties as classical circuits. E.g., the space of all unitary operators on n qubits is far larger than the space of efficient circuits.

Maybe the most relevant remark to the discussion is this: the BBBV lower bound does mean that if BQP contains NP, or even a particular problem in NP, it certainly isn’t for a brute-force reason. Yet all of the popular discussion that conflates BQP with NP *is* at the level of brute-force search, as is the D-Wave demo to all appearances.

I guess you’ve got me. I didn’t see anything wrong with Scott’s subtitle before, but now that you have made me think, what I might say instead is “quantum computing is not free parallelism”. I personally would be shocked if BQP contained NP, but I’m not completely sure why. Somehow I believe that BQP is realistic and NP isn’t.

Comment #11 February 21st, 2007 at 9:52 pm

On the question of “would successfully meeting Karp’s challenge signify mortals obtaining godlike powers of combinatorial optimization,” I was remiss not to make explicit the following fact, which I think is a decent basis for answering “Nope, sorry. Godhead not yet achieved. Please try again.”

Erdos-Renyi graphs have small max cliques. Again, they’re only 2 log n – O(log log n) for Erdos-Renyi graphs with n vertices. So, meeting Karp’s challenge would mark only a superpolynomial speedup over known classical algorithms, not an exponential one. In contrast, there are most definitely graphs whose max cliques have linear size. Indeed, the following restriction of MAX CLIQUE remains NP-complete.

HALF CLIQUE: Given a graph, determine whether there exists a clique containing exactly half of the graph’s vertices.

So, despite the above, does anyone want to take up the counterargument and hold that godlike (or at least demigodlike) powers of combinatorial optimization would accrue to mere mortals by successfully meeting Karp’s challenge?

Comment #12 February 21st, 2007 at 10:37 pm

Hey you people …. let’s get back to the interesting stuff. This topic’s been on for far too long and is getting boring …

Comment #13 February 21st, 2007 at 10:44 pm

Thanks, Greg! The argument regarding BQP^BQP vs NP^NP is very interesting and I’ve not previously encountered it. Thanks also for clarifying how much is based on complexity theory intuition and giving some explanation of the intuition. Such things are not to be dismissed.

Comment #14 February 21st, 2007 at 11:45 pm

So, what about these people? Do they need a good debunking too? I’m counting on you, Scott…

Comment #15 February 22nd, 2007 at 1:44 am

> Quantum computers cannot solve NP-complete problems in polynomial time.

… i think you should change this sentence.

Comment #16 February 22nd, 2007 at 3:59 am

> Homework first, learning later! 😛

LOL

Comment #17 February 22nd, 2007 at 4:03 am

> So, what about these people? Do they need a good debunking too? I’m counting on you, Scott…

Quantum crypto is a much, *much* easier problem than quantum computing — there are already several devices on the market. I don’t know the technical details of this latest announcement — would anyone who does care to comment?

Comment #18 February 22nd, 2007 at 4:10 am

> Quantum computers cannot solve NP-complete problems in polynomial time. … i think you should change this sentence.

No, I’d rather not. *Obviously* it hasn’t been proved — if it had, there would be no reason to adopt it as my tagline!

Let me put it this way: if it’s ever proved that BQP contains NP, I will eat an actual crow, bones, feathers, and all. Then I’ll throw myself into experimental quantum computing work, in the hopes of automating mathematical creativity and thereby becoming a god.

Comment #19 February 22nd, 2007 at 4:15 am

> Hey you people …. let’s get back to the interesting stuff. This topic’s been on for far too long and is getting boring …

Hey, this is only my fourth post about it (admittedly a record, given my usual ADHD-like attention span). Peter Woit’s written 500+ posts about the same topic, and everyone still seems interested … what do other people think?

Comment #20 February 22nd, 2007 at 6:13 am

Good to know about quantum crypto. I always felt that it was less interesting than quantum computing just because it was so “applied,” but now I have a theoretical reason for thinking it inferior as well, heh.

Becoming a god is always a good goal to have. As for the topic, I think it’s still interesting, although perhaps less so than other posts I’ve seen. And regarding the comparison to Woit, his blog isn’t as interesting as yours—just informative, and a nice way to get my semi-daily dose of string skepticism.

Comment #21 February 22nd, 2007 at 6:55 am

“And everyone else — from other scientists to programmers to journalists to the general public — would have gone right along with it.”

Except mathematicians, of course.

Couldn’t resist…….:-)

Comment #22 February 22nd, 2007 at 8:24 am

To be fair, I’m a physicist and I have to fight this fight with people all the time.

Actually, if physics had come across the whole P vs NP issue then we would have declared P≠NP to be a physical principle and dared people to find counterexamples.

Comment #23 February 22nd, 2007 at 9:15 am

[…] Over at Shtetl-Optimized, Scott goes a little ballistic on criticism he’s received over the wording of his and Umesh Vazirani’s letters to the Economist. (As a side note, it is kind of sad to see such a poorly written article in the Economist: I know for a fact that an early Economist article on Shor’s algorithm drew a lot of great people into the field of quantum computing.) Part of Scott’s issue is with “physicists” insisting on rigor when they themselves are “the headmasters of handwaving.” So when Scott says Today it is accepted that quantum computers could not solve NP-complete problems in a reasonable amount of time. […]

Comment #24 February 22nd, 2007 at 9:23 am

> Actually, if physics had come across the whole P vs NP issue then we would have declared P≠NP to be a physical principle

I don’t think that’s right. Consider the central limit theorem, which physicists conjectured was math and mathematicians insisted was physics. I think there are other examples.

Comment #25 February 22nd, 2007 at 9:32 am

Am I nitpicking, or does Umesh’s statement

> For unstructured search problems like the NP-complete problems…

suggest that the PCP theorems relativize?

So PCP shows that there is some minimal structure to even as-unstructured-as-possible search problems, e.g., the NP-complete ones. It does seem very unlikely to me that there are quantum computer algorithms for these problems. On the other hand, I don’t think computer scientists should go around ignoring one of the best results in our field by pretending that NP-complete problems are unstructured.

Comment #26 February 22nd, 2007 at 10:11 am

Though I don’t have much authority over math, I *totally* agree about “earning the right to simplify math subtleties” *and* becoming a god by being able to automate mathematical thinking.

I like this discussion a lot, and no, it’s not boring at all! It is actually giving non-experts an opportunity to see what goes on at the rooftops of science. BTW, this kind of discussion is one of the most attractive themes of your blog.

Comment #27 February 22nd, 2007 at 10:14 am

> It does seem very unlikely to me that there are quantum computer algorithms for these problems.

What’s your best defense of that view? I tried last night and didn’t come up with as much as I wanted.

Comment #28 February 22nd, 2007 at 11:31 am

Stephen: I have a somewhat different perspective about how well my ten Reasons to Believe P≠NP carry over to the case of NP vs. BQP. So let’s go through them again (hopefully this will also answer Greg’s question).

1. The Obvious Argument. It’s true that we’ve spent less time looking for quantum algorithms, but we also had a more sophisticated starting point (standing on the shoulders of decades of classical CS). I’d hold, with Greg, that we “merely” have a ton of evidence here as opposed to hundreds of tons in the classical case.

2. The Empirical Argument. Even though we don’t yet have quantum computers, we can certainly simulate them on small instances. However, the results of such simulations have so far been inconclusive, so I agree that this argument fails.

3. The Bayesian Argument goes through.

4. The Multiple-Surprises Argument. For NP to be in BQP, graph isomorphism would need to be in BQP, *and* breaking one-way functions would need to be BQP-reducible to graph isomorphism, *and* NP would need to be BQP-reducible to breaking one-way functions, *and* NP would need to be in BQP/qpoly, *and* NP would need to be in coQMA… This argument goes through *exactly* as before.

5. The Hierarchy Argument. Sure, we can push this one through. Either NP⊄BQP, or NP≠PP, or PP≠EXP, or EXP≠BQEXP. But if some of these hold, why not all of them?

6. The Known-Algorithms Argument. Let me elaborate for you: suppose we knew that NP⊆DTIME(n^{log n}), or NP=coNP, or AvgP=DistNP. Any of these would make it seem more likely that P=NP, no? The same argument goes through in the quantum case: if we knew that NP⊆BQTIME(n^{log n}) or NP⊂BQP/qpoly or DistNP⊆AvgBQP, any of these would make it seem more likely that NP⊆BQP. But we don’t know them.

7. The Known-Lower-Bounds Argument. Besides the BBBV lower bound for search, we also have the exponential lower bound for the adiabatic algorithm. But I agree that we still have fewer lower bounds even than in the classical case — that means we’ve got our work cut out for us!

(Possibly dumb question: is there a natural quantum analogue of, say, DPLL with caching and arbitrary variable ordering, and can we prove an exponential lower bound analogous to the known lower bounds in the classical case? Of course, if it just reduces to the BBBV lower bound then it’s not interesting.)

8. The Self-Referential Argument. Alright, I won’t push this one, since we don’t yet have quantum computers.

9. The Philosophical Argument. I completely disagree with you that *this* one is nullified by our not having quantum computers. The question is not whether we *are* gods already — it’s whether we live in the kind of universe where we *could* be gods.

(Incidentally, regarding “what if P=NP but we don’t know the algorithm?”: just dovetail over all possible algorithms!)

10. The Utilitarian Argument. Serious or not, this one certainly goes through in the quantum case!
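The dovetailing trick in point 9’s parenthetical can be made concrete. Here is a sketch, with toy step-generators standing in for an enumeration of all programs (the task names and parameters are mine, purely for illustration):

```python
from itertools import count

def dovetail(tasks):
    # At stage s: start task s (if any remain), then give every live task
    # one more step. Every task eventually gets unboundedly many steps, so
    # if *some* task finds an answer, dovetailing finds it too -- with only
    # polynomial (roughly quadratic) overhead over running that task alone.
    tasks = iter(tasks)
    active, exhausted = [], False
    for stage in count():
        if not exhausted:
            try:
                active.append(iter(next(tasks)))
            except StopIteration:
                exhausted = True
        for t in list(active):
            try:
                result = next(t)
            except StopIteration:
                active.remove(t)
                continue
            if result is not None:
                return result
        if exhausted and not active:
            return None  # every task halted without finding anything

def candidate(i):
    # Toy "program i": yields None while still working; program 5 happens
    # to find the answer on its fourth step.
    for step in range(10):
        yield "found" if (i == 5 and step == 3) else None

print(dovetail(candidate(i) for i in range(20)))  # -> found
```

This is why “P=NP but nobody knows the algorithm” is a slightly unstable state of affairs: checking NP certificates is easy, so dovetailing over all programs yields a working (if absurdly impractical) polynomial-time algorithm whenever one exists at all.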

Comment #29 February 22nd, 2007 at 11:36 am

My critique of Scott’s and Umesh’s letters was about their unreasonable bashing of the journalist (especially Scott’s) and the overall irrelevance of this interesting and even important issue (“can quantum computers solve NP-complete problems?”) to the evaluation of the company D-Wave.

What should we believe? What are the clear insights (even if unproven) of complexity theory? It is not bad to have some collective beliefs about matters we cannot prove or sometimes cannot even formulate.

Here are some common beliefs about complexity and computation. Any important additions? objections?

1) NP ≠ P

Scott mentioned many reasons to believe it.

2) Traveling salesman requires an exponential amount of time in the worst case

This is an even stronger conjecture, but still highly believable.

3) NP-complete problems are infeasible

There are two levels to this belief. The first is strictly classical: it is the belief that the asymptotic paradigm, the polynomial/non-polynomial distinction, and the worst-case paradigm are all justified. This is not a mathematical belief but a belief about the interpretation of mathematical statements.

The second level involves quantum computation. If you believe QCs are feasible, then belief #3 is about the power of quantum computers, or of any physical device; so it is related to the topic of this post. Scott advocates 3) as a sort of “law of nature,” which is a cute idea.

4) Proving NP ≠ P is hard (in some formal sense; or even infeasible)

This is also an emerging insight of complexity theory. There is some indication that NP ≠ P cannot be proved in certain restricted proof systems and maybe, people speculate, cannot be proved at all.

A major difficulty is that proving that proving NP ≠ P is hard looks as hard as (or even a little harder than) proving NP ≠ P itself.

Of course, there are (as always, since the problem was stated) a few people who try to prove NP ≠ P. (And this is very good.)

5) Factoring is not in P (or even: factoring requires exponential time)

6) BQP ≠ BPP

7) Factoring is not BQP-complete

8) BQP does not contain NP-complete problems.

This is the mathematical conjecture Scott so strongly advocates. We do not have remotely as much evidence for this as for NP ≠ P, and naively it looks as if QC gives much elbow room for attacking NP-complete problems. As Scott wrote, there is some evidence that this is impossible. Of course, part of Scott’s firm belief (and his reasons for believing) that BQP does not contain NP-complete problems rests on his firm belief that QCs (running BQP) are feasible.

9) Graph Isomorphism is not in P

10) Graph isomorphism is not in BQP

Of course, if GI is in P then GI is also in BQP. The strange thing is that some facts point in the direction of GI being in P (in practice it looks like an easy problem), while other facts hint that GI is not in BQP. A decade ago it looked as if GI might well be in BQP, since what is needed seems similar to what is required for factoring.

11) Graph isomorphism is not in CO-NP

12) Graph isomorphism is not NP-complete.

Unlike 9)–11), this one is a firm insight: if 12) fails, this will damage the complexity hierarchy (the polynomial hierarchy would collapse).

13) RP = P

Randomization looks very powerful, and yet this emerges as a strong prediction of complexity theory. The point is that pseudo-random generators are also powerful.

If indeed RP = P, what would be another way to express the power of randomization?

14) Computationally superior quantum computers are feasible

(Some people cautiously add “in principle,” but this makes the issue rather vague.)

15) It is infeasible to approximate MAX-CUT with a constant better than the one (0.878..) achieved by Goemans and Williamson.

This is a new insight. Previously, a lot of people tried to improve the constant. We know now that doing so is “unique-game”-hard. [So collective beliefs change occasionally.]

16) “Natural” optimization problems are feasible.

The rationale is this: take some optimization problem that is routinely carried out in nature (say, protein folding). If nature can do it, it must be feasible. If it looks NP-complete to us, this is because there are salient properties of the problem that we do not know.

Personally, I believe 1, 2, and 3; I tend to believe 5, 6, 8, 12, and 13; I tend to disbelieve 14; and I am quite undecided on the rest.

The nice thing about all this is that people can have uncertainties and different beliefs. When you actually work on a problem, your beliefs about the outcome have a limited effect on how the problem is actually attacked and studied.

Comment #30 February 22nd, 2007 at 11:51 am

Gil, I strongly disbelieve this:

11) Graph isomorphism is not in CO-NP

Under a plausible circuit lower bound assumption, Klivans and van Melkebeek have shown that coNP = coAM, in which case certainly GI is in coNP.

Indeed, I’m even undecided about this:

9) Graph Isomorphism is not in P

The fact that the problem seems easy in practice suggests that we simply don’t know enough about graphs yet.

What we have evidence for is that HSP over the symmetric group is not in BQP — but to me, that’s not at all the same as saying that GI is not in BQP.
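The observation that GI “seems easy in practice” can be illustrated with the classic color-refinement heuristic (1-dimensional Weisfeiler-Leman). This is only a sketch, not a full GI algorithm: it distinguishes most non-isomorphic graph pairs encountered in practice, but famously fails on, e.g., pairs of regular graphs. The graph names and encoding below are my own illustration:

```python
from collections import Counter

def refine(adj):
    # Color refinement: repeatedly recolor each vertex by its old color
    # plus the multiset of its neighbors' colors, until stable.
    # Returns the histogram of stable colors.
    n = len(adj)
    colors = [0] * n
    for _ in range(n):  # stabilizes within n rounds
        sig = [(colors[v], tuple(sorted(colors[u] for u in adj[v])))
               for v in range(n)]
        palette = {s: i for i, s in enumerate(sorted(set(sig)))}
        new = [palette[s] for s in sig]
        if new == colors:
            break
        colors = new
    return Counter(colors)

def maybe_isomorphic(adj1, adj2):
    # Equal histograms are NECESSARY, not sufficient, for isomorphism:
    # this cannot distinguish, e.g., non-isomorphic regular graphs
    # of the same degree and size.
    return refine(adj1) == refine(adj2)

triangle = [[1, 2], [0, 2], [0, 1]]          # K3
path3    = [[1], [0, 2], [1]]                # path on 3 vertices
cycle4a  = [[1, 3], [0, 2], [1, 3], [0, 2]]  # C4
cycle4b  = [[2, 3], [2, 3], [0, 1], [0, 1]]  # C4, relabeled

print(maybe_isomorphic(triangle, path3))   # False
print(maybe_isomorphic(cycle4a, cycle4b))  # True
```

That such a cheap local procedure already settles most instances is part of why GI “practically looks an easy problem,” even while its worst-case complexity remains open.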

Comment #31 February 22nd, 2007 at 12:07 pm

“Gil, I strongly disbelieve this:

11) Graph isomorphism is not in CO-NP”

Right, I forgot about their results. I think 11) was a reasonable item before KvM and now, I’d just delete 11 and its negation from the list.

“Indeed, I’m even undecided about this:

9) Graph Isomorphism is not in P”

As I said, I am also undecided on all matters regarding GI except the insight that it is not NP-complete. I think quantum computation changes, a little, the perception of GI, as it may now look similar to but more difficult than factoring. But sure, trying to prove that GI is in BQP, or even that GI is in P, is a very reasonable (if ambitious) research task.

Comment #32 February 22nd, 2007 at 2:02 pm

Highly inaccurate, no more than 300 of my 500+ posts refer to the problems with string theory! I guess your critics are right that you can’t be relied upon to meet the standards of scrupulous accuracy and lack of exaggeration that physicists are so known for….

Comment #33 February 22nd, 2007 at 2:15 pm

Let’s go back 20 years. Suppose you had asked a computer scientist in 1987 what the chances were that

a) PSPACE-complete problems had interactive proofs.

b) NP-complete problems had probabilistically checkable proofs.

c) prime factorization could be accomplished efficiently using a quantum mechanical physics experiment, but not using a digital computer.

I suspect all three of these developments would have been completely contrary to the intuition of most computer scientists; certainly (a) and (b) both greatly surprised me. I’ll let somebody else decide which of Scott’s arguments would have applied to these cases, but certainly a lot of them would.

So why do I believe that quantum computers can’t solve NP-complete problems? What’s the difference between this statement and (a), (b), and (c) above? The only answer I can come up with is general pessimism, i.e., although miracles do happen, they are much more likely not to happen. This isn’t a very good argument.

Comment #34 February 22nd, 2007 at 2:34 pm

Highly inaccurate, no more than 300 of my 500+ posts refer to the problems with string theory!

Sorry, Peter — I stand corrected. (Usually I don’t even notice constant factors less than 2 until they’re brought to my attention…)

Comment #35 February 22nd, 2007 at 2:42 pm

Is there any evidence that Factoring is not complete for BQP?

Also, I think that among (a), (b), and (c) of Peter’s points, (c) is far more surprising than (a) or (b). PCPs lead us to confirm our intuitions rather than revise them (namely, that even approximating NP-complete problems is often hard, in a very precise sense); apart from some applications in cryptography, they seem to have no strong algorithmic consequences. On the other hand, (c) shows “feasibility” of a problem that everybody believed to be hard, a hardness underlying a range of cryptographic protocols. It is only to be expected that a layman is more aware of QC, which might cause us to revise our intuitions about hardness, than of PCPs.

Comment #36 February 22nd, 2007 at 4:23 pm

Is there any evidence that Factoring is not complete for BQP?

First of all, there is no clear notion of completeness for BQP. A probabilistic algorithm may be universal for quantum computing, but that is not the same thing because it will include probabilities that are not bounded away from 1/2. For example, probabilities derived from certain values of the Jones polynomial, with a certain artificial normalization, are universal in this sense according to Freedman, Kitaev, Larsen, and Wang.

Because of the somewhat perverse behavior of the “B” in BQP, I really doubt that any entirely deterministic problem could be BQP-complete. Let’s instead consider classes whose inputs and outputs are probabilistic or quantum states, call them ProbP and QuantP. It seems conceivable that some processes are QuantP-complete with respect to ProbP-reductions, but I would still be a bit surprised if such a construction exists.

Comment #37 February 22nd, 2007 at 5:31 pm

Well, Greg, for sure there’s a clear (and natural) notion of completeness; it’s just unclear if there are any useful problems that are complete under this notion.

I think “Factoring is complete for BQP” is something that is intuitively false (unless BQP = P), but it’s hard to say if there’s any evidence for it. We shouldn’t be afraid of our intuitions (how else to guess at the truth?); the burden of proof is on those who challenge these intuitions.

Comment #38 February 22nd, 2007 at 6:51 pm

Peter (Shor):

The only answer I can come up with is general pessimism, i.e., although miracles do happen, they are much more likely not to happen.

Yes, that’s my argument 10!

IP=PSPACE, NP=PCP(log n), and factoring in BQP were, of course, three of the greatest surprises in the history of our field. On the other hand, while I was only six years old in 1987, I feel that if I had been able to understand the questions then, I would’ve considered NP ⊆ BQP a vastly bigger surprise than all of them combined.

Comment #39 February 22nd, 2007 at 6:58 pm

Well, Greg, for sure there’s a clear (and natural) notion of completeness, it’s just unclear if there are any useful problems that are complete under this notion.

The semantic definition of BQP means that there may well not be any BQP-complete problem with respect to P reductions, useful or otherwise. The Complexity Zoo says that there are oracles relative to which BPP does not have complete problems. I would suppose that the same is true of BQP, although I don’t know much about it.

Comment #40 February 22nd, 2007 at 7:01 pm

Do you believe that solving Pell’s equation and approximating the Jones polynomial are reducible to factoring?

If they aren’t, then factoring isn’t BQP-complete.

Perhaps another point is that, relative to an oracle, the period-finding problem certainly isn’t BQP-complete.

One last observation: it’s easy to make up BQP problems that don’t seem like they would reduce to factoring under Karp reductions. For example: given k factoring instances, output the XOR of the second-to-leftmost bits of the factors.

Comment #41 February 22nd, 2007 at 7:05 pm

Sure, BQP is neither known nor believed to have complete problems, but that only refers to non-promise problems (i.e. languages).

By definition, there’s a complete promise problem for BQP: given such-and-such a quantum circuit, does it accept with probability ≤1/3 or ≥2/3, promised that one of these is the case?

By now, many other BQP-complete promise problems have been discovered, some of them having little to do with quantum on their face.

Comment #42 February 22nd, 2007 at 7:05 pm

I would’ve considered NP ⊆ BQP to be a vastly bigger surprise than all of them combined.

Certainly a big part of it in my case is the feeling that BQP is realistic, while NP isn’t. I have had a long e-mail discussion with Gil that touched on the question of whether and how belief matters in research. (It’s a discussion that could be rather different among mathematicians than among scientists.) In my view, P vs NP is a great example of a problem where personal belief could be very important, in the sense that you may not learn much about the problem if you believe that they are equal.

If you believe that BQP is realistic and NP isn’t, then you have a set of non-mathematical beliefs with a purely mathematical corollary!

Comment #43 February 22nd, 2007 at 7:10 pm

Sure, BQP is neither known nor believed to have complete problems, but that only refers to non-promise problems (i.e. languages).

In any case, factoring is not a promise problem.

By definition, there’s a complete promise problem for BQP: given such-and-such a quantum circuit, does it accept with probability ≤1/3 or ≥2/3, promised that one of these is the case?

Yeah, duh, it means that I was talking nonsense in part of my answer. The same construction shows a QuantP problem that is complete with respect to ProbP reductions, indeed ordinary P reductions.

Comment #44 February 22nd, 2007 at 8:34 pm

“Sure, BQP is neither known nor believed to have complete problems”

And what grounds are there for this belief?

An oracle certainly isn’t acceptable as a basis for belief – there’s an oracle relative to which BPP doesn’t have complete problems, but we believe that it does (under natural circuit lower bound assumptions).

Comment #45 February 22nd, 2007 at 8:47 pm

Cheshire Cat: I certainly don’t believe in the nonexistence of BQP-complete problems anywhere near as strongly as I believe many other things in complexity. Ultimately, the argument boils down to (paraphrasing Peter Shor) “miracles don’t happen unless they do.”

Currently, the only semantic classes for which we have a proof or positive evidence of complete problems — IP, AM, MA, BPP, PostBQP, etc. — are the ones for which we have a proof or evidence that they coincide with a syntactic class. For other semantic classes — TFNP, SZK, BQP — it’s hard to imagine what syntactic class they could coincide with.

Comment #46 February 22nd, 2007 at 11:33 pm

Concerning BQP-completeness, there may be some subtle complexity issues that I am not aware of, but it still looks as if there are such notions.

There are some remarkable new types of quantum algorithms related to Potts models, the Tutte polynomial, etc., and universality results which (again, modulo some complexity-theoretic subtleties) assert that these problems are as hard as possible for BQP (or “quantum universal,” or, in a sense, BQP-complete). Relevant papers are by Freedman, Kitaev, and Wang; Freedman, Larsen, and Wang; Aharonov, Jones, and Landau; Aharonov and Arad; and the most recent quant-ph/0702008 by Aharonov, Arad, Eban, and Landau.

I think that the present collective insight/belief (item 7 on my list) is that factoring is not “quantum universal” or “BQP-complete.”

Comment #47 February 23rd, 2007 at 12:05 am

I agree with Peter Shor’s point regarding the surprising and counter-intuitive developments he listed. As a matter of fact, it is not only that the three results Peter listed ran contrary to people’s intuition; the concepts themselves (interactive proofs, PCP proofs, quantum computers) came as complete surprises.

The concepts and results Peter mentioned, along with, of course, the early notions and results concerning P/NP etc., show the definite nature of mathematical concepts and mathematical results.

Finding the correct interpretation is also, of course, important. Here is an example. In the late 70’s Khachiyan analyzed the ellipsoid method and proved that linear programming is in P. Since the ellipsoid method is quite bad in practice, the common belief in the mathematical programming community was that there is some sort of extreme tradeoff between the practicality of an LP algorithm (the simplex method was the only practical algorithm) and good worst-case behavior. I am not aware of people who regarded Khachiyan’s result as an invitation to find new practical LP algorithms. The situation changed dramatically with Karmarkar’s algorithm in the mid 80’s.

Comment #48 February 23rd, 2007 at 12:07 am

Peter, the issue you raise is really interesting, and deserves a response. The issue is this: given that we know how to prove several non-relativizing results, IP = PSPACE, MIP = NEXP, NP = PCP, etc., using some of the most sophisticated techniques in complexity theory, why should we take seriously the black-box lower bounds for search, which show that relative to a random oracle NP is not contained in BQP? The way I think about this question is based on my (unpublished) paper with Sanjeev Arora and Russell Impagliazzo (the paper was presented at the 1993 Structure in Complexity Theory conference and is not widely known outside the attendees of the conference). What that paper showed is that all the non-relativizing results mentioned above rely upon a single non-relativizing technique, one that was implicit in the Cook-Levin theorem from 1971. So rather than several sophisticated non-relativizing techniques, we have just one simply stated technique. Does this technique apply to BQP versus NP? The problem is that we have no idea how to even start applying it to complexity classes such as P or BQP which, unlike IP or PCP, are not defined in terms of “proofs”.

Let me elaborate: the Cook-Levin theorem is usually stated as saying that SAT is NP-complete. If we restate it to assert what we can call the principle of local checkability (any language which has a proof of membership that can be checked in polynomial time also has a proof that can be locally checked with far smaller computational resources, e.g. log space), then the theorem does not relativize. The point is that the proof of the Cook-Levin theorem is one of the few proofs in complexity theory that actually looks inside the Turing machine, as opposed to using simulation and diagonalization arguments. And this is exactly the principle that lies at the heart of the proofs that IP = PSPACE, MIP = NEXP, NP = PCP, etc. In the paper, we also showed that if we restrict our attention to oracles that satisfy the principle of local checkability (which we have a right to, since we can actually prove this principle), then if P is not equal to NP relative to such an oracle then P ≠ NP (in real life), and if P is equal to NP relative to such an oracle then NP = coNP. So in principle, we have had at our disposal, since 1971, a technique that could get us past the relativization barrier to resolving P versus NP. So why haven’t we resolved P versus NP? Unfortunately, we have no clue how to use this technique in the context of P, even though it has been successfully used to establish several non-relativizing results about complexity classes defined in terms of proofs, such as IP, PCP, etc. What about NP versus BQP? Once again, BQP is not a “proofs” complexity class, and it is difficult to see how to apply these techniques.

Comment #49 February 23rd, 2007 at 5:00 am

Umesh, the link to that paper on your homepage does not work.

Comment #50 February 23rd, 2007 at 6:31 am

About BQP-complete problems: the natural notion of completeness for a probabilistic class (BQP, QMA, and, I believe, BPP and AM) leads to approximation problems. So for BQP, approximating the value of the Jones polynomial is BQP-complete (and it is widely unappreciated that specifying exactly how good an approximation you need is crucial). So for the accepted definition of completeness, which allows only deterministic (language) problems, you probably don’t get complete problems for these classes. (Probably, because it’s quite possible that P = BPP or some similar unexpected thing happens.)

Now, depending on whether I put on my mathematician’s hat (which says that definitions are set in stone) or my computer scientist’s or physicist’s hat (which says that definitions can be wrong, and in that case they should be changed), we should either change the definition of completeness or define a new kind of completeness, so that we can have complete problems for these classes.

Comment #51 February 23rd, 2007 at 6:52 am

People have long been using phrases like “BQP-complete promise problem,” or “QMA-complete as a promise problem.” Personally, I’ve long felt that the definition of completeness should simply expand to include promise problems — this would certainly be the right choice for quantum computing.

Ironically, the quantum computing arch-skeptic Oded Goldreich reached the same conclusion about promise problems from a different direction: see this survey article.

Comment #52 February 23rd, 2007 at 7:17 am

we should either change the definition of completeness or define a new kind of completeness so that we can have complete problems for the classes.

Scott’s solution to this definitional mess is to consider promise problems instead. But that formalism somewhat bothers me.

My solution is to change the complexity classes themselves. All of this nuisance could be avoided if you could remove the “B” from BQP, and instead consider complexity classes that process probability distributions or quantum states instead of bit strings. It might have been natural to call the result “QP”, except that it leads to a collision of notation in the case of “PP”, which has already been defined as a majority-vote class instead of as a probability-distribution-valued class.

So let me drive the point home with lowercase prefix letters. Define pP to be the class of functions from bit strings to real numbers in the interval [0,1], interpreted as indefinite states on the two-element set {0,1}, which can be realized by a uniform family of stochastic circuits. Likewise define qP the same way, using quantum circuits and a final measurement of the output qubit instead of stochastic circuits.

Then, trivial theorem: Quantum circuit evaluation is qP-complete relative to pP.
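To make the flavor of pP concrete, here is a toy sketch. The single-wire gate encoding is hypothetical, chosen for brevity (real stochastic circuits would have multi-wire gates); the point it illustrates is that the “answer” such a circuit computes is a number in [0,1], not an accept/reject bit bounded away from 1/2:

```python
def run_stochastic(circuit, n):
    # Exactly track the output distribution of a width-n stochastic
    # circuit applied to the input 0^n.  Each gate is a pair
    # (target_wire, transition), where transition[b] is a dict giving
    # the probability of each new value of that wire given old value b.
    # The return value is Pr[wire 0 ends up 1] -- a fuzzy [0,1]-valued
    # output in the spirit of pP, rather than a boolean decision.
    dist = {(0,) * n: 1.0}
    for target, transition in circuit:
        new = {}
        for state, p in dist.items():
            for bit, q in transition[state[target]].items():
                s = list(state)
                s[target] = bit
                s = tuple(s)
                new[s] = new.get(s, 0.0) + p * q
        dist = new
    return sum(p for state, p in dist.items() if state[0] == 1)

coin = {0: {0: 0.5, 1: 0.5}, 1: {0: 0.5, 1: 0.5}}  # fair coin flip
flip = {0: {1: 1.0}, 1: {0: 1.0}}                  # deterministic NOT

print(run_stochastic([(0, coin)], 1))  # 0.5
print(run_stochastic([(0, flip)], 1))  # 1.0
```

Note that the coin circuit outputs exactly 1/2, which no bounded-error class can even express as an answer; in the fuzzy-valued view, that is a perfectly good element of pP.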

pP has an important functional counterpart which is defined as a class of functions from bit strings to probability distributions on bit strings. Meanwhile qP has several variations which may or may not merit separate names. In a fully quantum situation, qP should be defined as a class of TPCP maps from the Hilbert space of bit strings to a qubit; or in the functional case, TPCP maps from bit string space to bit string space.

Now, as Peter alluded, I have left out an important detail in the definitions of these classes. Namely, all of the classes should be extended with respect to approximability. In other words, if a function from bit strings to [0,1] is approximable by stochastic circuits, rather than directly evaluable, then it should still be made an element of pP. This part of the definition is analogous to the already-accepted precept that a unitary is considered quantum-computable if it can be approximated by quantum circuits.

Comment #53 February 23rd, 2007 at 7:19 am

I’ve long felt that the definition of completeness should simply expand to include promise problems — this would certainly be the right choice for quantum computing.

Okay, Scott, before you become completely wedded to that conclusion, could you tell me what is wrong with my discussion concerning pP and qP? I thought that it was accepted that promise classes are anomalously large in some respects.

Comment #54 February 23rd, 2007 at 7:30 am

Greg: Yeah, I’ve long thought that we should talk more about distribution- and quantum-state-valued classes, but I never considered them a replacement for ordinary classes. How would one express what today is called BQP in your formalism?

As I’m sure you realize, if the change from existing notation is too drastic, then it has no chance of being adopted.

Comment #55 February 23rd, 2007 at 7:32 am

Let me add that this proposed formalism also cleans up a mess that arises with both advice and Merlins. Everyone knows that MA is technically different from ExistsBPP, and there is a similar distinction between BPP with Karp-Lipton advice and BPP with feasible advice. None of these distinctions arise with pP. If you add the natural existential operator to pP — Merlin chooses a witness to maximize the output in [0,1] — then you get a class, for the moment I’ll call it pNP, whose bounded-error reduction is MA. Also there is only one pP/poly, and it is the right thing.

Again, I would be eager to hear opinions as to whether anything is wrong with these definitions.

Comment #56 February 23rd, 2007 at 7:37 am

How would one express what today is called BQP in your formalism?

Sort of as I already explained. There is a class qP, and there is a “B” operator from fuzzy classes to boolean classes. B.qP = BQP and B.pP = BPP.

As I’m sure you realize, if the change from existing notation is too drastic, then it has no chance of being adopted.

So let me say that what I care about is the correct definitions. I’ll let everyone else worry about how to change the notation, although I think that my notation is not so bad. I believe that the notion of fuzzy classes should make everyone happy. And I believe that they are better than promise classes, which I am told are too large in some respects and still too small in others.

Comment #57 February 23rd, 2007 at 7:49 am

I’ve never thought promise classes were too large or too small — what exactly does that mean? There are some subtle issues with complete promise problems (which Goldreich discusses in his survey), but nothing too serious.

Comment #58 February 23rd, 2007 at 8:27 am

I’ve never thought promise classes were too large or too small — what exactly does that mean?

I don’t quite remember the warning concerning promise problems being too large, but it may have been this statement from Goldreich’s book: “There exist promise problems in NP ∩ coNP such that every set in NP can be Cook-reduced to them.”

As for promise classes being too small, I would point out that promise classes are a completely inadequate representation of the functional versions of randomized and quantum computing. Functional quantum computing is obviously a horse of a different color. But even with randomized computing, promise classes may address the fact that you don’t care about certain problem inputs, but they do nothing about the fact that an output distribution that may not satisfy any bounded-error condition can still be useful for a bounded-error end-purpose.

Comment #59 February 23rd, 2007 at 8:51 am

Scott, let’s say tomorrow someone proved that BQP contained NP. Would you then change your assessment of the probability of quantum computers being possible?

Comment #60 February 23rd, 2007 at 9:04 am

Scott, let’s say tomorrow someone proved that BQP contained NP. Would you then change your assessment of the probability of quantum computers being possible?

I personally would wonder about the feasibility of BQP. It would still seem possible that quantum mechanics has some feasible computational value, but BQP could be an overstatement.

Nonetheless, so far closure properties point to nothing less than BQP.

Comment #61 February 23rd, 2007 at 9:59 am

Walt: Yes, after I’d digested the crow, I’d definitely wonder more about a breakdown of quantum mechanics. I’d also look closely at the proof of NP ⊆ BQP, to see if it offered any hints about why BQP might not be feasible.

Comment #62 February 23rd, 2007 at 10:25 am

I have a philosophical question. Why should NP ⊆ BQP make any difference as to whether quantum computation is feasible? A proof that BQP contains NP has absolutely no impact on the feasibility of the physics or engineering of quantum computers. If you had a proof that NP ⊆ BPP, would you begin to doubt the existence of randomness? And please don’t feel compelled to answer this if you don’t want the discussion to drift too far afield into philosophy. (Or is that what blogs are for?)

On another topic. We can get both promise and approximation problems which are complete for randomized classes, and there are apparently, according to Goldreich, some tricky aspects to defining promise problems right. At least, if the statement of Goldreich that Greg quotes is correct, my intuition would be that promise problems are not currently defined properly with respect to completeness. I’m going to have to look at his paper. Somebody should investigate these issues and find the right way, if there is one, to define promise problems. Somebody should also figure out if there is a difference between promise problems (which I’m somewhat suspicious of) and approximation problems (which I trust more).

This is the kind of foundational issue that mathematicians are better at than computer scientists, and which occasionally turns around and bites the uncautious.

Finally, I’d like to thank Umesh for his really interesting comment.

Comment #63 February 23rd, 2007 at 10:29 am

Yes, after I’d digested the crow, I’d definitely wonder more about a breakdown of quantum mechanics.

I realize that this is just semantics, but a better word than “breakdown” is “censorship”. In GR, the censorship principle (which may not always be true, I’m not sure) says that singularities are surrounded by event horizons. It’s not really a breakdown of the rules, it’s a limitation within the rules.

Whether or not NP⊆BQP, it is conceivable that the hoped-for computational power of quantum mechanics is censored by some unknown law of thermodynamics. I rather doubt it — the skeptics have had no luck finding such a principle — but it is an important question. Of course in practice there is such a principle at the present time: Humans are just too clumsy to make quantum computers.

Comment #64 February 23rd, 2007 at 10:39 am

If you had a proof that NP ⊆ BPP, would you begin to doubt the existence of randomness?

I might be suspicious of the mathematical definition of randomized computation, or maybe the definition of NP.

I’m going to have to look at his paper.

Textbook, rather.

I think that we are on the same page with regard to approximation problems. Let me just note that we are talking about something different from approximation in the sense of numerical analysis, which is what the word means in a traditional class such as FPTAS. We mean approximation in the sense of trace distance between Markov maps, or quantum operations. This is both stricter than FPTAS because of linearity and positivity, and broader because of exponential expansion. Of course you know this as well as anybody; I am just noting that there is some overloading of ideas here. The part that I consider most fundamental is not the approximation axiom, although of course that is necessary, but rather passing to complexity of states and maps, as Scott also advocates.
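To pin down one candidate reading of this approximation notion (my own assumption; another natural choice would be the diamond norm), take the worst-case trace distance between the outputs of two maps:

```latex
d(\mathcal{E},\mathcal{F}) \;=\; \max_{\rho}\, \tfrac{1}{2}\,
  \bigl\| \mathcal{E}(\rho) - \mathcal{F}(\rho) \bigr\|_{1} .
```

For classical (Markov) maps this specializes to the maximum total-variation distance between output distributions, which is why the same definition covers both the stochastic and the quantum case.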

Comment #65 February 23rd, 2007 at 10:57 am

Peter: Philosophical questions are par for the course here!

Why should NP ⊆ BQP make any difference as to whether quantum computation is feasible? A proof that BQP contains NP has absolutely no impact on the feasibility of the physics or engineering of quantum computers.

Here’s an analogy. Let’s say you were examining a proposed engine that would operate at 95% efficiency. You studied all the gears, shafts, etc., and convinced yourself that it was just within the realm of engineering possibility. Now suppose someone proved that actually the engine could be cleverly set up to operate at 300% efficiency. Strictly speaking, this result has nothing to do with your feasibility analysis. But wouldn’t you now be more suspicious that your analysis had overlooked something?

Comment #66 February 23rd, 2007 at 11:01 am

I’m going to have to look at his paper.

Textbook, rather.

Goldreich makes all the relevant points in this survey paper, which is available for free on the web.

For NP∩coNP, I’m convinced that there’s a useful distinction to be maintained between languages and promise problems. I’m not quite convinced yet for BQP or QMA.

Comment #67 February 23rd, 2007 at 11:26 am

Goldreich makes all the relevant points in this survey paper

Oh, okay. My reference is this, section 2.4.1.

For NP∩coNP, I’m convinced that there’s a useful distinction to be maintained between languages and promise problems. I’m not quite convinced yet for BQP or QMA.

Maybe. In the meantime, I think that my definitions of pP and qP are okay, except that I should have referred to stochastic or quantum Turing machines (so that you can have an input state that mixes input lengths), and except that I didn’t define the approximation equivalence.

Actually in the functional case, there are two notions of equivalence that may not be the same. The coarser equivalence, computational equivalence, actually has a shorter definition in existing notation. As I said, functional pP is approximately a certain class of Markov maps from bit strings to bit strings, namely those computable by a stochastic Turing machine in polynomial time. Two “languages”, i.e., Markov maps, are equivalent if a BPP machine cannot distinguish them with a polynomial number of oracle calls. Likewise for functional qP, which is a class of quantum operations on the Hilbert space of bit strings. qP has two versions, because we may or may not cut down the input and output from quantum states to classical states. In the stricter version, the equivalence is again indistinguishability by a BPP machine. In the looser version, it’s indistinguishability by a BQP machine that can make quantum oracle calls.

In the finer equivalence, you should replace the BPP or BQP verifier by an infinitely intelligent verifier, which however is restricted to polynomially many, polynomially bounded queries, and is required to produce a bounded-error report.
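As a toy illustration of the coarser, sampling-based notion of equivalence (all names and parameters below are invented for the sketch, not part of the definitions above): a bounded-error distinguisher makes polynomially many oracle calls to two stochastic maps and compares their empirical output statistics.

```python
import random

def stochastic_map(p_one):
    """Toy 'Markov map' on single bits: on input x, output 1 with
    probability p_one[x], else output 0."""
    def f(x, rng):
        return 1 if rng.random() < p_one[x] else 0
    return f

def distinguishable(map_a, map_b, queries=10_000, threshold=0.05, seed=0):
    """Crude stand-in for a BPP-style distinguisher: estimate each map's
    output frequency on input 0 with `queries` oracle calls, and declare
    the maps inequivalent if the frequencies differ noticeably."""
    rng = random.Random(seed)
    freq_a = sum(map_a(0, rng) for _ in range(queries)) / queries
    freq_b = sum(map_b(0, rng) for _ in range(queries)) / queries
    return abs(freq_a - freq_b) > threshold
```

Two maps that output 1 with probability 0.9 and 0.1 respectively are easily told apart, while a map is indistinguishable from itself; of course, the real definition quantifies over all polynomial-query distinguishers, not this single statistical test.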

These constructions have elements in common with Levin’s concept of average-case complexity.

I realize that this is still not 100% rigorous, but I think that it is not bad.

Comment #68 February 23rd, 2007 at 11:20 pm

“Currently, the only semantic classes for which we have a proof or positive evidence of complete problems — IP, AM, MA, BPP, PostBQP, etc. — are the ones for which we have a proof or evidence that they coincide with a syntactic class. For other semantic classes — TFNP, SZK, BQP — it’s hard to imagine what syntactic class they could coincide with.”

Is there any result that formalizes this important intuition?

Comment #69 February 23rd, 2007 at 11:34 pm

Daniel: Not to my knowledge.

Comment #70 February 23rd, 2007 at 11:50 pm

“Is there any result that formalizes this important intuition?” Well, the point is that existing proofs of completeness of problems are essentially syntactic.

Comment #71 February 24th, 2007 at 6:29 am

Scott says:

I don’t follow this. What is the difference between complete problems for BQP (approximating the Jones polynomial at a root of unity) and complete problems for AM, MA, BPP? All of these are going to be promise problems, because they’re randomized classes [correct me if I’m wrong]. And I believe we have reasonable promise problems for all of these.

Of course, IP is defined as a randomized class, and it turns out to have deterministic complete problems. But this is (again) part of the miracle of IP=PSPACE.

Comment #72 February 24th, 2007 at 12:53 pm

“I don’t follow this. What is the difference between complete problems for BQP (approximating the Jones polynomial at a root of unity) and complete problems for AM, MA, BPP?” I think Scott means that AM, MA, and BPP have non-promise P-complete problems if they turn out to be equal to NP, NP, and P. Your point is that they have BPP-complete promise problems. Or in my conventions, the fuzzy-valued versions of AM and MA have pP-complete problems.

Note that the fuzzy version of MA is very easy to define. You have a pP-style stochastic Turing machine, to which Merlin supplies the certificate that maximizes the acceptance probability. That’s all that you have to say. You can, after that, apply the bounded-error operator “B” to make boolean-valued MA. AM has a similar definition, except of course that Merlin is a map from challenge strings to certificates.
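A minimal sketch of that fuzzy-MA definition, with all function names invented here: Merlin brute-forces the certificate maximizing a stochastic verifier’s acceptance probability, and a separate “B” operator collapses the resulting fuzzy value to a boolean when it clears the usual thresholds.

```python
import itertools

def fuzzy_ma_value(accept_prob, cert_len):
    """Fuzzy-valued MA on a fixed input: Merlin supplies the certificate
    that maximizes the verifier's acceptance probability.
    accept_prob(cert) -> Pr[the stochastic verifier accepts cert]."""
    certs = (''.join(bits) for bits in itertools.product('01', repeat=cert_len))
    return max(accept_prob(c) for c in certs)

def bounded_error(p, hi=2/3, lo=1/3):
    """The 'B' operator: turn a fuzzy value into a boolean when it clears
    the bounded-error thresholds; None marks the in-between gap."""
    if p >= hi:
        return True
    if p <= lo:
        return False
    return None
```

For example, a verifier that accepts the certificate '11' with probability 0.9 and everything else with probability 0.05 yields fuzzy value 0.9, which B maps to True; a value of 0.5 falls in the gap.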

Comment #73 February 25th, 2007 at 12:25 am

Because of the difficulty with the notion of BQP-completeness, a better way, perhaps, to formulate my item 7 would be verbally and informally as:

The computational power of BQP is well beyond the ability to factor.

(Actually, one interesting item in my marathon e-mail discussion/debate with Greg was the role of “verbal” rather than formal/mathematical formulations of conjectures in mathematics/physics.)

Cheshire Cat asked for evidence. Strong evidence, in my opinion, is that factoring requires only log-depth, polynomial-size quantum circuits (along with a classical computer). This was proved by Cleve and Watrous. The insight from Boolean circuit complexity suggests that it is unlikely that log depth gives you full power. There are other reasons to believe 7.

Comment #74 February 25th, 2007 at 10:01 am

“The computational power of BQP is well beyond the ability to factor.” All right, you can even say it rigorously this way:

Conjecture: BQP does not equal BPP^FACTOR.

I am sure that this conjecture is true, that there are tons of problems in the set difference. I don’t even see a reason to expect other period-finding problems there such as discrete logarithm. Maybe a better conjecture is:

Conjecture: BQP does not equal BPP^AHSP.

where “AHSP” means the abelian hidden subgroup problem as solved by the Simon-Shor-Kitaev algorithm. Even here Hallgren’s algorithm is a frustrating potential counterexample, because it is basically AHSP for a locally compact group (the real numbers) rather than a discrete group. Well, you could define AHSP to include Hallgren’s cases.
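To make the AHSP concrete, here is a toy sketch for the cyclic case (the function name and setup are invented for illustration): given a function on Z_n that is constant exactly on cosets of a hidden subgroup ⟨d⟩, a classical machine can only scan for the period, which takes time exponential in the input size log n, whereas the Simon–Shor–Kitaev algorithm finds it with poly(log n) quantum work.

```python
def hidden_period(f, n):
    """Classically recover the hidden subgroup <d> of Z_n for a function f
    satisfying f(x) == f(y) iff x == y (mod d).  Brute-force scan: the
    smallest positive d with f(d) == f(0) generates the hidden subgroup.
    This costs O(n) evaluations, i.e. exponential in log n, versus
    poly(log n) for the abelian-HSP quantum algorithm."""
    base = f(0)
    for d in range(1, n):
        if f(d) == base:
            return d
    return n  # f is injective on Z_n: the hidden subgroup is trivial
```

For instance, f(x) = x mod 6 on Z_24 hides the subgroup ⟨6⟩, and the scan recovers 6.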

At this point it is good to wonder about the oracle protocol between BPP and AHSP. Of course it is fashionable to make the oracle a “promise oracle”, but a much better definition is to make it a fuzzy oracle that returns randomized output. The oracle is handed an arbitrary subroutine that computes a putative hiding function, and then the AHSP oracle uses the known quantum algorithms to return an answer which is unhindered by promises but may be a fuzzy probability distribution.

You can see that the problem with this is that as long as you keep finding quantum problems that do have reasonable classical instances, you keep having to expand the oracle to make a conjecture that does not look stacked. A more sly observation is that most (or all?) convincing algorithms in BQP are in FH, i.e., they use only finitely many layers of non-permutation gates. I do not know if this is true of all of the wonderful Moore-Rockmore-Russell non-commutative Fourier transforms, but then, I also do not know important classical uses of those transforms. So you could ask this:

Conjecture: BQP does not equal BPP^FH.

Again, you should let the FH oracle have fuzzy output.

I think that this conjecture is also extremely likely. Maybe the other regulars here can weigh in with reasons.

Comment #75 February 25th, 2007 at 10:04 am

Aargh! Stupid WordPress honored the superscript tags in the preview, but then killed them when I posted the comment. I meant BPP^FACTOR, BPP^AHSP, and BPP^FH.

Comment #76 February 28th, 2007 at 7:43 am

Hi Scott,

The comments in your letter seem reasonable, and when aimed at a general audience, I understand a certain amount of brevity.

The tagline on your blog, however, “Quantum computers cannot solve NP-complete problems in polynomial time”, is a much stronger statement, and one I can’t see how you can justify.

I am a physicist, and I know how hand-wavy you think we, as a group, are, and I do not mean to annoy you with this comment. It’s just intended to point out something you have expressed a little more strongly than you may have intended.

Comment #77 February 28th, 2007 at 9:06 am

Alright, while I still strongly believe my former tagline, I’ve now toned it down, since I don’t want to make those with different religious beliefs uncomfortable visiting this blog. Hope everyone’s happy now.

Comment #78 February 28th, 2007 at 11:31 am

I’m not saying I disagree. In fact I was directly asked whether I thought quantum computers could efficiently solve NP-complete problems in a job interview last week…

Long story short, I share your beliefs (on quantum computers at least). I was just being a little pedantic.

Comment #79 February 28th, 2007 at 11:33 am

Hmmm… I guess I should have said “… beliefs (on the ability of quantum computers to solve NP-complete problems, at least)”.