## Fighting Hype with Hype

I’ve been depressed all month about the oil spill. So what better to cheer me up than a flurry of comments and emails asking me to comment on an *Ars Technica* story by Chris Lee, reporting that it’s now been proven once and for all that quantum computers don’t help with NP-complete problems?

> Now, just to really put the screws on any optimists out there, a new paper has shown that adiabatic computers are actually quite bad at hard math problems …
>
> What [the researchers] have shown is that, when adiabatic quantum computers are used to solve NP-complete problems, the energy gap between the lowest energy state and the next state up is not well behaved. Instead, it narrows faster than exponentially, meaning the adiabatic quantum computing cannot, even in principle, solve NP-complete problems faster than a classical computer …
>
> In the end, they conclude that NP-complete problems are just as hard on an adiabatic quantum computer as on a classical computer. And, since earlier work showed the equivalence between different variants of quantum computers, that pretty much shuts down the possibility of any quantum computer helping with NP-complete problems.
>
> I don’t think anyone in the field will be particularly surprised by this. The failure of earlier work to show that quantum computers offered a speed-up on any NP-complete problem indicated that it was likely that it simply was not possible.

I’m heartened by the progress we’ve made these last ten years: from overhyped and misleading claims that quantum computers can solve NP-complete problems in polynomial time, to overhyped and misleading claims that they can’t.

The link to the paper from the article is broken, and the article doesn’t give the names of the researchers involved, but from the context, I’m pretty sure the article’s attempting to talk about this paper by Boris Altshuler, Hari Krovi, and Jeremie Roland, amusingly entitled “Anderson localization casts clouds over adiabatic quantum optimization.” This paper really is an interesting and important one—but alas, the *Ars Technica* story grossly misstates and exaggerates what it does.

For what I hope will be the last time, but I’m sure won’t: yes, almost everyone in the field believes it’s *true* that quantum computers can’t solve NP-complete problems in polynomial time. But we have no idea at present how to *prove* anything of the kind. In fact, we don’t even know how to prove *classical* computers can’t solve NP-complete problems in polynomial time (that’s called the P vs. NP question; maybe you’ve heard of it!). Nor do we even know how to prove a conditional statement, like “quantum computers can’t solve NP-complete problems in polynomial time unless classical computers can also.” Any such result would be the biggest advance in theoretical computer science at least since I was born.

So then what do Altshuler, Krovi, and Roland do? They consider a specific quantum algorithm—namely, the quantum adiabatic algorithm with linear interpolation—applied to random instances of an NP-complete problem, namely Exact Cover. They then argue, based on a combination of numerical simulations and a perturbative approximation, that the spectral gap decreases faster than exponentially (actually, like 1/n!), which would imply that the adiabatic algorithm generally requires exponential time to reach the ground state.
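For concreteness, here’s a toy sketch of what that setup means (my own illustration, not Altshuler et al.’s computation; the 4-variable instance, the clause penalties, and the transverse-field beginning Hamiltonian are standard textbook choices):

```python
import numpy as np
from functools import reduce

# Adiabatic algorithm with linear interpolation H(s) = (1-s)*H_B + s*H_P,
# on a tiny Exact Cover instance whose unique solution is the assignment 1000.
n = 4
clauses = [(0, 1, 2), (0, 1, 3), (0, 2, 3)]  # "exactly one of these bits is 1"

I2 = np.eye(2)
X = np.array([[0.0, 1.0], [1.0, 0.0]])

def op_on(qubit, op):
    """Tensor `op` into an n-qubit operator acting on the given qubit."""
    return reduce(np.kron, [op if q == qubit else I2 for q in range(n)])

# Beginning Hamiltonian: ground state is the uniform superposition
H_B = sum((np.eye(2 ** n) - op_on(q, X)) / 2 for q in range(n))

# Problem Hamiltonian: diagonal penalty (z_i + z_j + z_k - 1)^2 per clause
diag = np.zeros(2 ** n)
for z in range(2 ** n):
    bits = [(z >> (n - 1 - q)) & 1 for q in range(n)]
    diag[z] = sum((bits[i] + bits[j] + bits[k] - 1) ** 2 for i, j, k in clauses)
H_P = np.diag(diag)

# Spectral gap (first excited energy minus ground energy) along the path
gaps = []
for s in np.linspace(0, 1, 101):
    evals = np.linalg.eigvalsh((1 - s) * H_B + s * H_P)
    gaps.append(float(evals[1] - evals[0]))
print(f"minimum gap along the path: {min(gaps):.4f}")
```

On a fixed 4-qubit instance the gap of course stays bounded away from zero; the paper’s claim is about how the minimum gap scales on random instances as n grows.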

If that sounds pretty interesting, you’re right! But what’s the fine print? Well, let’s accept, for the sake of argument, Altshuler et al.’s claim that their conclusions about Exact Cover would likely generalize to 3SAT and other standard NP-complete problems. Even then, there are three crucial caveats, all of which the *Ars Technica* story ignores:

- Most importantly, the limitation (if it is one) applies only to one specific algorithm: namely, the adiabatic optimization algorithm (with a specific interpolation schedule, but let’s ignore that for now). Now, some people seem to think a limitation on the adiabatic algorithm implies a limitation of quantum computers in general, since “adiabatic is universal”—a buzzphrase that’s caused a lot of confusion. In reality, what Aharonov et al. proved, in a beautiful paper six years ago, is that the adiabatic *model of computation* is universal. But they were talking about something much more general than the adiabatic *optimization algorithm*. For example, the ground state of Aharonov et al.’s adiabatic process is *not* the solution to a combinatorial optimization problem, but rather a “history state” that encodes an entire computation itself.
- The Altshuler et al. paper talks about *random* instances of the Exact Cover problem—but the uniform distribution over instances is just one particular distribution. Even if the adiabatic algorithm doesn’t help there, it’s possible that there are other natural distributions over instances for which it exponentially outperforms (say) classical simulated annealing.
- Finally, even given the above two caveats, Altshuler et al. only show that the adiabatic algorithm fails on random Exact Cover instances at a “physics level of rigor.” In other words, their argument relies on a “perturbative approximation” that seems plausible but isn’t proved.

A cynic might retort that, at a “physics level of rigor,” we also know that P≠NP! But such a cynic would be unfair. I don’t want to knock Altshuler et al.’s contribution. For almost two decades, there’s been a spectacularly fruitful interaction between the physicists and the math/CS folks in the study of random constraint satisfaction problems. Indeed, many of the conjectures (or, in physics lingo, “results”) in this area that the physicists derived using their hocus-pocus methods were later rigorously confirmed by the mathematicians, and I don’t know of any that were disconfirmed. Even so, the distinction between a proof and a “physics proof” is one that seems worth insisting on—*especially* in theoretical computer science, an area that’s often far removed from conventional “physical intuition.”

In summary, while it feels like swimming through a burning oil slick to say so, I have to side with D-Wave about the *Ars Technica* piece (though my reasons are mostly different).

So congratulations, *Ars Technica*! Like *The Economist* before you, you’ve successfully cast clouds over yourself when reporting about stuff I *don’t* understand.

PS. I’m at STOC’2010 right now, in exotic Cambridge, MA. If you’re interested, here’s the talk I gave this morning, on “BQP and the Polynomial Hierarchy,” and here’s the talk my PhD student Andy Drucker gave, on “A Full Characterization of Quantum Advice.” And Lance, please stop looking over my shoulder!

Comment #1 June 6th, 2010 at 4:03 pm

> Indeed, many of the conjectures (or, in physics lingo, “results”) in this area that the physicists claimed using their hocus-pocus methods, were later rigorously confirmed by the mathematicians, and I don’t know of any that were disconfirmed.

I look forward to the new movie: “Dr. Aaronson, or How I Learned to Stop Worrying and Love the Physicists.”

Comment #2 June 6th, 2010 at 5:23 pm

Aren’t there also a number of papers proving rigorous results of this type (exponential lower bounds for specific instantiations of the adiabatic algorithm, because of exponentially small spectral gaps) for SAT and other NP problems?

Comment #3 June 6th, 2010 at 5:30 pm

Hmmm … many physicists believed that necessary-and-sufficient, efficient-to-compute, physically natural measures of quantum entanglement would eventually be found … the proof that separability was NP-hard came as an unexpected shock; thus there is at least *one* key instance in quantum information theory for which widely accepted physical intuitions failed.

Comment #4 June 6th, 2010 at 6:33 pm

Luca: Yes, there are! By van Dam – Mosca – Vazirani, Reichardt, and others. But those exponential lower bounds were for specially-constructed hard instances. How the adiabatic algorithm behaves on random instances is a question that’s interested Ed Farhi and others in this area for a while.

Comment #5 June 6th, 2010 at 9:30 pm

This is only tangentially related to your post, but I wonder, is there any sense of how active the community to stop using petrochemicals is? I mean that at the scientific and engineering level. For example, there seem to be plenty of bright people building Web sites. Is there anything near the same incentive for petrochemicals? Should there be a mass movement for scientists and engineers to start turning over all their funding en masse toward this problem?

Comment #6 June 6th, 2010 at 11:24 pm

Can I be the giant gorilla?

Comment #7 June 6th, 2010 at 11:37 pm

Maybe they speak about this paper: http://arxiv.org/abs/0909.4766

Comment #8 June 6th, 2010 at 11:50 pm

Scott,

If I understand correctly, the exponential lower bound in van Dam–Mosca–Vazirani is for a classically easy problem (not an NP-hard problem).

In van Dam–Vazirani, they “generalize the techniques to show a similar exponential slowdown for 3SAT” and claimed that “We exhibit a family of 3SAT instances for which quantum adiabatic optimization provably fails, since the spectral gap is exponentially small.” However, if you carefully study their manuscript, what they proved is that a SPECIFIC adiabatic quantum algorithm (namely, one using the clause-violation-based problem Hamiltonian for 3SAT) would have the exponentially small gap, because the cost function is “\epsilon-deceptive monotone”. As I pointed out in my recent paper, there are other adiabatic algorithms for 3SAT, e.g., based on the well-known reduction from 3SAT to MIS as the problem Hamiltonian. Then the cost function for their instances is not monotone (let alone \epsilon-deceptive), and one can no longer conclude that this adiabatic algorithm (using the MIS-based problem Hamiltonian) would also have the exponentially small gap. I did not elaborate on this in my paper because Farhi et al. already showed that the exponentially small gap could be overcome by different initial Hamiltonians.

In my paper, I described an adiabatic algorithm, called Algorithm 2, for Exact Cover in which the problem Hamiltonian is based on the reduction to MIS. And I pointed out that the argument in Altshuler et al. that their adiabatic quantum algorithm, Algorithm 1, failed with high probability for randomly generated instances of Exact Cover did not carry over to Algorithm 2.

As we all know, the fact that one algorithm fails (takes exponential time) on a problem does not imply that ALL other algorithms for the same problem will take exponential time. I think the confusion comes from a misunderstanding of “what is an adiabatic quantum algorithm?” Given a problem, there are three components (initial Hamiltonian, problem Hamiltonian, and evolution path) that specify an adiabatic algorithm for the problem. A change in any component (e.g., the problem Hamiltonian) will result in a different adiabatic algorithm for the same problem. At the risk of stating the obvious, it is not sufficient to conclude that a problem is hard for adiabatic quantum computation/optimization by showing that there exists an adiabatic quantum algorithm for the problem (e.g., for a particular problem Hamiltonian) that requires exponential time.

Vicky

Comment #9 June 7th, 2010 at 1:18 am

It was my understanding that there were already linear-time algorithms for factoring using quantum computing. If I am wrong there then my question is worthless, but if that is the case, and P != NP and NP != BQP, is there any implication for a counterargument that factoring is a hard problem, a premise on which all our computer security is based? I don’t know how much credence I lend to the Ars article, but I always worry.

Comment #10 June 7th, 2010 at 2:40 am

Geordie: Of course you can!

Comment #11 June 7th, 2010 at 2:54 am

B.: No, because the article refers to “researchers at Columbia University and NEC Laboratories.”

Comment #12 June 7th, 2010 at 3:00 am

Chris: I confess I didn’t understand your question. Yes, there were already fast quantum algorithms for factoring (not linear time, but close to quadratic time). But factoring isn’t believed to be an NP-complete problem. On the other hand, yes, a large fraction of modern cryptography is based around the assumption that factoring is hard, and if scalable quantum computers were built, then that assumption would fail. But it still wouldn’t be the end of the world, since people would simply switch to other cryptosystems that are probably secure even against quantum attacks. If you want something to worry about, worry about the Gulf of Mexico 🙂
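For anyone curious where the quantum speedup actually lives in Shor’s algorithm: everything except order-finding is easy classically. A toy sketch (my own code; the brute-force `order` below is exactly the subroutine a quantum computer replaces with a polynomial-time one):

```python
from math import gcd

def order(a, N):
    """Smallest r >= 1 with a^r = 1 (mod N); exponential time classically."""
    r, x = 1, a % N
    while x != 1:
        x = (x * a) % N
        r += 1
    return r

def factor_from_order(N, a):
    """Try to split N using the multiplicative order of a mod N.

    May return None for an unlucky choice of a, in which case one retries.
    """
    g = gcd(a, N)
    if g != 1:
        return g, N // g              # lucky: a already shares a factor with N
    r = order(a, N)
    if r % 2 == 1:
        return None                   # odd order: retry with another a
    y = pow(a, r // 2, N)
    if y == N - 1:
        return None                   # trivial square root: retry with another a
    return gcd(y - 1, N), gcd(y + 1, N)

print(factor_from_order(15, 7))       # (3, 5)
```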

Comment #13 June 7th, 2010 at 3:57 am

Scott,

My understanding is that most if not all of the known, practical asymmetric ciphers are thought to be vulnerable to variants of Shor’s algorithm. Is there an exception that you know of, or are you confident that the cipher makers would be able to come up with something if they had sufficient motivation?

Comment #14 June 7th, 2010 at 4:27 am

@Vladimir Levin is it really necessary to use the word tangentially?

Comment #15 June 7th, 2010 at 5:01 am

Evan: Look into lattice-based crypto — it really ought to be better-known among the nerd public than it is! Admittedly, lattice-based schemes are still less practical than RSA in terms of the key sizes and ciphertext sizes, but they’ve improved a great deal on those counts within the last decade. Plus, they enable amazing tasks like fully homomorphic encryption that we don’t know how to achieve with RSA-like systems.

And yes, my guess is that, if they had sufficient motivation, the cryptographers could make lattice-based schemes competitive enough to supplant RSA for many or most applications.

(And no, we don’t know how to break lattice-based cryptography with a quantum computer, despite almost a decade of serious attempts. Part of the explanation is that, while factoring and discrete log reduce to an *abelian* Hidden Subgroup Problem, finding short lattice vectors reduces to a *nonabelian* HSP, which is much more complicated.)

Comment #16 June 7th, 2010 at 10:07 am

I think what Chris is saying is something that was claimed in a reddit comment on the Ars story: if it were true that quantum computers couldn’t solve NP-complete problems in polynomial time, then the existence of Shor’s algorithm would mean that either factoring is in P or P=NP.

Of course, if the general consensus is already that factoring isn’t NP-complete, this isn’t terribly insightful….

Comment #17 June 7th, 2010 at 10:26 am

Aaron: It would be great if every Ars, Slashdot, and Reddit commenter repeated the following sentence three times:

*Factoring is neither known nor believed to be NP-complete.*

In our current picture, factoring is *intermediate* between P and NP-complete. Shor’s algorithm places factoring in BQP (quantum polynomial time). It doesn’t place all of NP in BQP—that would be vastly stronger. Thus, Shor’s algorithm has no implications whatsoever for the P vs. NP question, or the solvability of NP-complete problems more generally. It *also* doesn’t tell us whether factoring is in P (i.e., whether there’s a fast classical algorithm for factoring). It “merely” tells us that factoring is in BQP.

Comment #18 June 7th, 2010 at 10:44 am

Aaron Davies, I thought it was already proven that either factoring is intermediate between P and NP-complete or P=NP. Am I wrong about that?

Comment #19 June 7th, 2010 at 10:57 am

sim: Yes, you’re wrong about that. If factoring is NP-complete under many-one reductions then NP=coNP. But factoring could be in P without causing *any* complexity classes to collapse (except for contrived ones, e.g. the complexity class for which factoring is complete).

Comment #20 June 7th, 2010 at 11:01 am

There is some strong theoretical evidence that BQP does not contain NP-hard problems, although it would be good to have more such evidence. But also, as far as I know, every published rationale for why BQP contains NP-hard problems is bogus. The same goes for finding even “almost best” solutions to NP-hard optimization problems—by the PCP theorem, that’s as hard as finding the outright best solution for many key NP-hard problems.

The proposals for why BQP might contain all of NP that I have seen have not explained why such an inclusion wouldn’t relativize. But it is known that there is an oracle A such that BQP^A does not contain NP^A.

Meanwhile, factoring is contained in NP ∩ coNP. In other words, factoring is at the level of yes-no problems where there is both a short proof of yes (NP) and a short proof of no (coNP). This result goes back to the 1970s, and it is already strong evidence that factoring is not NP-hard. (Because then NP would be coNP-hard and the polynomial hierarchy would collapse.) It would also surprise people if BQP contained NP ∩ coNP, but on this one point the flow of logic of conventional wisdom begins to reverse. As of 2010, the fact that factoring is in BQP is one of the major reasons that people don’t think that factoring is much easier than NP ∩ coNP.

Really the way that complexity theorists see it, or at least quantum complexity theorists, is that BQP is a realistic complexity class in principle, but NP and many similar complexity classes derived from it and inspired by it are not. Finding out that we’re wrong about which complexity classes are realistic would be a wild turn of events, much wilder than a surprising relation between two unrealistic classes (IP = PSPACE) or two realistic classes (SL = L). It is true that the discovery of BQP was somewhat wild in that same sense. But you should not expect to discover the unicorn and the platypus just because you discovered the platypus and it was very surprising.

Comment #21 June 7th, 2010 at 11:25 am

Greg, thank you for that very interesting post. Your last paragraph in particular was wonderfully interesting and enjoyable … and perhaps it would be even *more* interesting and enjoyable if I understood better some of its context.

Would you please consider posting some supplemental remarks regarding the distinction between a “realistic complexity class” as contrasted with an “unrealistic complexity class”; are there technical criteria that you have in mind?

I wouldn’t ask, except that neither Google nor the arXiv’s full-text search are much help … for “realistic complexity class” Google in particular refers me right back here to good old *Shtetl-Optimized*. 🙂

Comment #22 June 7th, 2010 at 11:51 am

John: One definition of “realistic complexity class” would simply be any subclass of BQP! 🙂 Of course, that definition presupposes that BQP is the “true” complexity class of the physical universe, which seems extremely plausible but certainly isn’t obvious *a priori*.

A different issue is that we might not want to call (say) NP ∩ BQP a “realistic class,” even though it satisfies the above definition. This is because it’s hard to imagine a “picture of physical reality” in which the feasible problems would be precisely those in NP ∩ BQP, neither more nor less. For example, shouldn’t a class of realistically-solvable problems be closed under complement?

Along those lines, years ago Dave Bacon and I proposed a *necessary condition* for C to be a “realistic complexity class”: namely, it must be the case that C^C = C. This condition is satisfied by L, P, BPP, and BQP, as well as presumably “unrealistic” classes like PSPACE. On the other hand, it probably *isn’t* satisfied by NP, RP, or NP ∩ BQP.

Comment #23 June 7th, 2010 at 12:10 pm

Thank you Scott … your and Dave’s “necessary condition” is very much the illuminating sort of criterion I was looking for … hopefully it’s what Greg’s post had in mind too!

Comment #24 June 7th, 2010 at 12:19 pm

Scott, thx for the answer 🙂

Comment #25 June 7th, 2010 at 12:53 pm

By the way, y’all might be interested in a Sudoku question that I posted today to MathOverflow. It was partly inspired by Scott’s post today, but I decided to post it to MathOverflow instead.

Comment #26 June 7th, 2010 at 5:32 pm

The complement of which language (as an example) in NP ∩ BQP is not believed to be in said class?

Also I don’t see why a realistic class that does not encompass all of “reality” should be closed under complement.

Comment #27 June 7th, 2010 at 5:51 pm

BTW, I was recently procrastinating and thinking about coNP and certificates.

A certificate for the complement of the Clique problem, which asks whether a given graph G does _not_ contain a k-clique, could be a set of missing edges E where every subset of k nodes in G uses at least one edge in E.

Does the following problem, the problem of computing E, go by any name?

Problem: Given a complete graph G and an integer k, find the smallest edge subset E of G such that every subset of k nodes of G contains at least one edge in E.

I wanted to see what E looks like for instances where k=n/2. I used brute force to take a peek at n = 6, 8, 10, 12 but it got really slow really fast.

How do you think E changes with n? Probably in a non-polynomial-time-computable way, but it would be interesting to see a few Es.
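For anyone who wants to reproduce the peek, here is a minimal brute-force sketch (my own code; the function name is my own choice, and it only reaches the very smallest cases, consistent with the slowdown described above):

```python
from itertools import combinations

# Brute force: the smallest edge set E of K_n such that every k-subset
# of vertices contains at least one edge of E.
def smallest_E(n, k):
    all_edges = list(combinations(range(n), 2))
    ksubsets = list(combinations(range(n), k))
    for size in range(len(all_edges) + 1):       # grow |E| until a cover exists
        for E in combinations(all_edges, size):
            Eset = set(E)
            if all(any(e in Eset for e in combinations(S, 2)) for S in ksubsets):
                return E
    return ()

E = smallest_E(6, 3)                 # the n = 6, k = n/2 case
print(len(E), E)                     # 6 edges: two disjoint triangles
```

Already at n = 8 the outer loop has to wade through millions of candidate edge sets, which matches the "really slow really fast" experience.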

Comment #28 June 7th, 2010 at 6:01 pm

Also, where would this problem live? The set E doesn’t seem to be something that’s polynomially verifiable. So would it live in NEXP? It’s probably also in EXP, since it would only be in P if P = NP.

Comment #29 June 7th, 2010 at 6:24 pm

Correction, it can only live in P if coNP = NP.

Comment #30 June 7th, 2010 at 10:03 pm

> As of 2010, the fact that factoring is in BQP is one of the major reasons that people don’t think that factoring is much easier than NP ∩ coNP.

Correction: I meant, that factoring is in BQP is taken as evidence that factoring *is* easier than some other problems in NP ∩ coNP. (Because for other reasons, BQP does not look like it contains that intersection.)

Comment #31 June 7th, 2010 at 10:25 pm

If people are interested in discussing some of these sorts of issues, apparently Dr. Altshuler, Dr. Choi, and I will be presenting 3 rather different sides of the same coin at QAMF (http://www.qi10.ca/QAMFworkshop/) in July in Vancouver.

I’ve been quite surprised by how little has been done in the way of algorithms, and have found that many common assumptions can’t be reasonably assumed (for better and for worse), so I’m hoping that people can expand upon a few simple ideas I’ve had, which seem to work not too badly on their own. Unfortunately, these ideas take months to simulate on thousands of computers, and getting reliable simulation results for them still requires much manual work, so progress is very slow in testing them.

Also, hopefully nobody goes cross-eyed from my high-detail, trippy visualizations of these computations. 🙂

Comment #32 June 8th, 2010 at 4:02 am

I don’t understand your second objection. Sure, random instances of the Exact Cover problem could be much easier than the hardest instances, but they can’t be harder. Thus, the article actually proved a stronger statement than that this algorithm can’t solve the hardest NP problems in reasonable time.

Comment #33 June 8th, 2010 at 7:08 am

jonas: Duhhh, thanks! While writing point #2, I somehow switched to making the converse of the point I wanted to make. It’s fixed now.

Comment #34 June 8th, 2010 at 8:20 am

The only thing I know of that can efficiently solve NP-complete problems is the P-CTCs (quantum post-selection closed timelike curves) described in the recent arXiv paper by Seth Lloyd et al. (paper 1005.2219).

Also, so far, it looks like the best way to build a scaled up and robust quantum computer would be via some variant of the new ELEDs (entangled-light-emitting diodes) described in this paper:

C.L. Salter, et al., “An entangled-light-emitting diode”, Nature 465, pp. 594-597.

Comment #35 June 8th, 2010 at 5:40 pm

Charlie: Post-selection is enough to solve NP-complete problems, the CTCs aren’t necessary. It’s simply that you can simulate a CTC with post-selection.

Comment #36 June 9th, 2010 at 1:21 pm

Joe, you are correct, because Scott showed that PostBQP = PP. Now it has been shown that PP also equals post-IQP in this paper:

M.J. Bremner, R. Jozsa, D.J. Shepherd, “Classical simulation of commuting quantum computations implies collapse of the polynomial hierarchy”, arxiv paper 1005.1407

Comment #37 June 10th, 2010 at 12:02 pm

Charlie, the arXiv articles you point to (arXiv:1005.1407v1 and arXiv:1005.1744v1) already are very interesting from a purely mathematical point of view. To appreciate that this topic is *also* of (potentially) first-class scientific and engineering significance, it is illuminating to re-read an ancestral article: the 1953 article of Metropolis, Rosenbluth, Rosenbluth, Teller and Teller (MRRTT), “Equation of State Calculations by Fast Computing Machines.”

The MRRTT article is a scientific classic (14,000 citations so far) whose mathematical structure is notably similar to the “collapse of the polynomial hierarchy” articles, in the sense that all of these articles are concerned with the simulability of dynamical systems, in all of the permutations of strong, weak, and approximate “simulability”.

The MRRTT article is built upon a great mathematical result (nowadays called “Metropolis sampling”), and it extends that result by concretely demonstrating what science and engineering both require, namely, *repeatable* sampling protocols that include explicit methods for *verifying* the conclusions drawn from that sampling.

It is precisely because the MRRTT trajectory-sampling methods are repeatable and verifiable in the science-and-engineering sense that these methods have come to dominate fields as diverse as computational fluid dynamics, combustion theory, and synthetic biology.

Missing so far (AFAICT) from the growing set of “collapse of the polynomial hierarchy” articles are sampling protocols that are repeatable and verifiable (with PTIME resources) in the traditional sense of science-and-engineering.

It is not entirely clear (to me) whether this lack of repeatability/verifiability is a merely technical difficulty that is soon-to-be remedied, versus reflecting a more fundamental aspect of simulability in QIS … that’s why I read these articles with great interest, and with enormous respect for the difficulty of these issues.
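For readers who haven’t met it, the mathematical core of MRRTT (Metropolis sampling) fits in a dozen lines. This is a toy sketch on a one-dimensional target of my own choosing, not the equation-of-state computation itself:

```python
import math
import random

# Metropolis sampling: draw from p(x) proportional to exp(-x^4)
# using a symmetric random-walk proposal.
def metropolis(logp, x0, steps, step_size=0.5, seed=0):
    rng = random.Random(seed)
    x, samples = x0, []
    for _ in range(steps):
        y = x + rng.uniform(-step_size, step_size)   # symmetric proposal
        dl = logp(y) - logp(x)
        if dl >= 0 or rng.random() < math.exp(dl):   # Metropolis acceptance rule
            x = y
        samples.append(x)                            # a rejection repeats old x
    return samples

samples = metropolis(lambda x: -x**4, 0.0, 50_000)
mean = sum(samples) / len(samples)
print(f"sample mean: {mean:.3f}")                    # target is symmetric about 0
```

The repeatability John mentions is visible even here: fixing the seed makes the whole chain reproducible, and the symmetry of the target gives an independent check on the output.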

Comment #38 June 10th, 2010 at 8:25 pm

This paper about the control of light with light in today’s issue of “Nature” is interesting because it implements a single atom as a quantum-optical transistor:

M. Mucke, et al., “Electromagnetically induced transparency with single atoms in a cavity”, Nature 465, 755-758 (10 June 2010).

Comment #39 June 11th, 2010 at 2:30 pm

Hello,

I have recently noticed my computer alerting me I have “low disk space”. My question for M.I.T. is this:

1) How do I tell the “Low Disk Space” from the rest of the disk and how do I write to it?

It seems clearly I am not making the best use of my entire disk if I am leaving low disk space.

–Thanks,

Benjahul

Comment #40 June 12th, 2010 at 2:22 am

Benjahul,

First off, the question of “Low Disk Space” is still a very active area of research in quantum computing and computational complexity, so unfortunately you will not yet find a straightforward solution to this problem.

I must also applaud your deep intuition of theoretical computer science, for by your question #1 you have hit upon on one of the key issues in the foundations of Low Disk Space Theory: The Low Disk Space Decidability Problem. This is a decision problem, in which the input is a region of the disk, and we seek to determine whether or not it is Low.

Indeed, you are correct in assuming that you are most likely not making the best use of your entire disk. Luckily, in the absence of an efficient general algorithm for Low Disk Space, there are many methods that seek to optimize disk use in a probabilistic manner.

Comment #41 June 12th, 2010 at 3:39 am

Job: You defined E to be the smallest edge set such that any k vertices contain at least one edge of E. Taking the complement F, we get that F is the largest edge set which does not contain a k-clique. Turán’s theorem (http://en.wikipedia.org/wiki/Turan%27s_theorem) tells us that this edge set is the complete (k-1)-partite graph whose parts are as equal as possible. By taking the complement, we get that E is a collection of k-1 disjoint cliques of roughly equal size.

This seems counter to the idea of your post. If k is less than sqrt(n), then your excluded set will contain cliques larger than k. This means you would be trying to certify no k-cliques by showing a graph contains no j-cliques for some j > k. This seems to run into some problems…

Comment #42 June 12th, 2010 at 11:56 am

After thinking this over more, Job’s approach is really to certify that none of E is contained in the graph, which would be to say the graph is contained in F. So the approach would really boil down to proving a graph has no k-clique by showing it is (k-1)-partite (ie by finding a (k-1)-coloring). While this would certainly certify being k-clique free, it isn’t necessary.

Comment #43 June 13th, 2010 at 5:50 pm

What i was wondering is, given that E is a function of n alone (rather than a function of a graph G), how complex it is to compute E, and how complex it is to verify that a given E is valid.

For example, there is a single E for n = 4, n = 6, n = 8, etc.

Suppose E is easy to compute; then Bob can find and give Alice the subset E_{n} of G’s complement to guarantee that G does not contain an n/2-sized clique. Alice would compute E herself, in polynomial time, to verify that it is valid. It would be a polynomial-time certificate.

We can map the various E to integers, so we have a function f(n) = e. I’m wondering what this function looks like.

I guess an assumption that I’m making is that the complement of *any* graph G of n nodes that does not contain an n/2-sized clique will contain an instance of E_{n}. This seems reasonable.

Comment #44 June 14th, 2010 at 4:06 pm

As I said, the set E to avoid a k-clique is given by k−1 cliques of size roughly n/(k−1), from Turán’s theorem. For k = n/2 and n ≥ 6, this would be two triangles and (n/2 − 3) independent edges.

Your “reasonable assumption” is false. Consider the case n=12. Let G be a complete 4-partite graph with 3 vertices in each part, which does not contain a 6-clique. Then the complement of G is four disjoint triangles. It does not contain a copy of E_{12}, which has two triangles and three independent edges.
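Both halves of this counterexample can be checked mechanically; a sketch of my own for n = 12, k = 6:

```python
from itertools import combinations

# G = complete 4-partite graph with parts of size 3 on vertices 0..11.
parts = [set(range(i, i + 3)) for i in (0, 3, 6, 9)]
part_of = {v: i for i, p in enumerate(parts) for v in p}
G = {frozenset(e) for e in combinations(range(12), 2)
     if part_of[e[0]] != part_of[e[1]]}

# (1) G has no 6-clique: any 6 vertices hit some part twice, giving a non-edge.
assert all(any(frozenset(e) not in G for e in combinations(S, 2))
           for S in combinations(range(12), 6))

# (2) The complement of G is four disjoint triangles. E_12 = two triangles plus
# three pairwise-disjoint edges, and it spans all 12 vertices, so its two
# triangles would occupy two complement triangles and its three disjoint edges
# would have to fit inside the remaining two triangles. But a triangle can
# contribute at most one edge to a set of disjoint edges:
rest = sorted(parts[0] | parts[1])               # two triangles, by symmetry
tri_edges = [e for e in combinations(rest, 2) if part_of[e[0]] == part_of[e[1]]]
best = max(len(M) for r in range(4) for M in combinations(tri_edges, r)
           if all(set(a).isdisjoint(b) for a, b in combinations(M, 2)))
print(best)                                      # 2, fewer than the 3 edges needed
```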

Comment #45 June 15th, 2010 at 2:39 pm

To Job and Andy,

Have you considered 1 picometre + E Orders of magnitude for length in E notation shorter than one metre include :

Comment #46 June 16th, 2010 at 2:28 pm

Recent Peter Shor interview on Quantum Computing in Dr. Dobb’s Journal:

http://www.drdobbs.com/blog/archives/2010/06/talking_quantum.html;jsessionid=Z3YN00JBXWLOFQE1GHPSKH4ATMY32JVN?cid=RSSfeed_DDJ_ALL

Comment #47 June 16th, 2010 at 11:34 pm

Hyperstig:

Cool. But please don’t keep us in suspense! How many Higgs Bosons are there?

Comment #48 June 18th, 2010 at 1:40 am

Raoul, I am not you or Hyperstig, but the latest data error rates suggest supersymmetric extensions of the Standard Model (so-called SUSY) predict the existence of whole families of Higgs bosons, as opposed to the single Higgs particle of the Standard Model. Among the SUSY models, in the Minimal Supersymmetric extension (MSSM) the Higgs mechanism yields the smallest number of Higgs bosons: there are two Higgs doublets, leading to the existence of a quintet of scalar particles: two CP-even neutral Higgs bosons h and H, a CP-odd neutral Higgs boson A, and two charged Higgs particles H±.

There are over a hundred theoretical Higgs-mass predictions.

Comment #49 June 18th, 2010 at 1:41 am

Suddenly most of the predictions entail predictions of other predictions to follow, sadly.

Comment #50 June 20th, 2010 at 1:56 pm

If C is the class of problems that humans can solve, then C is certainly doable, but C^C seems much harder.

Comment #51 June 23rd, 2010 at 1:03 am

Is it viable, no?: Are you sure you spelled that right?

Comment #52 June 25th, 2010 at 8:38 pm

Yucch, what a mess. I notice my own comments haven’t shown up here lately (maybe this one will work) which I figured was due to some kind of filtering.

Scott’s article about BQP vs PH was really interesting though I couldn’t understand it much. It reminded me of another topic I couldn’t understand, Valiant’s holographic algorithms. So I figure the topics must be related ;). Does anyone know if there has been any attempt to “port” quantum algorithms to the (classical) holographic framework? And are there probabilistic holographic algorithms? The little reading I’ve done only describes deterministic ones.

Comment #53 June 27th, 2010 at 7:01 am

As Scott remarks, the conclusion that “quantum computers can’t solve NP-complete problems in polynomial time” is not very remarkable, even though “we have no idea at present how to prove anything of the kind”.

Perhaps the lack of proof is because the focus on “polynomial time” is misleading; what one really needs to ask is whether there is an NP-complete problem that is not “algorithmically computable”.

Let me explain.

The general consensus is that the roots of the PvNP problem trace back to Goedel’s ‘lost’ letter to von Neumann.

However, has anyone seriously considered whether the solution of the PvNP problem can also be traced back to Goedel – in particular to Theorem VII of his seminal 1931 paper on undecidable arithmetical propositions?

In the proof of this Theorem (every recursive relation is arithmetical), Goedel introduces an intriguing function, named after him as the Goedel Beta-function.

Goedel uses this to show that, for any recursive relation of the form x0 = f(x1, …, xn) defined by the Recursion Rule, we can define an arithmetical relation F(x1, x2, x3) such that:

(i) if f(k1, …, kn) = k(n+1) then PA proves: [F(k1, …, kn, k(n+1))];

(ii) PA proves: [(E1 x(n+1))F(k1, …, kn, x(n+1))].

(where ‘E1’ denotes existential uniqueness, and the square brackets denote the PA-formula which, under a sound interpretation of PA, interprets as the arithmetical relation denoted by the expression inside the brackets.)

Now, it is implicit in Goedel’s definition that F(x1, x2, x3) has the property:

(a) There is a deterministic Turing machine which, for any given natural numbers k, m and all 0

Comment #54 June 27th, 2010 at 7:05 am

(continued)

(a) There is a deterministic Turing machine which, for any given natural numbers k, m and all 0

Comment #55 June 27th, 2010 at 7:09 am

(Sorry: seems I abused the HTML code)

Now, it is implicit in Goedel’s definition that F(x1, x2, x3) has the property:

(a) There is a deterministic Turing machine which, for any given natural numbers k, m and all 0 < i < m+1, will output the natural number n on input i if, and only if, F(k, m, n) holds;

(b) There is no deterministic Turing machine which, for any given natural numbers k, m and all 0 < i, will output the natural number n on input i if, and only if, F(k, m, n) holds.

In other words, it follows from the instantiational nature of the constructive definition of the Goedel Beta-function that a primitive recursive relation can be instantiationally equivalent to an arithmetical relation where the former is algorithmically decidable over the structure N of the natural numbers, whilst the latter is algorithmically verifiable, but not algorithmically decidable, over N.

Ergo, P != NP.

Of course there is a caveat: we may need to assume that the standard interpretation of PA is sound.

Moreover, this argument cannot be formalised in ZF, since ZF defines functions and relations extensionally as mappings. Hence ZF cannot recognise that a primitive recursive relation may be instantiationally equivalent to, but computationally different from, an arithmetical relation; where the former is algorithmically decidable over N whilst the latter is instantiationally decidable, but not algorithmically decidable, over N.
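For readers unfamiliar with the β-function invoked above: Gödel’s β(b, c, i) = b mod (1 + (i+1)·c) can represent any finite sequence of natural numbers by a single pair (b, c), via the Chinese Remainder Theorem. A minimal Python sketch (the brute-force search and its bounds are mine, chosen only to keep the demo tiny, not part of Gödel’s construction):

```python
def beta(b, c, i):
    """Goedel's beta function: the remainder of b modulo 1 + (i+1)*c."""
    return b % (1 + (i + 1) * c)

def encode(seq):
    """Naive search for (b, c) with beta(b, c, i) == seq[i] for all i.
    Goedel's CRT argument guarantees such a pair exists; brute force
    is enough for a toy sequence (bounds grow quickly with length)."""
    c = 1
    while True:
        moduli = [1 + (i + 1) * c for i in range(len(seq))]
        if all(a < m for a, m in zip(seq, moduli)):
            limit = 1
            for m in moduli:
                limit *= m
            for b in range(limit):
                if all(beta(b, c, i) == a for i, a in enumerate(seq)):
                    return b, c
        c += 1

b, c = encode([2, 0, 3])
print([beta(b, c, i) for i in range(3)])   # [2, 0, 3]
```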

Comment #56 June 27th, 2010 at 5:42 pm

Bhupinder Singh Anand,

Kindly remind us what PA means.

Comment #57 June 29th, 2010 at 10:18 am

Joe, PA is Peano Arithmetic. But the idea of an uncomputable NP-complete problem is a contradiction in terms. NP-complete means the problem is both NP-hard and in NP. An uncomputable problem is not in NP by definition. Any problem in NP can be solved deterministically in exponential time by trivial brute force. The question is whether it can also be solved in polynomial time.
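The “trivial brute force” asdf mentions is easy to make concrete for SAT; here is a toy Python sketch (my own illustration, not anyone’s actual solver):

```python
from itertools import product

def brute_force_sat(num_vars, clauses):
    """Try all 2^n assignments. A clause is a list of literals: +i means
    variable i is true, -i means it is false. Exponential time overall,
    but each candidate assignment is *verified* in polynomial time --
    exactly the structure that puts SAT in NP."""
    for bits in product([False, True], repeat=num_vars):
        assignment = {i + 1: bits[i] for i in range(num_vars)}
        if all(any(assignment[abs(l)] == (l > 0) for l in clause)
               for clause in clauses):
            return assignment
    return None

# (x1 or not x2) and (x2 or x3) and (not x1 or not x3)
sol = brute_force_sat(3, [[1, -2], [2, 3], [-1, -3]])
print(sol)   # {1: False, 2: False, 3: True}
```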

Comment #58 June 29th, 2010 at 11:24 am

Bhupinder: I agree with asdf’s point, which negates your specific argument for using theorems about undecidability to settle P vs. NP. But you may be interested to know that, in 1998, Michael Freedman [the Fields-Medal-winning, topological-quantum-computing-researching one… as opposed to the many other Michael Freedmans out there 🙂 ] sketched out how one might use undecidability to distinguish the classes P and NP.

M.H. Freedman, “Limit, Logic, and Computation,” Proc. Nat. Acad. Sci. 95(1), 95–97 (6 Jan 1998); available online at Proc. Nat. Acad. Sci. and at Freedman’s Microsoft Station Q home page.

Alas, I’m pretty sure this hasn’t been a fruitful research programme, given that (as far as I can tell) Freedman never published further on the topic, and the paper above was cited only 4 times, none of which extended its ideas.

But perhaps other readers know of similar approaches.