I’m no expert, but aren’t there problems in the theory of lattices that are NP-hard, with worst case as hard as average case, and hard to find approximate solutions for, all at once? Presumably the smug thing would be to ask GR for any approximate solution to any such problem of his choosing, properly generated from a seed…?

If it is far enough down the search tree, then it does not shake the appellation “fake”. No one at the demo had in mind a 100-yard relay race in which the qubits only run the last four feet. Remember, they don’t even have 16 fungible qubits, they only have 16 *Ising* qubits. Is there room to encode more than the last four cells of a Sudoku in so little space?

Besides, you can’t solve a full-board Sudoku with just a search tree, because the search space is impossibly large. Most of the work is pruning by early contradiction. That’s what makes Sudoku fun: it’s a combinatorial search that looks a trillion times harder than it really is. Branching in the search is a last resort, and moreover it has to be encoded through the pruning. I do not know how to distill any interesting Sudoku question down to 16 bits or 16 qubits, much less 16 Ising qubits.
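To make the point concrete, here is a toy Python sketch of my own (purely illustrative, nothing to do with Orion; the 81-character board encoding and the example puzzle are my choices). It solves a standard puzzle almost entirely by pruning, repeatedly filling forced cells and bailing out at the first contradiction, and only branches as a last resort:

```python
# Toy illustration: most of the work in solving a Sudoku is pruning by
# early contradiction; branching in the search is a last resort.
# The board is an 81-character string with '0' for blank cells.

def candidates(board, i):
    """Digits still legal at cell i under the row/column/box constraints."""
    r, c = divmod(i, 9)
    used = set()
    for j in range(81):
        rj, cj = divmod(j, 9)
        if rj == r or cj == c or (rj // 3 == r // 3 and cj // 3 == c // 3):
            used.add(board[j])
    return [d for d in "123456789" if d not in used]

def solve(board, stats):
    # Propagation: keep filling cells that have exactly one candidate,
    # and prune the whole branch at the first contradiction.
    while True:
        forced = None
        for i in range(81):
            if board[i] == "0":
                cs = candidates(board, i)
                if not cs:
                    return None      # early contradiction: prune
                if len(cs) == 1:
                    forced = (i, cs[0])
        if forced is None:
            break
        i, d = forced
        board = board[:i] + d + board[i + 1:]
    if "0" not in board:
        return board                  # solved by propagation alone
    # Last resort: branch on the most constrained blank cell.
    i = min((j for j in range(81) if board[j] == "0"),
            key=lambda j: len(candidates(board, j)))
    for d in candidates(board, i):
        stats["branches"] += 1
        solution = solve(board[:i] + d + board[i + 1:], stats)
        if solution is not None:
            return solution
    return None

puzzle = ("530070000" "600195000" "098000060"
          "800060003" "400803001" "700020006"
          "060000280" "000419005" "000080079")
stats = {"branches": 0}
solved = solve(puzzle, stats)
print(solved)
print("branches taken:", stats["branches"])
```

On an easy puzzle like this one, nearly everything falls to propagation; the same early-contradiction pruning is also what keeps the branching tame on hard puzzles.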

*Who knows? It’s really a question for Geordie.*

He has already thrown up his hands with the answer that no one can convince a committed skeptic. I see an opening to do the work for him. If we can establish for ourselves that solving a Sudoku with 16 Ising qubits is a travesty, then in particular it was a travesty in the Orion demonstration.

Well, they could be calling the Orion only when they get far enough down in the search tree. Who knows? It’s really a question for Geordie.

But there is a more immediate problem with the Sudoku demonstration, apart from what Rose is promising in the future. How can a 16-qubit computer solve Sudokus? According to a basic incompressibility theorem, 16 qubits cannot store any more classical information than 16 bits. Even a single free row of a Sudoku requires 19 bits of storage, because 9! > 2^18. How in the world can a 16-bit computer solve Sudokus? Moreover, this does not even count any overhead that you would need to translate a Sudoku to the Ising model.
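The counting is easy to check; a two-line sanity check suffices:

```python
import math

# One unconstrained Sudoku row is a permutation of 1..9: 9! possibilities.
arrangements = math.factorial(9)                  # 362880
bits_needed = math.ceil(math.log2(arrangements))  # 19, since 9! > 2^18

print(arrangements, 2**18, bits_needed)  # 362880 262144 19
```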

Apropos of this issue, the Scientific American article asked, “And how exactly would users know that it was the quantum computer rather than a human or ordinary computer answering their queries?” Rose’s answer was that there is no way to convince a committed skeptic. But that is not the right version of the question. My question now is, how could the Sudoku demonstration have been anything other than fake?

Scott, do you have any ideas on that?

No, I have not seen him say “typical”, although I would not be surprised if he changed his message to say it that way now. I just searched the company site, his blog, and his interview with Pontin, and I do not see any relevant occurrences of the word “typical”.

His line the entire time has not been that computer scientists argue with artificial instances of problems, it’s that they split hairs by demanding either exactitude or too much accuracy. But that is just not true. In one dramatic case, max clique or max independent set, an adapted PCP theorem says that you can’t see the elephant in the room.

Besides, there isn’t any general theorem that assures you that inapproximability is always atypical. I haven’t heard of strong results on the matter in either direction. How much patience is due to someone who demonstrates the trivial, promises the impossible, then whittles the promise so that it may or may not be impossible?

This sounds like an empirical claim to me. There are hoary old arguments about large multiplicative constants and large polynomial degrees, but these are reasons why the complexity theorists’ conception of efficient is *controversial* (but essentially only for non-specialists). This is different from it being *wrong*, which is equivalent to saying that there are problems which are deemed to be “difficult” (and always will be), but which are easy in practice — an empirical claim.

In his 3:10 post above, Scott basically says that he’s ready to be refuted on empirical grounds. Even if he only says this because he expects quantum computers to violate the Strong Church-Turing thesis, it seems that he’s being quite fair by your standards, whatever your skepticism about the usefulness of the complexity theorists’ definition of “efficient”.

In any case, the whole trouble is that D-Wave has yet to demonstrate very convincingly that its product “can easily solve difficult problems”, whatever consequences you would take such a demonstration to have.

> What computer scientists tend to mean by “approximate” in these cases is something very specific about how great the approximation is, and they tend to mean something that is *very close to exact*.

Emphasis mine. Essentially Rose has been saying all along that the only reason NP-hard optimization problems might be out of reach for quantum computers is that computer scientists split hairs: they have only shown that finding the single very best solution is NP-hard, or, in the case of the PCP theorem, that finding solutions a tiny fraction away from the very best is NP-hard. After all, Rose described their computer-to-be this way:

> The system we are currently deploying, which we call Trinity, is a capability-class supercomputer specifically designed to provide extremely rapid and accurate approximate answers to arbitrarily large NP-complete problems

Now the PCP theorem is a non-trivial business, because the gap in the PCP is not stable under general polynomial-time reductions. Some NP-hard problems have a sizable PCP gap, others only a small gap, and others don’t have a gap at all. Since I am new to this great topic, I had trouble finding a really juicy example.

But Rose found it for me, when he specifically cited the maximum independent set problem as one that users can solve on his computers. In a striking paper, Johan Håstad showed that finding a maximum independent set (equivalently, a max clique) in a graph with N vertices is NP-hard even to approximate within a factor of N^{1-ε}, for any ε > 0. In other words, any computer, short of one that can fully and exactly solve NP-hard problems in polynomial time, will sometimes miss the elephant in the room when looking for a maximum independent set.

For instance, according to Håstad (if I have read the paper correctly), if the graph has N vertices, the largest clique may have N^{4/5} vertices, but the largest findable clique may have only N^{1/5} vertices. Again, “findable” means with any ability short of fully and exactly solving NP-complete problems in polynomial time. I do not think that being off by a factor of N^{3/5} is either “very close to exact” or “extremely accurate”.
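To put numbers on that factor (my own back-of-the-envelope arithmetic, just to make the gap tangible):

```python
# On a graph with N = 10^10 vertices, a largest clique of size ~N^{4/5}
# versus a best findable clique of size ~N^{1/5}:
N = 10**10
true_max = round(N ** (4 / 5))   # 100,000,000 vertices
findable = round(N ** (1 / 5))   # 100 vertices
print(true_max // findable)      # off by a factor of 1,000,000 = N^{3/5}
```

Missing the elephant by six orders of magnitude is the point: the factor grows with the graph, so no fixed accuracy guarantee survives.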
