## Thanksgiving Special: D-Wave at MIT

Some people think I have a vendetta against D-Wave Systems and its questionable quantum computer claims (see here, here, here, here, here, here, here, here, here for context). But actually, nothing could be further from the truth. I keep trying and trying to change the subject! Wouldn’t you all rather hear about Wolfram, I say? Or unparadoxes? Or my #1 topic du jour, nothing whatsoever?

Apparently you wouldn’t. From my inbox to my comments section to the hallway, the masses have spoken, and what they want to know is: *did I attend D-Wave’s presentation at MIT on Monday, and if so what did I think?*

Yes, I attended, in body though not in mind. You see, Monday was *also* the day of the STOC deadline, so if our guests from D-Wave (Mohammad Amin and Andrew Berkley) were expecting a ferocious skeptic, they instead got a bleary-eyed zombie with visions of MA_{EXP}, P/poly, and 7:59PM EST cavorting in his head.

This meant that Ed Farhi, Isaac Chuang, Peter Shor, and Lorenza Viola had to do most of the questioning. As it turned out, they did a vastly better job than I could have.

As others have pointed out in stronger terms, I’m not a physicist. (On the other hand, the gentleman linked to in the previous sentence is *not* correct about my being paid by the NSA to discredit Canadian quantum computing efforts: it’s actually the GCHQ and the Mossad.) As such, I can’t directly evaluate D-Wave’s central claim to have built an adiabatic quantum computer, nor have I ever tried to do so. All I can do is point out the many things D-Wave has said to the press (about NP-complete problems, for example) that I *know* are false, its history of making dramatic announcements without evidence, and its contemptuous attitude toward scientists who have asked for such evidence. For me, that’s more than enough to destroy D-Wave’s credibility on the claims I *can’t* directly evaluate. After all, *the burden of proof is not on me; it’s on them.*

However, other people have not been satisfied with this line of argument. “We don’t care who the burden of proof is on,” they say. “We just care whether D-Wave built an adiabatic quantum computer.”

But my physicist colleagues don’t suffer from the same argumentative limitations that I do. At the group meeting preceding the talk, Farhi announced that he didn’t care what the press releases said, nor did he want to discuss what problems quantum computers can solve (since we academics can figure that out ourselves). Instead he wanted to focus on a single question: is D-Wave’s device a quantum computer or not?

What followed was probably the most intense grilling of an invited speaker I’ve ever seen.

It quickly emerged that D-Wave wants to run a coherent quantum computation for microseconds, even though each of their superconducting qubits will have completely decohered within nanoseconds. Farhi had to ask Amin to repeat this several times, to make sure he’d gotten it right.

Amin’s claim was that what looks like total decoherence in the computational basis is irrelevant — since for adiabatic quantum computation, all that matters is what happens in the basis of energy eigenstates. In particular, Amin claimed to have numerical simulations showing that, if the temperature is smaller than the spectral gap, then one can do adiabatic quantum computation even if the conventional coherence times (the t_{1} and t_{2}) would manifestly seem to prohibit it.

The physicists questioned Amin relentlessly on this one claim. I think it’s fair to say that they emerged curious but severely skeptical, not at all convinced by the calculations Amin provided, and determined to study the issue for themselves.

In other words, this was science as it should be. In contrast to their bosses, Amin and Berkley made a genuine effort to answer questions. They basically admitted that D-Wave’s press releases were litanies of hype and exaggeration, but nevertheless *thought* they had a promising path to a quantum computer. On several occasions, they seemed to be struggling to give an honest answer that would still uphold the company line.

Two other highlights:

- I asked Amin and Berkley whether they could give any evidence for any sort of speedup over classical simulated annealing. They laughed at this. “It’s sixteen qubits!” they said. “*Of course* you’re not going to see a scaling effect with sixteen qubits.” I said I understood perfectly well (though I wondered silently whether the dozens of journalists covering D-Wave’s demo understood the same). But, I continued, surely you should be able to see a scaling effect by the end of 2008, when your business plan calls for 1024 qubits? “Well, that’s what it says in the press release,” they said. Forget about the press release, Farhi interjected. How many qubits are you *actually* going to make? Amin and Berkley shrugged; they said they’d just try to make as many qubits as they could.

- Even though it hadn’t exhibited any sort of speedup, Amin and Berkley steadfastly maintained that their 16-qubit device was indeed a quantum computer. Their evidence was that simulations of its behavior that took quantum mechanics into account gave, they said, a better fit to the data than simulations that didn’t. On the other hand, they said they were *not* able to test directly for the presence of any quantum effect such as entanglement. (They agreed that entanglement was a non-negotiable requirement for quantum computing.) There was a Feynmanesque moment, when Ike Chuang asked Amin and Berkley an experimental question so simple even I understood it. Ike said: if you’re indeed seeing quantum effects, then by running your computer at higher and higher temperatures, at some point you should see a transition to classical behavior. Have you tried this simple control experiment? Amin and Berkley said that they hadn’t, but that it sounded like a good idea.

For a theorist like me — accustomed to talks ending with “if there are no questions, then let’s thank the speaker again” — this was exciting, heady stuff. And when it was over, I still had almost three hours until the STOC deadline.

Comment #1 November 22nd, 2007 at 12:31 pm

(though I wondered silently whether the dozens of journalists covering D-Wave’s demo understood the same)

The theme of this post has been to ignore the dubious hype from Geordie Rose et al and concentrate on the physics. That is a reasonable perspective, if not the only reasonable perspective in my opinion. But then this parenthetical contradicts it. There is absolutely no need to wonder on this point. The journalists at that “demo” thought that they saw a demonstration of that 16-qubit device solving a Sudoku, among other feats. We have seen no evidence that that actually happened, or even that it’s possible — that is, that it’s possible for any 16-qubit device to solve or help solve a Sudoku in any meaningful way. So at that point, why bother wondering what the journalists understood? I really thought that you wanted to take such questions off of the table.

Their evidence was that simulations of its behavior that took quantum mechanics into account gave, they said, a better fit to the data than simulations that didn’t.

So therefore a hydrogen atom is a quantum computer for the same reason.

Comment #2 November 22nd, 2007 at 2:11 pm

Greg, the reason that parenthetical remark *is* parenthetical is precisely that it contradicts Ed Farhi’s rule.

Incidentally, there’s one incident I forgot to relate.

Just for your sake, I asked Amin and Berkley how D-Wave encoded a Sudoku puzzle into 16 qubits. They laughed and said they had no idea, that they weren’t involved with that. There was mention of a parody newspaper Caltech students distributed around MIT, which contained Sudoku puzzles with all but one of the squares filled in. Then Ed got angry at the digression into something so “irrelevant,” and we moved on.

What I’m finding, more and more, is that the arguments you and I find persuasive are simply not persuasive to most people (even though they should be). It would be as if a company claimed it had an algorithm that could compress a random string, you and I pointed out that 2^n > 2^(n-1), and everyone dismissed that as an irrelevant piece of theory. All they want to know is: how well does the compression algorithm work on *my* data? And you try to explain to them that if a company could be so egregiously, howlingly wrong about such a fundamental point, then there’s little reason to expect their algorithm will do anything useful for anyone. And they don’t buy it. “Of course that’s what an academic would say!”

Such exchanges are, of course, incredibly frustrating to us — because they reveal that most people don’t inhabit a world where the truth or falsehood of abstract propositions actually matters, actually has any sort of teeth. They don’t understand that you can refute a complicated theory by attacking at its weakest point; they think you have to understand every detail first.

Now you might ask: if people don’t understand such basic things, then why will they accept an argument based on decoherence and spectral gaps, rather than on the number of bits needed to encode a Sudoku? I don’t know. I just don’t know. But empirically, the former seems to have more effect.
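The compression analogy rests on a one-line counting fact, which a short script can spell out (a generic pigeonhole sketch, nothing D-Wave-specific):

```python
# Pigeonhole sketch: no lossless compressor can map every n-bit string to a
# strictly shorter string, because there are fewer short strings than inputs.

def count_strings_up_to(length):
    """Number of distinct bit strings of length 0..length inclusive."""
    return 2 ** (length + 1) - 1  # 1 + 2 + 4 + ... + 2^length

n = 10
inputs = 2 ** n                                 # all n-bit strings: 1024
possible_outputs = count_strings_up_to(n - 1)   # all strictly shorter strings: 1023

assert inputs > possible_outputs  # so some input cannot be compressed at all
```

The gap only widens with n, which is exactly why 2^n > 2^(n-1) kills the claim regardless of whose data the algorithm runs on.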

Comment #3 November 22nd, 2007 at 2:18 pm

Hi Scott,

I’m an undergrad studying CS and pure math, and an avid reader of your blog. Could you make a post, or a brief reply to this comment, addressing the question: “What is the best way for undergraduates to become involved with quantum computation/information theory?”

Is the best way to focus our time on complexity theory and wait for graduate school to learn about the quantum side of things? Should we parallel our computer science studies with physics? What books should we be reading (I’ve started Griffiths’s book)? What areas should we be focusing on? Any suggestions would be much appreciated.

Comment #4 November 22nd, 2007 at 2:41 pm

Matt: It all depends on what you’re interested in! To do original research in quantum algorithms and complexity, you don’t need to know *anything whatsoever* about physics. On the other hand, knowledge of physics is sometimes *helpful* in suggesting new ideas (as we saw with the quantum adiabatic algorithm, and the recent Farhi-Goldstone-Gutmann NAND-tree algorithm). It’s also extremely helpful for understanding what your physicist colleagues are talking about.

Personally, I came entirely from the math/CS side. Quantum computing for me was just another model of computation — one with complex numbers called “amplitudes” instead of probabilities — that had the additional attractions of (1) being relatively new and unexplored, and (2) having something to do with the deepest workings of the universe. It was only after I’d spent a couple years working on quantum query complexity, lower bounds, etc. that I started learning anything about the physics. What I found, then, was that my knowledge of quantum information was like a secret decoder ring for understanding what the physicists were telling me (Hamiltonian = instantaneous unitary, Feynman path integral = the BQP⊆PP simulation, etc.), and that without that computer science frame on which to hang things I’d be completely lost.
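To make the “amplitudes instead of probabilities” point concrete, here is a minimal numerical sketch: applying the Hadamard gate twice returns |0⟩ with certainty, because the two computational paths leading to |1⟩ carry amplitudes of opposite sign and cancel, something no classical probabilistic process can do.

```python
import numpy as np

# Hadamard gate: the textbook example of amplitudes behaving unlike probabilities.
H = np.array([[1, 1],
              [1, -1]]) / np.sqrt(2)

psi = np.array([1.0, 0.0])   # start in |0>
after_one = H @ psi          # equal superposition: measuring gives 0 or 1 with prob 1/2
after_two = H @ after_one    # the +1/2 and -1/2 amplitudes on |1> cancel

print(np.round(after_two, 6))  # [1. 0.] — back to |0> exactly
```

A classical coin flipped twice stays random; the amplitude version interferes back to a deterministic outcome.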

So: if you like CS, take CS courses! If you like physics, take physics courses! With the obvious exception of your graduation requirements, don’t take anything just because someone tells you you should.

As for books, I like Nielsen & Chuang and Preskill’s lecture notes. There might be conventional QM textbooks from which one can learn a great deal, but I don’t know, as I haven’t read them.

Comment #5 November 22nd, 2007 at 2:45 pm

In D-Wave’s defense, in theory you could encode any given sudoku puzzle into a single bit, one state for the unsolved version and one for the solved version. As long as a peripheral such as a graphics card can decode this signal…

Given that they have 16 qubits, their QC is able to solve 32768 sudoku instances. Imagine how many it’ll be able to solve with 1024 qubits! 😀

Comment #6 November 22nd, 2007 at 4:21 pm

They don’t understand that you can refute a complicated theory by attacking at its weakest point; they think you have to understand every detail first.

In all fairness, I think that Farhi understands that principle perfectly well. The problem is that his notion of the weakest point depends on his expertise rather than ours. So it would be fine if he said that your objections, or mine, could well be important, but that he wants to see the physics instead. He knows a lot, and he is perfectly entitled to his own reasons to be upset or happy with D-wave. But I still think that an explanation of the D-wave Sudoku demo is important, because I want to know if D-wave is honest. If Farhi feels that that is irrelevant, not only for him but in general, then I think that that position is too impatient.

But I also thought that you had the same sentiment as Farhi about the Sudoku demo, namely that their science is more important than their honesty. Or at least that you care far more about the former than the latter. Even so, thanks for asking them — it’s interesting that they know nothing about the Sudoku demo. You would think that someone at the company does!

Comment #7 November 22nd, 2007 at 4:35 pm

Scott, the reason that people ask you and not other academics for an opinion on D-Wave is very simple: you are just a much better writer/communicator. And you would do a great service to humanity if you help expose the truth about D-Wave to general public, including their investors. If D-Wave turns out to be a scam, it may have a long-term negative effect on funding of legitimate projects in quantum computing.

Comment #8 November 22nd, 2007 at 5:46 pm

Hi Scott,

What are your results about MA_EXP?

Comment #9 November 22nd, 2007 at 6:14 pm

Tales from the Crypt:

Beware. I lost my job at Dirac Q-Systems Ltd. in Tunbridge Wells when e-mail to my bonny girl became quantumly entangled with e-mail to my wife. My Mrs. wouldn’t hear my explanation involving parallel universes.

Comment #10 November 22nd, 2007 at 7:00 pm

It seems perfectly reasonable to me that the physicists at Dwave don’t know anything about how the algorithms are implemented. Personally if I were them I’d be curious about how it all works, but I can also believe that they’re not that interested in how the algorithms people cook up a demo that the public can understand (in some superficial sense of the word).

The person who heads Dwave’s algorithms effort is Bill Macready (see: http://www.zoominfo.com/Search/PersonDetail.aspx?PersonID=3072102&QueryID=5b8f7d5a-cf09-4942-a231-21a62da6ff0a). He will be able to answer any questions on how Sudoku/image matching/etc. was “encoded” into the 16/28-qubit machines.

The one detail I have on the image matching is this: the 28-qubit machine allegedly solves the maximum common subgraph problem for graphs that can be encoded using 28 bits. How then does Dwave “solve” the image feature matching problem using just 28 bits, for images that are large and have many features (such as those that Dwave used in their SC demo)? Apparently they “cheat” and break the overall problem into many small maximum common subgraph problems (of a size that can be encoded in 28 bits). Each small MCS problem is “solved” on the QC, and then the solutions are somehow combined classically. I suspect there is a lot more classical work going on than quantum work!

This is entirely believable, although it is misleading to the public to tell them that the quantum computer has solved the image matching problem on some large image, when in fact the QC has only been used to solve a really small part of it.

However, I find this to be a small issue in comparison with the question: does the 28-qubit Dwave quantum computer actually quantum compute the solution to MCS, or is it just classical annealing?

Comment #11 November 22nd, 2007 at 7:34 pm

Matt asks:

“What is the best way for undergraduates to become involved with quantum computation/information theory?”

Scott suggests:

“As for books, I like Nielsen & Chuang and Preskill’s lecture notes.”

That is a fine recommendation IMHO. To my knowledge, these two textbooks are unique in treating measurement *before* dynamics, which is natural from a QIT point of view (because dynamics is a special case of measurement, while measurement is not a special case of dynamics).

A further suggestion would be to look at Joe Harris’s textbook *Algebraic Geometry*, along with a physics-oriented textbook on differential geometry like Martin’s *Manifold Theory*. The rationale being (1) pretty much any technical field has a natural formulation in terms of geometry, and therefore (2) hypotheses like P != NP will someday (soon?) be understood as geometric theorems.

So you might as well learn some modern geometric lingo! :)

Comment #12 November 22nd, 2007 at 7:46 pm

Scott,

IQC got the 30-minute version (+15 minutes of grilling) today, so I’m very glad that you posted this digest of your longer experience. In the interest of continued grilling, some questions for you:

1. You said Farhi wanted to address “Is it a QC?” but not “What can a QC do?” How the %&*! do you do that? *Especially* in contexts like AQC, is there some way to define a QC that *isn’t* operational?

2a. Your speakers agreed that entanglement is non-negotiable — but isn’t the necessity of entanglement for speedup still a conjecture, especially in AQC?

2b. Are the statements “Total decoherence in the computational basis is irrelevant” and “Entanglement is a non-negotiable requirement” compatible in some way that I’m missing?

Comment #13 November 22nd, 2007 at 7:59 pm

Greg & Scott,

Attacking the weakest point is fine if you just want to prove that the final conclusion is wrong. It doesn’t help much if you want to figure out whether they’re right about *something*.

Yes, the statements about solving NP-complete problems (and almost certainly the demos of Sudoku) are crap, and that drum needs to be beaten at appropriate intervals. But D-Wave has made several other intriguing and controversial statements that *aren’t* obviously wrong… and which could be very useful/interesting even if the whole chain of logic has a flaw somewhere.

Comment #14 November 22nd, 2007 at 8:35 pm

In all fairness, I think that Farhi understands that principle perfectly well.

Greg: I never meant to suggest he didn’t. I was talking about the “average person,” whose standards of proof physicists somehow seem much more in tune with than computer scientists and mathematicians.

Comment #15 November 22nd, 2007 at 9:17 pm

Robin: Good questions! A few quick responses:

1. Indeed, as soon as Eddie said the issue was whether or not they’d built a QC, I predicted that the discussion would become about what’s *meant* by a QC, and that’s essentially what happened.

2a. Well, there’s still a remote possibility that one could go beyond BPP by using a QC that’s in a separable mixed state at every time step. On the other hand, we have no real evidence that there’s anything interesting one can do in this model. Furthermore, the model is extremely artificial — since the same unitaries that let us leap from one specific separable mixed state to a completely different one, should also let us easily make entangled states!

2b. I have no idea! This seems like the perfect question to ask Eddie and Ike a few weeks from now.

Comment #16 November 22nd, 2007 at 9:22 pm

What are your results about MA_EXP?

Ben: It’s my paper with Avi Wigderson on “Algebrization: A New Barrier in Complexity Theory,” which actually involves 39 complexity classes at last count (MA_EXP is one of them). We should have a preprint on the web “any day now.”

Comment #17 November 22nd, 2007 at 9:25 pm

FYI the slides from the MIT presentation are here http://dwave.wordpress.com/2007/11/20/d-wave-talk-at-mit/ .

Robin: these are good questions. An answer to #1 might be that it is still very much an open question how “AQC” performs when the sweep time is less than what is required to satisfy adiabaticity (when the procedure becomes explicitly heuristic), which is the regime in which any real large-scale AQC will be operated. So even if the system supports AQC, in practice it isn’t clear how to quantify the benefit from operating it in the regime that matters.

For 2b: yes, these are compatible. The argument is in the slides, but it boils down to simple textbook quantum mechanics. The density matrix of the system becomes diagonal (T_2) quickly IN THE ENERGY BASIS. In the readout basis you need to rotate this density matrix. This rotation can give non-zero off-diagonal elements which are signatures of coherence in the readout basis. These are robust and do not “go away”. They are an equilibrium property of the system and are protected by an energy gap. The way Mohammad characterizes this is as follows:

“In the gate model, there is no Hamiltonian constantly present except when you want to apply some gates. So if there is no energy spacing induced by some Hamiltonian, you are always in classical limit (i.e. large T limit). That is why after the decoherence time your qubits become classical bits. But such an argument does not hold for the case where the wavefunction is an eigenstate, or even mixture of a few eigenstates of a strong Hamiltonian that can create energy spacings larger than T. This is really the source of confusion for a lot of quantum information theorists. For them the Hamiltonian is just a generator for unitary operations and does not mean anything more. So after T_2, the qubits become classical and there is no wavefunction to apply a unitary operation to.”
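The basis-rotation point can be illustrated with a toy single-qubit calculation (my sketch of the general claim, not D-Wave’s actual model; the 45° angle is a hypothetical choice, roughly what one gets for a flux qubit at its degeneracy point):

```python
import numpy as np

# A density matrix that is diagonal in the energy eigenbasis can still
# show nonzero off-diagonal ("coherence") terms in the readout basis.

rho_energy = np.diag([0.9, 0.1])   # thermal-looking state, diagonal in energy basis

# Hypothetical basis change between the energy and readout bases:
theta = np.pi / 4
U = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])

rho_readout = U @ rho_energy @ U.conj().T

print(np.round(rho_readout, 3))    # [[0.5 0.4] [0.4 0.5]] — off-diagonals nonzero
```

The off-diagonal 0.4 comes purely from the gap-protected populations, not from fragile phase coherence in the readout basis, which is the equilibrium effect the quote describes.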

Finally, the variety of demos we’ve run (including sudoku, image matching, etc.) are not “crap”. They use a novel hybrid approach to integrating QCs into classical solvers. In hindsight it is pretty obvious that to make any QC useful it needs to be integrated with the best known classical techniques regardless of what quantum algorithm it’s embodying. And while I’ve said this 10^87 times I’ll say it again: what we’re doing is explicitly heuristic and has no global optimality guarantees. While you can use the system we’re building on decision problems it is natively an optimization solver for quadratic unconstrained binary optimization problems (I’ll post my SC07 talks shortly on rose.blog).

Comment #18 November 22nd, 2007 at 10:05 pm

Shouldn’t the spelling be “Algebraization”?

Comment #19 November 22nd, 2007 at 10:10 pm

Avi and I had a long debate about what to call it, and we almost reached consensus but not quite: he *still* spells it “algebraization.” My main complaint is that it’s not clear how to pronounce that: as “algebra-ization”? “algebrae-ization”? “algebryzation”?

Comment #20 November 23rd, 2007 at 2:10 am

How about “algebraic relativization”?

Comment #21 November 23rd, 2007 at 2:23 am

Finally, the variety of demos we’ve run (including sudoku, image matching, etc.) are not “crap”. They use a novel hybrid approach to integrating QCs into classical solvers.

I didn’t use the word “crap”, and I can only speculate as to what the Sudoku solver really is. But I will say that the demo of the Sudoku solver on YouTube looks dishonest. It does not look consistent with the quantum computer that you described. It will continue to look dishonest unless you (1) name the people who did the Sudoku solver, and (2) explain the actual approach used, whether or not it is “novel” or “hybrid”. Only then, conceivably, could it look okay. Even if neither you nor I ever said another word about it, it would still look dishonest.

Comment #22 November 23rd, 2007 at 2:27 am

Yes, the statements about solving NP-complete problems (and almost certainly the demos of Sudoku) are crap, and that drum needs to be beaten at appropriate intervals. But D-Wave has made several other intriguing and controversial statements that aren’t obviously wrong… and which could be very useful/interesting even if the whole chain of logic has a flaw somewhere.

But looking for diamonds in the garbage is a very different quest from appraising a stone that lies among diamonds.

Comment #23 November 23rd, 2007 at 2:30 am

Greg: Robin made the Sudoku comment that Geordie is complaining about. (See Robin’s comment above: “the statements about solving NP-complete problems (and almost certainly the demos of Sudoku) are crap”.)

I don’t see why it’s important to name the people who built the Sudoku demo, but I do agree with you on (2) – until I see how the Sudoku problem was implemented (encoded as an optimization problem that a 16-qubit AQC could solve), it doesn’t demonstrate anything about the Dwave machine.

Geordie: You have said now and before that for Sudoku you used the AQC to run a “subroutine” that helps solve the problem, but that doesn’t help us understand anything – what I think everyone here would like to know is what this so-called subroutine is that your AQC is computing.

Comment #24 November 23rd, 2007 at 8:06 am

I think D-Wave would deserve a Nobel prize if they could even prove that their adiabatic quantum computer is 2 times faster than the same classical computer without entanglement and superposition. If the running time is a few microseconds, then it must be possible to measure the difference between, say, one and two microseconds for the quantum and classical computers respectively. But 2^16 or 2^14 is about 10000, so if their quantum computer is quantum then it probably must give at least a 100-times speedup over a classical computer. So 1 µs versus 10 ns would be a reasonable difference. And if this difference doesn’t exist, then how can they hope that with more qubits the QC wouldn’t be noisier than with 16 or 28 qubits?

Comment #25 November 23rd, 2007 at 9:04 am

Science journalists are not representative of the lay public – any normal guy who does a technically-oriented job (fixing cars, for example) or any curious kid understands that one should try to figure out the principles of how things work, not just exploit a phenomenon for practical purposes.

But science journalists never have to do research themselves – they may think that they are very good at explaining it, but how do they know that they are not dumbing stuff down, and producing vague but bombastic gorp?

All PR stuff (not just that produced by D-Wave) plays on the laziness of journalists. Journalists have short deadlines, and PR men make their work “easier” by feeding them pre-digested stuff and ready-made conclusions that journalists can just lift verbatim and glue together with weak filler of their own making – it is a quick, easy way of delivering an article on time about a technical subject that one is not very strong in.

And I really hate the condescending tone that some journalists resort to whenever reporting about science – just because these folks are dumb as a paddle, it does not follow that everybody else is too.

Comment #26 November 23rd, 2007 at 11:05 am

How about “algebraic relativization”?

Too long. Try saying “non-algebraically-relativizing techniques will be needed” (or is it “algebraically non-relativizing techniques”?). Or another phrase of which we need some variant: “non-commutatively non-algebraically-relativizing techniques.”

Comment #27 November 23rd, 2007 at 12:19 pm

Maybe you can use a TLA instead

Comment #28 November 23rd, 2007 at 12:56 pm

Robin made the Sudoku comment that Geordie is complaining about.

Yes, I realize that.

I don’t see why it’s important to name the people who built the Sudoku demo, but I do agree with you on (2) – until I see how the Sudoku problem was implemented (encoded as an optimization problem that a 16-qubit AQC could solve), it doesn’t demonstrate anything about the Dwave machine.

It’s important to name names because it’s a question of honesty, not just technology. Reporters came out of that room thinking that a quantum computer solved a Sudoku. No one has proposed any meaningful way for a 16-qubit quantum computer to help solve a Sudoku, but somehow the Sudoku was solved. Conclusion: the demonstration looks dishonest.

In response to that, no one should be interested in humdingers from Rose about “novel hybrid approaches”. Until and unless the PIs of the Sudoku solver come forward, and they explain their work, the cloud of dishonesty hangs over him personally. In fact, unless witnesses come forward to explain otherwise, we don’t know that anyone at D-wave wrote a Sudoku solver at all; the demo could have been a fabrication.

Comment #29 November 23rd, 2007 at 1:05 pm

I’d like to add to Greg’s comments that the standard 9×9 Sudoku is extremely easy to solve with 0-1 linear programming packages, such as CPLEX, or with constraint propagation techniques. I can elaborate on details if anyone is interested. So, all that PR based on Sudoku is extremely fishy…
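As a concrete illustration of how little machinery a 9×9 Sudoku actually needs, here is a bare-bones classical backtracking solver (my sketch; the 0-1 programming and constraint-propagation approaches mentioned above are more sophisticated routes to the same end):

```python
# Minimal classical Sudoku solver: plain depth-first backtracking with
# constraint checks. Solves typical 9x9 puzzles in well under a second.

def allowed(grid, r, c, v):
    """Check the row, column, and 3x3 box constraints for placing v at (r, c)."""
    if any(grid[r][j] == v for j in range(9)):
        return False
    if any(grid[i][c] == v for i in range(9)):
        return False
    br, bc = 3 * (r // 3), 3 * (c // 3)
    return all(grid[br + i][bc + j] != v for i in range(3) for j in range(3))

def solve(grid):
    """Solve a 9x9 Sudoku in place (0 = empty cell); return True on success."""
    for r in range(9):
        for c in range(9):
            if grid[r][c] == 0:
                for v in range(1, 10):
                    if allowed(grid, r, c, v):
                        grid[r][c] = v
                        if solve(grid):
                            return True
                        grid[r][c] = 0
                return False  # dead end: backtrack
    return True  # no empty cells left: solved
```

Nothing here needs qubits, which is exactly the point being made about the PR value of a Sudoku demo.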

Comment #30 November 23rd, 2007 at 1:12 pm

Scott,

Thanks for the answers!

Far as I can tell, 2a is the question worth $64M, because it’s closely related to two really interesting questions: “What gives a QC its power?” and “What does it mean to say something is `quantum’?”.

A necessary and sufficient condition for a device to exceed BPP — or to achieve BQP — would be awfully nice. Since that’s kind of a pipe dream, I’d settle for a necessary condition, and the best shot seems to be entanglement.

I’m a bit skeptical though, because I don’t know of a really good *reason* for entanglement [between various qubits] to be necessary — just a general conviction that entanglement is the most reliable signature of “nonclassicality”. DQC1 proves that you don’t need a *lot* of it. My real worry is that entanglement is fundamentally about locality, and emphasizing locality within a quantum computer seems a little procrustean. The more I think about it, the more I prefer contextuality, but that’s hard to study.

Incidentally, when you say “the model is extremely artificial — since the same unitaries that let us leap from one specific separable mixed state to a completely different one, should also let us easily make entangled states,” the immediate naive counterargument is “Well, yeah… if you can prepare arbitrary input states.” How robust is your statement w/r.t. computational models like NMR and DQC1 that have strong preparation restrictions?

Comment #31 November 23rd, 2007 at 1:52 pm

“Ike said: if you’re indeed seeing quantum effects, then by running your computer at higher and higher temperatures, at some point you should see a transition to classical behavior. Have you tried this simple control experiment?”

If I understand correctly, D-Wave implements its supposedly-quantum computer using superconducting flux qubits. So as you raise the temperature of the setup, won’t you eventually pass the transition temperature of the superconductor, at which point the qubits stop superconducting and are no longer quantum? But then you can’t tell whether this transition to classical behavior happens simply because the qubits no longer superconduct, or because of some more fundamental quantum → classical transition.

Comment #32 November 23rd, 2007 at 2:13 pm

Robin, I disagree with you about 2a being a $64M question — maybe a $64 question. Whether entanglement is necessary to go outside BPP strikes me as just one of many, many interesting questions about the “boundary between BPP and BQP” that can be, and have been, asked.

(Incidentally, I can tell you my conjecture: yes, entanglement is necessary. It’s just that we haven’t managed to prove it yet.)

DQC1 is a different animal — that really *does* seem to provide more power than BPP, but then again, it also lets you prepare entangled states (and probably even more to the point, has no restrictions on the allowed operations).

Comment #33 November 23rd, 2007 at 2:29 pm

Geordie,

I apologize for (and retract) the word “crap”, which wasn’t very appropriate for a distinguished and civilized salon like Shtetl-Optimized. I’m generally content to hitch my rhetorical wagon to Greg on this topic (though he should not be tarred with my poor choice of language).

However, in the interest of clarifying what I meant, I should have used two distinct words: I think suggestions that an AQC device will provide efficient solutions to NP-complete problems are at this point in time incredible, and that demonstrations of Sudoku solving are insubstantial. That is, I don’t believe the former, and I don’t think that the latter provide convincing evidence for anything.

Moving on (because science is more fun). In the context of Q#1 (“A QC is what a QC does“), I agree with most of what you say — but not necessarily that any real large-scale AQC will be operated in the non-adiabatic regime. The whole point of demanding a polynomial gap is to make operation within the adiabatic regime at least possible within reasonable time. Almost everybody else who thinks about AQC is worried, not about the timescale regime, but about the noise regime (more on this in a minute).

Moreover, a device operating outside of the adiabatic regime is not an AQC, at least in this specific context. I think this is what you’re alluding to by putting AQC in quotes, but I also think it’s sufficiently important to justify making the point explicitly — if a QC is what a QC does, then a device operating outside of the adiabatic regime is only an AQC if it is computationally equivalent to an AQC.

This is a conjecture that I think you’ve backed away from by focusing on the device’s potential to act like a new and powerful heuristic.

This conjecture — that D-Wave’s device could provide a speedup comparable to a new heuristic for NP-complete problems — is quite credible to me… though I still don’t know of any convincing evidence for it. At the risk of stating the obvious, I’ll point out that the heuristic conjecture is unlikely to enrage most QC theorists, since AFAIK it doesn’t move complexity class boundaries, nor contradict BBBV.

However, it’s also incompatible with calling the device a “quantum computer” (e.g. here), which (again, at the risk of stating the obvious) is what irritates assorted computer scientists.

Comment #34 November 23rd, 2007 at 2:47 pm

I’d like to add to Greg’s comments that the standard 9×9 Sudoku is extremely easy to solve with 0-1 linear programming packages, such as CPLEX, or with constraint propagation techniques.

This is an interesting remark about the inherent complexity of Sudoku. It’s low by design, so that it can be fun to solve Sudokus with pencil and paper, and so that Sudokus can be generated by software with some ad hoc difficulty rating. Moreover, the “constraint propagation” software technique is not very different from the way that most people solve Sudokus without software. You make a 9x9x9 table of which numbers can appear in which cells, and you eliminate contradictory choices until the table stabilizes. Then you find the most economical place to guess, and extend the method to a branched search. (Also called a depth-first search.)

But in one respect, even this remark is overly fancy. Off-the-shelf Sudoku solvers exist. Libraries of solved Sudokus exist. If you want to demonstrate that your quantum computer can solve Sudokus, but you are willing to cheat, then you can grab these off-the-shelf solutions. You don’t need to consider how they work. You could even just solve a few yourself and type that in.

Again, what is much harder is to find any meaningful way for a 16-qubit device to help solve the Sudoku. Just as it’s much easier to make an omelette yourself than with the help of a 4-year-old.
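The constraint-propagation-plus-branching method described above fits in a page of code. Here is a minimal illustration of the technique (my own toy sketch, not any particular off-the-shelf package):

```python
def solve(grid):
    """Solve a 9x9 Sudoku given as a list of 81 ints (0 = blank).
    Returns a solved list of 81 ints, or None if unsolvable."""
    # cand[i] is the set of digits still possible in cell i
    cand = [set(range(1, 10)) if v == 0 else {v} for v in grid]

    def peers(i):
        # All cells sharing a row, column, or 3x3 box with cell i
        r, c = divmod(i, 9)
        br, bc = 3 * (r // 3), 3 * (c // 3)
        ps = {r * 9 + k for k in range(9)} | {k * 9 + c for k in range(9)}
        ps |= {(br + dr) * 9 + (bc + dc) for dr in range(3) for dc in range(3)}
        ps.discard(i)
        return ps

    # Constraint propagation: repeatedly strike settled digits from peers
    changed = True
    while changed:
        changed = False
        for i in range(81):
            if len(cand[i]) == 1:
                v = next(iter(cand[i]))
                for j in peers(i):
                    if v in cand[j]:
                        cand[j].discard(v)
                        if not cand[j]:
                            return None  # contradiction
                        changed = True

    # If propagation settled every cell, we are done
    if all(len(c) == 1 for c in cand):
        return [next(iter(c)) for c in cand]

    # Otherwise guess at the most constrained cell and search depth-first
    i = min((k for k in range(81) if len(cand[k]) > 1),
            key=lambda k: len(cand[k]))
    for v in sorted(cand[i]):
        trial = [next(iter(c)) if len(c) == 1 else 0 for c in cand]
        trial[i] = v
        result = solve(trial)
        if result is not None:
            return result
    return None

# An example easy puzzle (0 = blank)
puzzle = [int(ch) for ch in
          "003020600900305001001806400008102900700000008"
          "006708200002609500800203009005010300"]
solution = solve(puzzle)
```

On an easy puzzle like this one, propagation alone does most of the work and the branched search rarely has to guess.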

Comment #35 November 23rd, 2007 at 3:01 pm

sw,

at higher temperature, everything becomes thermal noise and all the entanglement goes to the moon…

But I don’t think this temperature experiment can settle anything if the quantum effects are very weak, because at higher temperature the resistivity also increases, etc., so the result can be misleading and not match what the theory predicts.

Sudoku puzzles are very easy for conventional computers (I downloaded one…), so probably there is some clever algorithm. And if the D-Wave quantum computer is not just a probabilistic random-bit generator, but some combination of classical annealing and “quantum effects”, then it’s possible that when the quantum effects don’t work, the classical annealing does the job well anyway.

I don’t yet know of any quantum computer that gives even a 2x speedup, so why is D-Wave so optimistic?

Comment #36 November 23rd, 2007 at 3:07 pm

Geordie,

Re: noise and decoherence, your answer doesn’t actually address my question. I asked about the statement “Total decoherence in the computational basis is irrelevant”, and your point is that decoherence in the energy basis is irrelevant.

I’ll take the risk of answering my own question here:

no, those statements are not compatible, because decoherence in the computational basis necessarily kills (1) entanglement between qubits, or (2) entanglement with a reference system. Anyway, the question is moot because D-Wave is not claiming that there is strong decoherence in the computational basis; they’re claiming that there is strong decoherence in the energy basis, which is clearly not incompatible with massive entanglement.

One of the reasons for confusion is that Mohammad’s talk only covered the dynamics of a single qubit — despite repeated pressure (at IQC) to present or discuss some kind of data for multi-qubit systems. For a single qubit, the computational basis can always be chosen to be (or not to be) the energy basis. As Ray Laflamme pointed out at the talk, there are a lot of people with 1 qubit — and the physics of 3 (or, god forbid, 512) qubits is generally quite different.

The problem with textbook quantum mechanics is that it’s often oversimplified. In particular, decoherence is (a) not necessarily in a particular basis, and (b) not generally in the energy basis (“Deconstructing Decoherence”, by Anglin/Paz/Zurek, discusses related things). If your system’s Hamiltonian dominates, then the best pointer basis will usually be close to the energy eigenbasis. However, you’ll notice there are several caveats in there, most importantly the fact that there may be no pointer basis that is indefinitely stable.

Even if we grant all the necessary assumptions, there’s still one very worrying thing about extending the analysis presented in Mohammad’s talk to large devices. You don’t just need a gap; you need a low density of levels just above the gap. This is true for random Hamiltonians, but the same arguments (e.g. central limit theorem) don’t work for the measure-zero subset of computationally interesting ones.

(full disclosure: I only play a quantum information theorist on TV; my background is in decoherence theory).

Comment #37 November 23rd, 2007 at 3:08 pm

My question is whether the Sudoku demo was intended to generate hype about possibilities and capabilities of QCs that are at best unlikely, and thereby mislead the general public in order to gain funding and attention.

Comment #38 November 23rd, 2007 at 3:10 pm

demonstrations of Sudoku solving are insubstantial

On the contrary, if that Sudoku demo truly is as dishonest as it looks right now, then that would be very substantial, for negative reasons. People who lie to reporters have taken a very serious step, and should be treated differently afterwards.

Again, I want to say this carefully. I have no specific evidence of how exactly that Sudoku was solved. I saw a virgin Sudoku, called a “SudoQ”, I heard Rose say “the quantum computer spits out the answer”, and then I saw a solved Sudoku. I can’t think of an honest explanation. Conceivably there is one, but what is it?

Comment #39 November 23rd, 2007 at 3:13 pm

Greg,

Just for yet another gemstone analogy, it might even be that we’ve got a diamond necklace with a half dozen paste gems scattered throughout. So, we could:

(1) chuck it,

(2) see if there are any diamonds, salvage them, then chuck it,

(3) replace the glass chips with diamonds, to get a nice shiny diamond necklace.

Option 3 is only sensible if there’s a preponderance of gems in the chain, which is open to [extensive] argument.

Comment #40 November 23rd, 2007 at 3:30 pm

Scott,

$64! Aaaagh! I feel so… cheapened! Although… are those Canadian or US dollars?

*grin* Back to science: yes, there are lots of interesting questions about the boundary between BPP and BQP, but this one is special, and I wouldn’t sell it off for O($10^2). If the entanglement conjecture is true, then we have a physics characterization of a quantum computer — rather than the inherently CS-based characterization “It’s a QC if it solves problems out of BPP”. The first question most people ask about putative quantum hardware is “Does it entangle?” even though the conjecture is still unproven! This is powerful evidence that either (a) the conjecture is important or (b) people are dumb. I tend (uncharacteristically) toward the former, in part because the conjecture has spawned useful advances in classical algorithms, e.g. matrix product states.

We don’t know why quantum computers are more powerful, and if we could find a necessary condition, that would be very enlightening — at least for me. And, yeah, I know it’s not sufficient (see Gottesman-Knill), which means my precious physics-based characterization of QCs isn’t complete. Baby steps… baby steps…

Comment #41 November 23rd, 2007 at 4:13 pm

If there is no legitimate suffix “ization” for the noun “algebra,” then we are at liberty to decide how to spell it. We can create any spelling that would make the word easy for the average and below-average person to speak. I believe that this is not the case here. If you do not spell the word as “algebraization,” then you will appear illiterate to older readers. Younger readers, raised on television, action movies, and video games, will not know the difference.

Comment #42 November 23rd, 2007 at 4:22 pm

see if there are any diamonds, salvage them, then chuck it,

I can’t say that it’s completely unreasonable. But I would like scientists in general, and us in particular, to devote more attention to questions of integrity. We owe it to science journalists. There has been an attitude that it just doesn’t matter whether “SudoQ” was a bogus demonstration, it’s “irrelevant”, because only journalists should care. In my view, that is deeply unfair to outsiders. Even for insiders, it’s a dubious kind of benefit of the doubt.

Comment #43 November 23rd, 2007 at 4:25 pm

I’ve been following this for awhile and enjoying Scott’s concise deconstructions, but there is one thing that I still don’t understand about all of this.

There is frequent reference made, during discussion of D-Wave, to the quantum computer which D-Wave claims to have built being “adiabatic”. What does this mean? My only intuition of what it means for a process to be “adiabatic” is in the context of atmospheric air parcels (i.e. packets of hot air) so when I first heard this I assumed it was marketing buzzword gibberish. However it seems to have real meaning to the people here. What is the difference between a quantum computer and an adiabatic quantum computer?

(Wikipedia still doesn’t actually have a page on “adiabatic quantum computation”, although their quantum computation article links to the nonexistent page. Wikipedia also references some paper which, if you follow the citation rabbit hole, leads to this paper, which seems to be defining adiabatic quantum computation, but I’m not sure I understand it. From that paper, “adiabatic quantum computation” doesn’t sound so much like a general-purpose computer, the way quantum computers as I understand them are; it sounds more like a physical process for solving a custom problem, like the soap-bubbles-to-solve-Steiner-trees thing Scott wrote about once. But maybe I’m missing the point…)

Comment #44 November 23rd, 2007 at 4:43 pm

If you do not spell the word as “algebraization,” then you will appear illiterate to older readers.

I’m willing to take that risk.

Comment #45 November 23rd, 2007 at 4:45 pm

Coin: A quantum computer is “adiabatic” if at every time t, it’s in the ground state (or close to the ground state) of whatever Hamiltonian H_t is currently being applied.

(I’m sure the physicists have other ways of putting it, but that’s the definition I know…)

Comment #46 November 23rd, 2007 at 4:49 pm

What is the difference between a quantum computer and an adiabatic quantum computer?

The more fundamental notion, in my view, is the adiabatic algorithm (or algorithms) in quantum computing in general. Adiabatic algorithms are a quantum variant of simulated annealing. When you solve an optimization problem with simulated annealing, you interpret the optimization objective as an energy, then impose an artificial temperature, which you then decrease slowly enough that the simulation converges to the least-energy state with good probability. You can have simulated annealing algorithms either classically or quantumly.
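For concreteness, the classical version just described can be sketched in a few lines (a toy example of my own; the ferromagnetic Ising-ring objective and all parameters are illustrative choices):

```python
import math
import random

def anneal(energy, neighbor, state, t_start=2.0, t_end=0.01, steps=20000):
    """Generic simulated annealing: propose a local move, always accept
    downhill moves, accept uphill moves with probability exp(-dE/T),
    and lower the temperature geometrically from t_start to t_end."""
    e = energy(state)
    best, best_e = state, e
    t = t_start
    cool = (t_end / t_start) ** (1.0 / steps)
    for _ in range(steps):
        cand = neighbor(state)
        de = energy(cand) - e
        if de <= 0 or random.random() < math.exp(-de / t):
            state, e = cand, e + de
            if e < best_e:
                best, best_e = state, e
        t *= cool
    return best, best_e

# Toy objective: a ferromagnetic Ising ring; ground energy is -n (all aligned)
def ising_energy(s):
    return -sum(s[i] * s[(i + 1) % len(s)] for i in range(len(s)))

def flip_one(s):
    i = random.randrange(len(s))
    return s[:i] + [-s[i]] + s[i + 1:]

random.seed(0)
start = [random.choice([-1, 1]) for _ in range(20)]
best, best_e = anneal(ising_energy, flip_one, start)
# best_e should end up at, or very near, the ground energy -20
```

The “decrease slowly enough” part is exactly the cooling schedule; cool too fast and the walker freezes into a local minimum.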

So anyway, what would an adiabatic variant look like? An adiabatic process is one in which the energy functional changes, but the energy does not. In particular, you can have a zero-temperature adiabatic process that maintains a system in its ground state. This is not meaningful for a discrete classical system, because there is no way for the ground state to evolve continuously. But you can do it quantumly, because even in a finite system, the set of pure states is a connected manifold.

So that is an adiabatic quantum algorithm (at zero temperature). The system begins in the ground state of an energy or Hamiltonian whose ground state is known. Then you evolve the Hamiltonian while frequently measuring the energy. (You have to choose a path of Hamiltonians whose measurements are computable.) The act of measuring the energy, if you do it frequently enough, maintains the ground state.

An adiabatic quantum computer is an SPD devoted to an adiabatic quantum algorithm. But since there are adiabatic quantum algorithms that are universal for QC, with polynomial overhead, you can call it a computer rather than an SPD.

An issue with D-Wave — setting aside questions of integrity in their interactions with outsiders — is that when they call their device an adiabatic quantum computer, all three words are rather metaphorical. It’s not clear that it’s really quantum, or a computer at all, and it looks more like annealing than adiabatic evolution.
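As a numerical illustration of the zero-temperature sweep (a toy single-spin simulation of my own, using pure unitary evolution; the Hamiltonian path and timings are illustrative choices):

```python
import math

def adiabatic_sweep(total_time=60.0, steps=600):
    """Drag a single spin from the ground state of -X toward the ground
    state of -Z by sweeping H(s) = -(1-s)X - sZ slowly from s=0 to s=1.
    Each step applies exp(-i H dt) exactly, via the closed form
    exp(-i H dt) = cos(r dt) I - i sin(r dt) (H / r) for traceless 2x2 H."""
    dt = total_time / steps
    # Start in |+> = (|0> + |1>)/sqrt(2), the ground state of -X
    a, b = 1 / math.sqrt(2), 1 / math.sqrt(2)
    for k in range(steps):
        s = (k + 0.5) / steps
        bx, bz = -(1 - s), -s           # H = bx*X + bz*Z
        r = math.hypot(bx, bz)
        c, d = math.cos(r * dt), math.sin(r * dt)
        ha = bz * a + bx * b            # (H psi)_0
        hb = bx * a - bz * b            # (H psi)_1
        a, b = c * a - 1j * d * ha / r, c * b - 1j * d * hb / r
    return a, b

a, b = adiabatic_sweep()
ground_overlap = abs(a) ** 2  # ground state of -Z is |0>; slow sweep => near 1
```

Sweeping much faster (a small total_time) visibly degrades the overlap, which is the adiabatic theorem in miniature.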

Comment #47 November 23rd, 2007 at 4:54 pm

Greg,

In case you were wondering, I completely agree (re: post #42). That’s why I lauded the efforts that you, Scott, and Umesh have made to keep the discussion honest. We also have a duty (IMHO) to perform the best science and technology possible — which means salvaging diamonds in addition to pointing out cubic zirconia.

Comment #48 November 23rd, 2007 at 4:56 pm

but the energy does not

I mean, rather, the entropy.

Comment #49 November 23rd, 2007 at 5:07 pm

Greg,

Great explanation of AQC. But… there are a couple of points where I’m not sure if I agree. Can you help me out?

1. Is an adiabatic algorithm really “a quantum variant of simulated annealing”? SA uses temperature, and therefore stochasticity, at a fundamental level (thermal fluctuations are essential to get you out of local minima), whereas ideal AQC is a deterministic evolution at zero temperature (until the final readout).

2. Re: “the energy functional changes, but the energy does not,” the energy itself can change, as long as you stay in the ground state, no?

3. [potentially ignorant question] Do you really have to measure the energy as you go along? I thought the adiabatic theorem kept you in the ground state with high probability anyway. And… if you do induce a Zeno effect by measuring energy at rapid intervals, doesn’t that actually let you run faster than the adiabatic threshold?

Pardon my ignorance, please.

Comment #50 November 23rd, 2007 at 5:08 pm

Oops. Re: my 2nd question, I should have checked for updates before I posted. Please ignore.

Comment #51 November 23rd, 2007 at 5:25 pm

Is an adiabatic algorithm really “a quantum variant of simulated annealing”?

Yes, I think so. There is first of all a classical variant of simulated annealing in which the energy functional also changes continuously. This is actually important in practice — sometimes you want to smooth the optimization objective in order to grease the annealing, until the end when you roughen the objective to exactitude. AQC is exactly the zero-temperature version of this generalization.

SA uses temperature, and therefore stochasticity, at a fundamental level (thermal fluctuations are essential to get you out of local minima), whereas ideal AQC is a deterministic evolution at zero temperature (until the final readout).

It certainly doesn’t use deterministic evolution, it uses quantum evolution. Quantum states in general are the non-commutative generalization of classical probability distributions. See my notes on this theme (posted on my home page). It is true that the state in AQC is non-stochastic in the sense that it has no entropy, but in every other respect it is just like an adiabatic transition in classical annealing.

2. Re: “the energy functional changes, but the energy does not,” the energy itself can change, as long as you stay in the ground state, no?

The entropy, rather. To be even more precise, an adiabatic process is one that does not exchange entropy with the environment.

3. [potentially ignorant question] Do you really have to measure the energy as you go along?

Yes, of course. If you evolve the Hamiltonian for which the system is supposed to stay in the ground state, you have to let the system know somehow. In the strict sense of algorithms, “you” (the classical controller) have to measure the Hamiltonian. In an adiabatic SPD, the evolving Hamiltonian could instead be measured “automatically” — i.e., by physics — but that’s not really different.

Comment #52 November 23rd, 2007 at 5:25 pm

OK, for those who are interested: the algebrization paper is now available from my papers page. Enjoy!

Comment #53 November 23rd, 2007 at 5:32 pm

Saving the best for last:

And… if you do induce a Zeno effect by measuring energy at rapid intervals, doesn’t that actually let you run faster than the adiabatic threshold?

Yes. The important point is that measuring the Hamiltonian is the part that carries a computational cost. It is the adiabatic analogue of thermal mixing steps in simulated annealing. The algorithmic work in either case is proportional to the ratio: rate of measurement or mixing / rate of change of H or T. In the case of AQC, the minimum of this ratio is inversely proportional to the energy gap (between the ground state and the next state). So the bad news is that the generic AQC scheme has an exponentially small energy gap, just as the generic simulated annealing scheme has an exponentially long mixing time.
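For reference, one textbook form of the sufficient condition behind this gap dependence (a standard statement, not specific to anything in this thread; the precise power of the gap varies between rigorous versions):

```latex
T \;\gg\; \max_{s \in [0,1]} \frac{\left| \langle E_1(s) \vert \, dH/ds \, \vert E_0(s) \rangle \right|}{g(s)^2},
\qquad g(s) \equiv E_1(s) - E_0(s),
```

so a minimum gap that shrinks exponentially with problem size forces an exponentially long runtime, matching the mixing-time obstruction in classical annealing.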

Comment #54 November 23rd, 2007 at 5:58 pm

Greg’s description of AQC is fairly accurate. Bill and I wrote a short introduction to AQC last summer, available at http://dwave.files.wordpress.com/2007/08/20070810_d-wave_quantum_annealing.pdf , which ties together the variety of ways in which free energy minimization can be used to drive computation.

Comment #55 November 23rd, 2007 at 6:43 pm

Actually, Scott, I have a question about your paper with Avi. In comparing two complexity classes X and Y, you go out of your way to give X and Y different oracles. What if instead you defined an oracle to be algebraic if its input is a pair (p,x), where p is a prime and x is a string over Z/p, and if its output is given by a low-degree polynomial that depends only on the length of x? Could you then say that you consider separations and inclusions relative to the restricted class of algebraic oracles?

Or, to the contrary, do you really need to give the two classes two different oracles?
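For readers following along, the “low-degree extension” ~A in question is easy to make concrete (a toy illustration with names of my own choosing, not the paper’s formalism):

```python
p = 101  # a small prime, chosen arbitrarily for illustration

def AND_bits(bits):
    """The Boolean 'oracle' A: AND of all input bits."""
    out = 1
    for b in bits:
        out &= b
    return out

def multilinear_ext(A, xs, p):
    """Evaluate the multilinear extension ~A at a point xs in (Z/p)^n,
    via the standard interpolation sum over all Boolean points z:
      ~A(x) = sum_z A(z) * prod_i (x_i z_i + (1-x_i)(1-z_i))  (mod p)."""
    n = len(xs)
    total = 0
    for z in range(2 ** n):
        bits = [(z >> i) & 1 for i in range(n)]
        term = A(bits)
        for xi, zi in zip(xs, bits):
            term = term * (xi * zi + (1 - xi) * (1 - zi)) % p
        total = (total + term) % p
    return total

# ~A agrees with A on Boolean inputs...
for x in (0, 1):
    for y in (0, 1):
        assert multilinear_ext(AND_bits, [x, y], p) == AND_bits([x, y])

# ...but is also defined at non-Boolean field points, e.g. ~AND(7, 13) = 91 mod 101
val = multilinear_ext(AND_bits, [7, 13], p)
```

An algebrizing lower bound has to contend with algorithms that may query ~A at such non-Boolean points, which is exactly what the IP = PSPACE machinery exploits.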

Comment #56 November 23rd, 2007 at 7:07 pm

Greg, I’ll just paste the explanation from pages 9-10 of our paper:

When we examine the above definition [of algebrization], two questions arise. First, why can one complexity class access the extension ~A, while the other class can only access the Boolean part A? And second, why is it the “right-hand class” that can access ~A for inclusions, but the “left-hand class” that can access ~A for separations?

One answer is that we want to define things in such a way that every relativizing result is also algebrizing. This clearly holds for [our definition]: for example, if C^A is contained in D^A, then C^A is also contained in D^~A, since D^A ⊆ D^~A. On the other hand, it is not at all clear that C^A ⊆ D^A implies C^~A ⊆ D^~A.

A second answer is that, under a more stringent notion of “algebrizing result,” we would not know how to prove that existing interactive proof results algebrize. So for example, while we will prove that PSPACE^A ⊆ IP^~A for all oracles A and extensions ~A of A, we do not know how to prove that PSPACE^~A = IP^~A for all ~A.

A third answer is that, for our separation results, this issue seems to make no difference. For example, in Section 5 we will construct oracles A,B and extensions ~A,~B, such that not only P^~A = NP^~A and P^~B ≠ NP^~B, but also NP^~A ⊆ P^A and NP^B ⊄ P^~B. This implies that, even under our “broader” notion of an algebrizing result, any resolution of the P versus NP problem will require non-algebrizing techniques.

We list as one of our open problems to prove (for example) that IP^~A = PSPACE^~A for all algebraic oracles ~A.

On the other hand, we also cite a survey article by Fortnow, where he does show that IP^~A = PSPACE^~A for all “algebraic oracles” ~A, but only under a much more subtle definition of “algebraic oracle” — where you first take a low-degree extension, then reinterpret the low-degree extension as a Boolean function, then take a low-degree extension of that Boolean function, etc. For our purposes (e.g., proving algebraic oracle separations), Fortnow’s definition seems much too difficult to work with.

Comment #57 November 23rd, 2007 at 7:45 pm

Dear Robin,

I usually try to avoid discussions on blogs. But I am happy to see that you understand the physics behind my talks and raise relevant questions.

The important points that I tried to address in my presentations, and I think you are among those who got them right, are three-fold:

1. The classical limit in physics is the large-temperature limit, not the long-time limit. There are many systems that stay in fully coherent superposition states forever as long as T is small. A good example is a superconducting condensate. The coherence of the condensate is not destroyed at long times or at large sizes.

2. The decoherence (T_2) time scale in qubits is just the time scale that makes the density matrix diagonal in the energy basis. It is not the time scale that makes the qubits classical. A density matrix that is diagonal in the energy basis can be non-diagonal in the computational basis, and can therefore still have superposition, entanglement, etc. Of course, in the absence of a strong Hamiltonian, since the energy spacings of the system are necessarily smaller than T, the system is in the classical limit. In such a case, as soon as the decoherence time has passed, the qubits become classical. This is the case for gate-model quantum computation, because there is no Hamiltonian to stabilize the state of the system.

3. In adiabatic quantum computation (AQC), on the other hand, such a Hamiltonian exists. Therefore in AQC it is irrelevant to ask whether the computation time is shorter or longer than T_2. The relevant questions, as you also mentioned, are whether there is a strong Hamiltonian (stronger than the noise) and how small the temperature is.

In my presentations I tried to answer these questions. The single-qubit data I showed was not meant to experimentally justify that our multi-qubit system is, e.g., in an entangled state (which is actually a very hard experimental question). Instead, I showed that we have a method to extract every term in the Hamiltonian experimentally, as well as the noise strength corresponding to each term. This way, you can easily see that the first condition, i.e., having a well-defined Hamiltonian, is satisfied. Second, I showed that we have a very clear way to determine the temperature of our sample, and that it is 20 times smaller than the energy gap between the ground state and excited state of our system.

This line of thought was described by Seth Lloyd in Eddie Farhi’s group meeting as “convincing evidence”, which I think was the highlight of the meeting, and I am surprised that Scott did not mention anything about it. When we were asked for further evidence, we asked the audience for suggestions for experiments. Isaac Chuang suggested raising the temperature, but the conclusion was that, since the relaxation time of our system is very short, one would end up measuring the thermal distribution of the final Hamiltonian, which doesn’t give you any information about the evolution. Once again Seth stated that there could be different levels of evidence, and that the evidence we have could be the lowest level but still an evidence.

Once again, I would really prefer to have this discussion via email or in person rather than on a blog. If you have any further questions, please feel free to send me an email. I will be happy to answer them.

Scott – You told me at MIT that you don’t want the public to be misinformed about quantum computation. I think you are misinforming them about what went on in Eddie Farhi’s group meeting. I understand that you are not a physicist, but there were physicists in the audience, and one of them, who understands adiabatic quantum computation better than anyone else in the world, is Seth Lloyd. I hereby request that you acknowledge to your audience that you heard Seth Lloyd, at least two times, describe our evidence as “convincing evidence”.

Comment #58 November 23rd, 2007 at 8:00 pm

Mohammad Amin: it is good to see your name in this discussion. According to Scott, you said that you were not involved in the Sudoku demonstration. Now, D-Wave is not all that big of a company, maybe a few dozen employees. Do you know who did the Sudoku demo?

If you would prefer to reply by e-mail, that is fine; my e-mail address is listed on my web page.

Comment #59 November 23rd, 2007 at 8:02 pm

Yes, Seth Lloyd was at the meeting, and he did indeed talk about different levels of evidence, with the results you presented being the lowest level but “still an evidence.” I apologize for omitting that from my narrative of the meeting; I didn’t know that you considered it the highlight.

Comment #60 November 23rd, 2007 at 8:08 pm

Scott: First of all, Seth Lloyd did not say “still an evidence” but said “convincing evidence”. Second, I don’t understand how this conclusion by Seth Lloyd, in a session that was all about the evidence we presented, could not be the highlight of the meeting.

Comment #61 November 23rd, 2007 at 8:24 pm

Greg,

Thanks for the clear explanation. However… I’m still skeptical on one point. You say that measurement is necessary because “If you evolve the Hamiltonian for which the system is supposed to stay in the ground state, you have to let the system know somehow.”

But — the Hamiltonian is exactly the thing that the system “knows”! If you change the Hamiltonian from H to H’, the system is darn well going to know about it; it’s going to start evolving as e^{-iH’t} instead of e^{-iHt}.

Now, as I recall the adiabatic theorem, its import is that you stay in an eigenstate of H provided that the transit rate is slower than the inverse gap. Nothing in there about nonunitary dynamics like measurement. Furthermore, if I let the system evolve via e^{-iHt} for a random and unrecorded time T >> 1/gap, then the result is a dephasing channel in the energy basis — which is precisely the effect of performing a measurement in the energy basis.

So, I’m having a lot of trouble buying the argument that you have to measure, simply because (1) AFAIK, proofs of the adiabatic theorem don’t require nonunitarity, and (2) I can simulate a measurement in the energy basis just by closing my eyes and waiting.

Are we possibly having a physics/CS miscommunication here, wherein you’re taking “Hamiltonian” to mean an abstract Hermitian operator rather than the actual operator that is instantaneously generating the evolution of your quantum computer? I’m aware that you can substitute repeated measurement of an operator H for unitary evolution according to H… but I can’t see where the measurement is necessary.

Comment #62 November 23rd, 2007 at 8:35 pm

Mohammad,

Thanks for your comments! I’ll look forward to the opportunity to chat with you at leisure sometime when our paths cross. For now, I’ll just make two very quick comments and leave it at that.

* I’m a little worried about how “quantum” and “classical” are getting tossed around (in many many places, not just your post!). For instance, there are [many] systems that are definitely nonclassical, yet do not support various quantum information tasks.

* T1 and T2 are well-defined for single qubits. For multiple qubits, however, it’s not so simple: it’s definitely not safe to say “The whole system will decohere in its energy basis after the single-qubit T2”. So, there’s another question (in addition to “How big is H?” and “How big is T?”), which is “Does the decoherence model scale?”

Thanks again for the talk at IQC!

Comment #63 November 23rd, 2007 at 9:36 pm

Are we possibly having a physics/CS miscommunication here

I would say so, although I meant to address both sides of it. First, you can use adiabatic QC as an algorithm with some qubits that don’t have a Hamiltonian. In this case you have to invent one and act it on the qubits.

Now, as you say, if you have a physical Hamiltonian H, letting it evolve by exp(-iHt) for an uncertain period of time has the same effect as measuring H. I completely agree with this mathematical point, but I interpret it differently: this evolution for a random period of time is the measurement of H. The environment is learning the value of H! If the environment measures H, then of course you don’t have to.

In my notes I define a quantum operation M called a hidden measurement. I.e., if you have some quantum state and I measure some operator H without telling you the value, what is the posterior state for you? As you say, the hidden measurement of H is exactly the time evolution exp(-iHt) rho exp(iHt) averaged over a long period of time. But there is also a somewhat subtle converse, which you can infer from Stinespring dilation or mutual information or by other means: regardless of how you arrive at the hidden measurement operation M, the environment does know the measured value.
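The mathematical point both sides agree on here is easy to check numerically (a toy two-level sketch of my own): averaging exp(-iHt) rho exp(iHt) over an unrecorded time kills the off-diagonal terms of rho in the energy eigenbasis, exactly as a hidden energy measurement would.

```python
import cmath
import random

random.seed(1)

# Work in the eigenbasis of H = diag(0, omega). Take rho = |+><+|, whose
# off-diagonal (coherence) entry is rho01 = 0.5. Under U(t) = diag(1, e^{-i omega t}),
#   (U rho U^dag)_{01} = rho01 * e^{+i omega t},
# so averaging over an unrecorded evolution time t (the "hidden measurement"
# of H) averages the coherence to nearly zero, while the diagonal entries
# of rho are untouched.
omega = 1.0
rho01 = 0.5
samples = 20000
avg_coherence = sum(
    rho01 * cmath.exp(1j * omega * random.uniform(0.0, 200.0))
    for _ in range(samples)
) / samples
# abs(avg_coherence) comes out tiny compared with the initial 0.5
```

The diagonal of rho (the energy populations) is what survives, which is precisely the statistics of an energy measurement whose outcome you never saw.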

Comment #64 November 23rd, 2007 at 9:58 pm

Greg,

Okay, I think we’re in agreement on all the major points. However, in the cause of scientific discourse, I’m going to press on one rather interesting interpretational detail.

You say “If the environment measures H, then of course you don’t have to.” This implies that somebody has to. What’s interesting in this case, though, is that the system remains in the ground state at all times — which means that everybody (well, you and the environment) knows exactly what that measurement will reveal before it’s made… and the environment never develops any mutual information with the system (as implied by the fact that the whole process is isentropic).

What this means is that you can derandomize the process. If you let the system evolve for a certain period of time (T) in between the N adjustments of H (by epsilon = 1/N), then as long as T is chosen so that the O(epsilon) errors in amplitude for the individual adjustments don’t add up coherently (i.e., don’t choose T to be too short, or really close to 2*Pi), the final probability of ending up out of the ground state is still O(N*epsilon^2).

This is the content of the adiabatic theorem, IIRC. Basically, the e^{-iHt} evolution takes you in very small spirals around a big slow trajectory for H(t) — which works even if you have perfect knowledge of the clock (which is what was acting as an environment in the hidden-measurement model).

Comment #65 November 23rd, 2007 at 11:29 pm

Av Avi knows best.

Comment #66 November 23rd, 2007 at 11:57 pm

I agree, Robin, that is an interesting point of interpretation. In general, if a quantum operation is a convex sum of unitaries, and if its application does not create mutual information for some adiabatic reason, then there should be a way to derandomize it, because you don’t learn anything from looking at which unitary term was employed. Neat. If I were looking for gems in the crap, then your remark certainly counts, in the context of an otherwise dreary conversation about D-Wave. ((1) Admittedly Scott’s new paper is another gem, rather more substantial than a good side remark. (2) Dreary though it may be, the D-Wave discussion could still be interesting for negative reasons.)
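Robin’s O(N*epsilon^2) bookkeeping can be checked in a toy model. The following is my own two-level sketch with an arbitrary interpolation and an arbitrary generic dwell time T (not anyone’s actual adiabatic schedule): step the Hamiltonian N times, evolve for time T after each step, and watch the leakage out of the ground state fall off roughly like 1/N.

```python
import numpy as np

def ground(M):
    # Ground state (lowest-eigenvalue eigenvector) of a Hermitian matrix.
    E, V = np.linalg.eigh(M)
    return V[:, 0]

def U(M, t):
    # exp(-i M t) for Hermitian M, via eigendecomposition.
    E, V = np.linalg.eigh(M)
    return V @ np.diag(np.exp(-1j * E * t)) @ V.conj().T

sx = np.array([[0, 1], [1, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)

def H(s):
    # Linear interpolation from -X to -Z; the ground state slowly rotates.
    return -(1 - s) * sx - s * sz

def leak(N, T=7.3):
    # N sudden steps of size epsilon = 1/N, with dwell time T in between;
    # for generic T the per-step errors add incoherently.
    psi = ground(H(0))
    for k in range(1, N + 1):
        psi = U(H(k / N), T) @ psi
    g = ground(H(1))
    return 1 - abs(np.vdot(g, psi)) ** 2

print(leak(25), leak(200))  # leakage shrinks roughly like 1/N
```

Each step leaks amplitude O(1/N), so the total leaked probability is on the order of N * (1/N)^2 = 1/N, as Robin says.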

“which means that everybody (well, you and the environment)”

In these discussions, there are the qubits, there is you, and then the environment is everybody else.

Comment #67 November 24th, 2007 at 3:45 am

Greg, I think your interpretation of Hamiltonians as measurement is a bit odd, and I don’t understand how you are finding measurements in a closed (unitary) system.

Scott, I can’t believe you used “algebrize”, aack! What about A-relativize for short?

Comment #68 November 24th, 2007 at 9:28 am

Sam: Yeah, I thought of that. But

(1) I think that computer scientists have been much too eager to coin acronyms, and that this has damaged the public perception of our field relative to physics (which has cool names like “quark”, “supersymmetry”, “black hole”…),

(2) we already use “irrelativize” for another purpose, and

(3) we didn’t have the luxury of proposing a word in isolation, and phrases like “non-A-relativizing techniques” grated on my ears.

Comment #69 November 24th, 2007 at 12:12 pm

“Greg, I think your interpretation of Hamiltonians as measurement is a bit odd, and don’t understand how you are finding measurements in a closed (unitary) system.”

To answer the second question first, the whole universe is a closed unitary system as far as we know. Certainly the laws of chemistry can be placed in a closed unitary system, and we human beings are chemical entities. So it isn’t my proposal to find measurements in a closed unitary system; it’s implicit in the postulates of quantum mechanics. It just isn’t explained very well. My notes are meant to address that concern, and they begin to do that, but sadly they are far from finished.

In a simplified model, if you apply a unitary operator to a joint system A tensor B that entangles their state, then you can typically say that A measured B. It really comes to the same thing, because if A is then subject to decoherence and rendered classical, she can still remember a measured value.

Now, there is a more subtle version of this in which elapsed time is a quantity that can be measured, or an uncertain variable and therefore a source of decoherence. I agree that it’s odd, but it is indisputably part of the story. If you include this source of decoherence, then the world does indeed measure the Hamiltonians of its objects eventually.
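The “entangling unitary as measurement” picture can be made concrete with the smallest possible example (my own toy model, not anything from the discussion above): a CNOT whose target qubit plays the environment. After the interaction, the reduced state of the system is exactly the dephased, hidden-measurement state.

```python
import numpy as np

# System qubit in superposition; environment ("pointer") qubit in |0>.
plus = np.array([1, 1]) / np.sqrt(2)
zero = np.array([1, 0])
psi = np.kron(plus, zero)  # system (x) environment

# CNOT with the system as control: the environment "records" the system's
# Z value, i.e. the environment measures the system.
CNOT = np.array([[1, 0, 0, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1],
                 [0, 0, 1, 0]])
out = CNOT @ psi                      # the Bell state (|00> + |11>)/sqrt(2)
rho = np.outer(out, out.conj())

# Reduced state of the system: trace out the environment.
# Index order after reshape is [s, e, s', e'], so trace over axes 1 and 3.
rho_sys = np.trace(rho.reshape(2, 2, 2, 2), axis1=1, axis2=3)

print(rho_sys)  # |+><+| dephased in the Z basis, i.e. the identity / 2
```

The environment can later “remember” the Z value even if it decoheres, which is the sense in which A measured B.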

Comment #70 November 24th, 2007 at 12:18 pm

“What I’m finding, more and more, is that the arguments you and I find persuasive are simply not persuasive to most people (even though they should be).”

Try to improve the arguments! My hunches regarding D-wave may very well be similar to yours, Scott, but the initial arguments against D-wave were not persuasive. Worse, some of them were generic enough to apply to any company with ordinary behavior trying to build quantum computer technology or other similar daring endeavors.

“It would be as if a company claimed it had an algorithm that could compress a random string, you and I pointed out that 2**n > 2**(n-1), and everyone dismissed that as an irrelevant piece of theory.”

If “random strings” refer to uniformly distributed 0-1 strings of length n then you are right. But “random strings” can have other meanings. It can be a random string drawn from the internet.
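For fixed-length uniform strings, the counting fact behind Scott’s analogy can be spelled out in a few lines (my own illustrative snippet):

```python
from itertools import product

# Pigeonhole: there are 2**n strings of length exactly n, but only
# 2**0 + 2**1 + ... + 2**(n-1) = 2**n - 1 strings of length < n.
# So no lossless compressor can shorten every n-bit input.
n = 4
inputs = [''.join(bits) for bits in product('01', repeat=n)]
shorter = [''.join(bits) for k in range(n) for bits in product('01', repeat=k)]
print(len(inputs), len(shorter))  # 16 15
```

As Gil notes, this argument says nothing about strings drawn from a non-uniform source, which is where real compressors earn their keep.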

“All they want to know is: how well does the compression algorithm work on my data? ”

This is quite reasonable.

“And you try to explain to them that if a company could be so egregiously, howlingly wrong about such a fundamental point, then there’s little reason to expect their algorithm will do anything useful for anyone.”

Maybe D-wave was egregiously, howlingly wrong on some crucial theoretical points. It can be useful to point them out. But the claim/hope that QC will be able to solve NP-complete problems, which found its way into the media, is not as terrible as 2**(d-1) >= 2**d and is not that relevant to D-wave’s activities. Moreover, they quickly backed away from it.

“They don’t understand that you can refute a complicated theory by attacking at its weakest point; they think you have to understand every detail first.”

Well, the reality is that you cannot refute a complicated theory (especially a developing theory) by attacking at its weakest point. But sure enough, attacking at its weakest point is a good strategy.

Comment #71 November 24th, 2007 at 1:12 pm

This is addressed to John Sidles. John, I’m studying algebraic geometry, so I find your speculation that there might be a purely geometric formulation for P!=NP intriguing. That being said, why the recommendation for Harris’ “Algebraic Geometry”? I happen to have a copy, and I don’t see anything about it that is particularly noteworthy. A good, solid text, of course. Am I missing something? Are there any sections worthy of special attention in this context? For the beginner, I’d recommend something like Fulton’s “Algebraic Curves” for a general concise overview of the relationship between algebraic entities like radicals and geometric properties such as genus.

Comment #72 November 24th, 2007 at 1:40 pm

Perhaps I can put a sharper edge on Gil’s argument by pointing out a rather obvious point:

words mean different things to different people. D-Wave is in an unfortunate position between Scylla (the scientific community) and Charybdis (business & venture capital). The Tech Review Q&A with Jason Pontin (see link in post #33) makes this very clear. Terms like “approximate solution” and “solving NP-complete problems” are being used “as businesspeople would use it, and not as computer scientists use it” (to quote Pontin). Some would say that the same is true for D-Wave’s use of “fault-tolerant” and “quantum computer”.

The easy way to deal with this is to simply consign them to the deepest pits of scientific wrongness. After all, if the device does end up providing really good, fast, approximate solutions to TSP, we’re entirely justified in saying “That’s irrelevant; it’s not a QC and it doesn’t solve NP-complete problems.”

Or, we could try to clarify these nomenclatural discrepancies in a way that’s as educational as possible for non-specialists. I think both the Tech Review Q&A and several of Scott’s posts do a decent job of this. But we could do still better. I think the world is still lacking good, readable answers to “If a computing device isn’t classical, is it a quantum computer?” and “What sorts of useful problems could this thing help solve, without violating any theorems or widely-believed conjectures?”

In other words, instead of just explaining what these terms (e.g. “random string” or “approximate solution”) mean (rigorously) to a computer scientist, we could also analyze their relationship with the definition a man-on-the-street would assign.

And… Geordie, I know D-Wave has posted high-level answers to these questions. But I can’t afford to take them as authoritative, only because you have an explicit financial interest in the answers.

Comment #73 November 24th, 2007 at 3:17 pm

ScentOfViolets is a *great* on-line name!

I am definitely not an expert on algebraic geometry … I use Harris’s textbook precisely because it is a “good, solid text.”

For a quantum system engineer, it is quite a bit of fun to read Harris, and recognize old friends under unfamiliar names … our group is preparing a review article that does this pretty systematically.

For quantum simulation to be computationally feasible, we need to describe quantum trajectories in some compressed format … the typical approach in engineering is to project the trajectory onto some lower-dimensional manifold.

So it is natural to ask, what is the geometry of these manifolds? For reasons of algorithmic efficiency, the manifolds used in practice invariably are algebraic varieties … hey, that’s why it’s a good idea to read Joe Harris’ book!

Following up these ideas leads to a review article of (seemingly) infinite length. And there is one loose end left over … a cryptographic loose end … which (like every other carbon-based life form) we wrote up as a 2008 STOC submission (plus Victoria BC is such a beautiful place to visit).

Since “geometric cryptography” is well outside our QSE Group’s main line of research, I don’t mind posting the abstract here, as an example of how naturally the subjects of algebraic geometry and quantum information theory can be merged. The basic ideas are reasonably accessible to undergraduates.

——–

This article describes algorithms for cryptographic key exchange that can be implemented by numerical quantum simulation. We consider a quantum system that is described by a total Hamiltonian of which only Alice knows half, and only Bob knows the other half. We further stipulate that the system is subject to Markovian noise processes, of which (again) only Alice knows half, and only Bob knows half. And finally, we stipulate that Alice’s Hamiltonian and noise processes have a sparse representation in a basis that is known only to Alice, and Bob’s Hamiltonian and noise processes have a sparse representation in a basis that is known only to Bob. Starting with a random quantum state, Bob and Alice collaborate to evolve a quantum trajectory, in alternating infinitesimal steps, that is governed by the net Hamiltonian and noise model, exactly as though they were collaborating in simulating the trajectory for purely engineering purposes. This quantum trajectory is assumed to be public knowledge. To extract a shared secret key from the public trajectory, Alice projects the trajectory onto a Kähler submanifold that is an algebraic variety of product-sum form in her basis set; Bob similarly projects the trajectory onto a Kähler submanifold that is an algebraic variety of product-sum form in his basis set. Alice therefore knows the difference vectors between the public trajectory and her private Kähler projection; these difference vectors are her key. Bob similarly knows the difference vectors between the public trajectory and his Kähler projection, and these difference vectors are his key. Provided that both Alice and Bob choose to model their noise processes as covert measurement processes, it can be shown that these two difference vector trajectories are anti-correlated, and these anti-correlated trajectories constitute Alice and Bob’s shared information.
This method is of interest because it establishes the geometric equivalence of a set of problems in physics, engineering, and information theory.

————-

There’s not a lot more to tell … the body of the article is mainly references, and will pose no mysteries to anyone who has read Nielsen and Chuang on the one hand, and (say) Joe Howard’s book on the other hand.

Just to remark, the resulting scheme bears a family resemblance to Ajtai-Dwork lattice cryptosystems.

More generally, from a geometric point of view, can’t pretty much *any* algorithm be regarded as a projection, from a large-dimension space of random strings, to a low-dimension space of proofs?

So perhaps, ScentOfViolets, with P!=NP having been naturalized, relativized, and now algebrized (which would be the best Bob Dylan song ever), it will fall to your student generation to geometrize it. Good luck!

Comment #74 November 24th, 2007 at 3:45 pm

Mohammad: While I have great respect for Seth, the fact that he found an argument compelling isn’t in and of itself a reason why we should. Science has no authorities, and all that.

As a result, I can’t really see the point in arguing over the exact phrase.

Comment #75 November 24th, 2007 at 4:16 pm

Many thanks for your kind reply and your pointers. I take my screen name from the nineteenth century diffusion problem, which I am told by my old Thermo professor was how it was colloquially known at the time.

Comment #76 November 24th, 2007 at 4:55 pm

Joe: Thank you! Seth was one of ten or more people who expressed their opinions at the meeting, and in my zombified state I failed to take notes of everything that was said.

Incidentally, “still an evidence” was actually a quote from Mohammad’s own comment. I don’t remember Seth’s exact words, but am perfectly willing to believe he used the phrase “convincing evidence.”

Comment #77 November 24th, 2007 at 7:33 pm

Robin –

“Terms like “approximate solution” and “solving NP-complete problems” are being used “as businesspeople would use it, and not as computer scientists use it” (to quote Pontin). Some would say that the same is true for D-Wave’s use of “fault-tolerant” and “quantum computer”.”

I doubt business people would talk in such terms at all; they would rather talk in terms of specific applications, like protein folding, for example. But the question is no longer about the words used; it’s about probable blatant dishonesty in their quantum computing “demonstrations”, such as Sudoku. And it has recently come to my mind: why would any high-tech start-up run such an intensive and costly PR campaign years before the technology is to be ready for customers? They could have been quietly filing patents and doing demos for their investors, and announced their breakthrough when everything was already developed and tested. But no, they are getting loud now, when any real question about performance can be answered by mentioning that it’s “only 16 qubits yet”, so wait for the miracle a little bit more… And I have one possible explanation for that: somebody wants to get acquired (or even publicly traded) long before the technology can be really tested… Somebody wants to wash their hands…

Comment #78 November 24th, 2007 at 8:01 pm

Why does everybody keep going on about the sudoku? Just because the guys at the talk (who are physicists, not computer scientists) couldn’t write down, off the top of their heads, the formula that was used to map the thing onto their hardware?

To me this seems the most boring and simplest of their demos. Their system solves optimization problems subject to constraints. Sudoku: optimization = find right answer, constraints = existing numbers on the grid combined with basic algebra. Only that in this case an approximate answer isn’t good enough. But with a sufficiently small sudoku, it might even fit into their 16 qubits without being broken into smaller pieces by classical hardware first.

All those demos were just supposed to show how easy it is to port applications to their hardware, not how much faster they become (which they didn’t anyway, and in the case of sudoku likely never will, because it’s very efficiently solvable on classical computers).

Comment #79 November 24th, 2007 at 8:23 pm

sanktnelson, I think because variable-grid-size Sudoku is known to be NP-complete, and NP-complete problems aren’t known (and are unlikely) to be efficiently solvable by quantum computers (BQP vs NP is an open question).

If Quantum Computers were able to solve NP-Complete problems efficiently they would be worth that much more, so the demo at present is misleading (intentionally or not) about the capabilities of Quantum Computers, even though there’s nothing amazing about solving a few instances of a 9*9 sized Sudoku (my kitchen sink can do that).

That’s my interpretation of all this.

Comment #80 November 24th, 2007 at 9:08 pm

hmm, for some reason I was under the impression that sudoku is not np-complete, but was just thrown in because it’s a mathematical problem that people actually can relate to. Well, seems I was wrong.

That doesn’t change my original point though: All their demos were not to show efficient solving of NP-complete problems, or efficient solving of anything, for that matter. It just shows how easily different problems (with some underlying mathematical similarity) can be adapted to use their hardware. The latest demo at SC07 is completely in line with that. And even Geordie Rose has repeatedly stated that the final test of their technology will be when they are actually faster at solving anything. Which is not at 16 and not at 28 qubits. Maybe at 512, but who knows?

Comment #81 November 24th, 2007 at 9:36 pm

“Why does everybody keep going on about the sudoku? Just because the guys at the talk (who are physicists, not computer scientists) couldn’t write down the formula that was used to map the thing onto their hardware from the top of their head?”

No, it’s because no one has proposed any meaningful way to map a full-sized Sudoku onto their 16-bit hardware. 16 bits is simply too few to provide any real help. And it’s because the company CTO, who did the demo, refuses to explain it. It’s as if he pulled a barge pole out of a top hat and insisted, without explaining how, that it’s new technology rather than old-fashioned cheating.

The only relevance of Amin and Berkley is that they said that they know nothing about it. Someone at D-wave must know about it, but apparently not these two guys.

“But with a sufficiently small sudoku, it might even fit into their 16 qubits without being broken into smaller pieces by classical hardware first.”

It was a full-sized Sudoku, and no one has even proposed a way to break it into smaller pieces so that 16 bits provide any real help. These are not even 16 fully programmable bits; they are in a fixed Hamiltonian that can only be used with a penalty factor. It is difficult to use them for more than 6 bits. Again, Rose says in the video, “the quantum computer spits out the answer,” but no one has proposed an honest meaning of that statement.

Comment #82 November 24th, 2007 at 9:37 pm

Scott, you passed up a good chance to introduce “dramatization” and “dramatize” into TCS lingo.

“It can be shown that a solution to NP ⊄ P cannot dramatize, since NP^A~ ⊆ P^A, where A is Merlin and A~ is his drama-queen of a daughter Mary obtained by taking his low-degree extension and… The prover (Merlin) then wants to convince Arthur that f(x) = 0 for any x in {0, 1}^n but, and here’s the catch, he’s allowed to make any use of Mary he wants though at the risk of having her talk his ear off at no one’s benefit…”

YOUR LOSS!

Comment #83 November 24th, 2007 at 10:26 pm

The SudoQ mystery… solved:

I have never posted on Scott’s blog before and I am not even a regular reader, but I feel obligated to post this time because of all the confusion surrounding the Sudoku demo. I am one of the software engineers working in the applications group at D-Wave and yes… I did the sudoku demo!

Before going further, although irrelevant, I cannot stop myself from pointing out that out of all the amazingly smart people posting on this blog, just one (sanktnelson, Comment #78) actually comprehended the purpose of the sudoku demo.

Anyways… I just wanted to say (although I know that you guys are not just going to take my word for it) that the demo was genuine and is not all that hard if you think about it. If you remember from Geordie’s February demo, the 16-qubit QC is able to solve small maximum independent set (MIS) problems. So all I did was convert the sudoku into an MIS problem and devise a strategy to break large MIS problems into chunks that the 16-qubit hardware can accept. There are numerous (not necessarily smart) ways of doing that. This combination of algorithmic breaking-down and use of hardware to solve small problems is perhaps what Geordie was referring to as the “hybrid” approach.

Note that this also explains why Amin and Andrew may not know about the sudoku demo. We in the applications group view the hardware as a black box (although it is getting more and more gray) that can solve MIS problems. We don’t know how the hardware guys solve it, and the hardware guys don’t know what kinds of problems we are using the hardware for. Displaying this ease of use was the primary intent of the demo (the table planner and the sudoku), that is, showing how real-world problems can be solved on the hardware even though they are different from the native language of the hardware, which is MIS.
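For the curious, here is one textbook way to phrase a Sudoku as an MIS instance. This is my own illustrative sketch, not necessarily (indeed, given the NDA discussion, presumably not) the encoding D-Wave used: one vertex per (row, col, value) assignment, with edges joining conflicting assignments, so that a solved grid is an independent set with one vertex per cell. For a 9×9 grid this gives 9^3 = 729 vertices; a 4×4 grid keeps the example small.

```python
from itertools import combinations

N, B = 4, 2  # 4x4 Sudoku with 2x2 boxes

# One vertex per candidate assignment (row, col, value).
vertices = [(r, c, v) for r in range(N) for c in range(N)
            for v in range(1, N + 1)]

def conflict(u, w):
    (r1, c1, v1), (r2, c2, v2) = u, w
    if (r1, c1) == (r2, c2):
        return v1 != v2              # at most one value per cell
    same_unit = (r1 == r2 or c1 == c2 or
                 (r1 // B == r2 // B and c1 // B == c2 // B))
    return same_unit and v1 == v2    # no repeated value in a row/col/box

edges = {frozenset((u, w)) for u, w in combinations(vertices, 2)
         if conflict(u, w)}

# A valid 4x4 solution: its 16 triples form an independent set.
grid = [[1, 2, 3, 4],
        [3, 4, 1, 2],
        [2, 1, 4, 3],
        [4, 3, 2, 1]]
chosen = [(r, c, grid[r][c]) for r in range(N) for c in range(N)]
assert all(frozenset((u, w)) not in edges
           for u, w in combinations(chosen, 2))
print(len(vertices), len(edges))
```

Clue cells can be handled by deleting the vertices they rule out, which is presumably part of what any front end would do before the hardware ever sees a subproblem.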

Comment #84 November 24th, 2007 at 10:38 pm

“I did the sudoku demo!”

All right, this is a start. I am going to guess that you are Varun Jain?

“So all I did was convert the sudoku into an MIS problem and devise a strategy to break large MIS problems into chunks that the 16-qubit hardware can accept.”

This too is a start, but there is a lot left to explain. You converted the Sudoku into an MIS problem. So the Sudoku became a graph, with how many vertices? A straight conversion might use 729 vertices, to represent the 9 possible values in each cell, minus perhaps the cells that are set in advance. Was that your conversion? What was the graph and how many vertices did it have?

Now, an MIS problem in the 16-qubit computer would have to be a subgraph of the 4×4 grid, right? How many vertices did each of these chunks have? And, the central question: How do you convert the Sudoku graph into chunks?

Comment #85 November 24th, 2007 at 10:41 pm

Let me add also that if as you say, the conversion was “not all that hard”, then the simplest way to clear the cloud of suspicion entirely would be to post the software to the Internet. If it’s not hard, then it’s unlikely to contain any trade secrets.

Comment #86 November 24th, 2007 at 10:51 pm

Greg, I don’t know if you have, but if you have worked in any commercial industry, then I am sure you know NDAs and why I, or anyone at D-Wave, cannot post the software or details about the software online. What I can tell you is that there are numerous ways, based on branch-and-bound, to reduce problems like SAT and MIS into small, manageable chunks. In fact, branch-and-bound is the basis of the most successful SAT solvers.
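For readers unfamiliar with the general idea (and with no pretense of matching D-Wave’s proprietary code), here is a schematic branching MIS solver in which any subproblem of at most K vertices is handed to a black-box solver, with brute force standing in for the 16-qubit hardware. A real branch-and-bound would also prune using bounds, which is omitted here for brevity.

```python
from itertools import combinations

K = 4  # subproblems of at most K vertices go to the "hardware" stand-in

def mis_oracle(vertices, edges):
    # Brute-force MIS on a tiny graph: the black-box solver.
    for r in range(len(vertices), 0, -1):
        for subset in combinations(vertices, r):
            if all(frozenset(p) not in edges
                   for p in combinations(subset, 2)):
                return list(subset)
    return []

def mis(vertices, edges):
    if len(vertices) <= K:
        return mis_oracle(vertices, edges)
    v = vertices[0]
    nbrs = {u for u in vertices if frozenset((u, v)) in edges}
    # Branch 1: exclude v.
    without_v = mis([u for u in vertices if u != v], edges)
    # Branch 2: include v, so drop all of v's neighbours.
    rest = [u for u in vertices if u != v and u not in nbrs]
    with_v = [v] + mis(rest, edges)
    return max(without_v, with_v, key=len)

# 6-cycle: the maximum independent set has 3 vertices.
verts = list(range(6))
cycle = {frozenset((i, (i + 1) % 6)) for i in range(6)}
print(len(mis(verts, cycle)))  # 3
```

The objection on the table, of course, is not whether such decompositions exist, but how much of the work the classical part does when the oracle is capped at 16 (or effectively fewer) bits.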

Comment #87 November 24th, 2007 at 10:53 pm

The term PseudoKu just occurred to me; I wish I had been able to use it earlier.

Comment #88 November 24th, 2007 at 11:03 pm

It’s not all that hard, but there is an NDA so you can’t explain it? Gee, now it is back to sounding bad. Your employer could easily release you from that NDA for a question as simple as basic MIS conversions. Besides, it is a particularly aggressive use of NDA if the software is just a demo intended for journalists.

“What I can tell you is that there are numerous ways based on branch-and-bound to reduce problems like SAT and MIS into small, manageable chunks.”

I know that. But I also estimated that these “chunks” are so small that they are trivial, and that your software front end would then be doing all of the real work. That would not count as an honest demonstration. As I said, it’s like making an omelette with the help of a four-year-old. It’s not honest to say that he made the omelette, if all that he actually did is fetch the eggs and onions.

I also said that I was not there and I do not know what really happened. Either you can explain it without violating NDA or you can’t.

I grant that it is a good first step that you claim authorship.

Comment #89 November 24th, 2007 at 11:51 pm

I’m going against my better judgement in interjecting here, but…

“What I can tell you is that there are numerous ways based on branch-and-bound to reduce problems like SAT and MIS into small, manageable chunks. In fact, branch-and-bound is the basis of the most successful SAT solvers.”

That seems a bit condescending. I’m pretty sure most people reading this blog are aware that classical heuristics exist for approximating solutions to NP-hard problems. I don’t see how mentioning these comparatively elementary facts adds anything to discussion of how D-Wave has achieved quantum speedups over the extant methods.

Comment #90 November 25th, 2007 at 12:04 am

Concatenating one last bit to my post:

“…or whether the quantum hardware had any non-trivial role in finding the solution.”

Comment #91 November 25th, 2007 at 12:47 am

Greg, the software at D-Wave is a combination of state-of-the-art and numerous novel ideas. I would be very surprised if anyone were authorised to publish it online or talk about its details. But your omelette analogy is a little miscued. Here, the four-year-old is not just fetching the eggs and onions, but making a miniature omelette, so to speak… hence giving evidence that when grown up, he/she will be able to make a full-fledged omelette (sorry… couldn’t think of anything better).

Tyler, I never meant to even suggest anything “quantum”. I was just explaining how the sudoku was processed before it was sent to the hardware. To repeat, I am a member of the applications group and I will not even pretend to know how D-Wave’s hardware works. In fact, my knowledge of quantum processing, AQC and physics is no more than that of your average computing science grad. And I don’t know where you even got the idea that D-Wave is claiming to get quantum speedups over existing methods. As far as I know, they are NOT claiming any such thing. In fact, from what I remember, in the February demo Geordie mentioned that currently the hardware is about a 100 times slower than conventional machines. Anyways, it’s not my intent nor my responsibility to comment on the hardware aspect. I just wanted to try to clear up some confusion about the sudoku demo from the software perspective.

Comment #92 November 25th, 2007 at 1:02 am

Varun,

First, I agree that I should have been more careful with my objection. D-Wave isn’t claiming quantum speedups, at least not yet. But they did hype the sudoku demo as a demonstration of “the first commercially viable quantum computer.” It would seem that if the device in question didn’t do anything that differed appreciably from the already-developed classical methods, then doing such a thing is a bit disingenuous. I think that this is somewhat close to Greg’s objection, but he can correct me if I’m wrong.

Comment #93 November 25th, 2007 at 1:18 am

“Greg, the software at D-Wave is a combination of state-of-the-art and numerous novel ideas.”

No, you said it was “not all that hard if you think about it”; now you say that it is state-of-the-art and novel. From the beginning, the demonstration looked dishonest, while this looks like a smokescreen.

How sophisticated can these prep methods really be, and yet allow the conclusion that a quantum computer solved the Sudoku? Because the outside wisdom is that solving a Sudoku by computer is only moderately complicated, on the level of a student project in an algorithms course. The more sophisticated the preprocessing is, the less work is left for those 16 qubits. Even with off-the-shelf optimization methods, no one has proposed a way in which the contribution from the qubits is more than trivial.

If you wrote the code, then you would know whether the 16 qubits contributed trivially or non-trivially to the solution of that Sudoku. But you keep coming back with evasive answers. It was supposed to be an important demonstration, worthy of newspaper reports. But now the central question of how the qubits were actually used is walled off by NDA. We’re supposed to guess from hints because it’s “not all that hard”, but on the other hand it’s hard enough that it’s proprietary.

It occurs to me that, if you really are Varun Jain, then you have almost your whole career ahead of you, and you don’t deserve the task of dishing up evasions. It’s good that you came forward, but Rose shouldn’t make you cover for him. As I keep saying, since I wasn’t there, maybe there is an innocent explanation. But innocent or not, you deeply deserve permission to tell us what it is.

Comment #94 November 25th, 2007 at 1:24 am

“It would seem that if the device in question didn’t do anything that differed appreciably from the already developed classical methods”

The objection is as follows: Clearly, and as Rose and Jain have both admitted, they need a classical computer to convert the Sudoku into subproblems that fit into 16 or fewer bits. In what way did this non-quantum preprocessor do less than all of the work to solve the Sudoku? Never mind what the qubits did. Even if the qubits stood up and tap danced, it would not count as a quantum solution if the preprocessor’s work was tantamount to a full solution.

Comment #95 November 25th, 2007 at 2:16 am

“If you wrote the code, then you would know whether the 16 qubits contributed trivially or non-trivially to the solution of that Sudoku. But you keep coming back with evasive answers.”

I thought I was clear in my answer, but if that’s the exact sentence you are looking for, then yes, the 16 qubits did contribute non-trivially.

“It occurs to me that, if you really are Varun Jain, then you have almost your whole career ahead of you, and you don’t deserve the task of dishing up evasions. It’s good that you came forward, but Rose shouldn’t make you cover for him.”

I am sorry… but to suggest that Geordie might be impersonating me or asking me to cover for him is, simply put, outrageous.

It’s Saturday night and I did not have much to do, so I was reading the blog and decided to contribute because of the accusations made against the sudoku demo specifically, as I had contributed to it. I have shared whatever I could, and it’s unfortunate that I could not help answer your questions.

Comment #96 November 25th, 2007 at 2:19 am

Obviously, they knew the solution to the Sudoku in advance, since it was a trivial problem. So it is not going to be easy for them to argue that the “quantum computer” solved it. Perhaps one can make a similar objection to factoring 15 = 3 * 5 — without looking very closely at the NMR pulse sequence and techniques, you’d have no idea whether the quantum computer had really solved the problem. Here all those kinds of details are private, plus there is an additional layer of hype, plus the “quantum computer” model seems itself to be a bit vague. Myself, I’d never be convinced until I saw all those details, and the chance of that is apparently zero.

On the other hand, D-Wave claims that they have a very modular system in which the “quantum computer” is a black box. So it should be possible to reveal the necessary details in the reduction without compromising the main secrets.

Certainly, tap-dancing qubits would convince me (of anything).

Comment #97 November 25th, 2007 at 2:38 am

“I thought I was clear in my answer, but if that’s the exact sentence you are looking for, then yes, the 16 qubits did contribute non-trivially.”

So you say, but no one has plausibly described such a non-trivial contribution. We all understand that 16 bits can in principle do an MIS on a very small graph, but that does not look like a non-trivial contribution to solving a Sudoku.

But if you want to just leave it that, that you as the programmer can vouch that the 16 qubits did no more than solve a sequence of MIS problems that is vastly smaller than the Sudoku, that is at least a partial (and negative) answer.

“to suggest that Geordie might be impersonating me or asking me to cover for him is, simply put, outrageous.”

I’m not suggesting either. I’m saying that the Sudoku demo looks like a shabby exaggeration, even more so if it is locked up in non-disclosure agreements. I’m saying that it’s really Rose’s responsibility and you shouldn’t imitate his behavior, whether or not he asked you to.

Comment #98 November 25th, 2007 at 2:48 am

“Perhaps one can make a similar objection to factoring 15 = 3 * 5 — without looking very closely at the NMR pulse sequence and techniques, you’d have no idea whether the quantum computer had really solved the problem.”

The difference is that 15 = 3*5 is an honest representation of that quantum computer’s capabilities. They didn’t claim that their computer can factor a 50-digit number with “help” from a classical preprocessor. They also certainly shouldn’t lock up their NMR sequences with non-disclosure agreements; and as far as I know, they didn’t.

Comment #99 November 25th, 2007 at 2:58 am

“But if you want to just leave it that, that you as the programmer can vouch that the 16 qubits did no more than solve a sequence of MIS problems that is vastly smaller than the Sudoku, that is at least a partial (and negative) answer.”

umm… if you have to solve an MIS problem with X nodes and you have a solver that can solve MIS problems with Y (

Comment #100 November 25th, 2007 at 3:00 am

~REPOSTING~

“But if you want to just leave it that, that you as the programmer can vouch that the 16 qubits did no more than solve a sequence of MIS problems that is vastly smaller than the Sudoku, that is at least a partial (and negative) answer.”

umm… if you have to solve an MIS problem with X nodes and you have a solver that can solve MIS problems with Y (Y is much less than X) nodes, then I don’t see any other way but to use the solver to solve a sequence of Y-node MIS problems and re-construct the answer to the original X-node problem. If you have a way to solve the X-node problem with the Y-node solver in one shot, then you may want to talk to the Clay Math people about your million dollars :/
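[Editorial sketch.] For concreteness, the kind of protocol under discussion (a classical outer loop carving an X-node MIS instance into pieces small enough for a Y-node black-box solver) can be sketched as follows. This is purely a hypothetical illustration: the chunking strategy, the greedy reconstruction, and the brute-force “small solver” standing in for the 16-qubit hardware are all mine, and the result is a heuristic independent set, not an exact maximum one.

```python
from itertools import combinations

def small_mis(nodes, edges):
    """Brute-force maximum independent set on a tiny graph
    (a stand-in for whatever the Y-node hardware solver does)."""
    edge_set = {frozenset(e) for e in edges}
    for size in range(len(nodes), 0, -1):
        for subset in combinations(nodes, size):
            if all(frozenset(p) not in edge_set
                   for p in combinations(subset, 2)):
                return set(subset)
    return set()

def chunked_mis(nodes, edges, chunk=4):
    """Greedy heuristic: carve the X-node graph into <=chunk-node
    pieces, solve each piece with the small solver, and keep only
    vertices that stay independent globally.  This illustrates the
    shape of a 'classical outer loop + tiny solver' protocol; it
    does not return an exact maximum independent set."""
    edge_set = {frozenset(e) for e in edges}
    chosen = set()
    nodes = list(nodes)
    for i in range(0, len(nodes), chunk):
        piece = nodes[i:i + chunk]
        sub_edges = [e for e in edges if set(e) <= set(piece)]
        for v in small_mis(piece, sub_edges):
            if all(frozenset((v, u)) not in edge_set for u in chosen):
                chosen.add(v)
    return chosen
```

Note how much of the work lives in the classical outer loop; the “solver” is only ever consulted on trivially small subproblems, which is exactly the worry being raised in this thread.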

Comment #101 November 25th, 2007 at 3:07 am

“I dont see any other way but to use the solver to solve a sequence of Y-node MIS problems and re-construct the answer to the original X-node problem.”

The question is not ways that you see, the question is what you actually did. You almost, but not quite, say that this is what you did. And another very important question is: What were X and Y in your Sudoku solver? Also, how many times, N, did your classical solver hand a subproblem to the qubits? I hope that you won’t tell me that these three integers are a trade secret, even just to solve a Sudoku.

Comment #102 November 25th, 2007 at 3:17 am

“if you have to solve an MIS problem with X nodes and you have a solver that can solve MIS problems with Y (Y is much less than X) nodes then I dont see any other way but to use the solver to solve a sequence of Y-node MIS problems and re-construct the answer to the original X-node problem.”

Presumably the solver in this instance is the quantum processor. From what I can gather, it is essentially being fed “baked” subgraphs by a classical device to operate on, unless the reconstruction procedure is also carried out by the quantum processor. Barring the latter being true, it is difficult to see what kind of non-trivial contribution qubits could make here, especially given that the entire process can be done classically.

Comment #103 November 25th, 2007 at 3:19 am

I agree, knowing X, Y and N would be very helpful, especially Y.

“They didn’t claim that their computer can factor a 50-digit number with “help” from a classical preprocessor.”

And neither is D-Wave. D-Wave is claiming that their quantum computer can solve subroutines of an unspecified but certainly quite small size, in order to *help* a classical computer solve a 9×9 sudoku puzzle that the classical computer could solve in a microsecond and that I could probably solve in my head. (I don’t think exaggerating your argument helps any more than it does theirs. Maybe factoring 225=15^2 with classical assistance is a better analogy for D-Wave’s claim than factoring 50-digit numbers.)

Comment #104 November 25th, 2007 at 3:21 am

“The question is not ways that you see, the question is what you actually did. You almost, but not quite, say that this is what you did.”

I thought I made it clear in my very first post. Here’s an excerpt:

“So all I did was convert the sudoku into an MIS problem and devise a strategy to break large MIS problems into chunks that the 16-qubit hardware can accept.”

Your second question:

“And another very important question is: What were X and Y in your Sudoku solver?”

X varies with every sudoku problem… but is almost always more than 50 and typically is in the hundreds. Sorry again, but I am not sure if I am allowed to tell what Y is. N also varies with the sudoku problem and how quickly the answer can be found. I cannot even guess what N would be typically, but if I have to guess, I would say in the hundreds or even thousands. Basically, larger X in most cases means larger N.

Comment #105 November 25th, 2007 at 3:24 am

“Barring the latter being true, it is difficult to see what kind of non-trivial contribution qubits could make here, especially given that the entire process can be done classically.”

Tyler, “nontrivial” can have different meanings. I can certainly understand claiming that in factoring 15=3*5, the quantum computer did something nontrivial, even though it can obviously be done classically. Complexity theorists study “parameterized nontriviality,” and in this case the unknown (secret?) number Y is that parameter.

Comment #106 November 25th, 2007 at 3:30 am

Ah, so Y is a secret. That’s too bad, since it is so important. Can we play twenty questions?

1. Is Y = 5?

Varun, thanks for participating here, despite the slightly hostile audience.

Comment #107 November 25th, 2007 at 3:31 am

Sorry, the question should have read,

1. Is Y < 5 or > 5?

< signs are eaten by the blog, it seems.

Comment #108 November 25th, 2007 at 3:32 am

As are equal signs.

Comment #109 November 25th, 2007 at 3:37 am

Will Y scale with an increasing number of qubits?

Comment #110 November 25th, 2007 at 3:40 am

“Complexity theorists study “parameterized nontriviality,” and in this case the unknown (secret?) number Y is that parameter.”

Hopefully the complexity theorists can carve out plausible answers to whatever quantum procedure (is that even the correct term to use?) was used to solve the problem. The method I see above looks to be indiscernible from a classical heuristic with co-processor offloading, and of course the details of what went on are trade secrets. Oh well.

Comment #111 November 25th, 2007 at 5:37 am

“And neither is D-Wave. D-Wave is claiming that their quantum computer can solve subroutines of an unspecified but certainly quite small size, in order to *help* a classical computer solve a 9×9 sudoku puzzle that the classical computer could solve in a microsecond and that I could probably solve in my head.”

No, that is what Rose and Jain are implying now, in this thread. It isn’t what Rose implied in the demo. The screen had a button that said, “quantum solver”; after pressing it, Rose said “the quantum computer spits out the answer”.

I agree that Jain describes something different. Instead of a quantum computer spitting out any answer, he describes a classical computer that does 99% of the real work, and offloads trivial sub-problems to a special purpose device with quantum features. The SPD finds maximum independent sets in very small graphs.

Now, this story has not yet been confirmed, but I agree that Jain’s answers sound honest, if incomplete. If they are really the truth, then the demo certainly wasn’t.

Also I owe this comment to Varun Jain: Thank you for your information.

Comment #112 November 25th, 2007 at 5:49 am

Hi Scott/Greg,

Thanks for your help in response to my earlier post, there’s still a couple things I’m confused on so I’d like to return to this for a moment if it’s okay.

So I think I more or less understand what you’re describing about what an Adiabatic Quantum Computer would be doing if you just sat down and built one. However I’m a little bit confused because this seems very different from “Quantum Computers” as they’ve been presented to me elsewhere. In QCs as I’ve understood them up to now, in terms of the actual algorithms that run on that QC, everything that happens (I thought) is basically just operations on the probability distributions of the |0> and |1> states for the qubits– operations which can be decomposed into a series of Hadamard and Toffoli gates or whatever. This is very different from the paradigm of the AQCs you’re describing– there’s no way I know to describe a Hamiltonian or an “energy state” for the execution of such a normal-QC algorithm, or in general any way to discuss the “energy” of a bunch of qubits in such a QC at all.

This disjoint between what QCs are doing as I’ve heard it described in the past and what AQCs are doing as you describe them here makes perfect sense to me if this is just because AQCs and normal QCs are different machines doing different things; however, the thing that’s confusing me a little is that I keep seeing references (here and elsewhere) to “the quantum adiabatic algorithm”, and the wording makes it sound as if adiabatic quantum computation is not just something you get from designing a computing device governed by a Hamiltonian– but is also a behavior which you can inspire a “normal” quantum computer to perform if you just feed it the right algorithm. Does this “quantum adiabatic algorithm” terminology imply that there is actually some way to implement quantum adiabatic computation as an algorithm on a “general” quantum computer? Or is the terminology simply meant to imply that there is a class of algorithms, called “quantum adiabatic algorithms”, which can be run on an adiabatic quantum computer?

If the answer to “can you implement a quantum adiabatic algorithm as a ‘program’ on a normal quantum computer” is “yes”, then how do such things as “the Hamiltonian governing the system” and “the ground state” gain meaning in the context of the simpler operations one normally sees quantum algorithms composed of?

And if the answer is “no”, then does this mean it is possible that since they’re not technically doing the same thing, then AQCs might not have the same complexity-class properties as QCs– for example, is it possible that the set of problems computable by an AQC in polynomial time is not strictly equal to BQP?

I hope these questions aren’t too convoluted, thanks again…

Comment #113 November 25th, 2007 at 6:13 am

Coin,

Yes, you can implement the adiabatic algorithm on a standard quantum computer (e.g. with Hadamard and Toffoli gates).

But on top of that, the adiabatic algorithm also suggests new architectures for quantum computers, which might have better fault-tolerance properties.

It’s a theorem that QC and AQC are polynomially equivalent — i.e. they both lead to the same complexity class BQP. See this paper by Aharonov et al.

I actually agree with the D-Wave folks that, if we ever see practical quantum computers, there’s a good chance their “native” computational model will be very far from the gate model. But whether that means adiabatic QC, cluster-state QC, anyonic/topological QC, or something else entirely I couldn’t venture a guess.

(Let me stress that all the architectures mentioned above are proven to be universal — meaning that, again, they all lead to the same complexity class BQP.)

Comment #114 November 25th, 2007 at 7:06 am

I think there must be some formula for sudoku complexity; maybe it’s like some hypercubes, which are solved faster than (2^n)^0.5. Since a sudoku has 81 numbers or so, that’s about 10^81 variants, but about 26 or 27 numbers were already filled, so there are about 10^56 possible variants to try. Even with a square root that would be 10^28 steps. But if the sudoku puzzle is like some hypercube, as I saw in some papers, then it’s possible that solving it needs not (2^n)^0.5 = 10^28 steps but, say, (10^56)^{0.1} = 398107 steps. I really would like to know about the complexity of sudoku when about 25 numbers are filled.

Also, I doubt that a 16-qubit quantum computer is only about 100 times slower than a Pentium. It’s probably thousands or maybe even millions of times slower. But for sudoku it is still possible that very little computational power is needed, if sudoku complexity scales exponentially with size but is very easy for small instances. Maybe it could even be solved probabilistically: the quantum computer keeps giving wrong probabilistic answers, and after a few hundred or a thousand guesses the software gives a good answer…
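[Editorial sketch.] The “repeat until a good answer appears” idea in the comment above is sound whenever an answer can be checked cheaply, as a Sudoku solution can. A minimal sketch of the arithmetic (the numbers p and delta below are illustrative, not anything D-Wave reported):

```python
import math

def runs_needed(p, delta):
    """If a noisy solver returns a checkable correct answer with
    probability p per run, then k >= ln(1/delta)/p independent runs
    all fail together with probability (1-p)^k <= exp(-p*k) <= delta."""
    return math.ceil(math.log(1 / delta) / p)

# Example: 1% success per run, 99.9% desired overall success
k = runs_needed(0.01, 1e-3)
```

So even a solver that is wrong 99% of the time needs only a few hundred verified retries, which is consistent with the commenter’s “few 100-1000 guesses” intuition.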

Comment #115 November 25th, 2007 at 7:21 am

I would also like to thank Varun for clarification. Varun, I’m especially greatful that you said this:

“Infact, from what I remember, in february demo, Geordie mentioned that currently the hardware is about a 100 times slower than conventional machines.”

It would be so nice to have this mentioned in all those popular articles on D-Wave…

Comment #116 November 25th, 2007 at 9:16 am

It is very nice to see a discussion with some light in it! So let me express thanks to all who have posted.

For me, the most interesting question is one that has not been asked on this thread, much less answered, and I would be interested in people’s comments on it.

Q: Can the workings of D-Wave’s computer be efficiently simulated with classical resources?

This question is interesting because (AFAICT) it is perfectly feasible for the following to be the case: “The D-Wave computer’s dynamics are not simulatable with classical resources, even though its noise levels are high enough to quench all high-order quantum correlations.”

If that assertion were both physically true, and backed up by innovative QIT analysis, that would imply that we are all living in a much more interesting world of quantum information processing than we previously imagined.

In this world (hopefully) people would lose interest in the too-blunt question “Is the D-Wave device a quantum computer?”, the trivial answer being “Of course it is, because every physical computer is a quantum computer.”

In particular, this would be a world in which D-Wave’s business plan conceivably made engineering sense! That is why engineers are so focused on issues of quantum simulation feasibility.

Comment #117 November 25th, 2007 at 9:42 am

As a followup to the above, upon parsing Coin’s post, I want to acknowledge that he/she appears to be asking pretty much the same question: whether D-Wave’s noisy device can be efficiently simulated with classical resources.

Pointers to articles that discuss this question would be very welcome!

Comment #118 November 25th, 2007 at 9:48 am

“I would also like to thank Varun for clarification. Varun, I’m especially greatful”

Mmm, that’s the first typo I’ve ever personally noticed in Scott’s writing! Is the controversy with D-Wave affecting you, or is this the preliminary sign of an Ilyssa-like transformation?

Comment #119 November 25th, 2007 at 9:51 am

Oops, sorry, my mistake. It is I who must be undergoing the transformation. The message was Stas’s, not Scott’s!

Comment #120 November 25th, 2007 at 10:23 am

Here is a more precise sudoku formula: 9!*9^9 = 3.6288*10^14. The 9! is because within one block the numbers can’t repeat, and the 9^9 because there are 9 blocks (though this number can be smaller)… And if about 25 numbers were already filled, then the number can be even smaller. 27 filled numbers can fill 3 blocks, so then we need 9!*6^9 = 3.6569943*10^12 variants or steps. (2^16)^0.5 = 256, and 10^12/256 ~ 10^10; if the processor works at 10 GHz then the whole calculation can be done in one second. But the number could instead be 9!*9! = 1.3*10^11, or more precisely 9!*6! = 261273600 steps for a classical computer and about 2612736 steps for a 16-qubit quantum computer. If one step takes 10 ns then there is enough time for both the quantum and the classical computer to solve such a sudoku puzzle in 1 second at 1 GHz.

Comment #121 November 25th, 2007 at 12:06 pm

John,

Your question has been brought up, though not quite so explicitly — see, e.g., Posts #12 and #15. Assuming that you mean “D-Wave’s future computers” rather than current technology, this is the fundamental question, because it equates precisely to “Is this device’s computational power within BPP?”

Unfortunately, it’s also incredibly hard to answer — we don’t even know whether P=NP, right? And, yes, we have a good guess… but the point is that we don’t really know what can be classically simulated. This is a point that came up in the discussion after Mohammad’s talk at IQC:

(1) It’s not really fair to say something is “quantum” just because one particular classical model doesn’t simulate it.

(2) On the other hand, we can’t say “Prove that there’s no classical simulation,” because nobody can do that.

So we end up back at “Is it a QC?” <==> “Does it solve problems thought to be outside BPP?” If you’re not happy with that (I’m not), then you can try to show some physical criterion for non-simulatability. The best guess for that right now is [entanglement over O(poly(n)) qubits] + [arbitrary control] (see #30, 32, 40).

If that conjecture (“O(n)-qubit entanglement is necessary for nonsimulatability”) is true, then your conjecture would be false — but AFAIK nobody knows how to prove/disprove it.

Comment #122 November 25th, 2007 at 12:31 pm

Robin, arbitrary entanglement across m qubits would let us factor O(m)-digit numbers, which we already don’t know how to do in BPP for any m >> log^3(n).

Comment #123 November 25th, 2007 at 12:52 pm

Varun,

Thanks again for chipping in!

Like Greg, I’m frustrated by the limitations on what you can say. I understand why you have restrictions — but I hope you also understand our frustration (us = academic community). We have no interest in reverse-engineering your device, or figuring out inner workings — but we want to figure out whether your approach scales. Knowing how X, Y, and N scale is important to do that. There are protocols that fit your description that are grossly nonscalable — for instance, I could just pass 2-vertex instances of MIS to the 16 qubits (“Are these vertices connected, or not?”). That’s Greg’s “4-year old + omelette example”.

Comment #124 November 25th, 2007 at 2:05 pm

“In QCs as I’ve understood them up to now, in terms of the actual algorithms that run on that QC, everything that happens (I thought) is basically just operations on the probability distributions of the |0> and |1> states for the qubits– operations which can be decomposed into a series of Hadamard and Toffoli gates or whatever.”

That is a very good description, except that you left out a fundamental new feature: the distribution on the bit strings is not an ordinary distribution, but rather a quantum distribution. Quantum mechanics rests on a generalization of probability theory that you can either call non-commutative probability or quantum probability. This generalization is the source of the extra power of quantum algorithms for certain problems. It isn’t because we are after faster gates or more bits. On the contrary, if I live to see a useful quantum computer, I might expect it to have, say, 100K good qubits and a 1MHz clock. Such a computer would be no better than an Apple II for problems that don’t have a quantum algorithm, e.g. sorting or matrix multiplication. But for problems that do have a quantum algorithm, like factoring, it could beat the entire Google data center.

“Does this “quantum adiabatic algorithm” terminology imply that there is actually some way to implement quantum adiabatic computation as an algorithm on a “general” quantum computer?”

Yes there is. It’s not at all obvious in the context of this discussion, but once you learn quantum probability, it’s actually the easier half of the equivalence, provided that the Hamiltonian is computable.

“how do such things as “the Hamiltonian governing the system” and “the ground state” gain meaning in the context of the simpler operations one normally sees quantum algorithms composed of?”

Basically, by simulation. A classical computer can simulate a classical physical system, e.g., planetary orbits. Likewise a quantum computer can simulate a quantum physical system, e.g., molecular orbitals. The task of simulating a quantum system was Feynman’s original motivation for proposing quantum computation.

“does this mean it is possible that since they’re not technically doing the same thing, then AQCs might not have the same complexity-class properties as QCs”

People were worried that the set of feasible problems for an AQC is strictly smaller than BQP, but they found a reverse simulation that shows that it isn’t. It looks a lot like the theorem that simulated annealing is a universal algorithm in classical computation. (And sometimes both algorithms deserve the more derogatory name, “the universal solvent”.)
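[Editorial sketch.] To make the adiabatic picture concrete, here is a toy numerical simulation of adiabatic evolution on 2 qubits. Everything in it is illustrative and of my own choosing (the Hamiltonians, the run time, the step count); it is not the Aharonov et al. construction, just a demonstration that slowly interpolating from an easy Hamiltonian to a “problem” Hamiltonian drags the system into the problem’s ground state:

```python
import numpy as np

# H0: transverse-field Hamiltonian whose ground state |++> is easy to prepare.
# H1: diagonal "problem" Hamiltonian whose ground state |11> encodes the answer.
X = np.array([[0, 1], [1, 0]], dtype=complex)
I2 = np.eye(2, dtype=complex)
H0 = -(np.kron(X, I2) + np.kron(I2, X))
H1 = np.diag([3.0, 1.0, 2.0, 0.0]).astype(complex)   # minimum at index 3 = |11>

T, steps = 50.0, 5000          # total evolution time, number of time slices
dt = T / steps
psi = np.linalg.eigh(H0)[1][:, 0]   # start in the ground state of H0

for k in range(steps):
    s = (k + 0.5) / steps           # interpolation parameter, 0 -> 1
    H = (1 - s) * H0 + s * H1
    # exact propagator for this slice via eigendecomposition
    vals, vecs = np.linalg.eigh(H)
    U = vecs @ np.diag(np.exp(-1j * vals * dt)) @ vecs.conj().T
    psi = U @ psi

overlap = abs(psi[3]) ** 2   # probability of ending in the ground state |11>
```

Run slowly enough (large T relative to the inverse square of the minimum spectral gap), `overlap` comes out close to 1; run too fast and the state leaks into excited levels, which is exactly the failure mode the adiabatic theorem guards against.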

Comment #125 November 25th, 2007 at 2:33 pm

“I understand why you have restrictions — but I hope you also understand our frustration (us = academic community). We have no interest in reverse-engineering your device, or figuring out inner workings — but we want to figure out whether your approach scales.”

This is one question, but not the main one for me. I want the truth behind the fiasco press conference in February. My motivation is personal integrity, not scalability of hardware. There is very little reason to believe that the plan that they describe — energy minimization without worrying about coherence — scales. The truth would have to be better than they describe in order to scale. My interest is in finding out how much less is true than was said in the press conference.

If Varun Jain is describing what really happened, then it means that there wasn’t quite zero truth in the act of pressing the “quantum solver” button and saying that “the quantum computer spits out the answer”. It was maybe 20% true. And I don’t mean to say that I am highly suspicious of what Varun says. Not at all; he sounds honest.

The reason that I am hedging and basically being a jerk about it is a conversation I had on David Bacon’s blog. Part of the conversation went exactly the same way as the one here: Rose said, well you could convert Sudoku to MIS and you could chop that into smaller MIS. When I asked him whether I had his word that that is the way that the Sudoku was solved, he said no, it was all hypothetical; actually they do something “considerably more sophisticated”.

Given this behavior from the CTO, we still have to take Varun Jain’s answer as provisional, for several reasons. For instance, we don’t know what happened to the little subgraphs that he says were passed to the 16 qubits, since he says that the hardware isn’t his department.

“There are protocols that fit your description that are grossly nonscalable — for instance, I could just pass 2-vertex instances of MIS to the 16 qubits (‘Are these vertices connected, or not?’). That’s Greg’s ‘4-year old + omelette example’.”

If the protocol described is correct, then it couldn’t have been much different than that. The 16 qubits only have near-neighbor interactions, and (according to Rose’s description) there exist graphs with as few as 7 vertices that can’t be embedded. MIS with 6 vertices is trivial even compared to Sudoku, and is much less work than figuring out how the graph fits into the 4×4 grid. (Of course, graph embedding is also NP-hard, but complexity theory is an overly fancy description of any part of this story.)

Comment #126 November 25th, 2007 at 2:37 pm

Even simpler than raising the temperature (which might be more difficult than it sounds), you need to check whether you are actually gaining anything by doing adiabatic evolution: http://scienceblogs.com/pontiff/2007/11/a_simple_experimental_challenge.php

Comment #127 November 25th, 2007 at 4:43 pm

Robin Blume-Kohout says: “… The point is that we don’t really know what can be classically simulated.”

That is an excellent point, and one which accounts for much of my own interest in complexity theory, and quantum information theory, and this D-Wave thread in particular.

To phrase this point in engineering terms, “Humanity’s ability to simulate quantum systems with classical resources has been improving (roughly) exponentially. How much longer can this improvement continue? What are the fundamental limits, both mathematical and physical?”

This 21st Century “Moore’s Law of quantum system simulation” has striking similarities to the Moore’s Law of computer performance (which has a fine Wikipedia page): the metrics are vague, the reasons for the law’s empirical truth are unclear, the economic and resource implications are large, no one knows how much longer the law will continue to hold, and if the law does continue to hold for another generation, the consequences for humanity—already profound—will be incalculable.

Comment #128 November 25th, 2007 at 5:44 pm

And as a practical postscript to the above, isn’t the continued exponential increase in quantum simulation capabilities (arguably) the most realistic hope of young QIT researchers for more QIT jobs to be created?

Seriously, is there any other comparably realistic hope?

Comment #129 November 25th, 2007 at 6:24 pm

Greg/Scott, thanks for the clarifications!

Comment #130 November 25th, 2007 at 6:34 pm

“Seriously, is there any other comparably realistic hope?”

I would think that the job market in QIT and QC all depends on whether quantum computing capabilities can scale up to the point of providing advantages in more general problems. If it stays in the current narrow crypto/sim application space, then you’re probably right. If it can expand into more areas then job opportunities for researchers will be ubiquitous. I’ve heard some noise around about research into using quantum algorithms for image processing…

Comment #131 November 25th, 2007 at 7:53 pm

Tyler Pietro says: “If QIT can expand into more areas then job opportunities for researchers will be ubiquitous.”

Tyler, when it comes to job creation, I encourage you not to define QIT and QC too narrowly, or to underestimate the opportunities for cross-disciplinary fertilization. Because if QIT is defined broadly, then the opportunities for job creation are indeed remarkably great.

Here is a mathematical assertion that is constructed to illustrate this point: Slater determinants are Grassmannians presented via the Plücker embedding as algebraic varieties that are Kähler manifolds having an Einstein metric.

By construction, the starting link “Slater determinants” points toward the vast literature on computational chemistry … which at the present time is an enormously vigorous engine of job and resource creation.

Every link following “Slater determinant” points to a Wikipedia page that provides an undergraduate-level entrée to a vast mathematical literature … the literature on Kähler manifolds alone is about as large as the entire literature on quantum measurement theory, for example.

Amazingly, there is apparently almost no cross-referencing between these two communities (and I would be *very* grateful for counter-examples).

QIT is among the disciplines best-poised to bridge this disciplinary divide, and challenges like “Can the D-Wave computer be efficiently simulated by classical resources?” are IMHO important and well-posed challenge problems that advance this bridging objective admirably.

All of the above is true, provided that the QIT community does not define itself narrowly. There is admittedly some risk of “interdisciplinary overload” but … well … welcome to the 21st Century.

By the way, the above mathematical assertion derives largely from help and advice to our QSE Group from Joshua Cantor, who deserves most of the credit for the parts that are right. Any parts that are mathematically wrong or misleading are mine, for sure.

Comment #132 November 25th, 2007 at 8:59 pm

Scott,

Can you (or somebody) clarify the common usage of the notation “log”? The last formal math education I had was in the days when calculators were replacing slide-rules (a very brief window in time). Back then “log” (un-subscripted) was supposed to be reserved for base-10 (owing to its importance in practical calculation, I imagine). Since base “e” had “ln”, any other log base was supposed to be specified by the subscript.

Yet much of the literature in CS, information theory, and even other areas, appears to be using “log” to be base-2. If I recall correctly, when I read Shannon’s famous paper, I was convinced he was using “log” to mean base-2, and that was written in the heyday of the slide-rule. I know some authors attempted to introduce “lg” to specify base-2, but it apparently never caught on.

Comment #133 November 25th, 2007 at 9:02 pm

John Sidles appeared to be talking about exponential increases in the ability to simulate quantum systems, and Tyler DiPietro expanded this to solving general problems using quantum strategies such as image processing.

For a more general application of quantum-augmented computing strategies, there’s the question not only of whether quantum computing scales, but also of whether, as a general rule of thumb, the exact answers and nothing else that (to my knowledge) quantum algorithms tend to give are useful in these information processing tasks. E.g., in factorizing L = M*N you need to get M and/or N exactly, whereas I recall reading that proteins generally don’t attain the true global minimum-energy folded configuration but rather a sufficiently stable local minimum, based partially on intermediate folding steps (sorry, no cite). Certainly the limited image processing stuff I’ve seen is more “2-D exact database recognition/retrieval” than dealing with the complexities of real image modelling (maybe I’m out of date, though).

This is not to disparage quantum computing as a worthwhile subject of study and development; it is merely to say that if it turns out to be good precisely at solving, incredibly quickly, problems no one deeply cares about, it won’t become widespread.

Comment #134 November 25th, 2007 at 9:30 pm

John,

Thanks for the insightful comment and links. My only response is to say that I don’t intend to narrowly define QC or QIT, I was only noting that such a thing happening was a possibility. I would like nothing more than to see QC scale-up and expand into more general purpose computation.

Jack,

From what I know, the most common logarithm base in computer science is indeed base 2, but in quantum computing you also encounter the base e quite frequently (complex vector spaces and all that).

Comment #135 November 25th, 2007 at 11:49 pm

Jack: Normally we don’t even bother to specify a base, since all logs are equivalent up to constant factors (e.g. to get from log_{10}(n) to log_{2}(n), just multiply by 3.32). If we do specify a base it’s usually base 2.

Comment #136 November 26th, 2007 at 1:13 am

A somewhat different subject which came to mind from Scott’s comment #113:

I’ve seen a lot of papers saying that the cluster-state model of QC is equivalent to standard QC (in that it can efficiently simulate it). Yet cluster QC uses gates from the stabilizer group, meaning that it can be efficiently simulated classically and, by implication, cannot efficiently simulate standard QC (for the time being)? What point am I missing about cluster-state QC?

Comment #137 November 26th, 2007 at 1:39 am

Alex, the point you’re missing is that cluster-state QC does not only use gates from the stabilizer group: the original Raussendorf-Briegel paper used arbitrary 1-qubit gates. I think you can do it with only 1-qubit stabilizer gates, but in that case you’ll need to start with a cluster state that’s not a stabilizer state.

(It’s been known for a long time that stabilizer gates actually are universal, given certain non-stabilizer initial states as ancillas.)

Comment #138 November 26th, 2007 at 2:08 am

Thanks, Scott.

I didn’t know about the arbitrary rotations; I should have paid closer attention. More questions (please excuse my ignorance):

1- Isn’t a cluster state a graph state (almost by definition), and isn’t a graph state always a stabilizer state (again almost by definition)?

2- Have there been any results on the complexity of preparing those ancilla states?

Comment #139 November 26th, 2007 at 3:30 am

“Alex, the point you’re missing is that cluster-state QC does not only use gates from the stabilizer group: the original Raussendorf-Briegel paper used arbitrary 1-qubit gates. I think you can do it with only 1-qubit stabilizer gates, but in that case you’ll need to start with a cluster state that’s not a stabilizer state.”

A minor correction: Cluster-state QC uses *measurements* that are not Pauli measurements. It does not use any gates at all, aside perhaps from the stabilizer gates to prepare the cluster/graph state.

1. Yes, a cluster state is a graph state, and a graph state is a stabilizer state (with a special form).

2. Graph states can be prepared in time = # of edges, assuming perfect stabilizer gates (controlled-phase gates). (The depth of the circuit used to prepare the graph state can be = max degree of a vertex. Note: “time” = “circuit size” ~ depth * n.) General stabilizer states on n qubits can be prepared certainly in time n^2, and small factors can be shaved off this (Scott mentions this in one of his papers).
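The "one controlled-phase gate per edge" construction described above can be sketched with a tiny statevector simulation. (This is a brute-force illustration, exponential in n — a real stabilizer simulator would be polynomial — but it shows that preparing a graph state takes exactly one CZ per edge, starting from |+>^n.)

```python
import numpy as np

def graph_state(n, edges):
    """Prepare an n-qubit graph state: |+>^n, then one CZ per edge,
    so circuit size = number of edges."""
    psi = np.full(2**n, 2**(-n / 2))  # |+>^n: uniform superposition
    for (a, b) in edges:
        for x in range(2**n):
            # CZ flips the sign on basis states where qubits a and b are both 1
            if (x >> a) & 1 and (x >> b) & 1:
                psi[x] *= -1
    return psi

# 1D cluster state on 3 qubits (path graph): two edges, two CZ gates
psi = graph_state(3, [(0, 1), (1, 2)])
print(np.round(psi, 3))  # amplitudes all +-1/(2*sqrt(2)), with sign pattern from the edges
```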

Comment #140 November 26th, 2007 at 4:57 am

Alex, don’t get hung up on definitions. If you want to do universal QC using 1-qubit stabilizer measurements only, then you’ll need an initial state that’s like a cluster state but is outside the stabilizer set. The states used in cluster-state QC can be prepared in linear time. (Dan Gottesman and I showed that arbitrary stabilizer states can be prepared in n^{2}/log(n) time, and that this is tight.)

Sam, I think of a measurement as a gate followed by a standard-basis measurement.

Comment #141 November 26th, 2007 at 8:26 am

Actually you just need single qubits prepared in the state 1/sqrt(2)|0>+(1+i)/2 |1> to be added to the graph, one for each pi/8 gate you want to produce, so the time is constant. The corrections for these gates will be a Clifford group operator, which can be applied later (by changing one Pauli basis measurement to another). So the complexity of creating these states is constant.

For the teleportation based non-Clifford group gates used in fault-tolerant computation the ancilla state must grow in size as the encoding does, and so there the answer is less trivial (I point you back to Scott’s comment above).
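The ancilla quoted above is in fact the standard pi/8-gate magic state, since e^{i*pi/4}/sqrt(2) = (1+i)/2. A quick numerical check:

```python
import cmath
import math

# The ancilla from the comment above: (1/sqrt(2))|0> + ((1+i)/2)|1>
a0 = 1 / math.sqrt(2)
a1 = (1 + 1j) / 2

# Normalized, and equal to (|0> + e^{i*pi/4}|1>)/sqrt(2)
assert abs(abs(a0)**2 + abs(a1)**2 - 1) < 1e-12
assert abs(a1 - cmath.exp(1j * math.pi / 4) / math.sqrt(2)) < 1e-12
print("ancilla is the pi/8 magic state")
```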

Comment #142 November 26th, 2007 at 11:25 am

Just a comment from left field (a solid-state physicist)– to note that the ‘classical limit is high temperature, not long times’ claim is somewhat dicey. A classic Ph.D. orals question in solid-state is to note first that the particles in a superconducting electron pair are physically traveling at high velocities in opposite directions (i.e., k2 = -k1), and that a superconductor has a finite coherence length. Therefore, an electron pair that is contributing to the condensate at time t will have broken apart and flown away at time t + delta– and, therefore, no longer contributes to the condensate. The question is: in what sense, now, do the electron pairs form a condensate?

The answer is that, at each instant in time, one can find enough electron pairs to form a condensate– since electrons are all identical, it doesn’t matter exactly which ones you use. However, this argument relies on having a lot of electrons in the ensemble– if you had only 16, it wouldn’t work.

Comment #143 November 26th, 2007 at 12:51 pm

Matt, an article that establishes links between QIT, solid-state physics, and quantum simulation efficiency is Classical spin models and the quantum stabilizer formalism.

Comment #144 November 26th, 2007 at 1:15 pm

John: Presumably the entire PEPs and matrix product state literature provide the kind of link you are talking about.

Comment #145 November 26th, 2007 at 3:27 pm

Joe says:

Presumably the entire PEPs and matrix product state literature provide the kind of link you are talking about.

PEP is a new acronym to me, but as for matrix product states, these state-spaces too are simply algebraic joins.

From both a quantum information theory point of view and a geometric point of view, it is mysterious why so many different algebraic quantum state-spaces work so well in practical simulations—this is the flip side of the earlier statement on this thread, to the effect that “we don’t know very much about what quantum systems can be simulated.”

The growth in the quantum simulation literature reflects this empirical success: it obeys Moore’s law remarkably well. For example, here is the growth in references to density functional theory (DFT, which is a quantum chemistry technique):

DFT articles in Inspec:

((“density functional” or “quantum density”) WN KY)

9 for 1965-1969

33 for 1970-1974

262 for 1975-1979

719 for 1980-1984

1747 for 1985-1989

3557 for 1990-1994

7819 for 1995-1999

14695 for 2000-2004

———————–

9035 for last two years

The above growth rate is about 23% per year, sustained over forty years … which is a pretty remarkable track record.

If it continues for another 40 years … well … it’s about one article published per second on DFT (and its successors). This neatly solves the QIT job problem, but at a price: humanity won’t have time for any activities other than writing and reviewing articles on quantum simulation and quantum information theory.
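The quoted annual growth rate can be checked directly from the five-year Inspec bin counts listed above:

```python
# Inspec article counts per five-year bin, 1965-69 through 2000-04 (from above)
bins = [9, 33, 262, 719, 1747, 3557, 7819, 14695]

# 35 years separate the centers of the first and last bins
years = 5 * (len(bins) - 1)
rate = (bins[-1] / bins[0]) ** (1 / years) - 1
print(f"{rate:.1%}")  # close to the ~23%/year figure quoted above
```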

More seriously, these numbers reinforce that if capabilities for simulating quantum systems with classical resources continue to improve exponentially, the consequences for humanity will be profound.

So it is a very important question: How long will it be, before simulation techniques begin to hit fundamental QIT limits? The possibility of learning these limits is one reason why QIT is so interesting to me personally, and so important to everyone practically.

Comment #146 November 26th, 2007 at 4:00 pm

If quantum computer is imposible then nothing to do with quantum simulation also becouse then it’s means that all tryings to simulate quantum systems are wrong and need search another method to simulate quantum systems and which then probably don’t scals exponentionaly with number of particles.

Comment #147 November 26th, 2007 at 6:00 pm

Really, qv, you ought to try to phrase things in a more readable manner; the topic is complex enough as it is.

Comment #148 November 27th, 2007 at 7:10 am

QV says:

“If quantum computer is imposible then nothing to do with quantum simulation …”

Suppose we take “do” to be an active verb, and parse QV’s assertion as “If quantum computing is either technically infeasible, or feasible but only marginally more powerful than classical computation, then quantum simulation has no useful or interesting applications.”

We can then observe that QV’s assertion neglects the already-huge, and exponentially growing, impact of quantum simulations in everyday life.

This enormous impact represents the confluence in quantum simulation of three inexorable trends.

The first is the trend toward technologies that are smaller, colder, faster, more efficient, and more sensitive. As more technologies press against quantum and thermodynamic limits, the need for quantum analysis and simulation has become more widespread.

The second is the technical necessity of simulation: as technologies become more complex, “simulate it before you build it” becomes a practical necessity (this is true for both classical and quantum system engineering).

The third is the social utility of simulation: as communities become globalized, simulation emerges as a powerful social tool for binding them together, in a way that is simultaneously creative, disciplined, and productive.

Companies like Boeing and Intel find that simulation technologies serve the optimization, confidence-building, and social binding purposes very admirably—teams around the globe commit to simulated 787s and Penryn processors, preliminary to committing to fabricate them, and everyone has confidence that the airplanes will fly and the processors will compute.

Advances in both classical and quantum simulation are therefore beginning to play a unifying role in science, engineering, and team-building that has become absolutely central to pretty much all modern enterprises.

This is a point that enterprises like Boeing, Intel, and IBM have grasped far more thoroughly than the universities, IMHO.

Comment #149 November 27th, 2007 at 7:42 am

As a follow-up to the above, and as a contrast to D-Wave’s rather opaque press releases, it is fun to listen to Intel Chip-Chat (not least because the retro-70s musical theme is hilarious IMHO).

A good example is the recent Chip-Chat Episode 11c: Introducing 32nm Logic Technology. Note Intel’s explicit emphasis on simulation-based design rules that are then experimentally validated in prototype devices. The social role of simulation in binding Intel’s global enterprise is stated only implicitly … but it is at the beating heart of Intel’s enterprise culture, and of many similar technology-driven companies.

Perhaps soon, D-Wave (or some similar company) will give detailed press conferences like Intel’s … to understand D-Wave’s design rules even in outline would be a major advance for the whole community.

As for quantum system engineering considered more broadly, well, those enterprises are already launched, and they are emerging as strategically central to the global economy.

Math, science, engineering, technology, and jobs … this to me is what a vigorous science and technology ecosystem looks like.

Comment #150 November 27th, 2007 at 9:22 am

Hello All,

I wanted to add one observation regarding the omelette analogy. I think I can revise it – I dare not say refine – such that it also includes some criticisms of the hardware. My apologies in advance if the following sounds a bit insensitive, but it should get the point across.

Imagine you wanted to make an omelette, and for some reason that was a reasonably complex task – it takes a while for ingredient prep, the eggs tend to stick, and the like. Now imagine that some group of scientists postulated that 4-year-olds, under certain conditions, might be able to make perfect omelettes dramatically faster than adults. So you call on your 4-year-old cousin, make an omelette, and announce to the world that you have accomplished the tremendous technical feat of realizing the scientists’ vision.

But then the questions start. Well, the omelette was really rather crude – just one egg, no filling, bits of shell – but it was similar qualitatively to the more complex western omelettes that everyone wishes they could make more efficiently. Ok, ok, the 4-year old didn’t really make the omelette so much as help with one part of the process, and you pulled it together enough to do the rest (it was a crude omelette after all). What was the cousin’s contribution? He handed you the eggs.

But that’s not the end of the story. Your cousin is in a persistent vegetative state, and while scientists believe it is possible that such a 4-year-old could contribute meaningfully to omelette production, no one really knows how. Such a child has to be brought back to consciousness, and made to interact with his/her surroundings in a particular way to help. But all anyone has ever achieved with a comatose 4-year-old is getting them to blink their eyes. Proof enough that they might be useful one day, but not quite the same as being ready to make your omelette for you.

So you have admitted that the child only passed you the eggs, but you come under withering criticism for the vegetative-state part. You assure everyone that you really know that your cousin handed you the eggs, and you just can’t share the proof because you’re both under NDA.

But there is one more issue. Your cousin has a full-time nanny who is very helpful, and can do lots of menial tasks, so long as they aren’t too complicated. Under pressure you admit that you haven’t really seen “proof” that your 4-year-old cousin passed you the eggs. Indeed, you turned your back on your cousin and his/her nanny, waited, and then the eggs were right next to you. You’ve done a lot of theoretical work to show that the 4-year-old in a persistent vegetative state could have gotten up and handed you the eggs, but you don’t really know for sure, and you also can’t rule out assistance from the nanny.

But you go on telling everyone that you have realized the dream, and are ready to sell restaurants access to your cousin for their omelette making tasks. So long as they never see what he/she is actually doing and just send out for the omelettes to be prepared in Vancouver…

Comment #151 November 27th, 2007 at 12:15 pm

So, Michael, what are you saying about the 16 qubits? That they didn’t show much quantum behavior? Or are you saying that their classical controller — presumably this “nanny” that you describe — did MIS for 6-vertex graphs instead of having the 16 qubits do it?

Comment #152 November 27th, 2007 at 1:13 pm

Hi Greg,

I’m saying two things – the classical controller did the majority of the work, and what was left for the “quantum” part may very well have been achieved in a purely classical way. The nanny is the classical process which may masquerade in place of the quantum part. “You”, as the true omelette chef are the classical controller performing the bulk of the work, while the true contribution from the four-year-old is very difficult to discern.

Comment #153 November 27th, 2007 at 2:15 pm

Michael, I think all the posters on this thread would agree that even a sketchy description of D-Wave’s design rules (both hardware and algorithm-related) would be very welcome.

Here “sketchy” means, “the minimum sufficient to suggest concrete avenues for research.” Most technology companies find ways to do this … mightn’t D-Wave manage it too?

Comment #154 November 27th, 2007 at 5:33 pm

I think the design rules would indeed be useful for this simple algorithmic implementation. On the hardware side, we know roughly how they make their devices, based on published papers in PRL and the JPL fab process. However, we do not have a clear understanding of the “quantumness” of operations on these devices. D-Wave has produced an interesting, but thus-far unconvincing, theory to suggest why operations on their system are quantum. But a demonstration of algorithmic performance, plus a theoretical explanation of why the behavior could be quantum, is startlingly insufficient. Instead, definitive experimental evidence that the devices are performing quantum rather than classical analog operations is required (e.g., Chuang’s or Bacon’s ideas on varying temperature). Otherwise, aside from being disingenuous at best, it is impossible to say whether such devices have any chance of working in more complex forms.

Comment #155 November 28th, 2007 at 1:52 am

Michael: I get what you are saying now. Maybe you could be more artful with your analogy, especially since it came from my analogy.

Let us say, more simply, that D-Wave boasted that they have a 2-year-old child prodigy who can make an omelette. (That is, solve a Sudoku.) But the omelette was actually made by a house cook who only asked the “prodigy” to fetch the eggs and onions. They insist that the toddler has special mental powers even though he needs a lot of help in the kitchen, but actually he seems entirely ordinary. (That is, non-quantum.)

Now another question is whether there actually was a nanny (meaning other supporting hardware) who stepped in for the toddler to fetch the eggs and onions. Or if on the day of the demo, the house cook didn’t bother talking to the toddler at all because it was too much trouble. Varun Jain did what he could to make the story sound more honest, but his side of it only goes so far.

Comment #156 November 28th, 2007 at 2:10 am

I’m surprised that with the omelette analogy being fully solidified in this thread, no one has made any jokes about how D-Wave might be thinking that you can’t make an omelette without breaking a few eggs. Fill in the details.

Comment #157 November 28th, 2007 at 9:17 am

Oh Tyler…

Greg – sorry for not being more artful. I don’t really have a significant ability in that area. Note that I didn’t say I was refining your analogy – just revising it. But also note that the nanny in my version isn’t just supporting classical hardware/software. It’s a completely parallel classical computational path using the superconducting hardware – thermal annealing. And it may have actually done what little work was attributed to the “quantum computer.”

Comment #158 November 28th, 2007 at 9:24 am

Analogies and metaphors can add to confusion. For example, physicists’ analogy of a rubber sheet for spacetime is misleading. They show a ball depressing a rubber sheet and say that gravitation is the result of the sheet’s shape. This is not conducive to understanding. The omelette analogy is also a mistake.

Comment #159 November 28th, 2007 at 10:53 am

I disagree. While a detailed technical discussion among an audience entirely composed of experts should not use analogies, the breadth of topics in question suggests that analogies may have a useful role in highlighting how the pieces fit together in this discussion. Analogies serve to assist in understanding general concepts, and their use should be limited to such purposes, as is done here.

Comment #160 November 28th, 2007 at 12:06 pm

Analogies are very important in Science and Mathematics, just as a paper without a “narrative” is harder to get emotionally involved with.

In the very old days, I think that I could have convinced Euclid (if I spoke Greek) and much later, Newton, that in Euclidean space:

“An analogy is a parallelogram in semantic space. To say A is to B as C is to D, where A, B, C, D are statements about the physical universe, social universe, or an abstraction, can be translated: the line segment joining A to B is parallel to, and the same length as, the line segment joining C to D.”

This generalizes to vector spaces, or bimodules for noncommutative cases, and further to metric spaces. One actually sees this done, sort of, in Web 2.0 pages that analyze web pages and make 2-D or 3-D maps of the keywords or sub-domains based on a metric of distance.
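The parallelogram picture can be made concrete with toy vectors (the coordinates below are, of course, made up for illustration): A:B :: C:D means the displacement B − A equals D − C, so D is found by completing the parallelogram.

```python
import numpy as np

# Hypothetical points in a 2-D "semantic space"
A = np.array([0.0, 0.0])
B = np.array([1.0, 2.0])
C = np.array([3.0, 1.0])

# Complete the parallelogram: D is to C as B is to A
D = C + (B - A)

# The sides AB and CD are parallel and of equal length
assert np.allclose(D - C, B - A)
assert np.isclose(np.linalg.norm(D - C), np.linalg.norm(B - A))
print(D)  # [4. 3.]
```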

In that Euclidean sense of analogies, what is an analogy between analogies? Is it a 3-D parallelepiped? How does that generalize?