More “tweets”

Update (Feb. 4): After Luke Muehlhauser of MIRI interviewed me about “philosophical progress,” Luke asked me for other people to interview about philosophy and theoretical computer science. I suggested my friend and colleague Ronald de Wolf of the University of Amsterdam, and I’m delighted that Luke took me up on it. Here’s the resulting interview, which focuses mostly on quantum computing (with a little Kolmogorov complexity and Occam’s Razor thrown in). I read the interview with admiration (and in the hope of learning some tips): Ronald tackles each question with more clarity, precision, and especially levelheadedness than I would.

Another Update: Jeff Kinne asked me to post a link to a forum about the future of the Conference on Computational Complexity (CCC)—and in particular, whether it should continue to be affiliated with the IEEE. Any readers who have ever had any involvement with the CCC conference are encouraged to participate. You can read all about what the issues are in a manifesto written by Dieter van Melkebeek.

Yet Another Update: Some people might be interested in my response to Geordie Rose’s response to the Shin et al. paper about a classical model for the D-Wave machine.

“How ‘Quantum’ is the D-Wave Machine?” by Shin, Smith, Smolin, Vazirani – was previous skepticism too GENEROUS to D-Wave?

D-Wave not of broad enough interest? OK then, try “AM with Multiple Merlins” by Dana Moshkovitz, Russell Impagliazzo, and me

“Remarks on the Physical Church-Turing Thesis” – my talk at the FQXi conference in Vieques, Puerto Rico is now on YouTube

Cool new SciCast site lets you place bets on P vs NP, Unique Games Conjecture, etc. But glitches remain to be ironed out

77 Responses to “More “tweets””

  1. Michael Says:

    Hi, your third link is not functioning.

  2. Scott Says:

    Goddammit, happens every time! Thanks, sorry, and fixed.

  3. Michael Says:

    Haha, no problem. Maybe run through all of them in a post preview before publishing each post?

  4. Sol Warda Says:

    Scott: How about this paper from your own Physics Dept. (correct me please if I’m wrong):

    And this one from Harvard:

  5. Scott Says:

    Sol: Yep, them too!

  6. Bram Cohen Says:

    Color me not surprised by that first one

  7. Sol Warda Says:

    Bram: That’s an IBM conspiracy!!!!.

  8. Sid Says:

    Perhaps a stupid question:

    Suppose we assume that human brains are simulatable by Turing machines (I think this is known?). Now, if you go out into the world, do experiments, and build a physical theory, then hopefully the only reason you built the theory in the first place was so that you can compute predictions of the theory. Indeed, we scorn any theory which is non-predictive.

    And since this computation happened in the brain, then that means that any physical theory humans might care about has to be computable by Turing machines. Doesn’t this prove the Church-Turing hypothesis?

    Of course, this leaves open the possibility that the Universe has some non-Turing computable law, but then we’d never find it because we would never build that theory in the first place.

    What am I missing?

  9. Scott Says:

    Sid #8: Suppose someone handed you a box with flashing lights and aluminum foil that they claimed solved the halting problem. How could you test the box? Well, suppose you gave it thousands of Turing machines for which you already knew whether they halted or not—including TMs corresponding to highly-nontrivial mathematical statements (like Fermat’s Last Theorem), as well as TMs that you knew halted but only after quadrillions of steps—and for each one, the box immediately gave you what you knew to be the right answer.

    In that case, maybe still your favored hypothesis would be that the box was “merely” a virtuoso piece of programming, rather than solving the halting problem in complete generality. However, suppose you then opened the box and discovered, to your amazement, that inside it was creating and evaporating tiny black holes. And suppose that, in order to describe the observed phenomena inside the box, physicists were forced to invent a new quantum theory of gravity that then also predicted the ability to solve the halting problem (as in Roger Penrose’s speculations).

    If that happened, I’d say we had a pretty decent empirical case that the Physical Church-Turing Thesis was false. (FWIW, my personal prediction is that it won’t happen.)

    Of course, if you care about the Physical Extended Church-Turing Thesis, then the situation is simpler in a way. If you’re willing to believe (say) that factoring is not in P, then someone “merely” needs to hand you a box that factors large numbers efficiently, which is a problem for which you can always verify the solutions yourself (you don’t need to generate special instances for which you already know the answer). Such a box could be, for example, a quantum computer.
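A sketch of the self-verification point Scott makes here: a factoring box need not be trusted, since any claimed factor can be checked classically in polynomial time, with no pre-solved test instances needed. The box below is a deliberately naive classical stand-in of my own (a real candidate would be, e.g., a quantum computer running Shor's algorithm).

```python
# Verifying an untrusted factoring "box": the verifier only needs to check
# that the returned value is a nontrivial divisor, which is cheap.

def verify_factoring_box(box, n):
    """Ask the untrusted box for a nontrivial factor of n, then check it."""
    d = box(n)
    return 1 < d < n and n % d == 0   # cheap classical verification

def toy_box(n):
    """Naive stand-in for a real factoring device: trial division."""
    d = 2
    while n % d:
        d += 1
    return d

print(verify_factoring_box(toy_box, 3 * 5))      # True
print(verify_factoring_box(lambda n: n, 3 * 5))  # False: trivial "factor"
```

Contrast with the halting-problem box: there, verification itself required already knowing the answers.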

  10. Robert Says:

    Irrelevantly, what is the demented gibberish which comes up when one clicks “Random page” in the Complexity Zoo?

  11. Rahul Says:

    Will D-Wave comment on #2? Can we take their silence to imply they agree with #2?

  12. Scott Says:

    Robert #10: Dunno! I can ask my “tech support staff” to look into it, though if the Zoo is otherwise working, it’s probably not a high priority…

  13. Scott Says:

    Rahul #11: Someone from D-Wave will have to field that question, should they choose…

  14. Douglas Knight Says:

    Robert, it is spam.

  15. Jair Says:

    Hi Scott! I’m a longtime fan of your blog and I enjoyed your book as well. I was wondering if it is generally accepted that the class P encompasses all of the efficiently-solvable problems in the classical world? For example, even assuming factoring is not in P, is it absolutely necessary to have a quantum computer to break RSA? What evidence is there?


  16. Scott Says:

    Jair: It’s a good question. It’s logically possible that there could be something other than quantum mechanics that would also contradict the Extended Church-Turing Thesis, but I personally would say that we don’t have any convincing example of such a thing. Almost all the ideas I’ve seen are based on one of two things:

    (1) Making measurements to exponential precision, speeding up a computation by an exponential amount (maybe using relativistic time dilation), etc. Ironically, a fundamental barrier on all of these ideas is placed by … wait for it … quantum mechanics! More specifically, the holographic principle from quantum gravity, which upper-bounds the number of qubits and number of computational steps that you can ever have in a bounded region of spacetime, without the region collapsing to form a black hole.

    (2) Phenomena like protein folding, soap bubbles, spin glasses, etc., which can look to the uninitiated like they “instantaneously” solve NP-hard optimization problems. In all of these cases, the catch is that, if you set up one of the truly hard instances of the NP-hard problem, then you’d simply get stuck in a local optimum and never find the global optimum.
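A toy illustration of point (2), my own construction rather than anything from the post: a "deceptive" fitness landscape in which greedy local search, the discrete analogue of a physical system relaxing toward lower energy, is actively pulled away from the global optimum.

```python
import random

# Score = number of 0-bits, EXCEPT the all-ones string secretly scores best.
# Greedy search follows the gradient straight into the all-zeros local
# optimum and never finds the hidden global optimum at all-ones.

def score(bits):
    if all(bits):
        return len(bits) + 1      # hidden global optimum
    return bits.count(0)          # gradient points toward all-zeros

def hill_climb(n, steps=10_000, seed=0):
    rng = random.Random(seed)
    x = [rng.randint(0, 1) for _ in range(n)]
    for _ in range(steps):
        i = rng.randrange(n)
        y = x.copy()
        y[i] ^= 1                 # try flipping one bit
        if score(y) > score(x):
            x = y
    return x

x = hill_climb(30)
print(score(x), all(x))  # stuck at the all-zeros local optimum
```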

    For more about these themes, see my old essay NP-complete Problems and Physical Reality, or this CS Theory StackExchange answer.

  17. Jon Lennox Says:

    So I had a thought about your FQXi talk, particularly when you discussed the limitations of “small causes have small effects” computers, and weird laws of physics that could lead to weird models of computation.

    The thought that came to mind was quantum computers without decoherence. Intuitively, it seems like unitarity should lead to a “small causes can’t have big effects” property like your Digi-Comp II.

    But this isn’t true, because a quantum computer can simulate a classical one, and as long as you give it input that isn’t in a superposition, decoherence doesn’t matter. So where does the unitarity constraint go wrong? And is there any way we could strengthen it (e.g., by limiting what kinds of input states are allowed, or something) that makes a purely unitary quantum computer look like your Digi-Comp?

  18. Scott Says:

    Jon #17: Good question! The short answer is that there’s a confusion of levels here. Yes, at the level of the amplitudes (i.e., the entire wavefunction), the Schrödinger equation is precisely unitary, meaning that “small causes” (i.e., rotating a state by a small angle in Hilbert space) can only ever have small effects. (Indeed, if we wrote out the “state” in a classical probabilistic theory as a huge vector of probabilities, then we’d find that precisely the same property held.)

    By contrast, at the level of the actual observables—e.g., the positions or momenta of the actual particles, assuming one or the other to be relatively definite—quantum mechanics is just as “nonlinear” as any classical theory is. So for example, changing the position of a single particle by a tiny amount can completely change an experiment’s outcome.

    (Keep in mind that even moving a single particle a tiny distance can change our original quantum state to a new state that’s almost orthogonal to the first state! This simple mathematical fact is really the core of what’s going on here.)
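The parenthetical fact is easy to check numerically. By the standard textbook formula, the overlap between a Gaussian wavepacket of width sigma and a copy displaced by d is exp(-d^2 / 8 sigma^2), so a displacement of just a few widths already makes the two states nearly orthogonal. A small numerical check of my own:

```python
import numpy as np

# Overlap <psi_0 | psi_d> of a Gaussian wavepacket with a displaced copy,
# computed by direct integration and compared to the analytic formula.

sigma = 1.0
x = np.linspace(-50, 50, 200_001)
dx = x[1] - x[0]

def packet(center):
    psi = np.exp(-(x - center) ** 2 / (4 * sigma ** 2))
    return psi / np.sqrt(np.sum(np.abs(psi) ** 2) * dx)  # normalize

for d in [0.1, 1.0, 5.0, 10.0]:
    overlap = np.sum(packet(0.0) * packet(d)) * dx
    analytic = np.exp(-d ** 2 / (8 * sigma ** 2))
    print(f"d = {d:4}: overlap {overlap:.3e} (analytic {analytic:.3e})")
```

At d = 10 sigma the overlap is already below 1e-5, even though "the particle barely moved."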

    Yes, you could define a quantum analogue of the Digi-Comp II. In fact BosonSampling, which I’ve been studying for the past 5 years or so, feels close to such an analogue (though the analogy is imperfect: for example, the Digi-Comp has writable internal state at the gates, whereas BosonSampling doesn’t). Indeed that’s one of the things that got me interested in the Digi-Comp in the first place. But the restrictions that you have to impose on universal QC to get BosonSampling are really orthogonal (har, har) to unitarity.

  19. Rahul Says:

    Naive question: Say, one day in the future we really do have a practical scalable super-duper, working QC that can factor large integers. Say one capable of breaking a 1024 bit RSA key.

    How easy or straightforward is it to get this machine to solve other problems, say, a practical protein-folding problem? In practice, is it a trivial algorithmic jump from an integer-factorising QC to other hard problems of practical interest? Or not?

  20. Scott Says:

    Rahul #19: If you had a QC capable of breaking 1024-bit RSA keys, then in practice (and as far as we know today), it would probably be a scalable, universal QC with full quantum fault-tolerance. So then, yes, you could reprogram it to do protein folding or whatever else, in much the same way you could program your classical computer to do more than whatever it came with out of the box.

    Another way to say it is that we have very few examples of “intermediate” QC models that do give you factoring but don’t give you universality. Actually, there’s one big exception to that: Cleve and Watrous showed in 2000 that factoring can be done using logarithmic-depth quantum circuits, whereas we certainly don’t know that to be true for arbitrary quantum computations. In practice, though, it’s hard for me to see how you could get fault-tolerant log-depth QC (with arbitrary couplings, as is needed here), without getting fault-tolerant polynomial-depth QC as well.

  21. Rahul Says:

    Scott #20:

    Thanks. Interesting. So is reprogramming a QC to solve a different problem really analogous to reprogramming a conventional computer? Like, we needed Shor to come up with his algorithm, quite a pathbreaking / non-obvious step, before we knew how to use QCs to factor integers.

    Is something major of that nature needed before QC’s can solve protein folding? Or is the jump really as trivial as “reprogramming” makes it sound?

    How “universal” is a universal QC from a practical viewpoint? Aside from the scaling / error-correction issues, is it fairly clear what the architecture of a general-purpose QC will be & what “reprogramming a QC” will actually be like?

    Again, sorry if these are naive questions.

  22. Sid Says:

    Scott #9:

    I agree that if I found a black box which instantaneously solved the halting problem, then it’d be strong evidence for the violation of the Church-Turing hypothesis.

    But it’s hard for me to wrap my head around the possibility that we would be able to construct a theory from experiments that would say that the halting problem is solvable without actually telling us how to algorithmically solve the halting problem.

    As an analogy, in the violation of the Extended Church-Turing hypothesis, we didn’t get a black box from nature. Instead, Shor used quantum mechanics to tell us exactly how we could do it.

    I guess my question is: Are there “inferable” theories which would predict the ability to solve the halting problem? I realize that “inferable” doesn’t have a precise meaning, but it means something in the neighborhood of “theories that can be inferred using Turing machines in finite time using finite data”. Does this make sense?

  23. Scott Says:

    Rahul #21: In principle, it’s “trivial” to simulate protein folding using a QC: just run the Schrödinger equation forward! (Well, there are some tricks involved in mapping a continuous physics problem, involving particles and fields, onto a discrete system of qubits and gates. But those tricks, which go by names like “Trotterization,” are well-understood as well.) In other words, no feat of insight like Shor’s is needed to see that such systems can indeed be simulated in quantum polynomial time.

    In practice, on the other hand, the polynomial blowups from the known simulation methods are often so large as to make the simulations impractical, even for simulating pretty simple molecules on a pretty large QC. See for example this recent paper. Personally, I’m confident that the polynomials can be brought down with brainpower and hard work, rather than reflecting some sort of fundamental barrier—simply because that’s almost always been true in the past, and there’s no good reason for it not to be true here. But the work to make quantum simulations practically-efficient will need to be invested, and no, it won’t be trivial.
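For readers curious what "Trotterization" means concretely, here is a minimal numerical sketch of my own (not from the comment): evolution under a sum of non-commuting Hamiltonians is approximated by alternating short evolutions under each piece, with error shrinking roughly like 1/n.

```python
import numpy as np

# First-order Trotter: e^{-i(A+B)t} ~ (e^{-iAt/n} e^{-iBt/n})^n.
# Here A, B are the single-qubit Paulis X and Z, which don't commute.

X = np.array([[0, 1], [1, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)

def U(H, t):
    """Exact evolution e^{-iHt} for Hermitian H, via eigendecomposition."""
    w, V = np.linalg.eigh(H)
    return V @ np.diag(np.exp(-1j * w * t)) @ V.conj().T

t = 1.0
exact = U(X + Z, t)
for n in [1, 10, 100]:
    step = U(X, t / n) @ U(Z, t / n)
    trotter = np.linalg.matrix_power(step, n)
    err = np.linalg.norm(trotter - exact)
    print(f"n = {n:4d}: error {err:.1e}")  # error shrinks roughly like 1/n
```

The "tricks" Scott mentions for mapping continuous physics onto qubits and gates involve much more than this, but the splitting idea is the core of it.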

    As for architectures: the right way to think about programming a QC, is that you’re really programming a classical computer that in turn generates a sequence of laser pulses, etc. to control the QC.

    (Though some people will try to tell you otherwise—because they want to write papers about it—there’s really no reason to worry about control flow, loops, conditionals, etc. in a QC. The classical control system can just handle all that stuff for you!)

    So, it’s really no more difficult than classical software development plus quantum circuit synthesis, both of which we more-or-less know how to do.

  24. Fred Says:

    In your talk about the Church-Turing thesis you mention cooling as an interesting limitation on increasing computation speed, but even with an infinitely fast processing unit, you would still need to read/write data to some memory and moving information around is limited by the speed of light and storing information requires space. So memory is organized around the processing node in spheres of increasing radius – smaller and faster memory near the core and bigger but slower memory as the radius increases (memory caching in real computer approximates this speed/size trade-off).

  25. Scott Says:

    Fred #24: If it hadn’t been for the holographic / Bekenstein / black hole limitations coming from quantum gravity, you could’ve imagined that the memory cells could be made arbitrarily small to get around the speed-of-light problem.
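To put a rough number on those limitations: a back-of-the-envelope sketch of my own, using the standard Bekenstein bound of at most 2*pi*E*R / (hbar * c * ln 2) bits of information in a region of radius R containing energy E. The inputs below are purely illustrative.

```python
import math

hbar = 1.054571817e-34  # reduced Planck constant, J*s
c = 2.99792458e8        # speed of light, m/s

def bekenstein_bits(energy_joules, radius_meters):
    """Upper bound on bits storable in a sphere of given radius and energy."""
    return 2 * math.pi * energy_joules * radius_meters / (hbar * c * math.log(2))

# One kilogram of matter (E = m c^2) confined to a one-meter sphere:
E = 1.0 * c ** 2
print(f"{bekenstein_bits(E, 1.0):.2e} bits")  # on the order of 1e43 bits
```

Enormous, but finite: memory cells cannot be made arbitrarily small, so Fred's speed-of-light latency problem cannot be evaded forever.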

  26. Jair Says:

    Thanks for taking the time to answer my question, Scott – that was helpful.

  27. Sam Hopkins Says:

    Do you consider it as a kind of accident that Turing machines are a good model of what is physically possible? For instance, logicians consider all kinds of models of computation that allow access to oracles that solve undecidable problems, but these would be bad models of what is physically possible. Why should we expect the entscheidungsproblem to have anything to do with the physical world?

  28. Scott Says:

    Sid #22: I don’t see any contradiction whatsoever in our brains being able to conceive, play with, and even test physical theories that imply an ability to solve computational problems that our brains themselves can’t solve, or can’t solve nearly as efficiently. Indeed, quantum mechanics seems to be an example of such a theory: humans discovered it, and discovered Shor’s algorithm, even though we ourselves can’t factor integers in polynomial time. If we discovered a physical theory that said, for example, that an infinite number of computational steps could be performed in one reference frame, in only a finite amount of time as measured by a second reference frame, and then the results could be communicated from the computer in the first frame to an observer in the second frame, I don’t see how that would be conceptually much different. (Again, though, I don’t think we actually live in the latter type of universe, because of Bekenstein-type limitations.)

  29. Scott Says:

    Sam #27: No, I don’t consider it an accident. First of all, the whole thing that convinced people of the Church-Turing Thesis in the first place is that Turing machines turn out to be equivalent to zillions of very different-looking models of computation that one can define (lambda calculus, RAM machines, cellular automata, etc). So then the only remaining question is: why did people in the 1930s converge on this large equivalence class of models, rather than (say) that equivalence class augmented by the ability to solve its own halting problem?

    I can think of two answers to that question, both of them plausible to me.

    The first answer is that the logicians (Turing, Gödel, Church, Kleene, Post, Rosser) who converged on this notion of computation, were implicitly guided by their experience of the physical world—or at the very least, by their experience of their own thought processes! And human thought processes (or the operation of humanly-buildable artifacts) simply don’t involve things like carrying out an infinite number of logical operations in a finite amount of time.

    The second answer is that, while you can define Turing machines with oracles for the halting problem, etc., there’s some objective sense in which those things are less “natural” or “basic” than Turing machines themselves. In other words, I’m ready to accept that discrete operations on bits, iterated a finite number of times, are really “special” in the universe of mathematical concepts, alongside the primes, the complex numbers, groups, and all sorts of other concepts that one can generalize in every imaginable direction, but that seem more special than their generalizations.

  30. Ashley Says:

    For those interested, here is a reply from Geordie Rose to the Shin, Smith, Smolin, and Vazirani preprint.

    The specific model proposed in Shin et al. focuses only on one experiment for which there was no expectation of an experimental difference between quantum and classical models and completely (and from my perspective disingenuously) ignores the entire remainder of the mountains of experimental data on the device. For these reasons, the Shin et al. results have no validity and no importance.

  31. Fernando Says:

    D-wave’s answer to Schin et al. paper:

    Comments on that, Scott?

  32. Fred Says:

    Scott, in the talk and in #29 you mention hypothetical computation models which would include the ability to solve the halting problem.
    Why do you pick this ability specifically?
    Is it correct that being able to figure whether any arbitrary program can halt or not means that most open mathematical questions and NP problems could then be solved?
    (by encoding them as a decision problem in a program and then analyzing whether the program halts or not).
    Is it interesting because the halting problem itself is an example of NP-Hard problem that’s not NP-complete?

  33. Rahul Says:

    I just loved Luke Muehlhauser for asking this question:

    What is your subjective probability that we’ll have a 500-qubit quantum computer, which is uncontroversially a quantum computer, within the next 20 years?

    Even more I loved Ronald de Wolf for answering it straight up, no evasion, no vagueness, no waffling.

    Quite high, let’s say probability greater than 2/3.

    Clarity & precision indeed. I’d give an arm & a leg to hear other QC experts answer just this one question.

  34. Scott Says:

    Rahul: OK, I’ll give a 1/8 probability. This is one place where I guess Ronald and I disagree (but only about numbers, not about general principles). Now, may I have your arm and leg? :-)

    [However, on further introspection, part of what goes into my low estimate is game-theoretic: I'd much rather guess that we won't have scalable QC in the next 20 years and then be pleasantly surprised, than guess that we will have it and then be disappointed.]

  35. Rahul Says:

    Scott #34:

    Ah! Can we have your non-game theoretic estimate too? What would you guess then? As high as Ronald de Wolf’s 2/3?

    Paging Gil Kalai, Peter Shor and some others that I’ve seen on this blog. Guys can we have your estimates too? I’d love to see more numbers from the experts. (sorry, my arm & leg are already pawned off to Scott :) )

  36. Scott Says:

    Fred #32: There are plenty of problems that are NP-hard, presumably not in NP, but still computable—for example, the PSPACE- and EXP-complete problems. However, if you want to do something Turing-uncomputable, then it’s very hard to avoid solving the halting problem.

    Mathematically, it’s possible: we’ve known since Kleene-Post and Friedberg-Muchnik in the 1950s that there are problems “intermediate” between computable and the halting problem, and it’s also clear that there are problems “incomparable” to the halting problem (e.g., random Turing degrees). However, I don’t know how to get either of those weird possibilities by any hypothetical change to the laws of physics of the sort people typically discuss—e.g., allowing an infinite number of steps in a finite amount of proper time for some observer. In practice, if your model of computation “breaks the Turing barrier” at all, then it essentially always seems to let you solve at least the halting problem, and possibly even more.

    Again, here we’re talking only about the original, computability-theory Church-Turing Thesis. If we’re interested in the Extended (Polynomial-Time) Church-Turing Thesis, then there’s a much richer zoo of problems that we could imagine being efficiently solvable so as to falsify the ECT: NP-complete problems, factoring, graph isomorphism, lattice problems, BosonSampling…
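The encoding Fred asks about in #32 can be made concrete. A toy example of my own: a program that halts if and only if Goldbach's conjecture is false, so a single query to a halting-problem oracle would settle the conjecture.

```python
# Compile an open mathematical question into a halting question:
# goldbach_searcher() runs forever if every even number >= 4 is a sum of
# two primes, and halts (returning a counterexample) otherwise.

def is_prime(k):
    return k >= 2 and all(k % d for d in range(2, int(k ** 0.5) + 1))

def has_goldbach_pair(n):
    """Is the even number n a sum of two primes?"""
    return any(is_prime(p) and is_prime(n - p) for p in range(2, n))

def goldbach_searcher():
    """Halts iff Goldbach's conjecture is false."""
    n = 4
    while has_goldbach_pair(n):
        n += 2
    return n

# Don't actually call goldbach_searcher(); just check small cases:
print(all(has_goldbach_pair(n) for n in range(4, 1000, 2)))  # True
```

The same pattern works for any statement that a Turing machine can search for counterexamples to, which is why a halting oracle would settle such a large swath of open problems.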

  37. Scott Says:

    Rahul #35: Sorry, a game-theoretic estimate is all you’re getting from me. :-)

  38. Raoul Ohio Says:

    I have a first impression comment on Rose’s remarks (without having read anything in depth):

    D-Wave is promoting not a general purpose QC, but one that might work on a very restricted class of problems. So they demo one. An analysis shows it does not appear to be any better than a classical computation. Then Rose sez: “That analysis was only for a specific problem, so it has no relevance”. Looks to me like he is trying to have it both ways.

    I totally admire Rose’s chutzpah in putting his own money into a long shot at the bleeding edge of technology. And you can’t blame him for trying to put the best spin on things. But in this role he is a marketer, not a scientist.

    Whatever D-Wave comes up with will be a good data point about what works and what is not easy to do.

  39. Raoul Ohio Says:

    Rahul, I will bet below Scott, maybe 1/64.

    I am far from an expert, but I know skepticism is a good bet. I take pride in having spotted Santa Claus as a scam when I was about 3 or so.

  40. Rahul Says:


    Thanks! It’s interesting to hear these numbers. I’ve never seen any similar numerical estimates before. If you know any other researchers who’ve mentioned numbers I’d love to know.

  41. Raoul Ohio Says:


    I have often wished for bets by experts, particularly in cosmology. Here are a couple of things, say known by 2040, with my quick guesses afterwards:

    Human to Mars surface and back alive? (1/8)

    Human to Mars orbit, or beyond, and back alive? (1/3)

    Life detected in solar system \ {earth}? (1/4)

    Life detected outside solar system? (1/8)

    Earth style DNA, RNA, amino acids, etc. common in asteroid belt dust (I made this one up)? (1/25)

    Utilities selling fusion generated power? (1/100)

    Dark matter particle identified? (1/5)

    Dark energy is present? (2/3)

    Cosmic inflation (current dogma version) considered to have actually happened? (1/2)

    Knuth’s AoCP Vol 5 published? (1/10)

    LaTeX 3 available? (1/3)

    D-Wave bigger than Microsoft? (1/10000)

    Ray Kurzweil on other side of singularity? (1/1000)

    Steven Wolfram proposes new theory of everything? (4/5)

    Scott buys into new SW TOE? (1/10)

    Of course, a big problem will be dealing with the uncertainty in some of these outcomes.

  42. Scott Says:

    Ashley #30 and Fernando #31: I think Geordie’s response obfuscates some basic points.

    Most importantly, there’s currently no known class of problems—as in not one, zero—on which the D-Wave machine gets any convincing speedup over what a classical computer can do. Random Ising spin problems were trumpeted as such a class by people misinterpreting the McGeoch-Wang paper (which D-Wave gave considerable publicity to). But then Troyer et al. found that, when you examine things more carefully, the speedup claims evaporate completely. So for Geordie to now say, “oh, that was never a good class of instances to look at anyway” takes some impressive chutzpah. Pray tell, then what is a good class of instances, where you get a genuine speedup? And if you can’t identify such a class, then do you retract all the claims you’ve made in the last year about currently being able to outperform classical computers?

    Moving on to the Shin et al. paper, I actually agree with Geordie that the 8-qubit experiments done a couple years ago made a pretty good prima-facie case for small-scale entanglement in the D-Wave devices: that is, entanglement within each “cluster” in the Chimera graph. (And in any case, we already knew from the Schoelkopf group’s experiments that current technology can indeed entangle small numbers of superconducting qubits.)

    But what about large-scale entanglement, involving all 100 or 500 qubits? As far as I know, the only evidence we had that such entanglement was present really did come from the Boixo et al. correlation analysis. And those correlation patterns are precisely what Shin et al. now say they can reproduce with a classical model. Of course, that doesn’t prove that there’s no large-scale entanglement in the D-Wave machine, but it does undermine the main piece of evidence we had for such entanglement.

    Most tellingly, Matthias Troyer—one of the lead authors of the Boixo et al. paper, and someone I trust as possibly the most evenhanded authority in this entire business—tells me that the Shin et al. paper caused him to change his views: he didn’t know if the correlation patterns could be reproduced in any reasonable model without entanglement, and now he knows that it’s possible.

    It’s worth stressing, again, that even if large-scale entanglement is present in the D-Wave machine, one can still use the Quantum Monte Carlo method from the Boixo et al. paper, so there’s still no reason to expect any speedup over classical computing with the current technology. In other words: in the “debate” between Boixo et al. and Shin et al., both sides completely agree about the lack of speedup. The only point of disagreement is whether the lack of speedup is because D-Wave is doing a form of quantum annealing that can be efficiently simulated classically, or because they’re basically doing classical annealing. (More to the point: is there large-scale entanglement in the device or isn’t there?)

    So, here’s a summary of what we currently know about the D-Wave devices:

    1. Small-scale entanglement. Pretty good evidence that it’s present (though still not a smoking gun).

    2. Large-scale entanglement. Who the hell knows?

    3. Speedup over classical computers. No evidence whatsoever, and indeed careful comparisons have repeatedly found no speedup (though D-Wave’s supporters keep moving the goalposts).

  43. rrtucci Says:

    Rahul and Raoul, predicting the future with such vague priors is BS, even for a Bayesian like me, isn’t it?

    If you want to put some data into your silly prior, consider the Manhattan project which took 6 years (almost to the day) from the day Einstein sent his letter to FDR (at which point nobody knew how to purify U235, and Plutonium had not been discovered yet) to the 2 explosions, using two completely different devices, in Japan. And the guys who did that didn’t have transistors or computers, or lasers or the internet or arxiv or latex or google, not even televisions… If you think those things haven’t increased the pace of scientific discovery, you are crazy.

    Or consider the Apollo project: started 1961, man on the moon 1969.

    Of course QCs will not be constructed by COMPLACENT people like you guys :)

  44. Fred Says:

    #43 “If you think those things haven’t increased the pace of scientific discovery, you are crazy.”

    One could argue that all those things are terrible distractions as well.
    Before the ’60s, people had a better shot at getting things done (instead of debating endlessly on the internet, for example).

  45. Raoul Ohio Says:


    I did not get a chance (in the late 1930s) to guess that likelihood. I think that atom bombs were in SF stories at that time, and as a kid I was a huge SF fan, so I might have guessed high!

    In the 1950s I was an “outer space” fan and would have predicted a man on the moon earlier than 1969 (not having yet learned that skepticism is usually a good bet). In those days they carried rocket launches live on the radio; about 20 hours of delay, then the countdown, then usually an explosion. That was exciting!

  46. quax Says:

    Scott #42 … careful comparisons have repeatedly found no speedup (though D-Wave’s supporters keep moving the goalposts).

    As presumably one of those D-Wave supporters I am wondering how exactly I would have moved the goalposts?

    We clearly started out at the point where D-Wave couldn’t show a quantum speed-up. Their machine has been getting faster, to the point of becoming something that is interesting to experiment with, but we still don’t have any evidence for quantum speed-up.

    So I’d very much like them to continue investigating whether we can find such evidence, whereas you seem to imply that it’s a waste of time.

    From my POV none of the stances have changed. How is this moving the goalposts?

  47. Sam Hopkins Says:

    I wonder if a black box that solves arbitrary Diophantine equations is more plausible physically (in some sense) than one that solves the halting problem. It would be very interesting at least to come up with a physical theory that could find integer solutions to polynomial equations.

  48. Rahul Says:

    moving the goalposts = changing what’s the right problem they want benchmarkers to compare D-Wave’s speed to?

  49. Rahul Says:

    rrtucci #43:

    With that outlook anything’s possible really. Can’t argue with that. We may have a Mars colony by 2018, fusion may provide unlimited power for vehicles by 2020 & by 2025 we won’t even need vehicles since quantum transporters will have been invented.

  50. quax Says:

    Rahul #48, no, that misses the point; my understanding is that any problem set that’ll allow them to demonstrate quantum speed-up would do.

  51. Sniffnoy Says:

    Off-topic — but then what’s really off-topic in a “tweets” thread? — but anyway, apparently the matrix multiplication exponent has been improved again! (Assuming the correctness of this, anyway.) It looks to be even tinier than the other recent improvements, but, still…

  52. Raoul Ohio Says:

    El Reg addresses the D-Wave dustup:

  53. Raoul Ohio Says:


    I predict the sum of all future improvements in the exponent is less than 0.4.

  54. Scott Says:

    Sniffnoy #51: Thanks for the pointer—I hadn’t seen that! Francois Le Gall is a very careful guy, and it’s perfectly plausible on its face that you could lower ω further with more careful optimization. Given the amount of flak I got for blogging the last lowerings of ω, :-) I probably won’t blog this one, but it’s still nice to know.

  55. Scott Says:

    Raoul #52: My adviser was a “boffin”? LOL!

  56. Alexander Vlasov Says:

    rrtucci #43
    Funny, because I remember reading somewhere in 1995 about plans to build a quantum computer and demonstrate a first useful application by 1999, and thinking, “why not, the Manhattan Project only took about 5 years.”

    Scott #34
    Regarding expected times: if the rough relation between probabilities and times can be expressed as
    T1/T2 = ln(1-p2)/ln(1-p1),
    then for p2 = 1/8 and p1 = 2/3 we have T2/T1 ~ 8. So if de Wolf’s estimate lets us hope for a QC during the next 20-60 years,
    should your estimate be read as suggesting time scales of 160-480 years?
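
    As a quick sanity check of the arithmetic above (my sketch, assuming the comment’s exponential waiting-time model, where P(success by time T) = 1 − exp(−T/τ) and each person’s stated odds over the same horizon fix their own τ):

```python
import math

# Matching two probability estimates over the same horizon gives the
# ratio of the implied characteristic timescales:
#   T2/T1 = ln(1 - p1) / ln(1 - p2)
p1 = 2/3  # de Wolf's rough odds of a scalable QC over the horizon
p2 = 1/8  # Scott's rough odds over the same horizon
ratio = math.log(1 - p1) / math.log(1 - p2)

print(round(ratio, 1))                       # ~8.2, matching the comment's T2/T1 ~ 8
print(round(20 * ratio), round(60 * ratio))  # roughly the 160-480 year range
```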

  57. Scott Says:

    quax #46: I wasn’t talking about you, though I’ll note that your position is very much opposed to Geordie’s (at least his position a year ago, when he was loudly trumpeting claims of a speedup)!

    In general, however, I’ve noticed some D-Wave supporters (e.g., ones I’ve met in “real life”) headed down the following slip ‘n slide over the past year:

    1. Aha—D-Wave does give a speedup on random instances! McGeoch-Wang! 3600 times faster! Take that, skeptics!

    2. Well, obviously you’re not going to get a speedup on random instances: it was stupid and misguided for the skeptics to demand that in the first place. To get an asymptotic speedup, you’re going to need more structured instances. Just wait; such a speedup will be shown any day.

    3. Obviously asking for a speedup at all is the completely wrong question! That’s like criticizing the Wright Brothers’ first plane for not being fast and comfortable enough. As everyone knows, the real point is just that, for the first time, D-Wave has demonstrated large-scale quantum behavior in hundreds of superconducting qubits…

    Of course, if the Shin et al. claims hold up, then we can safely predict the slip ‘n slide will take us to

    4. Obviously no one expected to see large-scale quantum behavior at this point, but…

  58. Scott Says:

    Alexander #56: I don’t think linear extrapolation really works here; technology development is not a Poisson process. If it helps, I’ll give maybe 1/2 odds of a scalable QC within the next 100 years. After that, I’d say my uncertainty becomes dominated by Knightian uncertainty about the future of civilization itself (for example, about whether there will even be a civilization).

  59. Fred Says:

    A bit off-topic, a question about QM/QC.

    Does the amount of entanglement in a system have any particular measurable “cost” or is it just a “free lunch”?

    I have two situations, similar to the setup in the EPR paradox.
    1) two entangled particles.
    2) the same two particles but not entangled.
    Besides saying that in one case the wave function is shared and that in the other case we have two wave functions, is there any other difference between the two cases, e.g. slightly different energy?
    Are we really able to entangle more and more particles (i.e., realizing qubits) without limit?
    It would seem reasonable to think that nature has to make some sort of “trade-off” to be able to entangle more and more particles (maybe that question is the same as asking if there are any hidden local variables?).

  60. Rahul Says:

    Assuming we want a public-key crypto alternative that’s resistant to Shor-breaking if and when a scalable QC arrives, how good are the lattice-based crypto options?

    i.e., is it proven that a QC-based attack is impossible / hard? Or are we just waiting for the right kind of “Shor-plus” that can defeat them?

  61. Scott Says:

    Fred #59: Asking whether there’s any limit to the number of particles you can entangle, is a lot like asking whether there’s any limit to the number of people who can share a secret.

    In principle, the answer is no: just keep letting in more and more people! The effort only scales linearly with the number of people, not exponentially or anything crazy like that.

    In practice, the difficulty is obvious: the more people you let in, the greater the chance that one of them will upload the secret to WikiLeaks. Then, ironically, you’ll have overshot your target: while you “merely” wanted a thousand or a million people (all of whom you were keeping track of) in on your secret, now the entire world is in on it.

    In the same way, the more particles you entangle, the greater the chance that one of them will overshoot the target, entangling itself with the measuring apparatus, the air molecules and EM field in the room, etc. In which case, your first response might be “great! even more entanglement than I’d bargained for!” But once the entanglement has spread so far that you can’t keep track of it, it’s just as if a measurement has been made, and you no longer have any hope of doing an interference experiment that would reveal the entanglement.

    Now, the whole point of quantum fault-tolerance is to spread the entanglement “secret” more cleverly among the particles, so that even if (say) any 1% of the particles get entangled with the environment, the secret is still secure with the other 99%—and indeed, the secret can then be “fortified” so that it remains secure even if another 1% of the particles get entangled with the environment, etc. If and when quantum fault-tolerance becomes a reality, there should no longer be any practical limit on the number of particles that can be shown in experiments to be entangled (the current limit, I think, is a few billion particles, e.g. in solid-state and superconducting experiments).

    If there were a limit (not coming from cosmology) on the number of particles you could entangle, then something would have to be wrong either with quantum mechanics itself, or with some other extremely basic assumption of modern physics.
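
    A toy illustration (my sketch, not from the comment) of the linear-scaling point above: preparing the n-qubit GHZ state (|0…0⟩ + |1…1⟩)/√2, the standard “n particles sharing one secret” state, takes one Hadamard followed by one CNOT per additional qubit, so the circuit size grows linearly in n.

```python
# Gate count for the textbook n-qubit GHZ preparation circuit:
# H on qubit 0, then a CNOT from qubit 0 to each of the other n-1 qubits.
def ghz_gate_count(n: int) -> int:
    """Gates needed to entangle n qubits into a GHZ state: 1 Hadamard + (n-1) CNOTs."""
    return 1 + (n - 1)

print(ghz_gate_count(1000))  # linear: a thousand-particle "secret" needs ~1000 gates
```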

  62. Scott Says:

    Rahul #60:

      Is it proven that a QC-based attack [on lattice-based crypto] is impossible / hard? Or are we just waiting for the right kind of “Shor-plus” that can defeat them?

    Currently, there’s no security proof for any public-key cryptosystem, even if you’re willing to assume (say) P≠NP, or NP⊄BQP, or the existence of one-way functions. You always have to make some problem-specific hardness assumption, so the only questions are which assumption, how well has it been studied, etc.

    Having said that, the approximate shortest vector problem (and related lattice problems) have been very well-studied by now, and people have tried and failed to find quantum algorithms for these problems for at least 16 years. And we understand in some detail why “Shor-plus” won’t suffice: at the very least, you’ll need “Shor-plus-plus-plus.” :-) (In particular, you’d first need to generalize Shor’s algorithm to nonabelian groups like the dihedral group; and even then, lattice problems don’t exactly fit into the nonabelian HSP framework, but only approximately.)
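
    To make the flavor of those hardness assumptions concrete, here is a toy Regev-style LWE cipher (my sketch with made-up, cryptographically worthless parameters, not any scheme discussed above): security rests on the difficulty of recovering a secret s from noisy linear equations b = A·s + e (mod q).

```python
import random

# Toy learning-with-errors (LWE) encryption of a single bit.
n, q, m = 8, 257, 20
random.seed(0)

s = [random.randrange(q) for _ in range(n)]                      # secret key
A = [[random.randrange(q) for _ in range(n)] for _ in range(m)]  # public matrix
e = [random.choice([-1, 0, 1]) for _ in range(m)]                # small noise
b = [(sum(A[i][j] * s[j] for j in range(n)) + e[i]) % q for i in range(m)]

def encrypt(bit):
    # Sum a random subset of the public noisy equations; hide the bit
    # in the "high" half of Z_q by adding bit * floor(q/2).
    rows = [i for i in range(m) if random.random() < 0.5]
    c1 = [sum(A[i][j] for i in rows) % q for j in range(n)]
    c2 = (sum(b[i] for i in rows) + bit * (q // 2)) % q
    return c1, c2

def decrypt(c1, c2):
    # c2 - c1·s = (subset noise) + bit * floor(q/2) mod q; the noise is
    # at most m < q/4, so the bit is 1 iff the residue is near q/2.
    v = (c2 - sum(c1[j] * s[j] for j in range(n))) % q
    return 1 if q // 4 < v < 3 * q // 4 else 0
```

    The point of the sketch is the asymmetry: encrypting and decrypting are cheap linear algebra, while recovering s (or the bit) without it appears to require solving a noisy lattice problem of exactly the kind that has resisted quantum algorithms.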

  63. Raoul Ohio Says:

    Scott #55,

    BTW, in British slang,

    (1) a Boffin is a top-notch scientist,

    (2) an Egghead, Nerd, or Geek is a scientist,

    (3) social scientists and philosophers are “trick cyclists”.

  64. Jon Lennox Says:

    Scott at #58: Personally, my probability for the development of Scalable QC is dominated not by my uncertainty about the difficulty of achieving it, but by uncertainty about the future of funding to do the work.

    On the one hand, basic science research funding has had a terrible time of late; on the other hand, VCs backing a DWave++, or Google, or the NSA, might decide to open up a firehose of money to fund development of production-quality Scalable QC.

    Yes, the Manhattan Project achieved an atomic bomb in five years; but that’s because the U.S. Government threw the equivalent of thirty billion dollars (in current money) at it.

    My estimate of the likelihood of this happening is pretty thoroughly Knightian.

  65. Fred Says:

    Scott #61: Thanks for the detailed answer, that’s a really striking analogy :)

  66. rrtucci Says:

    Jon Lennox, the sad thing is that the NSA and British Intelligence spent 100 billion in just one year to hack into your smart phone and Angry Birds. That’s the evil of classified research. Mediocrity, favoritism, fraud and abuse are quickly made top secret.

    How about the probability that China will pour 30 billion into developing a QC? I’ve noticed that in the last 2 years, the number of QC papers in arXiv with monosyllabic Chinese names as authors has skyrocketed. China has 4 times the population of the US. They are hungry to prove themselves. They have given carte blanche to their QC scientists. Their students consistently do better than American students in Science.

  67. quax Says:

    Scott #57 Thanks for elaborating. Obviously I prefer to look at the bright side of the story, but I also want to stick to the facts.

    And there’s no two ways about it: so far, evidence for a quantum speed-up has been elusive. Yet hope springs eternal :-)

  68. rrtucci Says:

    correction: it’s probably more like 100 billion dollars total, not in just one year

  69. Rahul Says:


    What’s the latest on the experimental work on Boson Sampling? Can I request a blog post? That work seemed tantalizingly interesting in the early optics experiments you posted about a year ago. Have the experimentalists succeeded in scaling those up? Are we getting closer to the point where Boson Sampling is solving a problem large enough to be challenging for a classical simulation?

  70. rrtucci Says:

    NSA metadata reveals that
    Feb 6:
    9:00 AM
    Scott Aaronson reads cover story in TIME magazine which is highly favorable to D-Wave
    9:15 AM
    Scott plays Angry Birds for 15 minutes
    9:30 AM
    Scott plays with daughter Lily for 30 minutes using Xbox
    Scott uses iphone to call MIT infirmary, complaining of severe abdominal pains
    Scott hospitalized at Mass General for severe bleeding ulcer, probably stress induced

  71. Scott Says:

    Rahul #69: Dude, I posted about Scattershot BosonSampling (which was a spectacular idea that emerged from the experimental groups) just a few months ago. I haven’t heard anything similarly big since then, but if I do hear something, then sure, I’ll blog about it.

    (As I keep pointing out, science doesn’t move at blog-speed or news-cycle-speed, no matter how many people want it to! And also, the experimentalists don’t always keep me in the loop! I learned about most of the earlier experiments when the papers appeared on the arXiv, just like everyone else.)

  72. Raoul Ohio Says:


    You’re kidding? Time Magazine is still being published?

  73. rrtucci Says:

    ,9171,2164806,00.html

  74. Rahul Says:

    Are D-Wave’s innards really the coldest spot in the Universe? Or is that TIME article starting off with an exaggerated fantasy? Too bad the whole thing’s gated.

  75. quax Says:

    Rahul #74: they don’t hold the record, but they do run much colder than deep space. (The article at the link has a nice chart that shows this in context.)

    The holdover background radiation makes deep space cozy-warm in comparison.

  76. Rahul Says:


    Thanks! But colder than deep space is hardly special these days, right? Don’t a lot of these solid-state physics experiments routinely get below 1 K?

    To make D-Wave sound like *the* coldest spot in the universe is a huge exaggeration. And he makes it sound as if this changes something major in how astronomers view the coldness of the universe.

  77. Scott Says:

    Gentlemen: You can move this discussion over to today’s post, if you like. :-)

Leave a Reply