Review of Vivek Wadhwa’s Washington Post column on quantum computing

Various people pointed me to a Washington Post piece by Vivek Wadhwa, entitled “Quantum computers may be more of an imminent threat than AI.”  I know I’m late to the party, but in the spirit of Pete Wells’ famous New York Times “review” of Guy Fieri’s now-closed Times Square restaurant, I have a few questions that have been gnawing at me:

Mr. Wadhwa, when you decided to use the Traveling Salesman Problem as your go-to example of a problem that quantum computers can solve quickly, did the thought ever cross your mind that maybe you should look this stuff up first—let’s say, on Wikipedia?  Or that you should email one person—just one, anywhere on the planet—who works in quantum algorithms?

When you wrote of the Traveling Salesman Problem that “[i]t would take a laptop computer 1,000 years to compute the most efficient route between 22 cities”—how confident are you about that?  Willing to bet your house?  Your car?  How much would it blow your mind if I told you that a standard laptop, running a halfway decent algorithm, could handle 22 cities in a fraction of a second?
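(For anyone who’d like to check that claim rather than bet the house on it: here is a minimal Python sketch, assuming nothing fancier than the textbook Held-Karp dynamic program, which solves the n-city problem exactly in roughly n^2 * 2^n steps. For n = 22 that is on the order of 4 x 10^8 basic operations: well under a second for a compiled implementation or a real solver like Concorde, minutes at worst for this unoptimized script, and in any case nowhere near 1,000 years.)

```python
# Exact TSP via the Held-Karp dynamic program -- a minimal, unoptimized sketch,
# not a tuned solver.  Roughly n^2 * 2^n steps: ~4e8 basic operations at n = 22.
import itertools, math, random

def held_karp(dist):
    """Length of the shortest tour visiting every city exactly once."""
    n = len(dist)
    # dp[(mask, j)] = cheapest path starting at city 0, visiting exactly the
    # cities in `mask` (a bitmask over cities 1..n-1), and ending at city j.
    dp = {(1 << (j - 1), j): dist[0][j] for j in range(1, n)}
    for size in range(2, n):
        for subset in itertools.combinations(range(1, n), size):
            mask = 0
            for j in subset:
                mask |= 1 << (j - 1)
            for j in subset:
                prev = mask ^ (1 << (j - 1))
                dp[(mask, j)] = min(dp[(prev, k)] + dist[k][j]
                                    for k in subset if k != j)
    full = (1 << (n - 1)) - 1
    return min(dp[(full, j)] + dist[j][0] for j in range(1, n))

if __name__ == "__main__":
    random.seed(0)
    n = 15  # raise toward 22 with some patience, or port to C for real speed
    cities = [(random.random(), random.random()) for _ in range(n)]
    dist = [[math.dist(p, q) for q in cities] for p in cities]
    print(round(held_karp(dist), 4))
```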

When you explained that quantum computing is “equivalent to opening a combination lock by trying every possible number and sequence simultaneously,” where did this knowledge come from?  Did it come from the same source you consulted before you pronounced the death of Bitcoin … in January 2016?

Had you wanted to consult someone who knew the first thing about quantum computing, the subject of your column, would you have been able to use a search engine to find one?  Or would you have simply found another “expert,” in the consulting or think-tank worlds, who “knew” the same things about quantum computing that you do?

Incidentally, when you wrote that quantum computing “could pose a greater burden on businesses than the Y2K computer bug did toward the end of the ’90s,” were you trying to communicate how large the burden might be?

And when you wrote that

[T]here is substantial progress in the development of algorithms that are “quantum safe.” One promising field is matrix multiplication, which takes advantage of the techniques that allow quantum computers to be able to analyze so much information.

—were you generating random text using one of those Markov chain programs?  If not, then what were you referring to?

Would you agree that the Washington Post has been a leader in investigative journalism exposing Trump’s malfeasance?  Do you, like me, consider them one of the most important venues on earth for people to be able to trust right now?  How does it happen that the Washington Post publishes a quantum computing piece filled with errors that would embarrass a high-school student doing a term project (and we won’t even count the reference to Stephen “Hawkings”—that’s a freebie)?

Were the fact-checkers home with the flu?  Did they give your column a pass simply because it was “perspective” rather than news?  Or did they trust you as a widely-published technology expert?  How does one become such an expert, anyway?

Thanks!


Update (Feb. 21): For casual readers, Vivek Wadhwa quickly came into the comments section to try to defend himself—before leaving in a huff as a chorus of commenters tried to explain why he was wrong. As far as I know, he has not posted any corrections to his Washington Post piece. Wadhwa’s central defense was that he was simply repeating what Michelle Simmons, a noted quantum computing experimentalist in Australia, said in various talks on YouTube—which turns out to be largely true (though Wadhwa said explicitly that quantum computers could efficiently solve TSP, while Simmons mostly left this as an unstated implication). As a result, while Wadhwa should obviously have followed the journalistic practice of checking incredible-sounding claims—on Wikipedia if nowhere else!—before repeating them in the Washington Post, I now feel that Simmons shares in the responsibility for this. As John Preskill tweeted, an excellent lesson to draw from this affair is that everyone in our field needs to be careful to say things that are true when speaking to the public.

260 Responses to “Review of Vivek Wadhwa’s Washington Post column on quantum computing”

  1. Atreat Says:

    “[T]here is substantial progress in the development of algorithms that are “quantum safe.” One promising field is matrix multiplication, which takes advantage of the techniques that allow quantum computers to be able to analyze so much information.”

    Not sure, but I think this might have come from mLab: https://cemulate.github.io/the-mlab/

  2. Vivek Wadhwa Says:

    Scott, I just watched your fascinating and excellent TedX talk and really appreciate your perspectives. I’ll start by admitting that I struggle with the concepts of quantum computing and found it very hard to simplify these. I have read your criticisms of journalists who have had the same issue and your frustrations with the deficiency.

    As far as the traveling salesman problem goes, the person I learned of this from a few years ago is Michelle Simmons, director of the Centre for Quantum Computation & Communication Technology, University of NSW. This TedX talk that she gave was brilliant and I wrote to her to thank her for opening my eyes: https://www.youtube.com/watch?v=cugu4iW4W54. She repeated this example in a recent piece in ZDNet: http://www.zdnet.com/article/australias-ambitious-plan-to-win-the-quantum-race/ I also consulted a couple of other gurus and no one raised any issue with this example.

    If you give me a better way of explaining how quantum computers work I will surely use that. But I find your comparison of this to my criticism of Bitcoin’s demise as a digital currency to be unprofessional and petty. Surely you don’t have to resort to such nastiness.

    On Bitcoin, I explained my views here: https://www.nbcnews.com/think/opinion/bitcoin-bubble-going-burst-let-s-promote-viable-digital-currencies-ncna834186 It is a Ponzi scheme of sorts and I would love to see you defend this.

    Vivek Wadhwa
    http://www.wadhwa.com

  3. Sniffnoy Says:

    Incidentally, when you wrote that quantum computing “could pose a greater burden on businesses than the Y2K computer bug did toward the end of the ’90s,” were you trying to communicate how large the burden might be? Or was that intentional irony?

    I mean it may not be a great illustration, but Y2K did require an enormous amount of work to fix, by all accounts I’ve heard; it ended up not actually causing much of a problem only because so much work went into averting it, so it was still a huge burden. Going by Wikipedia, the total amount of money spent to fix it was on the order of 100 billion dollars, so…

  4. Atreat Says:

    Vivek: perhaps this will help -> https://www.smbc-comics.com/comic/the-talk-3

  5. Vivek Wadhwa Says:

    Sniffnoy, I can’t find anyone who can give me an estimate of what the costs would be to upgrade all RSA-based systems and their equivalents to be quantum secure. What consultants who worked on those systems and who know a little about quantum computing say is that this could be a bigger problem because you have to change the code as well as the database. I would welcome input on this.

  6. Vivek Wadhwa Says:

    Atreat, the comic doesn’t help. Please give me 1-2 sentences that your children could understand that explains the possibilities of quantum computing and I will use these in future articles. I acknowledge that I am not as smart as most of you are. I struggle with the concepts of quantum computing, these just don’t make sense to me.

  7. Atreat Says:

    Hi Vivek,

    I’m no quantum computing expert. I’ll leave that to the others here. I do think you’ve been badly misled by the videos and articles you cited. For folks like Scott I think this kind of article is extremely frustrating as he’s been trying to correct the record on the popular imagination of what quantum computing actually entails. This is his chosen profession after all so he takes it fairly seriously 🙂

    FWIW, I think a key sentence from the comic I cited is this, “In Quantum Computing, the whole idea is just to choreograph a pattern of interference where the paths leading to each wrong answer interfere destructively and cancel out, while the paths leading to the right answer reinforce each other.”

    Turns out that is not so easy. Crucially, we cannot just take any algorithm run on a classical computer and run it on a quantum computer to magically make it massively parallel. In fact, we have relatively few algorithms that are thought to benefit from a quantum speedup. The most highly cited, and perhaps the one with the most real-world impact, is Shor’s algorithm: https://en.wikipedia.org/wiki/Shor%27s_algorithm This is the one that has implications for modern cryptography.

    To be sure, there are other algorithms thought to show a quantum speedup, but the real-world impacts are nowhere near as universal as sometimes popularly believed. The other commonly cited place to have a real-world impact is that quantum computers are thought to be able to model quantum mechanics much better than a classical computer.

    Again, I’m no expert. Just learned a bit from reading Scott who *is* an expert. In other words, you should take what I say with a grain of salt and consult actual real quantum experts … I’m sure Scott would be happy to pass along some names … before writing another article.

    Cheers,
    Adam

  8. Candide III Says:

    Scott: good takedown. Now I’d like to ask you to re-read your post thinking about Gell-Mann amnesia. What other topics, that you don’t have personal expertise in, does the Washington Post cover as badly as it does quantum computing?

    Vivek: if you struggle with the concepts of quantum computing and they don’t make sense to you, that’s perfectly OK, no shame in that. Any number of excellent and/or smart people know nothing about quantum computing, not to mention even larger numbers of less smart and/or excellent people like yours truly. The question that has me stumped is, if you admittedly struggle with the concepts of quantum computing and they don’t make sense to you, why do you write about it — in the Washington Post, no less? Would you consider writing for the Post about, say, the Afghan-Thai fusion restaurant scene in DC without knowing the first thing about either cuisine?

  9. Jean-Gabriel Young Says:

    That’s unfortunate; the Gell-Mann amnesia effect immediately comes to mind…

  10. Scott Says:

    Vivek: I’m glad you’re interested in learning about this area! Unfortunately, whichever people you talked to failed you utterly, if they didn’t flag the traveling salesman or “combination lock” things. (If you tell me who those people were, I can write to them to ask what exactly was going through their heads.)

    Very briefly, there is no evidence at present that quantum computers will give you more than about a square-root speedup for traveling salesman, or for most other NP-complete problems. The very first thing to learn about quantum speedups is that they’re NOT as simple as “trying every combination at once.” It all depends on your problem having a structure that lets you set up a pattern of constructive and destructive interference, so that different contributions to each wrong answer cancel each other out while the different contributions to the right answer(s) reinforce. The amazing exponential speedups that we know, compared to the best known classical algorithms, are mostly confined to factoring and other problems important for public-key cryptography, as well as quantum simulation and a few other things.
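    If it helps to see that interference story in code rather than words, here is about the smallest worked example there is, the Deutsch problem, simulated with a two-entry amplitude vector (a toy sketch that ignores global phase, and obviously not a recipe for real hardware). Classically you need two evaluations of f to tell a constant function from a balanced one; here a single query suffices, because the amplitude flowing toward the wrong answer cancels exactly:

```python
# Toy simulation of the Deutsch problem: decide whether f: {0,1} -> {0,1} is
# constant or balanced.  One "quantum" query suffices because interference
# cancels the amplitude on the wrong outcome; classically you need two calls.
import math

def deutsch(f):
    s = 1 / math.sqrt(2)
    amp = [1.0, 0.0]                                        # start in |0>
    amp = [s * (amp[0] + amp[1]), s * (amp[0] - amp[1])]    # Hadamard
    amp = [(-1) ** f(x) * amp[x] for x in (0, 1)]           # phase oracle
    amp = [s * (amp[0] + amp[1]), s * (amp[0] - amp[1])]    # Hadamard again
    # Measuring now gives 0 with certainty iff f is constant.
    return "constant" if abs(amp[0]) > 0.5 else "balanced"

print(deutsch(lambda x: 0))  # constant function -> "constant"
print(deutsch(lambda x: x))  # balanced function -> "balanced"
```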

    If my cartoon was no good then try my “Shor, I’ll do it” blog post, or other resources that you can find on the sidebar to the right of this blog. Better now than never! 🙂

  11. Scott Says:

    Sniffnoy #3: OK, fair point.

  12. Chris Says:

    I am also enjoying the use of the phrase “classic computers” – perhaps this refers to the ZX Spectrum, or the Commodore 64!

    To Vivek Wadhwa: if you struggle with the concepts of quantum computing, perhaps you are not the one to write about them. It seems extremely weird to write about something without even being able to critically evaluate the claims made about it. It’s not about intelligence, you clearly are intelligent. But you are lacking in some prerequisite knowledge to write such an article effectively.

  13. gentzen Says:

    Vivek Wadhwa #6: Scott’s recent John Preskill, laziness enabler post should help (or rather the paper it advertises). John Preskill’s paper is full of easily quotable material about quantum computing, and motivation for the current noisy intermediate-scale quantum technology era. Here is where he describes his own reasons for being excited about quantum computing:

    For a physicist like me, what is really exciting about quantum computing is that we have good reason to believe that a quantum computer would be able to efficiently simulate any process that occurs in Nature. That’s not true for classical (i.e., non-quantum) digital computers, which can’t simulate highly entangled quantum systems. With quantum computers we should be able to probe more deeply into the properties of complex molecules and exotic materials, and also to explore fundamental physics in new ways, for example by simulating the properties of elementary particles, or the quantum behavior of a black hole, or the evolution of the universe right after the big bang.

  14. Sniffnoy Says:

    Alternatively: Wadhwa should have read this blog’s tagline. 😛

  15. Joshua Zelinsky Says:

    “Sniffnoy, I can’t find anyone who can give me an estimate of what the costs would be to upgrade all RSA-based systems and their equivalents to be quantum secure.”

    I don’t have an estimate for this, but it should be clear that we’re nowhere near needing to do this. There’s a decent argument that switching to cryptosystems which are believed to be quantum-resistant right now is actually a bad idea, for two big reasons: First, those systems have had much less serious examination by many people, and so we’re not as certain about their general robustness even in classical contexts. Second, quantum computing is still sufficiently in its infancy that we can’t rule out that there are efficient quantum algorithms for some of them.

    Right now, as it stands, we cannot show in a classical context that having a fast algorithm for factoring goes against any deep conjecture (such as forcing P=NP or even forcing the polynomial hierarchy to collapse). But we are also nowhere near proving that any proposed quantum resistant encryption system would cause some sort of bad collapse if it turned out to be breakable in polynomial time on a quantum computer.

    I’m a number theorist, not a quantum computing expert, so I’m only qualified to have an opinion on half of what I’m about to say, and even then my certainty level is low, but I do think this is still worth stating: If someone told me that sometime in the next year either a fast, classical factoring algorithm had been discovered, or that quantum computers had advanced to have enough qubits to reliably factor numbers the size of those used in RSA or to reliably do discrete log at the size that Diffie-Hellman uses, I’d strongly guess that the first one had happened rather than the second, and that would also apply to 3 years or 5 years down the road. Moreover, an improved classical factoring algorithm doesn’t need to be in polynomial time in order to functionally break encryption. An improvement in the choices of polynomials in the general number field sieve https://en.wikipedia.org/wiki/General_number_field_sieve could plausibly drastically increase the efficiency of factoring enough to break smallish RSA keys without having any deep implications either for number theory or for computational complexity.

  16. Most inaccurate media number ever? | Stats Chat Says:

    […] Update: Scott Aaronson, who knows from quantum, is Not Impressed. […]

  17. Bob Strauss Says:

    I can’t refute Scott’s critique, but as a fellow journalist, I have to say that it was pretty brave of Vivek to post in this thread.

  18. Candide III Says:

    I second Bob, coming here to engage and ask questions is commendable even if a bit late.

  19. fred Says:

    Scott Aaronson, destroyer of worlds…

  20. Matt R. Says:

    I’m really glad Vivek has responded in this thread. On the one hand it shows a willingness to engage with those that disagree with his work. That’s incredibly important for the kind of world most of us Shtetl-Optimized readers want to live in.

    On the other hand, if you’re so ready to admit these concepts are hard for you to understand, __maybe you shouldn’t be writing globally publicized articles about them__.

    If only the rationalist (read: Enlightenment) ideals of __getting the right answers__ were held outside of our small corner of the internet…

  21. Robert LeChef Says:

    To quote the ornery David Bentley Hart, “Journalism is the art of translating abysmal ignorance into execrable prose.” I find it odd that Scott holds the Pravda on the Potomac in such high esteem [0], though I have no personal animus toward Vivek, only the usual confusion about why journalists insist on expounding on topics they have no basic comprehension of. The mass media must always be read in the same spirit one might read a less vulgar version of the National Enquirer. In this regard, Vivek’s article does not depart in terms of quality from the standard fare of popular science books and magazines.

    [0] Crichton’s “Gell-Mann Amnesia” quote is fantastically apropos in this regard, and certain kinds of specialists ought to recall it so that it might temper snobbery and inspire a more charitable response. I can’t tell you how often specialists, often competent in their own narrow areas of expertise, produce some of the most absurd and ignorant nonsense ever put to paper when they falsely presume to have anything of value to say about topics outside of their fields.

  22. Kevin Stangl Says:

    Whereof one cannot speak, thereof one must be silent- Wittgenstein

  23. Vivek Wadhwa Says:

    Joshua Zelinsky: very credible people from IBM, Google, and Microsoft say that we are close to breakthroughs in quantum computing that could enable these to run Shor’s algorithm. If this is true, then there is an urgent need to upgrade security systems. I know that Scott Aaronson said in 2011 to the NY Times that these “might still be decades away”, but it is looking increasingly likely that this happens sooner rather than later. With Y2K we knew when the deadline was, here we have no firm date. Governments are also working on these and they don’t make press releases or publish papers.

    Scott: I’ve also exchanged emails with William Cook of U Waterloo about the techniques he developed to solve the TSP. Interesting and useful algorithms. But the person you seem to be ignoring, the person I cited above is Michelle Simmons, a scientist who is actually developing the quantum computing technologies. She was just awarded “Australian of the year” for her accomplishments. If you watch the video I linked and read her papers, she clearly expresses opinions that are different than yours. I would not discount her views. I have found her work to be the most interesting of all and hold her in the highest possible regard.

    Many of the comments here are interesting and insightful but I am surprised at the unprofessional comments. I thought this was a bunch of academic researchers, not the Boys Club that I see in Silicon Valley.

    Vivek

  24. Jon K. Says:

    I’m just curious, is there a short, layman’s analogy/explanation that tries to underscore the source of quantum computation speed-ups (if you’re problem is amenable to QC), without falling into the “tries all solutions at once” fallacy? Is there a simple explanation that conveys how the simultaneous aspects of quantum systems could be leveraged? (I realize there may not be.)

    (I hope my question didn’t already assume something incorrect about QM 🙂 )

  25. tsp Says:

    “It would take a laptop computer 1,000 years to compute the most efficient route between 22 cities, for example. A quantum computer could do this within minutes, possibly seconds.”

    A 22-city problem could be solved on a laptop in a fraction of a second.

  26. Jon K. Says:

    *your

  27. Noah Stephens-Davidowitz Says:

    I think quantum computing experts (I am not one) are too hesitant to say the following: quantum computers appear to be significantly better than classical computers “only” for a very small class of problems. For the vast vast majority of problems of interest, they appear to provide either no improvement at all over classical computers, or a “moderate” speedup.

    (One source of confusion is that these much more common “moderate” speedups often get lumped together with the rare gigantic speedups. There’s really a qualitative difference. For one thing, smaller speedups won’t be very useful at all until we’re able to build quantum computers on scales roughly comparable to our current classical computers.)

    The two most important applications of quantum computers appear to be (1) the simulation of large quantum systems; and (2) factoring/discrete logarithm. There are others too, of course, some of which might prove to be quite important, but I think that these are the only two applications that are well understood and unambiguously extremely important.

    The simulation of quantum systems is an awesome application. I think it gets less hype than it deserves because it just doesn’t sound cool to say “quantum computers can simulate quantum systems.” This fact sounds trivial, boring, and/or esoteric, but it’s simply none of those things. This will likely have huge effects on society via physics, engineering, materials science, molecular biology, etc. Good popular articles about quantum computers should probably focus on this almost entirely.

    The second application is strange. We don’t really care much about factoring gigantic numbers or computing discrete logarithms in huge groups. But, we happen to have based the security of the internet on essentially the first two (public-key) cryptosystems that we discovered, which happen to be broken if people can factor huge numbers or compute discrete logarithms in gigantic groups. We now know of other cryptosystems that are apparently secure against quantum computers, and if we’d just discovered those first instead, then very few people would care that there happens to be a fast quantum algorithm for factoring. As it is, this will probably turn out to be a minor inconvenience, but we do have to deal with it.

  28. Luca Says:

    This shows how completely different the standards are for reporting on math-based fields compared to other fields.

    Imagine that an article on the economic implications of the future of medicine in a major national newspaper says something like “doctors are concerned that the overprescription of antibiotics will have negative consequences on the immune system of the genome.”

    Then a medicine blog run by an MD says, well, that literally doesn’t mean anything.

    Then the author of the article comments to say, hey, I don’t know anything about medicine, but if you want to recommend some reading material I am all for it.

    Then the MD says here, this is a cartoon explanation of the immune system, like, it is literally a cartoon.

    And the author of an article on the economic implications of the future of medicine in a national newspaper says, LOL man, that’s way too hard, there is like medicine concepts in it.

  29. Frank Wilhoit Says:

    None of this has anything to do with quantum computing (or with any other hypothetical specialized topic upon which WaPo might have felt a need to offer so-many-hundred words to its readership). It has everything to do with the incentives of publishers in WaPo’s position.

    Several commenters have, in effect, asked Mr. Wadhwa why he did not recognize, and act upon, a responsibility to decline this assignment if he felt unable to comprehend its subject matter. He has not responded and may yet; but it appears quite plausible to me that he may not have been in a position to decline the assignment, in which case those questions are perfectly unfair to him, as they reveal misfeasance not on his part but on the part of his employer.

    One way to look at this is that his employers commanded him to use his/their pulpit (quite a “bully” one, as Th. Roosevelt might have said) in order to disseminate information, if possible, or misinformation, if necessary — the latter not as a first choice, but as preferable to holding silence and thereby losing eyeshare to one of their competitors.

    So, perhaps the important point here is that WaPo and the other occupants of its market segment have perverse incentives, rather than the precise degree of botch that may have been made of this particular topic in this instance.

  30. Mike Musson Says:

    That is one of my favorites.
    “Wovon man nicht sprechen kann, darüber muß man schweigen.”—Ludwig Wittgenstein

  31. Sniffnoy Says:

    Joshua Zelinsky: very credible people from IBM, Google, and Microsoft say that we are close to breakthroughs in quantum computing that could enable these to run Shor’s algorithm. If this is true, then there is an urgent need to upgrade security systems.

    I think you’ve missed the point of Josh’s argument here. There are already quantum computers that can run Shor’s algorithm. The question isn’t “can we run Shor’s algorithm”; the question is, on numbers of what size.

  32. Scott Says:

    Vivek #23: William Cook is not a quantum computing guy, but he certainly could’ve told you that your claim that TSP with 22 cities takes a laptop a thousand years was laughable. Did he? Did you run it by him?

    Can you point me to any paper Michelle Simmons has written in which she explains how to solve TSP in polynomial time using a quantum computer? Do you have any shadow of an inkling of how big a breakthrough that would be, if it were true?

    I regret that I haven’t yet had the pleasure of meeting Dr. Simmons, after 20 years in this field. But if she actually claimed anywhere that quantum computers can efficiently solve TSP—and I’m not assuming she has until you show me a reference—a minute of googling reveals what the issue might be. Namely, she’s an experimentalist. QC experimentalists, while extremely careful in building their systems and describing them, have sometimes been wildly, forehead-bangingly wrong when asked to explain what QCs will and won’t be able to do once we have them. It’s not their area, just like calibrating a laser isn’t mine. For what’s known about quantum algorithms, it’s best to talk to a theorist. Did you try to? There are hundreds of us.

    With Shor’s algorithm, you seem to be conflating what IBM, Google, Intel, and others really are doing over the next few years—namely, building QCs with ~50-100 qubits and trying to achieve “quantum supremacy,” which is exciting enough in itself—with what they hope to do eventually, which is build full, scalable, fault-tolerant QCs with millions of qubits, capable for example of running Shor’s algorithm to break (say) 2048-bit RSA. The latter is still a lot further in the future, pending some breakthrough that neither you, I, nor anyone else can predict.

    I can’t resist asking one final question: so you know about Shor’s algorithm, which is great. If you still think QCs would work by simply trying every possible combination in parallel (and could thereby trivially crack TSP and all the rest), then why do you imagine that Shor’s algorithm was even necessary, or surprising?

  33. Craig Says:

    Vivek,

    There is a minority view in the scientific community that universal quantum computers with lots of qubits (like the ones that IBM, Intel, and Google recently built with 50 qubits) cannot work properly in principle.

    You ever wonder why IBM, Intel, and Google have not said anything more about these quantum computers that they built other than “we built them”, yet it has been a few months now since IBM’s first announcement? Perhaps these companies indeed built them, but they do not work properly, which is why they are not giving the public any more information. To me, that is a really good story. Why don’t you write about that?

  34. Arthur Santana Says:

    Ralph Merkle seems to agree with the article that crypto systems need to be upgraded soon.

    Scott, could you comment on that? More info at his website: http://www.merkle.com/

    As a layman in QC, it’s hard for me to believe in any proposition when two authorities I trust seem to disagree.

  35. Noah Stephens-Davidowitz Says:

    “But the person you seem to be ignoring, the person I cited above is Michelle Simmons, a scientist who is actually developing the quantum computing technologies. She was just awarded “Australian of the year” for her accomplishments. If you watch the video I linked and read her papers, she clearly expresses opinions that are different than yours. I would not discount her views. I have found her work to be the most interesting of all and hold her in the highest possible regard.”

    Yeah.. a lot of what she says in that talk is simply not backed up by science. It’s really unfortunate. The charitable interpretation is that she’s speaking metaphorically…

    E.g., some things she says are simply false, like that classical computers are so slow at solving traveling salesman. There are theoretical solutions that have no trouble with, say, 50 cities, and heuristic algorithms that work for tens of thousands of cities on supercomputers. This is all on Wikipedia.

    Similarly, she seems to be saying that there are much faster quantum algorithms known for TSP than classical algorithms. This is false.[1]

    Some other things that she says are either misleading or false, depending on how charitable your interpretation is, like that “a quantum computer can be in 2^N states at once.” This is sort of meaningless—a quantum state is a vector in 2^N-dimensional complex space. What does it mean for it to be in “2^N states at once”? This is about as silly as saying that “if I flip N coins and don’t look at the result, they can be in 2^N different states at the same time.”

    She uses this “2^N states at once” thing to imply that a quantum computer with N bits can do 2^N classical operations efficiently. This is false.[1] She says a “30-qubit” computer would be more powerful than a supercomputer. If she means at simulating quantum systems or factoring, then she might be right, though probably not. She makes it sound like she means that this is true for most computational problems, but this is not true. [1]

    So, I don’t think this is just a difference of opinions. Her claims are misleading.

    [1] There’s this annoying caveat that we don’t know how to PROVE that there are no way-faster quantum algorithms for various problems, but we also don’t know how to prove that there’s no super-fast classical algorithm. We have no reason to think that there’s a huge quantum speedup for such problems, and pretty good reason to think that there isn’t one. For example, we have models (like the query complexity model) in which quantum computers can offer only minor speedups.
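    To make the coin analogy above concrete, here is a toy sketch (purely illustrative, nothing to do with real hardware): an n-qubit register in uniform superposition is described by 2^n amplitudes, but measuring it hands you back just n ordinary bits, exactly as flipping n coins and then looking at them would.

```python
# An n-qubit register after a Hadamard on each qubit: 2^n equal amplitudes
# describe the state, yet a measurement returns only n bits, sampled according
# to the squared amplitudes (here, uniformly at random).
import random

def measure_uniform_superposition(n):
    dim = 2 ** n
    amplitudes = [1 / dim ** 0.5] * dim                 # 2^n numbers describe the state...
    probs = [a * a for a in amplitudes]
    outcome = random.choices(range(dim), weights=probs)[0]
    return format(outcome, f"0{n}b")                    # ...but you only ever read out n bits

print(measure_uniform_superposition(10))  # e.g. '0110100011': one 10-bit string, not 1024 answers
```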

  36. Job Says:

    Can you point me to any paper Michelle Simmons has written in which she explains how to solve TSP in polynomial time using a quantum computer? Do you have any shadow of an inkling of how big a breakthrough that would be, if it were true?

    A breakthrough and a death sentence for quantum computing, let’s face it.

  37. Vivek Wadhwa Says:

    I suggest you read up on Simmons’s work yourself. The person who first made me aware of the RSA issues was my then colleague from Singularity University, Ralph Merkle. Please watch his lecture: https://www.verisign.com/en_US/company-information/verisign-labs/speakers-series/quantum-computers/index.xhtml?inc=www.verisigninc.com

    Merkle told me that there is an urgent issue and it could easily be 20 years before we’ve replaced all of the Internet’s present security-critical infrastructure.

    I trust the input of both Merkle and Simmons.

  38. Sam Hopkins Says:

    Unfortunately, the 22-city TSP claim does appear in Michelle Simmons’s TEDx talk. She seems never to say outright that quantum computers will solve TSP faster, but since she describes exponential-time search using TSP as her main example, then goes on to say that quantum computers will obtain speedups for some problems which previously seemed to require exponential-time search, it would be rather easy to get the wrong impression.

  39. Atreat Says:

    Vivek, the charitable interpretation is that you took what Professor Simmons talked about metaphorically and quoted it literally. The non-charitable interpretation is that you’ve been badly misled.

    Instead of arguing from authority, though, perhaps you could ask Professor Simmons and Merkle to come here (or to the Washington Post) and back up what you claim on their behalf: an efficient quantum algorithm for TSP. Please cite the paper.

    Again, I don’t doubt your sincerity, but you seem to have been misled. Also, you really should correct your article…

  40. Scott Says:

    Arthur #34 and Vivek #37: I never said that cryptosystems shouldn’t be updated to quantum-resistant ones. At some point they should be. How urgent this is depends on a bunch of factors, including how long you need your data to be secure for, and even your personal beliefs about e.g. the relative likelihoods of a scalable QC being built versus someone finding a fast classical way to break lattice-based cryptography.

    But that question is completely separate from my main concern here, which is the howling misconceptions about QC in Vivek’s article—misconceptions that, after a half-day of quasi-repentance, he now seems back to doubling down on.

    I doubt Ralph Merkle ever claimed that QCs are known to be able to solve TSP in polynomial time (if he has, could we see a reference?). If Michelle Simmons claimed that, then she was simply mistaken when speaking about something outside her immediate expertise. This is not a debate or a matter of dueling experts. Check Wikipedia. The sky is blue, bears crap in the woods, and we don’t know how to solve NP-complete problems in quantum polynomial time.

    On a different topic, I agree that Bitcoin has effectively become a Ponzi scheme, regardless of what it was originally. But knowing that still doesn’t let us announce its death at any specific time with any confidence.

  41. Noah Stephens-Davidowitz Says:

    Luca,
    I think there’s a pretty widespread consensus that science journalism is terrible in most fields, with the field of medicine being perhaps the most commonly cited example. There’s some debate about who’s at fault: http://www.bmj.com/content/349/bmj.g7015. There’s even at least one website that reviews popular articles about health (and finds that many of them are really bad): https://www.healthnewsreview.org/. Here’s a website listing all the times that the Daily Mail claimed that something caused or prevented cancer: http://kill-or-cure.herokuapp.com/ .

    So, whatever the problem is, I don’t think it’s unique to math/CS.

  42. Kirren Says:

    Copying my comment over from hacker news because it seems relevant:

    Interestingly, Vivek Wadhwa said that his source for quantum computing being useful for travelling salesman, by trying all possibilities in parallel, was the researcher Dr. Michelle Simmons in a Ted talk. I decided to watch it to see how he’d misinterpreted it and surprisingly enough he hadn’t.

    From 4:30 Simmons brings out the laundry list of quantum computing misconceptions. She implies that the TSP doesn’t scale poorly for quantum computers, implies that there’s an efficient quantum algorithm for list lookups because the list can be read in parallel, and says the power of QC is partially because you can store so much information in the superposition (which is, at the very least, misleading). Then she starts comparing the power of QC and classical computers, saying that a 300-qubit QC would be more powerful than all the computers in the world put together without putting the necessary caveats on such a statement. In a section that’s about what quantum computers will be useful for she includes as examples economic and climate modelling, which is news to me. The slides even include the phrase – ‘Quantum computer- can check many different possibilities in parallel’.

    All in all, I’m not surprised that Wadhwa was confused.

  43. Vivek Wadhwa Says:

    If we agree that building quantum safe security systems is urgent, perhaps a matter of national defense, then I encourage you folks to write articles that are more technically precise yet can be understood by the general public.

    Again, it was my friend Ralph Merkle who first made me aware of the threats and I have tried my best to explain these as accurately and understandably as possible. Ralph was deeply concerned about the security issues and has long had me thinking about these. That is why I wrote this piece (and by the way, I choose what topics I write about and do this for different publications).

    Again, I’ll leave it to you folks to do better than me. Make our business leaders and policy makers aware so that we don’t have the disasters and national security risks.

    I am checking out now. Thanks for the debate and sorry if I have offended you.

    Vivek

  44. Philomath Says:

    Vivek #23

    You should look at gentzen #13’s post. John Preskill’s paper on Noisy Intermediate Scale Quantum systems is aimed towards business people and entrepreneurs and is a level headed and accurate assessment of the realistic progress of quantum computers in the next decade.

    A system that could reliably run Shor’s algorithm on a several hundred bit input is at least a decade away. Here is the main technological improvement that you would need to see before you could run Shor’s algorithm.

    **A single fault-tolerant error corrected qubit.**

    This means that the information of the single quantum bit can be preserved effectively indefinitely. The main proposals to do this require hundreds to thousands of physical qubits working together to encode one “logical” qubit. (see papers on the surface code.) These qubits would need to be fully controlled and are different from the hundreds to thousands of qubits that systems like D-Wave have.
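    As a rough back-of-the-envelope (the specific numbers below are illustrative assumptions, not figures from any particular paper): if Shor’s algorithm on a 2048-bit RSA modulus needs on the order of a few thousand logical qubits, and each logical qubit costs hundreds to thousands of physical qubits of error-correction overhead, you land at the “millions of qubits” scale Scott mentions in comment #32.

```python
# Back-of-the-envelope only; both numbers are rough, assumed orders of magnitude.
logical_qubits = 4_000        # very roughly, Shor's algorithm on a 2048-bit modulus
physical_per_logical = 1_000  # error-correction overhead in the "hundreds to thousands" range
print(f"~{logical_qubits * physical_per_logical:,} physical qubits")  # ~4,000,000
```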

    Also, the reason people keep mentioning the TSP is because it is the archetypal example of an NP-complete problem. (NP means a computer can efficiently check the answer if you have the answer. NP-Complete means that this problem is the hardest kind of problem in NP.)

    NP-Complete problems, as far as we know, cannot be solved efficiently by quantum computers. This is unlike factoring (not known to be NP-complete), where we don’t have an efficient classical algorithm but have found an efficient quantum algorithm.

    Mentioning the TSP when talking about quantum computers is a red flag that one does not understand the various levels of hardness of different computer problems. The main problems that quantum computers help with are factoring (a lot), search (somewhat, see Grover’s algorithm), and quantum simulation. It is thought that they will have just as much trouble on NP-Complete problems.

    Looking at William Cook’s publications, I don’t see anything mentioning TSP and quantum computers.

    I hope you find these observations helpful.

  45. Job Says:

    In the lecture by Merkle that Vivek linked to above, he mentions molecular manufacturing as a path to quantum computers:

    We’re going to have molecular manufacturing, that’s coming. As a consequence, I have very high confidence that… because molecular manufacturing will give us Quantum Computers… that route, at least, is one which will give us Quantum Computers. And as a consequence, we’re going to have Quantum Computers one way or another.

    Is that true? The challenges behind building Quantum Computers (decoherence/isolation, error correction at scale) seem distinct from “mechanosynthesis”, but it’s the first time I’ve heard of it.

    Merkle puts molecular manufacturing 20 years away, so that’s his strict upper-bound for the availability of RSA-breaking QCs.

  46. FooBar Says:

    Vivek Wadhwa:

    1. There is no ‘debate’ here. You have failed in your responsibilities as a journalist. You have expended no effort in trying to actually understand what the criticisms of your piece are, and have misinterpreted this back and forth, in which multiple sources inform you that you are wrong while you blithely ignore them, as ‘debate’.
    2. People are ‘offended’ here by your general sense of entitlement, not because you made a mistake. You want to simultaneously accept journalistic responsibilities, and then when you are actually held to them, to complain that you’re being held to a higher standard than the lay public.
    3. That being said, nobody here is out for blood. All that is desired is for you to either a. retract the piece, or b. expend a modicum of effort correcting your mistakes.

  47. Rohit Says:

    I’m surprised this post:
    https://www.scottaaronson.com/writings/highschool.html

    Wasn’t brought up yet. Is it still considered an accurate summary?

  48. Joshua Zelinsky Says:

    I do think we should give Vivek some credit here; if one doesn’t know much about the topic, one would probably come away thinking that what Dr. Simmons is saying is literally correct; I’d be willing to say that most of the problem here is Vivek getting very unlucky with their primary choice of sources.

  49. Shmi Says:

    > “Would you agree that the Washington Post has been a leader in investigative journalism exposing Trump’s malfeasance? Do you, like me, consider them one of the most important venues on earth for people to be able to trust right now? How does it happen that the Washington Post publishes a quantum computing piece filled with errors that would embarrass a high-school student doing a term project (and we won’t even count the reference to Stephen “Hawkings”—that’s a freebie)?”

    This seems like a suitable moment to recall the Gell-Mann Amnesia effect https://calhounpress.net/blogs/blog/78070918-wet-streets-cause-rain-michael-crichton-on-the-gell-mann-amnesia-effect

    > “You open the newspaper to an article on some subject you know well. In Murray’s case, physics. In mine, show business. You read the article and see the journalist has absolutely no understanding of either the facts or the issues. Often, the article is so wrong it actually presents the story backward––reversing cause and effect. I call these the “wet streets cause rain” stories. Paper’s full of them.

    In any case, you read with exasperation or amusement the multiple errors in a story-and then turn the page to national or international affairs, and read with renewed interest as if the rest of the newspaper was somehow more accurate about far-off Palestine than it was about the story you just read. You turn the page, and forget what you know.

    That is the Gell-Mann Amnesia effect. I’d point out it does not operate in other arenas of life. In ordinary life, if somebody consistently exaggerates or lies to you, you soon discount everything they say. In court, there is the legal doctrine of falsus in uno, falsus in omnibus, which means untruthful in one part, untruthful in all.

    But when it comes to the media, we believe against evidence that it is probably worth our time to read other parts of the paper. When, in fact, it almost certainly isn’t. The only possible explanation for our behavior is amnesia.”

    While Quantum Computing and presidential shenanigans may not be in the same reference class, I hope next time you read about ANY subject, politics included, in Washington Post, or any other non-peer-reviewed media, you will remember that the truth content is about the same as for the column on quantum computing, namely that half of it is pure fiction, and the other half is rife with errors, misunderstandings and misrepresentations.

  50. Douglas Knight Says:

    I have an idea for encouraging cryptanalysis of post-quantum cryptography: a blockchain.

  51. Douglas Knight Says:

    Joshua 48, he probably got no more unlucky than a p-hacker.

  52. Scott Says:

    Rohit #47: Sure, if that piece works for you, it’s fine! I like to imagine that I’ve gotten better at explaining this stuff since I was ~20 years old, but if we’re being honest, it’s probably just as likely that I’ve gotten worse.

  53. Scott Says:

    Joshua #48: Indeed, from what everyone is saying here, Dr. Simmons seems to bear a huge part of the blame (perhaps the majority), for giving a talk that was like manna from heaven to the misunderstandinistas. If I can’t say for sure, it’s only because I’m allergic to watching talks on YouTube, especially ones that I fear might raise my blood pressure—give me a written transcript any day!

  54. Scott Says:

    Job #45: Sure, it seems plausible that a molecular manufacturing capability, of the sort Merkle talks about, would help in building QCs—and also, conversely, that QCs would help with molecular manufacturing. I offer no opinion one way or the other about Merkle’s claim that molecular manufacturing “is coming in 20 years.”

  55. Scott Says:

    Jon K. #24: Sure, a bunch have already been linked in this comment thread. Also try the book “Quantum Computer Science: An Introduction” by N. David Mermin.

  56. Noah Stephens-Davidowitz Says:

    Vivek,
    NIST is aware of this and is currently reviewing proposals for a standardized version of post-quantum crypto: https://csrc.nist.gov/Projects/Post-Quantum-Cryptography . I don’t know if NIST counts as a policy maker, but they’re sort of the de facto group that decides how we encrypt stuff.

    The NSA is also aware of it, and plans to switch the government over to post-quantum crypto “in the near future” (or, well, they said that in August 2015): https://www.schneier.com/blog/archives/2015/08/nsa_plans_for_a.html .

    Google is also aware of it and has already tested post-quantum crypto at a large scale: https://security.googleblog.com/2016/07/experimenting-with-post-quantum.html .

    And, perhaps more importantly, many researchers are aware of this, as evidenced by the many submissions to NIST—NIST counts 82 submissions.

  57. Tim May Says:

    Scott #40, you say that Bitcoin is now effectively a Ponzi scheme.

    A few–factual–points:

    1. I’ve never bought any Bitcoin or any “altcoins.” No “use case.” Don’t need to wire money. Don’t see it as “investment.” And for sure haven’t wanted to spend time on exchanges, wallets, fraud, 2-factor authentications, etc.

    2. Yes, many “altcoins”, and especially ICOs (initial coin offerings), are mostly “pure” scams.

    (Bitcoin per se is mostly unaffected. Yes, some huge price changes, but no evidence I can see of fraud or Ponzi schemes. Sure, wild expectations about sudden riches.)

    3. I’ve been investing, with some losses and some wins, since I started with Intel in 1974. I put about 45% of every paycheck into various investments. Slow and steady.

    4. I’d had enough of corporate life by 1986, so I quit and moved to the beach near Santa Cruz.

    5. A year later, recharged, I got involved in crypto for some interesting uses. Early years of heady stuff like zero knowledge proofs. Wrote a bunch. Co-founded “Cypherpunks” with Eric Hughes and John Gilmore in 1992. A lot of stuff came out of this.

    6. Including Bitcoin. (Not sure who Satoshi was/is, but very clearly out of this milieu.)

    7. However, Scott, dismissing the whole set of ideas as “a Ponzi scheme” is misleading, maybe almost as misleading as someone saying “quantum computers try all possible solutions in parallel.” It leads to the kind of “drive-by shooting” journalism I think we all dislike.

    8. I never had an axe to grind about cryptocurrencies. No need to send money abroad. And I convert dividends and stock sales gains into other world-traded equities, so no yip-yap about how the dollar is just “fiat.” Assets are world-valued.

    9. But beware your offhand comments, as they are akin to the Gell-Mann Effect for many of us.

    10. FWIW, my interests for a while have been more involved with names or concepts like Abramsky, Coecke, Girard, linear logic, monoidal categories, Baez, Stay, etc.

    I greatly enjoy your blog, but I mostly tune out when you flagellate yourself about being a shy wimpy nerd who deserves to be whipped by Marxist feminista dominatrixes. (Perhaps an exaggeration.)

    You are one of the clearest thinkers and explainers I know of.

    –Tim

  58. Douglas Knight Says:

    Here is a transcript of Simmon’s talk, pulled out of the youtube closed captions.

  59. orange devil's advocate Says:

    “Would you agree that the Washington Post has been a leader in investigative journalism exposing Trump’s malfeasance? Do you, like me, consider them one of the most important venues on earth for people to be able to trust right now?”

    A different reading could be that this article demonstrates a lack of fact-checking and journalistic integrity that applies across the board. In fact, that seems like a simpler null hypothesis. The difference vis a vis the hyperbolic reporting on Trump is that this article makes claims that (1) pertain to your area of expertise; and (2) can be fact-checked with recourse to scientific knowledge readily accessible in the public domain.

  60. jonas Says:

    > Very briefly, there is no evidence at present that quantum computers will give you more than about a square-root speedup for traveling salesman, or for most other NP-complete problems.

    This is definitely new information. We have no evidence for a polynomial-time quantum algorithm, because that would extend to every other NP-complete problem. But the traveling salesman problem definitely has a pretty special form that might make it much faster to solve classically than the hardest NP-complete problems, and I don’t think it’s well known what the speed of the best classical algorithm for it is. As I know very little about quantum algorithms, it wouldn’t surprise me if the traveling salesman problem happened to be one of those problems where you happen to get more than a quadratic speedup, at least when comparing the runtime of a quantum algorithm to a non-parallel classical algorithm. As a result, if Dr. Simmons told me that we can’t solve a 22-node traveling salesman problem with a classical computer, then I should doubt that, because there is an easy dynamic-programming algorithm that takes only O(n^2 * 2^n) time where n is the number of nodes; but not realizing that isn’t such a grave mistake that a high school student writing a term paper should be embarrassed to make. If, further, Dr. Simmons told me that we could solve the 22-node traveling salesman problem with a quantum computer (without claiming that the quantum algorithm used scales polynomially in the number of nodes), then I don’t see why a high school student writing a term paper would doubt her.

  61. amy Says:

    It does occur to me, Scott, that not talking to journalists may be taking up more of your time than talking to journalists used to.

  62. Greg Says:

    Scott #53: yeah, Simmons’s Ted talk is loaded with all the usual hyperbole and misinformation: quantum computers can solve TSP efficiently by trying all routes in parallel, quantum computers allow Moore’s law to continue by adding one qubit at a time, etc. As much as I would like to see better science journalism in general, I don’t really think it’s fair to expect Vivek to doubt the words of a leading expert in the design of quantum computing systems.

    Sydney is where I was born and raised, and in particular where I got my math/CS education, so I don’t particularly like casting shade on the most recent Australian of the Year, especially when her award marks a wider recognition by the country of the merit of basic research into quantum computing that has cropped up in Sydney in recent times. But I don’t really think there’s a kind way of interpreting that talk.

    It’s worth noting that almost all of Sydney’s quantum computing research is on the physics side, in trying to build hardware, fabrication processes, error correction theory, etc. I’m pretty sure the only exception to this is UTS’s quantum algorithms and complexity group. There isn’t much of a basic or theoretical compsci research culture in Australia. Probably the most salient defence of that Ted talk would be that Simmons’s team seems to be primarily concerned with designing fabrication processes for tiny silicon-based quantum computing components, which is what half of her talk is about, so maybe she doesn’t have to think about the algorithm theory a whole lot and assembled that part of her talk to satisfy pressure to motivate her research despite not really being prepared to do so.

    But tbh, I think it’s very possible that many physicists in quantum computing just don’t care if the claims they make in their outreach collapse a large swathe of the rich taxonomy that computer scientists have uncovered. I’ve seen people come away from a quantum computing explanation given by a physicist thinking that quantum computers will solve *undecidable* problems, even. Seems like they figure that it serves their ends to instill in the public a vague sense of wonder about what quantum computers can do, and that distinctions between BQP and NP, decidable and tractable, etc. are superfluous details that benefit neither their work nor the public’s enthusiasm for their work. I don’t mean this as an indictment of physicists in general (and I think more academic physicists in quantum computing care about accurate portrayal since the whole D-Wave thing), but it’s a thing I’ve seen happen enough that I don’t think it’s fair to entirely blame the media for the misinformation in question.

    fwiw, whilst I don’t really get a lot of chances to speak to physics faculty these days, none of the quantum computing grad students in Sydney that I know would be caught dead making any of these kinds of claims. It’s pretty easy to rile them up with popsci nonsense claims like NP ⊆ DWAVE. 😛

  63. BQAnon Says:

    Prof Simmons is perhaps referring to the line of argument in “Dialetheic approximation of the Jones monomial” by N. Stavrianos, a notorious position paper which has somehow found favor with the quantum establishment in Australia.

  64. Mike Says:

    I am a little worried at Vivek’s insistence on pointing out “building quantum safe security systems is urgent”. Cryptographers have been at it since the seventies. I guess maybe his sources didn’t know about their work either?

  65. Scott Says:

    Tim #57: No, I did not mean to suggest that Bitcoin is deliberately fraudulent in any way (in retrospect, I should have made this explicit). I meant only that it’s become what Paul Krugman calls a “naturally occurring Ponzi scheme”, a.k.a. asset bubble—that is, something that increases exponentially in value almost entirely because of excitement about the fact that it’s increasing exponentially in value. Such things rarely end well for the majority of investors, although of course they’re great for anyone who bought in early and also knows when to cash out.

    As for the “self-flagellation,” I think it’s best to think of me as someone who was born with only one card to play: namely, “if people hate me, then maybe if I spend the effort to explain myself more clearly, they’ll stop hating me!” And if there are people for whom every effort to explain myself only makes them hate me more—well, that’s unfortunate, but it still doesn’t give me any other cards.

  66. Scott Says:

    Douglas Knight #58: Thanks for the transcript! So, just as people said here, Simmons never explicitly states that quantum computers can efficiently solve the Traveling Salesman Problem, as Wadhwa then went on to do. But she strongly suggests it (along with lots of the usual exponential parallelism hype)—as it were, passing the falsehood-ball to someone else who can then sink it through the net.

  67. Scott Says:

    amy #61:

      It does occur to me, Scott, that not talking to journalists may be taking up more of your time than talking to journalists used to.

    Yeah, that might turn out to be true! But keep in mind that you never even see most of the stuff I decline. And also: in this particular case, it’s not as if Wadhwa asked to learn from me and I refused, or I fobbed him off on colleagues, and he then wrote the column he did. Had that happened, at least 90% of my rage would instead have been guilt.

  68. Scott Says:

    Greg #62: OK, thanks for the background about Sydney, a beautiful city that I last visited in 1995 and hope to visit again. (I visited Brisbane in 2005, and then again in 2006 for the QIP conference.)

    Of course, Australia used to have Michael Nielsen—coauthor of “Mike & Ike”, still the canonical quantum computing textbook—but Michael now lives in San Francisco and (alas for us…) no longer does quantum computing theory.

    Now, it seems to me that a large weight, like a whole crate of Vegemite, falls on the capable shoulders of my good friend and UTS quantum computing theorist Michael Bremner. Michael, the country of Australia is depending on you! Speak Truth to Parallelism!

  69. Tim May Says:

    Scott #57,

    I did actually read most of the several hundred comments in the “shy wimpy, dorky, nerdy, dweeby” thread of a few years ago.

    I found it off-putting, as the kidz today might say.

    A bit like saying one is a “proud n**ger.”

    Best, I believe, not to play into stereotypes about being dweeby, a dork, or a nerd. Shy nerd or not, these are all insult names.

    Scientists and engineers should be proud, not apologizing and claiming to be shy, wimpy, nerdy dweebs and tools.

    You have a pretty wife, a couple of kids. Time to stop calling yourself a dweeb or dork. You’re not.

    It’s what I would do. And did. In my dating years, any reference to being a nerd or dork was the end of things.

    –Tim

  70. Michael Says:

    I agree with amy #61. Scott, have you considered that maybe you should amend your policy of not talking to reporters? Some of the reporters don’t have reliable alternate sources.

  71. Scott Says:

    jonas #60: Of course, when we compare quantum vs. classical algorithms for NP-complete problems, we first look at what’s the best known classical algorithm—based on backtrack search, local search, dynamic programming, or whatever else. Or at least, the best known classical algorithm with provable performance guarantee. Then we try to figure out what quantum speedup might be obtainable on top of the classical speedup over brute-force search.

    Note that, if you only care about polynomial vs. exponential, then all NP-complete problems are the same—but if you care about the order of the exponential, then all NP-complete problems are different. You might get (4/3)^n for one NP-complete problem, 2^n for another, and even exp(√n) for a third, artificially-constructed one. And the quantum speedups that are achievable also differ from one NP-complete problem to the next.

    Having said that, if you just guess “Grover speedup,” you won’t be far from the truth for most of the NP-complete problems that have been studied. I.e., after years of research on the subject, we now know a great deal about how to achieve the Grover speedup (or in some cases, a sub-quadratic but still polynomial speedup) even when you need to layer it on top of classical algorithms that are themselves clever and nontrivial. And, on the other side, it’s only for extremely artificial NP-complete problems that we can ever prove more than the Grover speedup, compared to the best known classical algorithm. (For an example of such an artificial NP-complete problem, see footnote 40 in my P vs. NP survey.)
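
    To put rough numbers on that—a back-of-the-envelope illustration of my own, with n = 100 chosen arbitrarily—compare the exponentials:

      n = 100
      print(f"2^n          = {2 ** n:.2e}")            # brute-force search
      print(f"2^(n/2)      = {2 ** (n / 2):.2e}")      # Grover layered on brute force
      print(f"(4/3)^n      = {(4 / 3) ** n:.2e}")      # a cleverer classical exponential
      print(f"(4/3)^(n/2)  = {(4 / 3) ** (n / 2):.2e}")  # Grover layered on the cleverer algorithm

    The point is just that both the classical base and the quantum square root matter; the bases above are illustrative, not tied to any particular NP-complete problem.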

    There are quantum heuristic algorithms for NP-complete problems—most famously, the adiabatic algorithm and the “QAOA”—whose performance we still don’t really understand, and probably won’t understand at least until we can test them out with real QCs. And onto that lack of understanding, one can project one’s hope that these algorithms will turn out regularly to give super-Grover speedups over the best classical heuristic algorithms. But everyone should be clear that right now, it’s only that: a hope.

  72. Scott Says:

    Tim May #69: We now live in an age where every celebrity on the face of the planet has given interviews where they say “oh my gawd, I was such a dweeb in high school! Such a dork! The biggest nerd!” But I guess this is all countersignalling—and that, if you are or were actually a nerd, then it’s best to keep quiet about it? 🙂

    It reminds me of the classic Jewish joke about the Yom Kippur service where the rabbi and the cantor both fall to the ground and cry out “please have mercy on me God, I am dust, I am nothing”—so then the janitor gets into it too and falls to the ground and cries as well, and the rabbi smirks at the cantor and says, “now look who thinks he’s nothing!”

  73. Murray Says:

    “Washington Post has been a leader in investigative journalism exposing Trump’s malfeasance”

    Trump’s malfeasance was proved by the WP the same way it proved that QC solves TSP instantaneously.

    You are suffering from Gell-Mann amnesia.

    http://webseitz.fluxent.com/wiki/GellMannAmnesia

  74. Scott Says:

    For all those who keep commenting to say that the Washington Post is as wrong about Trump’s malfeasance as it is about quantum computing, here is Aaronson’s First Law of Journalism, first articulated three months ago:

      When journalists write about an imminent world-changing miracle technology, they’re probably grossly exaggerating. But when they write about how powerful idiots are screwing you over, they’re probably not even telling you the half of it.

  75. Scott Says:

    Incidentally, another resource that I can strongly recommend, for those looking for an explanation of QC “accessible to children,” is Terry Rudolph’s recent book “Q is for Quantum”.

  76. Murray Says:

    Please, when you find time sometime, write an article with details of how on earth and in what aspect Trump “screwed you over” the past year.

  77. Alex V Says:

    Scott #71: It seems that, for the transcript in #58, N = 10^19, so even with only Grover’s square-root speedup a quantum computer could be a few billion times faster. Talking about parallel calculation for Grover’s algorithm is not a big sin (especially because Grover himself used a classical picture with N emitters to invent it). TSP gets only a slow quantum speedup, but in Simmons’ example, with that particular N, it is still useful and impressive.

  78. Atreat Says:

    Methinks all those blathering on about “Gell-Mann amnesia” and the WP and how this absolves Trump are overstating their case.

    The lesson of the folktale in question seems to be the rather obvious and bland point that one should cast a healthy skeptical eye on the news. To imply that errors (no matter how fantastical) in one article necessarily render all the other printed words in the WP false or unreliable is absurd.

    Further, perhaps those pushing this logic with motivated reasoning could apply the same to Trump specifically or the White House generally? Since he’s been caught in numerous fantastically incredible lies – and the WH in turn – I’m sure all of you are remembering your inferred lesson of Gell-Mann and assuming every new utterance is a lie, right?

  79. Scott Says:

    Murray #76:

      Please, when you find time sometime, write an article with details of how on earth and in what aspect Trump “screwed you over” the past year.

    If we’re only going to count things that directly affect me or people close to me in the near future—but we also include the Vichy Congress, and things they’ve tried to do and come close to doing in addition to what they’ve actually succeeded at—then the main ones are:

    1. Making my Iranian PhD student feel, with considerable justification, that he has no future scientific career in the US. Making it super-difficult for me to recruit any new students from Iran.

    2. Eliminating the SALT deduction, which will hit me and most of my extended family to pay for a tax cut that almost entirely benefits the ultra-rich.

    3. Trying to eliminate the grad student tuition deduction—something that, had it passed, would have completely decimated the funding model for PhD education in STEM fields.

    4. Recently trying to slash the NSF budget by 30% (!!). Even assuming this doesn’t pass Congress, it primes us to accept a 5% cut as a big win, and this at a time when China and the EU are doubling down on basic research.

  80. Atreat Says:

    Scott #79,

    Let’s see whether you are suffering from Gell-Mann amnesia… A quick Google search reveals WP articles on all of the above, so, applying the lesson of Gell-Mann amnesia and this Vivek article, that means Trump and the Republican Congress didn’t do any of it, right?

    Got that??!!

  81. Murray Says:

    OK. Thanks.

  82. Jon K. Says:

    #55 Thanks, Scott. I have taken your quantum reading advice in the past (QED by Feynman), read “Shor, I’ll do it”, and perhaps I will check out that book you just mentioned, but what I was really after was a one- or two-liner that a journalist might use that provides a loose analogy for where the quantum speed-up comes from (if your problem is amenable to it). I think your explanations are good, I’m just looking for something like (*rolls up sleeves*), “Normally to open a safe you need to try lots of combinations, but QC allows nature’s wall of simultaneous analog clocks to do it quickly for you, provided the combination for the safe happens to be a property that relates to the average density of all the hand positions on all the one-handed clocks, which are presumably being used to turn the dials on the safe clones, which can cancel each other out or combine together to make an even bigger safe… all in parallel.” There, one sentence, crystal clear. Send it to print, Vivek.

  83. Jean-Gabriel Young Says:

    Atreat #78,

    Good point, the Trump stories and the QC articles are definitely not in the same reference class. That much is obvious. But this particular story is a good reminder that Wadhwa’s column and other technical articles in popular media are not to be trusted…

  84. Sanketh Says:

    Scott #71: I don’t think QAOA will give a superpolynomial speedup. For all we know, it might be possible to classically simulate QAOA (with additive error).

    On a side note, any comments on Ryan Williams’ “Some Estimated Likelihoods For Computational Complexity”?

  85. Scott Says:

    Jon K. #82: The trouble is, if you could explain QC in as familiar a way as many journalists are looking for, it wouldn’t be so interesting; it would just be another form of classical computing.

    Still, you can point in the right direction by including a few sentences about amplitudes and interference—about the different paths leading to wrong answers interfering destructively and canceling each other out, while the paths leading to the right answers reinforce. This is what Feynman did, it’s what I do, it’s what any competent popularizer of QM ends up doing, it’s not that hard.

    Or you can simply say honestly that it’s too subtle for you to explain in a few sentences in a satisfying way. You can say: it’s NOT just a classical computer but faster, it’s NOT just a massively parallel computer, it’s NOT just a probabilistic computer, it’s not any of the things you might round it down to if you didn’t know about quantum mechanics. And to explain quantum mechanics would take us too far afield. But suffice it to say, for our purposes, that quantum computers could efficiently solve X but probably not Y, and that this week QubitCorp reported the first ion-trap quantum computer that does Z… (or whatever it is you want to write about)
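
    If you want to see that interference story in code rather than words, here is a toy simulation of Grover’s algorithm on 8 items (a sketch of my own—the variable names, the 3-qubit size, and the marked index are arbitrary):

      import numpy as np

      n = 3                                   # 3 qubits, so N = 8 candidate answers
      N = 2 ** n
      marked = 5                              # the one "right answer"

      state = np.full(N, 1 / np.sqrt(N))      # uniform superposition over all answers

      oracle = np.eye(N)
      oracle[marked, marked] = -1             # flip the sign of the marked amplitude

      diffusion = 2 * np.full((N, N), 1 / N) - np.eye(N)    # reflection about the mean

      for _ in range(int(round(np.pi / 4 * np.sqrt(N)))):   # ~ (pi/4)*sqrt(N) iterations
          state = diffusion @ (oracle @ state)

      print(np.round(np.abs(state) ** 2, 3))  # ~0.95 of the probability sits on index 5

    After roughly (π/4)·√N iterations, the wrong answers’ amplitudes have largely canceled while the marked one has been reinforced—a quadratic Grover speedup, not an exponential one.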

  86. Phil Koop Says:

    When I was a student back in the 80’s we used to joke about the Y2K bug. Then when we sold our startup in 1999, it didn’t seem so funny. There was no shareholder lockup and no management agreement; we got cash with no strings attached! No string but one, that is: we principals were liable in perpetuity for Y2K.

    My point is that Y2K really was a serious issue, but that is not proof it was not taken too seriously. Nor does the fact that a dollar was spent mitigating Y2K prove that it was spent wisely. In my experience, most of the money went to consulting firms that were trying to maximize their income rather than solve the problem in the most efficient way possible.

    In my view, over-excitable journalists bore some responsibility for this. In that sense, it is indeed apposite to raise the memory of Y2K in connection with quantum alarmism; but not, I think, in the way that was meant.

    Re Scott #72, love your joke.

  87. Bill Cook Says:

    Vivek Wadhwa first contacted me concerning the TSP on February 13, after I made the following post on Twitter.

    https://twitter.com/wjcook/status/963146847186444289

    “To clarify, computing an optimal TSP route on 22 points takes 0.005 seconds on my iMac, not the 1000 years reported in @washingtonpost An error factor of 6 trillion. Like reporting the US National Debt is $4.”

  88. Tim May Says:

    To any journalists/science writers planning an article on quantum computing, reading–or even skimming–“Quantum Computing Since Democritus” would be a good start. Scott gets into some wild ideas, while keeping it grounded in some actual math and science.

    (A comparable book, when I was a teenager in the 1960s, was George Gamow’s “One, Two, Three….Infinity.” I must’ve read it three times when I was in the 9th-10th grade.)

    I have nothing against some wild speculation, as for instance these days in ER = EPR, but readers of articles need to understand just how speculative the ideas are. (Very! But also very interesting.)

    Personally, I was put off by Brian Greene’s popular account of string theory some 20 or so years ago. Hand-waving about vibrating strings on a cosmic guitar just isn’t very persuasive to me.

    The speculations about tachyons, for instance, some 50 years ago didn’t bother me much….I knew they were not being presented as the “real deal.”

  89. Scott Says:

    Bill Cook #87: Thanks so much for that crucial clarification! In comment #23, Vivek didn’t say outright that he’d consulted with you before writing his article, but he left me with that clear impression, and I’d been wondering how such a thing was possible. 🙂

  90. Jay Says:

    Actually I’m completely impressed by Jon K!

    Most nerds here would feel terribly embarrassed in the same situation. If that were me I’d probably be depressed for three weeks and still have nightmares about it five years from now. Actually it would be so embarrassing I might even be tempted to cook up some excuse. Yeah sorry it’s utterly wrong, but it’s not my fault: I was writing it when my beloved grandma fell in front of my window and started yelling in pain. That’s why I had to rush and finish it in 15 minutes tops. I wouldn’t leave my grandma in pain for more than 15′, you heartless bastard!

    But that’s not John. Completely appropriate in tone. Completely assuming the blame. Even trying to make something constructive out of it! It’s like, Yeah sorry to all the fans. It was wrong to say that Federer will retire as captain of the Red Socks. I should know he’s not playing football. Yeah, football for a general audience. I mean, football in the sense of whatever sport the Lakers are playing at. Sorry again, fans! But actually I went as far as consulting several simplified almanacs, but the captain is changing all the time. That’s not helpful. I’m just looking for something simpler. You know: the captain of the Red Laker is… Can anyone help?

    Jokes aside, I really think Jon’s posts are very important to think about. If we want better science popularisation, we need a collection of short sentences that a journalist, with no specialised knowledge nor any actual interest in the topic, could still use and get the main messages right.

  91. Scott Says:

    Jay #90: Um, what?? Did you mean to be talking about Vivek W., rather than Jon K.?

  92. Jay Says:

    Oops, sorry about that! You know how these grandmas are, these days.

  93. Joshua Zelinsky Says:

    Sort of on topic: we can’t in general show that the polynomial hierarchy collapses even under the assumption that NP is in BQP, and we have an oracle A where P^A = BQP^A but P^A != NP^A. Can we do better if we assume that NP lives in a weak subset of BQP? In particular, we know that NP in P/poly would cause the polynomial hierarchy to collapse, and NP in ACC would similarly cause a collapse for the same reason.

    So, if the strongest version of what Vivek seemed to think were the case, and we actually had NP contained in the class of problems solvable with fixed-depth quantum circuits of polynomial size (which seems as close an analogy to being able to do things effectively instantly as I can think of), is there some way to show that that would also lead to the polynomial hierarchy collapsing? (Actually, is this question even well-posed, or are there going to be subtleties about the gate set used, or about whether we allow unbounded fan-out gates?)

  94. jonas Says:

    Re Alex V #77: like I said, no. It’s pointless to try to check all routes. There’s a faster elementary classical algorithm based on what some people call “dynamic programming” that only considers each subset of nodes, and which takes on the order of n^2 · 2^n time on n nodes. This is already much faster for 22 nodes—so fast, in fact, that the computer in your pocket can execute it within a second or two.
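
    For the curious, here is a minimal, deliberately unoptimized Python sketch of that dynamic program (the function name and the 16-city demo size are just my illustrative choices; an optimized exact solver handles 22 cities in milliseconds, while this pure-Python version takes a few seconds at 16 cities—still rather less than 1,000 years):

      import math, random, time

      def held_karp(dist):
          """Exact TSP by dynamic programming over subsets: O(n^2 * 2^n) time."""
          n = len(dist)
          size = 1 << (n - 1)                  # bitmask over cities 1..n-1
          INF = float("inf")
          # dp[mask][j]: shortest path from city 0 through the cities in mask, ending at j
          dp = [[INF] * n for _ in range(size)]
          for j in range(1, n):
              dp[1 << (j - 1)][j] = dist[0][j]
          for mask in range(size):
              for j in range(1, n):
                  if dp[mask][j] == INF:
                      continue
                  for k in range(1, n):
                      if (mask >> (k - 1)) & 1:
                          continue             # city k already visited
                      nmask = mask | (1 << (k - 1))
                      cand = dp[mask][j] + dist[j][k]
                      if cand < dp[nmask][k]:
                          dp[nmask][k] = cand
          return min(dp[size - 1][j] + dist[j][0] for j in range(1, n))

      random.seed(0)
      pts = [(random.random(), random.random()) for _ in range(16)]
      dist = [[math.dist(p, q) for q in pts] for p in pts]
      t0 = time.time()
      print("optimal tour length:", held_karp(dist))
      print("elapsed seconds:", round(time.time() - t0, 2))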

  95. Scott Says:

    Joshua #93: Nice question! (And kudos for taking intellectual fertilizer and using it to grow something edible. 🙂 )

    Let’s let JZQP, for Joshua Zelinsky Quantum Polynomial Time, be the class of all decision problems solvable by a BPP machine with access to constant-depth quantum circuits. Note that we need to be careful about identifying this class with “BPP^(QNC^0)”, because we’re assuming that our BPP machine gets to see actual samples from the output distribution of a constant-depth quantum circuit—not just the answers to decision problems solvable by such a circuit (which are all trivial).

    Alas, I don’t see how to use any existing tools to show, for example, that if NP were contained in JZQP, then PH would collapse.

    The difficulty is this: on the one hand, constant-depth quantum circuits seem really weak, in that we don’t know a single example of a decision problem outside BPP that they let you solve. But on the other hand, they’re also really strong, in that the ability to sample their output distributions exactly in classical polynomial time would collapse PH (by Terhal-DiVincenzo combined with me and Arkhipov or Bremner-Jozsa-Shepherd).

    As a result, I don’t know how to put any decent upper bound, like AM or coAM, on JZQP—even though I also don’t have any example that separates JZQP from BPP! And, not knowing that, I don’t know how to show that NP⊆JZQP would lead to a collapse of PH, or any other such consequence.

    But this problem is certainly worth circling back to from time to time. Or does anyone else have ideas?

  96. James Gallagher Says:

    It’s funny when a supposedly prestigious source publishes something that can actually be judged objectively for its rational content, rather than the usual subjective opinion pieces, which are essentially low-grade philosophical analysis of very difficult areas, pitched at a level suitable for a public readership and for the left-biased academics who can’t be bothered to think too much when reading popular journals, as long as the opinion vaguely aligns with their notions of what’s good and what’s bad.

    Maybe most of the published articles are, basically, irrational.

  97. melior Says:

    I have to cringe at the backpedaling to “Bitcoin is only ‘sort of’ a Ponzi scheme” after it’s pointed out (see, e.g., en.wikipedia.org/wiki/Ponzi_scheme) that it’s nothing of the sort.
    Yes, excitement-driven asset bubbles happen, but they have nothing in common with Mr. Ponzi or Mr. Madoff, and it can be difficult to tell when the excitement underlies an actual world-changing idea. One famous example is Amazon stock, which experienced an early enthusiasm-driven price spike that promptly faded away on profit-taking. Of course, we know now that its value went on to rise to stratospheric levels, proving that those who sneered at the time were basing their smugness merely on seeing the excited behavior alone — a fallacy — rather than the underlying potential of the idea.
    Paul Krugman has been making foolish predictions about Bitcoin since he first read about it, many of which could easily have been shown to be silly at the time by merely reading the original white paper.
    All of which only goes to underscore what I think is the original point of Scott’s post: it takes a bit more research to write a knowledgeable article than consulting someone who claims to be an expert.
    If Scott and Zach’s comic explanation of QC seems too difficult to follow, perhaps there’s another subject out there to write about?

  98. kg Says:

    melior #97, the difference is that Amazon has real assets, employees, etc. behind it.

    Bitcoin is just a fiat currency without a government behind it, and with a terrible user experience.

    Like unless you are an anarcho-capitalist, distributed cryptocurrency provides no real applications.

    You can just use a centralized system instead.

  99. Alex V Says:

    jonas #94: I am talking about a classical setup with antennas that illustrates the idea of Grover’s quantum algorithm.

  100. Alex V Says:

    jonas #94: OK, for 22 nodes it was really not a very good idea. I do not know if it could be corrected with a bigger graph. There is some comparison between classical and quantum algorithms in https://arxiv.org/abs/1612.06203

  101. Tez Says:

    Thanks for the book plug Scott – I think it shows you can rigorously understand at least up to the Deutsch-Jozsa algorithm if you are capable of basic arithmetic like -(2-3)=-2+3, and if you aren’t capable of that should you be a science journalist?

    In defense of Sydney – the quantum community there is as awesome as the city. It is only at the University of New South Wales (where Michelle is based) that you might say they have “only” an experimental effort. At Sydney University and Macquarie University as well as UTS there are excellent theorists, doing lots of cool stuff that goes far beyond “merely” building a quantum computer! (I’m not disagreeing that the weaker the theoretical physicist the closer they end up working to current experiments, we all know that – but the Sydney community is not a good example). In fact I’d wager the non-UTS Sydney theorists understand at least as much CS as the UTS ones, but hey, who am I to provoke such an argument 🙂

  102. Thom Blake Says:

    Vivek,

    “I struggle with the concepts of quantum computing, these just don’t make sense to me.”

    I know this is a hard thing for men in particular to understand, but when you don’t understand something, you have the option of not opening your mouth. You don’t have to give your opinion when you don’t understand something, and if you’re asked to write about it you can suggest they go talk to someone else more qualified instead.

  103. Martin Schwarz Says:

    Scott #95: Just for reference, Kalai and Kuperberg have introduced what you’ve called JZQP as SQCL, but in a circuit-model picture. Then they present a slight generalization, called AQCL, which is identical to BQP as a corollary of their main result.

  104. bks Says:

    Only tangential, and apologies if this is old news:
    “The Argument Against Quantum Computers”
    https://www.quantamagazine.org/gil-kalais-argument-against-quantum-computers-20180207/

  105. lewikee Says:

    Is there a way to confirm that it is indeed Vivek Wadhwa commenting in this thread?

    I cannot believe the author would have the audacity to reveal he doesn’t understand the concepts of quantum computing just after having written a piece about it in the Washington Post. Or that, when presented with a comic to help him understand, he responds with “…the comic doesn’t help. Please give me 1-2 sentences that your children could understand that explains the possibilities of quantum computing and I will use these in future articles.”

    And then after revealing all that, he stands by the statements made in his piece despite admitting he doesn’t understand the concepts behind them.

    It is suggested he misunderstood someone’s misleading TED Talk. But instead of at least admitting he should look further into the issue, he dismisses the concerns outright.

    Surely this must be someone impersonating him in an attempt to damage his reputation?

  106. Job Says:

      I cannot believe the author would have the audacity to reveal he doesn’t understand the concepts of quantum computing just after having written a piece about it in the Washington Post.

    You’re being irrational.

    It’s important for a journalist to be able to assess their level of understanding of a subject.

    And they should share that. It’s not audacity.

  107. Scott Says:

    lewikee #105: He didn’t reveal any lack of understanding in this thread that wasn’t already revealed by the column.

  108. lewikee Says:

    Job #106:

    That assessment should occur prior to choosing to write a piece based on said understanding (or a lack thereof). Not after. To do it in the wrong order would be…irrational.

    Scott #107:

    Fair enough. I should have said that the audacity didn’t lie in his lack of understanding, or even his sharing of it, but in his subsequent “I’m taking my ball and going home” attitude when confronted with feedback that pointed at fundamental problems with his piece.

    His attitude became “I don’t know quite what you’re all talking about, but you’re rude and I have these authorities I am appealing to and that’s enough for me! Goodbye!”

    It seems that those praising him for coming onto this thread at all only read his first comments.

  109. James Gallagher Says:

    bks #104

    If, as Kalai suggests, quantum computers do start failing, predictably, at around 50 or a few hundred qubits—it’s gonna be tricky deciding whether that’s due to his noise argument or an even more fundamental explanation, like discrete time evolution in quantum mechanics.

    A noise-free pure Hamiltonian evolution with discrete time-steps kills large-scale QC just as much as fundamental noise does.

  110. Raoul Ohio Says:

    I am just catching up on this issue. By coincidence, I am reading William J. Cook’s “In Pursuit of the Traveling Salesman: Mathematics at the Limits of Computation” right now.

    Now I see why people think that maybe linear programming will be decisive in P vs NP, etc. Not saying that it will, but, humm…

    This is a really fun, semi technical book. (Hey Bill, would it be too much to rig the sample hand calculations to come out in integers, rather than: 33 1/3 ?). Highly recommended.

    I became aware of this book last week when I saw some list of “five great algorithms books”. In checking it out, I saw that lots of people had submitted 5 books, including Nick Higham (most of you already have his “Handbook of Writing for the Mathematical Sciences”, and ought to have “Functions of Matrices” on your shelf). Three of the five were only $20 or so, so I ordered them. TSP was the first to come, on Monday, and I just finished chapter 5. (This is why I have trouble getting any research done.)

  111. Urban Says:

    Job, the thing is… according to his own website, he’s not a journalist, he’s an “Academic, Researcher, Writer & an Entrepreneur”, whatever that means… Understanding what you are talking about before writing about a topic is 101 for at least 50% of his jobs.

  112. Andres Gomez Emilsson Says:

    Dear Scott, here is a partial transcript of the video “Quantum computation | Michelle Simmons | TEDxSydney” so that you can find out exactly what Michelle said without having your blood pressure spike even a little bit:

    After discussing Moore’s law and how it is harder and harder to make smaller transistors, she goes on to discuss how the quantum scale is actually an opportunity rather than a problem. Starting at 4:07…

    “We are now transition to quantum mechanics. If we could control quantum physics we could actually build computers in the quantum regime that are predicted to have exponential speed-ups over classical computers. One of the questions that a lot of people ask me is ‘aren’t classical computers fast enough already?’ Can’t they do all the things that we need them to do? Obviously everyone wants things to be faster all the time, but there is some problem out there that cannot be solved efficiently using a classical computer, and one of them is the Traveling Salesman Problem. So here we have the salesman, we want him to travel to lots of different cities, and we want to work out what the shortest possible route is. It sounds like an easy problem, but it’s actually one of those intractable exponentially hard problems. So here we have on the screen the number of possible routes that you can take as a function of the number of cities. It’s something that grows very very quickly. By the time that you have 14 different cities there are about 10^11 possible routes you can take. So if I take a classical computer, it works in the Gigahertz regime, hence the 10^9 operations per second, and it can work out the shortest possible route in about a hundred seconds. So that’s not a big deal. But now what happens when I get to 22 cities. There are 10^19 possible routes, and with that same classical computer, it would take 1600 years. This is amazing, and if you look, by 28 cities, it takes longer than the lifetime of the universe to work out what the shortest possible route is. Now I heard this problem many years ago, and didn’t quite believe it. But this is a real problem, it exists out there. So how can we make a computer that somehow solves those kinds of problems? Well, we have to look at how classical computers work. A classical computer is very fast, but it searches all of the one after the other, rather like a recipe. So if I was to write down a telephone number in a piece of paper, and I’ve forgotten whose telephone number it is I would get my classical computer to start looking through all the A’s and all the B’s and then all the C’s, and eventually it would find whose number it is and tell me. If I wanted to get faster I could put two computers on to the problem, get one to search from A to L and the other from M to Z, and it certainly would get faster. Faster even with three computers. But that’s the digital world. If you could make a quantum computer, the actual calculations are done in parallel. They are all done simultaneously.” That’s at 6:30 now.

    From there she starts talking about qubits and how (7:20) the number of possible states of a register of n qubits is 2^n, which gives it an “exponential computing power”. She then says that a 30-qubit quantum computer “is predicted” to be as powerful as the most powerful supercomputer that exists. And if a 300-qubit quantum computer existed, it would be more powerful than all of the computers in the world connected together.

    Then she discusses applications such as cryptography (at 8:35): “Very recently they’ve broken the code in 2010 of RSA 768. That’s a 768-bit number. And it took them 3 years using the most powerful classical computers that existed. Now what they are encoding is a 1024-bit number, and using the same classical computers it will take 3,000 years. If you had a quantum computer, it could solve it in minutes.”

    And then she looks at other examples, including databases, climate modeling, economic system modeling, chemicals reactions, etc. And at 10:22 she starts talking about her area of expertise: How to actually build quantum computers. I trust that everything from then on is fine.
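
    For what it’s worth, the “1600 years” figure in the transcript is just the arithmetic of exhaustively enumerating every tour; a quick check of my own, assuming 10^9 route evaluations per second:

      import math

      routes = math.factorial(21)       # directed tours of 22 cities from a fixed start
      seconds = routes / 1e9            # at 10^9 route evaluations per second
      years = seconds / (365.25 * 24 * 3600)
      print(f"{routes:.1e} routes, about {years:.0f} years")   # ~5.1e19 routes, ~1600 years

    Exhaustive enumeration is, of course, not the best classical algorithm—dynamic programming needs only on the order of 22^2 · 2^22 ≈ 2·10^9 steps for the same instance.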

  113. Prad Says:

    I think there is an urgent need to remove this misconception from Vivek’s mind. He somehow believes that all the servers and computer hardware would be magically replaced with quantum machines one fine day, and that before that particular day the software engineers must be ready with quantum-compatible software to run all the encryption algorithms and security features.

  114. wolfgang Says:

    @James #109

    I did not understand Gil’s argument(s) as predicting a specific number of qubits which one cannot exceed. Rather I understood his argument as saying that the cost of error correction increases exponentially with the number of qubits, due to non-local correlations etc., therefore we will not see quantum computing with 7, 15, 50, … (the numbers seem to increase over time 😎) qubits anytime soon.

  115. Michael P. Says:

    Hi Scott,
    I am very far from QC, and I hold the following misconception about it:
    “QC is a fancy analogue computer. There are two ways to compute harmonics of a drum: either solve the corresponding PDEs numerically or build the damn drum, hit it hard, and listen. The former is the classical computer, the latter is the quantum one. QC is effective only if you can build the drum for the problem you are trying to solve.”
    How far is the above misconception from the truth?

  116. JimV Says:

    I agree with lewikee #105: VW started in a big hole, and kept heading deeper and deeper. The “audacity” and imposter parts were sarcasm. The main point was a summary of VW’s behavior here, and I thought it was accurately done.

    VW did not seem to be commenting here in order to learn how to do better, but to defend his work and express his disregard for contrary opinions. E.g., it is our fault for not giving him better material to profit from in future articles, by not coming up with material he can understand.

  117. Scott Says:

    Michael #115: You’re pretty far off-base, because

    (1) you don’t mention that this drum vibrates in exponentially many Hilbert space dimensions, which is the thing that makes it interesting and fundamentally different from a classical drum, and

    (2) you also don’t mention quantum fault-tolerance, which is the thing that should ultimately let this drum behave a lot more like a digital computer than like an analogue one with respect to errors (i.e., errors can be discretized, detected, and corrected, rather than just adding up in a continuous drift).

  118. Jim Graber Says:

    Is quantum supremacy (QS) overdue? IBM’s 50 qubit computer was announced three months ago. Why haven’t we heard about its achievements (or lack thereof)? Perhaps I’m too impatient, but I am beginning to wonder. (Google was also aiming for a demo of QS last year.)
    Jim Graber

  119. Scott Says:

    Jim #118: Calibrating superconducting qubits takes a really long time (always has). IBM rushed out an announcement for PR reasons, but that doesn’t mean their qubits are well-calibrated enough to produce useful data yet. I would start worrying that something turned out to be harder than expected, if we don’t see 50-qubit data from Google, IBM, or anyone else by the end of the calendar year.

  120. Michael P. Says:

    Scott #117: Hilbert space dimensions — exponentially many with respect to what parameter?

  121. Scott Says:

    Michael #120: With respect to the number of particles in your system.

  122. Jr Says:

    “Even assuming this doesn’t pass Congress, it primes us to accept a 5% cut as a big win, and this at a time when China and the EU are doubling down on basic research.”

    If others are spending lots of money on basic research it suggests less need for the US to spend money on it.

    And most of the SALT deduction went to high-income people as well if we are going to engage in class warfare rhetoric.

  123. Atreat Says:

    More on the power of QC to solve TSP…

    http://www.newsweek.com/photons-light-physics-808862

    “This research is the latest step towards a long-fabled quantum computer, an ultra-powerful machine that could solve problems beyond the realm of traditional computers. Your desktop PC would, for example, struggle to solve the question: “If a salesman has lots of places to visit, what is the quickest route?”

    “[A traditional computer] could solve this for a certain number of cities, but if I wanted to add more cities, it would get much harder, very quickly,” Vuletic previously stated in a press release.”

  124. Scott Says:

    Jr #122:

      If others are spending lots of money on basic research it suggests less need for the US to spend money on it.

    Sure, if you don’t care about the US ceding its role as a technological power, and you also don’t care about the prospect of scientists leaving the US en masse for a place where they can continue their work. Isn’t it bitterly ironic that it’s the side gleefully taking an ax to everything underlying the long-term security and prosperity of the US that gets called the “patriotic” side?

  125. Job Says:

      Understanding what you are talking about before writing about a topic is 101 for at least 50% of his jobs.

    Sure but to which extent?

    Does that include an understanding of the separation between BQP and NP, which is not yet established?

    The article’s focus is on the susceptibility of public-key cryptography to quantum attacks and the increasing urgency to address it.

    The rest is the typical filler, apparently sourced from Simmons’ TEDx talk, and meant to illustrate the power of a Quantum Computer, erring on the side of mind-blowing.

    It’s hard to criticize a reporter for reinforcing a misconception supported by an expert on the applied side of the field.

    But he is definitely at fault for not revising the article to address Scott’s comments and instead deferring to the authority that Simmons clearly does not have when it comes to the theoretical side of quantum computing.

  126. asdf Says:

    Is anyone saying where these complex amplitudes in quantum states actually come from? Surely physics doesn’t care that C is the algebraic closure of the reals, so that all polynomials have roots there. Rather, physics equations are full of complex numbers because the physical universe is full of harmonic oscillators, which arise from the simple ODEs that describe so many classical interactions.

    So don’t the complex quantum amplitudes make people say there must be a pony, I mean a harmonic oscillator in there somewhere? And then ask, well really, infinite-dimensional, without any bounds on how the amplitudes are distributed except that the squared norms add up to 1? Are there existing experiments to support such a belief?

  127. grue Says:

    Scott objected that the article describes quantum computing as being as “simple” as trying every combination, but the article doesn’t actually claim that this is *all* that’s involved. And in fact the article is correct on the point that one reason quantum algorithms can be faster is that superposition allows multiple options to be tried in parallel. Now it’s true that this doesn’t allow arbitrary algorithms to be sped up in this way, since problem-specific tricks are needed to extract the solution from the superposition, but the article didn’t deny this.

    It seems as though Scott may be over-correcting on this point, and possibly misleading many people into thinking that trying options in parallel isn’t part of the quantum-computation story (when in fact it’s a defensible interpretation).

  128. Scott Says:

    Job #125:

      Does that include an understanding of the separation between BQP and NP which is not yet established?

    It at least includes understanding that the containment NP⊆BQP is not established, which Wadhwa clearly didn’t (and apparently still doesn’t, even after like 20 different commenters tried to tell him…).

      It’s hard to criticize a reporter for reinforcing a misconception supported by an expert on the applied side of the field.

    Actually it’s easy. Imagine a reporter who said: “I get to write that HIV doesn’t cause AIDS, because that’s what my expert, Peter Duesberg, says, and I fully trust him and put him up against any of your experts … and even if I’m wrong, it’s not my fault, because I was just trusting Duesberg.” There’s no belief so wrong that you can’t find some expert, somewhere—or at least someone who would strike outsiders as an expert—who has their own reasons to back you up on it. Or who at least would go 80% of the way to backing you up on it, leaving you to fill the remaining 20% yourself while still maintaining plausible deniability.

  129. Scott Says:

    grue #127: After explicitly stating that QCs could efficiently solve the TSP, the article goes on to say that QC is “equivalent [emphasis mine] to opening a combination lock by trying every possible number and sequence simultaneously.”

    How much clearer in its wrongness could it have possibly been? If this were an exam answer that I were grading, it would not be one of the many ambiguous cases: it would be 0/20, or maybe 1 pity point, depending on my mood.

  130. Scott Says:

    asdf #126: Presumably, any fully satisfying answer about “where the complex numbers in QM actually come from” would need to be in terms of a theory that was even deeper than QM—but such a theory does not currently exist.

    The best we can do, at present, is to prove that if you want certain nice mathematical properties that QM satisfies, then you only get them with complex amplitudes, and not (for example) with real or quaternionic ones, however much of QM the latter seem to reproduce.

    There are many beautiful results of this kind—one of the most famous being that it’s only with complex amplitudes that the statistics of local measurement outcomes give you exactly enough information to characterize an arbitrary entangled state. With real amplitudes it’s too much information, and with quaternionic amplitudes it’s too little. In many of the fascinating recent “axiomatic derivations” of QM—for example, those due to Lucien Hardy, and to Chiribella et al.—this “local tomography” fact gets elevated to the status of a principle, and used to kill off possibilities other than the amplitudes being complex. Admittedly, though, this isn’t a principle that I would’ve found obvious, without already knowing what the right answer was! 🙂
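
    To make the counting concrete (a rough parameter-counting gloss of my own, not a rigorous statement—the numbers below are the standard real-parameter counts for unnormalized mixed states):

      # d-dimensional mixed state: d^2 real parameters over C, d(d+1)/2 over R, d(2d-1) over H
      def params(d, field):
          return {"C": d * d, "R": d * (d + 1) // 2, "H": d * (2 * d - 1)}[field]

      dA, dB = 2, 2
      for field in ("C", "R", "H"):
          local = params(dA, field) * params(dB, field)   # what local statistics supply
          joint = params(dA * dB, field)                  # what the entangled state requires
          print(field, "local product:", local, " joint state:", joint)
      # C: 16 vs 16 (exact match); R: 9 vs 10 (local stats fall short); H: 36 vs 28 (local stats are redundant)

    Only in the complex case does the joint count factor exactly into the local counts—which is the local tomography property.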

    It’s a very interesting question whether we can use the harmonic oscillator to argue for why amplitudes should be complex. From a far enough remove, the harmonic oscillator is “just some particular solution” to the equations of QM, to be worked out after those equations have already been laid down. On the other hand, it’s true that, if you want the harmonic oscillator to behave the exact way you know and love (something that some physicists might even consider conceptually prior to QM…), then the amplitudes better be complex! Does anyone else have thoughts?

  131. Paul Fabel Says:

    Is the current wiki on TSP wrong?

    https://en.wikipedia.org/wiki/Travelling_salesman_problem

    (…so this solution becomes impractical even for only 20 cities.)

  132. grue Says:

    scott #129 “…the article goes on to say that QC is *equivalent* [emphasis mine] to opening a combination lock by trying every possible number and sequence simultaneously. How much clearer in its wrongness could it have possibly been?”

    Well, it’s an analogy and obviously not literally equivalent, but it accurately captures the idea that quantum superpositions allow multiple options to be tried in parallel. It seems like your real objection is not so much that he said something false in this case, as that you wish he had given more detail about some of the additional complexities/constraints in the quantum case, e.g. that it’s not always possible to extract the answer from the superposition. I don’t think the analogy needs to capture all these complexities to be useful, but actually even the combination analogy is consistent with the issue that if many people try a lock in parallel (in different locations, say), it’s not necessarily always efficient/feasible for them to share the final answer with each other.

    Btw, I’ve often had the sense that you make comments on your blog that give the impression that trying options in parallel isn’t even relevant to why quantum computers are faster, and I just happened to pick this case to reply to.

  133. grue Says:

    In response to someone else (Job #125) you say:
    “…I fully trust him and put him up against any of your experts and even if I’m wrong, it’s not my fault, because I was just trusting Duesberg’. There’s no belief so wrong that you can’t find some expert, somewhere—or at least someone who would strike outsiders as an expert—who has their own reasons to back you up on it.”

    This is really strange. You seem to be implying that the author deliberately sought out an expert who would “back” him up on some issue where he knows *his* expert is at odds with the consensus view (like with HIV). But what actually seems to have happened is just that he got his information from an experimentalist who gave him a somewhat un-nuanced account and possibly he lost something in translation. It’s not necessarily clear-cut for journalists to determine which expert to trust (though there may be a bias towards experts who explain things in simple terms e.g. “it’s like a combination lock”, which other experts will then interpret as dumbing down).

  134. Jr Says:

    Paul #131, It says “this solution” which is important. There are better algorithms than the direct, try-all-possibilities solution.

  135. Scott Says:

    grue: To riff off your own handle, imagine that someone explained the concept of “grue” by saying “it’s equivalent to green.” Unless they also add that grue things will turn blue at a certain date, they’ve missed the entire essence of the concept. That’s not a legitimate simplification, a mere removal of some technical detail.

    In the same way, if someone says “QCs try every combination in parallel,” then unless they add that quantum mechanics puts severe restrictions on how you can access the results, and you need to use interference to boost the probabilities of the combinations you care about, I would say: they’ve missed the entire essence of the concept. They’ve rounded QC down to a magical form of parallel classical computing, and have missed what’s specifically quantum about it. And this error actually matters, even in the crudest practical terms, because it leads the sufferer to say wildly wrong things about what problems QCs are known to be able to solve (as, indeed, Wadhwa did).

    It’s true that the situation only became Duesberg-like after Wadhwa was confronted, in this comment section, with a unanimous chorus of experts telling him “no, that’s not at all how it works”—and Wadhwa eventually doubled down and said “I’m going to believe my expert, she won Australian of the Year and that’s good enough for me.”

    But another difference is this: I don’t think for a second that Michelle Simmons herself actually believes that QCs are known to be able to solve TSPs efficiently. There’s no “other side” to this debate, even to the tiny extent that Peter Duesberg could be called a “side” of the HIV/AIDS “debate.” Our side is the only side.

    It’s just that—I hate to say this so bluntly—there are some people in our field, even very good people within their domain, who take an almost Trumpian attitude toward the question of what QCs will be able to do once we have them. In their worldview, questions about experiments, or progress toward building QCs, belong to the ordinary category of “truth” and “falsehood,” but questions about what you could do with QCs belong to a totally different epistemic realm, where you can just say whatever you want, or whatever your audience or your investor or your funding agency most wants to hear. (I wish I had a different explanation for what we observe, and am open to suggestions.)

  136. grue Says:

    “imagine that someone explained the concept of ‘grue’ by saying ‘it’s equivalent to green.’…”

    The problem with your analogy is that it’s strictly false to say that the concept of grue is “equivalent” to green. Whereas it’s strictly true that quantum computers allow multiple options to be run in parallel as the article claimed (i.e. running them in parallel doesn’t necessarily mean you can feasibly extract the solution in all cases).

    “Wadhwa eventually doubled down and said ‘I’m going to believe my expert…'”

    He actually just said he wasn’t ready to “discount” her. And he tried to engage with people in the comments but was mostly met with exaggerated critiques and claims that he shouldn’t have written an article about QM at all if he admits struggling with it! Which is odd since every journalist struggles with QM, and he did in fact rely on experts, as would fairly be expected.

    “I don’t think for a second that Michelle Simmons herself actually believes that QCs are known to be able to solve TSPs efficiently.”

    Well, you don’t deny that QCs will likely be able to solve them *more* efficiently, and in fairness he didn’t actually claim an exponential speedup in this case (he didn’t talk about relative asymptotic complexity). As far as what Simmons believes, she does seem to describe QCs as being able to solve TSPs in “real time,” as quoted here. I agree she’s being misleading, but that’s not really Vivek’s fault.
    http://www.zdnet.com/article/australias-ambitious-plan-to-win-the-quantum-race/

  137. James Gallagher Says:

    Wolfgang #114

    Yes, I overlooked that at the end of the article he is reported to have said “This prediction can already be tested; you don’t even need 50 qubits for that, I believe that 10 to 20 qubits will suffice”. Well, that’s a pretty brave prediction! In contrast, I don’t think there will be problems at such low qubit numbers if we have (for example) ~Planck-time random jumps seeding the universe’s evolution—so maybe Gil Kalai has accidentally suggested a good way to distinguish between the noise and discrete-time-evolution explanations, should quantum computers fail to work in the future? Mind you, I haven’t done the calculations for failure rates with discrete time and QC; it would maybe make a nice paper to take some specific QC algorithms and analyse the impact of a discrete time evolution on them.

    Worst outcome, perhaps, would be that scalable QC works just fine, there is no new science theory, and the QC algorithms available don’t really make the world all that more exciting until we get to millions of qubits, which would enable exact simulation of non-trivial microscopic processes, which will be very useful, but not available for many decades.

  138. grue Says:

    By the way, as far as other places where I feel you’ve been misleading on this parallelism issue, one example is your QM explainer cartoon.

    For instance, the child says that QM allows multiple options to be tried in parallel, and the mom dismisses this as a mistake (i.e. the same issue we’ve been discussing above), when in fact what the child says is true.

    Also, later in the cartoon the mom says that quantum superposition “doesn’t mean 0 and 1 at the same time, at least not the way you think”! Well, in fact it does mean that, at least according to some QM interpretations that are widely held among physicists (e.g. many-worlds).

    My concern is that you have a particular (somewhat idiosyncratic) interpretation of QM, and you are using your position as an “expert” to give readers the impression that your view is a consensus view in the field (which it isn’t) and that people who disagree are simply making some kind of beginner’s mistake.

  139. Scott Says:

    grue: But they are making a beginner’s mistake. The smoking gun, the proof that I am right about this, is that virtually everyone outside the field who talks about a QC “trying all possibilities in parallel,” then goes on to say something about Shor’s algorithm working by just trying all the possible factors in parallel, or (as in Wadhwa’s case) about QCs being able to solve instances of TSP in a fraction of a second that would take classical computers thousands of years. At that point, trying to defend it is almost like saying “well, it’s not a crime to fantasize about killing someone” when you’re standing over the dead body. 🙂

    When those within the field commit this error, the most charitable explanation I can come up with is that they pessimistically don’t believe there’s any better way to explain QC to a lay audience. I think they’re wrong about this—there is a better way—but the better way takes some time to learn, as well as an inner compulsion not to mislead.

    Crucially, this is all independent of what one believes about the Many-Worlds Interpretation. For a serious many-worlder is going to talk about different “branches” only after the branches have decohered—i.e., precisely the opposite of the situation that obtains in a useful quantum computation. Before decoherence, a many-worlder will agree with everyone else that all the information in a QC is there in its wavefunction, and you can extract a given piece of information if and only if you can construct a measurement to do so, typically by exploiting interference (as with the quantum Fourier transform). I.e., just as the mother said in the cartoon, quantum superposition doesn’t mean “or,” but it also doesn’t mean “and”—at least, not in the way the son thinks. It means complex linear combination.

  140. Jr Says:

    grue #138, If you say you tried all possibilities in solving a problem, people would think that you, how shall I put it, tried all possibilities. They would not think that somehow all possibilities were implicit in a mathematical description of what you were doing but actually inaccessible to you.

  141. grue Says:

    scott: “But they are making a beginner’s mistake. The smoking gun, the proof that I am right about this, is that virtually everyone outside the field who talks about a QC “trying all possibilities in parallel,” then goes on to say something about Shor’s algorithm working by just trying all the possible factors, or (as in Wadhwa’s case) about QCs being able to solve instances of TSP in a fraction of a second that would take classical computers thousands of years.”

    Well, if they go on to make that subsequent mistake go ahead and correct it, but claiming that they (or the child in the cartoon) are wrong about QCs trying out possibilities in parallel is also misleading (probably more so than the original claim you are “correcting”). As for Wadhwa, even if he mistakenly believes that TSPs are one of the cases where QC provides enormous benefits, he apparently just believes that because the expert he talked to made similar claims, which I linked in my previous comment (so it’s not clear what this has to do with a “beginner’s mistake” about the parallelization issue).

    “Crucially, this is all independent of what one believes about the Many-Worlds Interpretation. For a serious many-worlder is going to talk about different “branches” only after the branches have decohered”

    I disagree, though to be clear I’m just giving many-worlds as one example of an interpretation that takes a realist view towards superposition, i.e. many-worlds supporters typically believe that the qubit is both on and off at the same time (both before and after decoherence), whereas some interpretations, such as Copenhagen, are skeptical as to the underlying reality of the superposition and prefer to focus on the resulting probabilistic formalism (as the mom does in the cartoon, after denying that the qubit is really in both states).

    I understand that you are trying to give yourself a way out (in the cartoon) by saying it’s just not both on and off “in the way the son thinks”, but on reasonable interpretations of quantum mechanics the qubit really is in both of these states in exactly the way the child described. The fact that detailed information about the superposition is not always measurable has little to do with whether it really is in a superposition of two states at once. Similarly, the fact that an electron can interfere with itself in a double-slit experiment doesn’t mean it isn’t in multiple locations at once.

    As far as only being allowed to talk about branches after decoherence, decoherence is a statistical phenomenon, so there’s no hard line in the sand beyond which it’s fair to describe them as separate worlds (and also particles don’t have to be in separate “worlds” to be in two positions/states at once).

    To summarize, it still seems to me quite misleading to deny that a superposition involves being in two states at once, as well as to deny that superpositions of qubits perform computations in parallel. Why not just accept these natural commitments (or at least that they are defensible), but then provide the *additional* information that the resulting states are not always accessible, so that clever tricks are needed to get speedups in practice from QCs? I worry that the explanation is that there’s better schadenfreude in arguing that people are dumb/wrong than in just pointing out the mundane fact that there is some additional info that might give them a fuller picture.

    Btw, from your response I’m not sure if you actually saw both of my sequential comments above.

  142. Scott Says:

    grue: But Wadhwa did make the further, smoking-gun type error (of course, encouraged by Simmons, who I’ve been happy to blame as well). And the kid in my and Zach’s comic also made the further error.

    For me, if a way of talking about an issue leads with wearying predictability to a certain verifiable error—and having been explaining QC to popular audiences for ~20 years, I’ve seen pretty much every path to every misconception that it’s possible to take—then that way of talking is itself an error to be warned against. But I agree, the way of talking itself doesn’t carry the same degree of culpability as the further “smoking-gun” errors that it almost inevitably leads to.

  143. grue Says:

    “For me, if a way of talking about an issue leads with wearying predictability to a certain verifiable error—and having been explaining QC to popular audiences for ~20 years, I’ve seen pretty much every path to every misconception that it’s possible to take—then that way of talking is itself an error to be warned against.”

    Well, warning against a particular wording as a pedagogical point is one thing, but this blog entry and the cartoon both go further, basically calling true statements false/wrong (i.e. that qubits are in two states at once and compute in parallel); I would certainly object to this, even if your goal is to reduce the likelihood of some other more serious mistake (e.g. that all NP-complete problems can be solved efficiently with QCs).

  144. Scott Says:

    grue #143: No, the true statement is that a qubit can be in a superposition of a 0 state and a 1 state. Saying that it can be in “both states at once” is a popular rounding down of that true statement, which would be okay if it didn’t regularly lead to egregious errors, but it does regularly lead to egregious errors, as we saw in this case (which makes it a terrible test case for your broader point! 🙂 ).

  145. grue Says:

    Whether it is in “superposition” is a formal claim that all interpretations can agree on. That it really is in some sense in both states at once is a matter of philosophical interpretation about what it means to be in a superposition, and some interpretations would agree and some wouldn’t; but it’s not straightforwardly false, nor is it simply a rounding down. For instance, I suspect most many-worlds theorists would disagree with you on this point.

    Also, saying the qubits aren’t computing in parallel is even more straightforwardly incorrect.

  146. Scott Says:

    Nope, same thing about computing in parallel. The true statement is that a QC can do many computations in superposition. Saying it can do the computations in “parallel” is a popular rounding down that, again, would be fine if it didn’t lead to egregious errors, but it does lead to egregious errors.

    My personal experience has been that many-worlders discuss quantum computations the same way anyone else in our field does. But if a case arose where they were encouraging popular misunderstandings (e.g. QC as massively parallel classical computer), I’d criticize that the same way I’d criticize it from anyone else.

  147. grue Says:

    Now you’re just arguing semantics. For the most part there’s little need to engage in philosophical interpretation when doing scientific work in QM/QC, so it’s to be expected that supporters of different interpretations talk similarly to each other in that context (i.e. focusing on the formalism, superpositions, etc.). That doesn’t mean they would agree with you on these philosophical interpretive questions (in my experience many in the field are happy to defend these types of claims). But it’s hard to know why you see these as rounding down, since you didn’t respond to my substantive points.

  148. Scott Says:

    Ok, we’re done arguing here. I don’t care that much about differences in semantics or philosophy as long as they don’t lead people into egregious errors. But we’re talking here about situations where they have led people into egregious errors—errors that basically take everything that’s been figured out about the capabilities and limitations of quantum computers after a quarter-century of effort and struggle, throw it into a garbage can, and set it on fire. For me, that’s the main substantive point here. And unless you acknowledge the reality of this, I don’t see that there’s anything further to discuss.

  149. James Gallagher Says:

    Ironically, the (common) mistaken idea that Quantum Computers are doing “Parallel Computing” comes from a naive idea that Quantum Computers are executing in discrete time steps just like a classical computer, when in fact, if QCs were subject to discrete-time evolution of the wavefunction, they would be severely impacted in performance (compared to a continuously evolving wavefunction).

    I think that would be the simple thing to emphasise – classical computers operate in discrete time steps, Quantum Computers use an algorithm based on continuous time evolution of the wave-function.

  150. Scott Says:

    James #149: Nope, that’s wrong. The performance hit that you get from taking a continuous-time Hamiltonian, even one with a sum of many 2-qubit terms, and “Trotterizing” it to get a discrete sequence of 2-qubit unitary transformations, is quite small: like, logarithmic or so, depending on the desired error. And most quantum algorithms are just thought about directly in the discrete-time setting. This has nothing to do with the QC’s performance advantage, which is all about being able to choreograph interference patterns in a Hilbert space of exponentially large dimension.
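
    (For anyone who wants to see this numerically, here is a minimal sketch with a toy two-term, two-qubit Hamiltonian made up purely for illustration; it checks how fast the first-order Trotter product converges to the exact evolution as the number of steps grows.)

        import numpy as np
        from scipy.linalg import expm

        X = np.array([[0, 1], [1, 0]], dtype=complex)
        Z = np.array([[1, 0], [0, -1]], dtype=complex)
        I2 = np.eye(2, dtype=complex)

        A = np.kron(Z, Z)                      # one 2-qubit term
        B = np.kron(X, I2) + np.kron(I2, X)    # a second, non-commuting term
        t = 1.0

        exact = expm(-1j * (A + B) * t)
        for r in (1, 4, 16, 64, 256):
            step = expm(-1j * A * t / r) @ expm(-1j * B * t / r)
            trotter = np.linalg.matrix_power(step, r)
            err = np.linalg.norm(trotter - exact, 2)   # first-order error shrinks like t^2 / r
            print(f"r = {r:4d}   error = {err:.2e}")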

  151. Joshua Zelinsky Says:

    Scott #150,

    Huh, that’s a very interesting point, and it probably should be pointed out more often to people who think that quantum computers are somehow “just analog computers.”

    I guess that point never occurred to me because I only think of these things in discrete time steps because it minimizes how much actual physics I need to think about, where continuous stuff feels more physicsy.

  152. Gil Kalai Says:

    Hi guys, regarding the 10-20 qubits in my interview what I say is actually very simple. For the experiments based on random circuits, if we measure the correlation between the desired distribution and the noisy distribution for 10-20 qubits, we can *extrapolate* and estimate the point of failure. (I also expect that the correlation will behave like the correlation computed for BosonSampling in my paper with Guy Kindler.) You can read about it in my ICM2018 paper https://arxiv.org/abs/1801.02602 .

    (It is true that I’d guess that the point of failure is closer to 20 than to 50 but this is just a guess.)

  153. grue Says:

    @scott #148 “I don’t care that much about differences in semantics or philosophy as long as they don’t lead people into egregious errors.”

    That’s fine, but the problem is that you *are* in fact making strong philosophical claims, e.g. when you dismiss someone as wrong/mistaken for merely suggesting that qubits can be in more than one state at once. This is not false, and in fact it’s a widely accepted philosophical interpretation of what it *means* for particles to be in a superposition. And when people hear these claims from a QM expert they are misled into thinking they are hearing scientific facts, as opposed to the philosophical opinions of someone who doesn’t even “care that much” about differences in philosophy.

    “But we’re talking here about situations where they have led people into egregious errors—errors that basically take everything that’s been figured out about the capabilities and limitations of quantum computers after a quarter-century of effort and struggle, throw it into a garbage can, and set it on fire. For me, that’s the main substantive point here. And unless you acknowledge the reality of this, I don’t see that there’s anything further to discuss.”

    Honestly I think this is a bit exaggerated, but it certainly is a common misconception that QC algorithms can speed up any problem just by running all the options in parallel, and it’s great that you’re helping to clear up this misconception. I just don’t think that there’s any need to make false claims in the process, even if those claims are “merely” philosophical. It’s perfectly possible to accept that qubits can be in more than one quantum state at a time (and can compute in parallel), and go on to explain the difficulties in extracting the final computational result from a quantum system.

    So I’m not denying your substantive problem. I just think you are going too far in the other direction and calling reasonable claims false, apparently motivated in part by the fact that you see these claims as instrumentally harmful, which is surely a dangerous basis for assessing truth.

    Anyway, I appreciate your taking the time to respond to so many of my comments/questions and hope there’s no hard feelings.

  154. Job Says:

    It at least includes understanding that the containment NP⊆BQP is not established, which Wadhwa clearly didn’t (and apparently still doesn’t, even after like 20 different commenters tried to tell him…).

    That distinction is not that accessible. I know I’m still trying to figure out some of the details.

    Like, apparently people want to use QCs to solve NP-Hard optimization problems in the context of machine learning.

    Why should that be possible, and what does it mean? Are they strictly going for the Grover speedup, or something more?

    I honestly don’t feel like I can blame a reporter for getting this stuff wrong when I’m still trying to figure it out and there are people like Simmons reinforcing the misconception.

    Also, I think that reaching out to Vivek privately in a more diplomatic way would have been more productive.

  155. Ron Monson Says:

    grue #147: The main criticism of the article is that it portrays, argues, promotes and ultimately embeds the idea that $NP \subseteq BQP$. In a sense all else is peripheral since, if true, a fully fault-tolerant universal quantum computer would have earth-shattering consequences (modulo some unlikely constants). These consequences would likely go beyond any previous technology ever introduced and certainly well beyond the immediate cryptographic implications (erroneously) drawn by the article from pending experiments. But of course the consensus is that $NP \not\subseteq BQP$.

    So in an article explicitly designed to educate, excite or warn the public about what quantum computers can do, the egregiousness lies not in any philosophical interpretation of parallelism but in showcasing an example (TSP) that treats $NP \subseteq BQP$ as fact and hence also treats as fact the earth-shattering transformation awaiting the construction of the world’s first universal quantum computer (when or if it happens there should be enough to get excited about!).

    I agree that this could have been misinterpreted from Michelle Simmons’s TED talk and its juxtaposition of parallelism and the TSP, and that this accounts for at least part of Vivek’s article. See also the following from Michelle’s Australia Day acceptance speech:

      Such a computer is called a ‘quantum computer’ and is predicted to bring with it an exponential speed up in computational power. This is because instead of performing calculations one after the other like a conventional computer, a quantum computer works in parallel, looking at all the possible outcomes at the same time. The result is massively parallel computing, allowing us to solve problems in minutes that otherwise would take thousands of years.
      ….

      I want Australians above all to be known as people who do the hard things

    I wonder about the consensus on the above?

    I guess there is a fine line between aiming to inspire investors/readers/grant-assessors and tempering expectations with the likely boundaries set by hard-won theoretical knowledge. I’m not sure where this line should be drawn, but it seems reasonable that it should be applied consistently, with and to authority.

  156. jonas Says:

    Scott re #150: that’s what I’d guess too, but do we know that? So if the solution to a Navier-Stokes equation can blow up with finite energy, which we can’t exclude yet, then that requires continuous space too, not only continuous time?

  157. Scott Says:

    Job #154: I don’t agree with you that it’s a subtle or abstruse point.

    No one knows how to use quantum computers to get exponential speedups for NP-hard problems.

    No one knows how to use quantum computers to get exponential speedups for NP-hard problems.

    No one knows how to use quantum computers to get exponential speedups for NP-hard problems.

    (see also: the tagline of this blog 🙂 )

    The trouble with this statement is not that it’s subtle or abstruse, or that there’s any informed doubt about its truth—it’s simply that people have strong emotional reasons not to want to accept it. For one thing, if true, it would mean that 20 years of popular discussion of quantum computing has been shot through with irresponsible hype and error. (Yep.) For the QC skeptics, it would mean that QC is not a magic uber-computer—that it has a profile of abilities so strange that no sci-fi writer would have had the imagination to invent it—so maybe we have no prior intuition to tell us that such a thing is impossible. (Yep.) Finally, it would mean that understanding QC would actually require learning something about complexity—e.g., about the difference between NP-hard problems, and problems like factoring that are conjectured to be exponentially hard but without being NP-hard—rather than mentally collapsing all these things into a single category, and treating the distinctions between them as just irrelevant babbling. (Yep.)

    Regarding the people who hope to get quantum speedups for NP-hard optimization problems arising in machine learning, I’d say there’s a trichotomy. Either

    (1) they’re only hoping for a Grover speedup, which we almost certainly can get (see the rough numbers sketched below), or

    (2) they’re hoping to get more than the Grover speedup over the best known classical algorithms, at least for special instances of NP-hard problems, by using heuristic algorithms like the adiabatic algorithm or QAOA—something that we can’t rule out, and that’s reasonable to think about (and even try to test with the small QCs of the near future) but which is mostly just a hope at present, or

    (3) they’re engaging in irresponsible hype or have fooled themselves.

    I’ve seen many examples of all three. Of course combinations are also possible, e.g. when someone is at (3) but falls back on (2) or (1) when challenged.
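
    (To put rough numbers on option (1): for an unstructured search over N = 2^n candidate solutions, classical brute force needs roughly N evaluations while Grover needs roughly (pi/4)*sqrt(N) iterations. The problem sizes in this little sketch are arbitrary, and of course a quadratic speedup still leaves you with exponential time on NP-hard problems.)

        import math

        for n in (20, 40, 60, 80):
            N = 2 ** n
            grover = math.ceil((math.pi / 4) * math.sqrt(N))
            print(f"n = {n:2d}:  brute force ~ {N:.2e} evaluations   Grover ~ {grover:.2e} iterations")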

    Regarding contacting Wadhwa in private, I could’ve said the same! I.e., he could’ve contacted me in private rather than publishing totally irresponsible statements in the Washington Post. Even though I’m on a 1-year journalist sabbatical, I would’ve gladly put him in touch with a half-dozen colleagues.

  158. Scott Says:

    jonas #156: Yeah, PDEs that require both continuous time and continuous space to simulate are a different animal. There are even situations where you can build a “Zeno computer” using such a PDE, able for example to solve the halting problem in finite time—indeed, Terry Tao has an ambitious program aimed at proving that Navier-Stokes might be an example of such a PDE.

    (If so, then we’d of course say that such PDEs are being continued way beyond where they have anything to say about real physics: it won’t take long until the classical fluctuations being talked about get smaller than the radius of an atom, or smaller than the Planck scale! But still fun to think about mathematically.)

    Even for computations that are continuous in time but discrete in space (e.g., a sum of nearest-neighbor Hamiltonians acting on an array of qubits), I didn’t mention it before, but the total energy of the Hamiltonian enters into the time needed to simulate the computation using discrete gates. Indeed, this has to be true: if it weren’t, then we could speed up any computation arbitrarily by just replacing the computer’s Hamiltonian H by 1000*H, or 10^100*H, or whatever. In real life, though, we are limited in how much energy we can pump into our computer, so the Church-Turing Thesis is upheld.

  159. sandro Says:

    More references to TSP at http://news.mit.edu/2017/scientists-demonstrate-one-largest-quantum-simulators-yet-51-atoms-1129, though in the context of adiabatic QC. I’m not an expert, but also this sounded a bit misleading in some places, possibly confirming a trend suggested by an earlier comment here.

  160. Paul Fabel Says:

    >(JR) It says “this solution” which is important. There are better algorithms than the direct, try-all-possibilities solution.

    Thanks JR. So when n = 22 cities, (2^n) x (n^2) is not too big, but by n = 60 cities, exact solutions to TSP might take too long on modern computers?

    If so, maybe it is not such a hanging offense to conflate 22 with 60, since both n! and 2^n get unreasonably big unreasonably quickly.
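
    (A quick sanity check of those numbers, assuming the roughly (n^2) x (2^n) operation count of the Held-Karp dynamic program and a made-up rate of 10^9 simple operations per second on a laptop:)

        ops_per_sec = 1e9

        for n in (22, 60):
            ops = (n ** 2) * (2 ** n)        # rough Held-Karp operation count
            secs = ops / ops_per_sec
            years = secs / (3600 * 24 * 365)
            print(f"n = {n}:  ~{ops:.2e} operations  ~{secs:.2e} seconds  ~{years:.2e} years")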

    Regarding inequalities involving n!, 2^n, 60, 22, and the WaPo, perhaps the one whose (potential) truth is of greatest importance is 2018 > 2017.

  161. Job Says:

    “they’re hoping to get more than the Grover speedup over the best known classical algorithms, at least for special instances of NP-hard problems”

    That’s almost certainly the case. You’ll always get a substantial speedup if you target only a subset of instances, in most cases because the problem won’t be NP-Hard anymore.

    That’s why it’s so irrelevant when people talk about a “partial solution” to TSP.

    The point I’d like to understand is why the constrained subset of instances solvable by a QC should align with machine learning more than, say, the subset of instances with property X, where X enables an efficient classical solution.

    Other than it being twice the size.

  162. gentzen Says:

    Scott #130:

    On the other hand, it’s true that, if you want the harmonic oscillator to behave the exact way you know and love (something that some physicists might even consider conceptually prior to QM…), then the amplitudes better be complex! Does anyone else have thoughts?

    Here is one thought: If you take the “simple” Schrödinger equation (instead of the relativistically “more accurate” Dirac equation or QFT), then the speed of rotation in the complex plane is directly proportional to the energy. But the energy for Schrödinger’s equation is not absolute, since a potential with an arbitrary constant offset still leads to the same physics. And luckily, only the difference in speed of rotation between different modes is relevant for the resulting physics.

    But this “relativism” only works if we have the imaginary unit in the Schrödinger equation. If that imaginary unit is absent, as in the Klein-Gordon equation, then we get solutions with positive and negative energy, energy becomes more absolute (i.e. less relative), and we have to fight to explain away the solutions with negative energy. On the other hand, only a derivative of the potential enters into the Klein-Gordon equation, so things don’t break down immediately.
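
    (To spell out the offset-invariance in one line, as I understand it: if $\psi$ solves $i\hbar\,\partial_t \psi = (H_0 + V)\psi$, then $\tilde\psi = e^{-iV_0 t/\hbar}\,\psi$ solves the same equation with $V$ replaced by $V + V_0$, and since $|\tilde\psi|^2 = |\psi|^2$ no prediction changes; the constant offset shows up only as an unobservable global phase, while energy differences survive as relative phases between modes.)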

    The background of that observation is that I failed/skipped QM at university, and only came back to it after extensive experience in statistical optics. Funnily, one main reference work was from Max Born and Emil Wolf, and it introduced me to an instrumentalistic position which made dealing with QM much easier for me. I noticed that constructing optical toy models which exhibit some features of QM (even entanglement) is easy, but that my toy models only featured absolute energy instead of relative energy. (Which is not surprising, because the imaginary unit is absent from the equations of optics.)

  163. Scott Says:

    Job #161:

      The point i’d like to understand is why should the constrained subset of instances solvable by a QC align with machine learning more than say the subset of instances with property X, where X enables an efficient classical solution.

    A priori, you could simply hope that there’s property X that enables an efficient classical solution, and property Y that enables an efficient quantum solution, and Y is strictly broader than X … since there’s no reason for it not to be! But then of course you can wonder: how much broader? And: broader in a direction that anyone cares about in practice?

    A posteriori, we know that quantum speedups are sometimes obtainable by using quantum tunneling to get over tall, narrow energy barriers. But is that an effect that really matters for real-world optimization problems? I don’t think we know the answer to that yet, and we might not know until we have real QCs to test things out on.

  164. Raoul Ohio Says:

    asdf says: “Is anyone saying where these complex amplitudes in quantum states actually come from? Surely physics doesn’t care that C is the algebraic closure of the reals so all polynomials have roots there.”

    This is like saying “How can the four color theorem be true? Surely geometry does not care about solutions to x^2 = 2^x”.

    A better understanding is that C is the right set of numbers for many things (including QM), and C was first discovered by considering roots to polynomials.

  165. Raoul Ohio Says:

    Scott #135.

    Excellent point:

    “In their worldview, questions about experiments, or progress toward building QCs, belong to the ordinary category of “truth” and “falsehood,” but questions about what you could do with QCs belong to a totally different epistemic realm, where you can just say whatever you want, or whatever your audience or your investor or your funding agency most wants to hear. ”

    Here is a related phenomena. Although few people in TCS believe there is much likelihood that P = NP, they are often asked by journalists and others “OK, it would be a great theoretical result, but what practical benefits would come from P = NP?”. Rather than giving the obvious answer of “I have no clue”, many instead speculate on all sorts of things that would follow if NP were in FC. Here FC is loosely defined as the “set of problems that can be feasibly computed, say O(n^k) for a low value of k, such as 5 or 6”.

    But P is not contained in FC, so NP is not contained in FC irrespective of whether P = NP.

    A common counterargument, that P is contained in FC, goes like this: whenever a problem is shown to be in O(n^k) for a big k, pretty soon the result is improved to something like k = 6 or less.

    This is interesting, but so what? It could easily be the case that the only problems anyone has figured out any O(n^k) bounds for are actually easy ones, where the best k are 6 or so.
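
    (To put numbers on why P is not contained in FC: even a “polynomial” running time is hopeless once the exponent is large. A back-of-the-envelope sketch, assuming a made-up rate of 10^9 operations per second:)

        SECONDS_PER_YEAR = 3600 * 24 * 365
        ops_per_sec = 1e9

        for n in (2, 10, 100):
            for k in (3, 100):
                years = (n ** k) / ops_per_sec / SECONDS_PER_YEAR
                print(f"n = {n:3d}, k = {k:3d}:  ~{years:.2e} years")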

  166. James Gallagher Says:

    Scott #150

    Yes, but it does affect the size of the Hilbert Space if the whole universe is evolving discretely.

    Your QC algorithms will not work as efficiently as you think if this is the case. If the evolution were purely deterministic, then QC wouldn’t work at all, but if it’s randomly seeded discrete time steps, then QC will work OK up to a certain number of qubits and will begin to fail statistically as the finite size of the Hilbert Space begins to impact the algorithm. This will vary for different algorithms.

  167. Scott Says:

    James #166: Sorry, I didn’t understand that in the slightest. For all we know, time could be discrete at the Planck scale, and no quantum computation that anyone is planning to do in any foreseeable future—anything below the Planck energy, basically—would be sensitive to that discreteness at all, any more than a classical computation would be. We only wish it were that easy to do a terrestrial experiment that would be sensitive to quantum gravity!

  168. James Gallagher Says:

    Scott #167

    Why wouldn’t a quantum computation be sensitive if the Hilbert Space is reduced? You just seem to think I’m saying “Many-Worlds” with discrete-time branching – I’m not. I’m saying a random jump (microscopically, anywhere in the universe) causes the entire universe to then rotate (evolve unitarily in Hilbert Space), say, on average, every ~Planck time.

    Mathematically speaking, we have a structure with a large Hilbert Space and superpositions, but in reality only one real evolution, and QC would struggle once qubit numbers get high.

  169. kg Says:

    What do you think of my general way of explaining QC? I think it’s mostly correct and also inoculates against the “trying everything in parallel” school.

    “A regular computer like yours can be viewed as one that can do two things (1) calculate and (2) flip a coin.

    But let’s say that every flip of the coin splits the universe, and thanks to compounding interest you end up with, e.g., a trillion universes after 40 coin flips. This gives you a lot of computing power since you have a trillion universes to calculate in. However, if you have to randomly pick a universe to actually go into when you’re done (for example, by flipping a coin 40 times and going down that path), this clearly gets you nowhere, because you could have just picked beforehand and replaced every coin flip with a check on which universe you’d end up in.

    Quantum mechanics lets you mix the results of the universes somewhat before picking which one to go into. This gives you some extra power mathematically on all problems that involve searching for an answer (like guessing passwords), but not much (we can get around this by doubling the lengths of our passwords). However, on some problems that involve cyclic patterns in the universes, it can be hugely useful.

    These kinds of problems are generally not useful day-to-day but have been integrated into most cryptography, and thus being able to solve them breaks a lot of stuff we take for granted (for example, the “s” in https).”
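
    (For anyone who wants to see the “mixing before picking” step in a few lines, here is a minimal numerical sketch; the matrices are just illustrative and not tied to any particular quantum algorithm. With classical coin flips, randomizing twice still leaves you at 50/50; with amplitudes, the second “mixing” can make the branches cancel and return you to a definite answer.)

        import numpy as np

        H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)   # Hadamard: the quantum analogue of a coin flip
        F = np.array([[0.5, 0.5], [0.5, 0.5]])         # classical fair coin flip (stochastic matrix)

        psi0 = np.array([1.0, 0.0])   # amplitude vector, starting in state 0
        p0 = np.array([1.0, 0.0])     # probability vector, starting in state 0

        print("quantum, two Hadamards:", np.round(np.abs(H @ (H @ psi0)) ** 2, 6))   # -> [1. 0.]
        print("classical, two flips:  ", F @ (F @ p0))                               # -> [0.5 0.5]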

  170. John Sidles Says:

    Raoul Ohio observes (circa #165)  “Few people in TCS believe there is much likelihood that P = NP …”

    Scott Aaronson poses his final Edge Annual Question  “Can we program a computer to find a 10,000-bit string that encodes more actionable wisdom than any human has ever expressed?”

    With a view toward evolving better-informed opinions in regard to Scott’s Edge Question, it is natural to seek human-composed 10,000-bit strings that outstandingly encode “actionable wisdom”.

    Specifically, and with a view toward Raoul Ohio’s comment, please allow me to commend Emanuele Viola’s essay, of 16 Feb 2018 on his weblog Thoughts, titled “I believe P = NP” (note that Emanuele Viola is a well-known/well-respected complexity theorist and computer scientist).

    Viola’s essay/manifesto attests:

    After worrying about P vs NP for half my life, and having carefully reviewed the available “evidence” I have decided I believe that P = NP. …

    The evidence is clear: we have grossly underestimated the reach of efficient computation, in a variety of contexts. All signs indicate that we will continue to see bigger and bigger surprises in upper bounds, and P = NP.

    Do I really believe the formal inclusion P = NP ?

    Maybe, let me not pick parameters. What I believe is that the idea that lower bounds are obviously true and we just can’t prove them is not only baseless but even clashes with historical evidence …

    Hmmm … if it is becoming increasingly credible — to theorists like Emanuele Viola anyway! — that P = NP, then how much more credible is the Efficient Church-Turing Thesis (ECT) becoming?

    The ECT can be concisely stated as:

    Anything that can be computed can be computed efficiently by a Turing Machine.

    Here the interpolated word efficiently distinguishes the “ECT” from the (unextended) Church-Turing Thesis as given by Chris Bernhardt in his recent book Turing’s Vision: The Birth of Computer Science (2016, MIT Press, page 69; a work praised by Scott, in a blurb on the back cover, as “a delightful introduction for the lay reader to the ideas surrounding Alan Turing’s great paper of 1936”).

    For Shtetl Optimized readers, the ECT phrase “anything that can be computed” is best understood in its broadest physical sense, as encompassing any and all measurement data that arise from any and all quantum electro-dynamical (QED) experiments. In a similar vein, “efficiently” is understood to mean that the computational cost of simulating the data resulting from any QED experiment (measured in units of Turing Machine steps) is a polynomial function of the entropy-dissipation of that experiment (measured in units of Shannon entropy-bits).

    PS: Claude Shannon’s article “Prediction and Entropy of Printed English” (Bell System Technical Journal, 1951) first established that the information-content of printed English is, to a reasonable approximation, one bit-per-letter (counting spaces as letters, while ignoring punctuation).

    So by Shannon’s measure, the “actionable ideas” in Emanuele Viola’s “I believe P = NP” essay are encoded in ~6,748 bits … well within Scott’s Edge-question limit of 10,000 bits! 🙂

  171. Scott Says:

    kg #169: Not bad.

  172. Scott Says:

    James #168: I still don’t understand the actual proposal you have in mind. What is the distribution over these random jumps? What are they, like random 1-qubit unitaries? So can I model this as just a noise process? Whatever it is, though, it’s completely different from what I thought had been asked about, which is simply making time discrete rather than continuous (while otherwise leaving QM unchanged). Let’s be careful not to confuse the things.

  173. James Gallagher Says:

    Scott #172

    Well look, I think if we’re all honest we can agree that the Copenhagen Interpretation is unsatisfactory, and the idea that humans measuring things has a big impact on the evolution of the Universe is a bit silly (even if we allow that humans have free will, the impact is only local).

    So now we have to wonder if we actually understand how Quantum Mechanics works.

    And then you make this blog post dissing a silly analysis of Quantum Computing.

    But you, yourself, don’t actually know if Quantum Computing will work, unless you are a hardcore Copenhagen/MWI advocate – are you sure it will actually work?

    And then I come along (about 5 years ago) when you invited someone to argue that “Quantum Computing is Bunk”, with a perfectly reasonable suggestion of how the Universe might be evolving that would comply with most known Quantum Mechanics experiments but wouldn’t enable large-scale Quantum Computing.

    Anyway, I didn’t mean to hijack this thread with my speculations, but if it helps anyone to think originally and actually make a real breakthrough in the status quo with respect to Quantum Mechanics, it would be good.

  174. Job Says:

    Isn’t a billiards table the most intuitive analogy?

    They’re already used as a model for reversible computation, and a recurring theme in QM documentaries with Brian Greene.

    You can shoot as many balls as you want simultaneously, but they have to fall into place.

    At the end you get a random ball from the gutter.

  175. Scott Says:

    James #173: Which interpretation of QM you espouse (e.g., MWI, Copenhagen, or Bohm) has no effect—none, zero—on what you should predict about the scalability of quantum computation, because by explicit design, all interpretations make exactly the same predictions for any experiment you can do on any system external to yourself.

    The one thing in foundations of QM that does matter for QC is simply whether you believe QM is literally true or whether you think it needs to be modified. As I never tire of pointing out (because others never tire of forgetting it), if QM did need to be modified, that would be a far greater scientific breakthrough than a mere success in building scalable QCs, and we can only hope that the quest to build QCs would terminate with such an exciting outcome. In any case, though, we should be clear that this discussion is completely orthogonal to the main discussion on this thread, which was between two groups who both provisionally accept that QM is true, and one of whom cares about it being explained without the Exponential Parallelism Fallacy.

  176. Scott Says:

    Job #174: With all due respect to Brian Greene, how does the billiard ball thing get across interference, the central phenomenon of QM? Is the idea that one billiard ball can “interfere” with a second one by hitting it? If so, then that seems actively misleading, because in QM a particle can interfere with other “branches” of itself. More generally, any view in terms of particles or billiard balls bouncing around in ordinary space seems wrong in the most fundamental way, because it misses that the real action is happening in configuration space, which has vastly larger dimension than ordinary space. It’s not surprising to me that people would love such a picture, but isn’t that only because it replaces something unfamiliar and true by something familiar and false?
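
    (To put a number on “vastly larger”: the joint state of n qubits lives in a 2^n-dimensional space, which outgrows anything picturable in ordinary 3D space almost immediately. A quick sketch, using the usual rough estimate of ~10^80 atoms in the observable universe for comparison:)

        for n in (10, 50, 300):
            print(f"{n} qubits -> 2^{n} = {2 ** n:.3e} amplitudes")
        print("atoms in the observable universe ~ 1e80")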

  177. James Gallagher Says:

    Scott,

    I’m sorry I can’t provide the perfect experiment to decide that the universe obeys Copenhagen QM rather than a new model, with the far-fetched idea that a discrete random jump + unitary evolution is the exact description of what is actually happening.

    It’s not a fight, you know – I just think you traditionalists are boring and lacking imagination.

  178. Scott Says:

    James #177: Do your ideas make any predictions different from the standard ones for what will happen with 50-qubit or 100-qubit QCs? If so, then rather than having to figure out who’s more interesting or more imaginative, we should have the opportunity very soon to let Nature tell us who’s right.

  179. Job Says:

    With all due respect to Brian Greene, how does the billiard ball thing do anything whatsoever to get across interference, the central phenomenon of QM?

    It’s a big table, in n-dimensional space. There are 2^n balls and 2^(n-1) holes.

    Half of the balls tell you something about the solution, half don’t. You don’t know which are which.

    You hit all the balls simultaneously along different trajectories such that the useful balls fall into the holes and the useless ones knock each other off path, in pairs.

    You got a better analogy?

  180. Joshua Zelinsky Says:

    James #177,

    “I just think you traditionalists are boring and lacking imagination”

    To some extent, this sounds a bit like what happens when SETI comes up: people will frequently say “But you are looking at stars that are likely to have Earth-like planets. What if life evolves on gas giants? Or what if life ends up being silicon-based? You traditionalists are lacking in imagination.”

    It both ignores that the people who do SETI have thought *a lot* about alternatives and that they’ve generally decided those are low-probability or too hard to test for given current knowledge. It seems like this is a very similar issue; I’m confident that the quantum computing people like Scott and the physicists who study the foundational particle physics issues have thought about the idea that there is some subtle issue involving discrete vs. continuous behavior, and whether unitary evolution is only an approximation to something subtly non-linear. There needs to be something more than just saying “Hey, maybe it is this vague thing, but I don’t have a way to test it” for them to have a reason to pay attention.

  181. James Gallagher Says:

    Scott #178

    OMG, you want a quantitative decision? Jeez, the opponents of supersymmetry are rejoicing after a paltry sub-20 TeV investigation of its existence by the LHC, when anyone sensible knows we need at least 100 TeV colliders to really probe that sector.

    No, Scott, my “ideas” don’t make any predictions regarding 50 vs. 100 qubits, because they are too dumb; at best they vaguely hint at what the universe might be doing.

  182. The problem with gatekeepers Says:

    The last comment I read from Vivek Wadhwa was #37. I don’t know if he continued to double down on his nonsense.

    Scott: you do have a lot of patience with these people.

    Vivek Wadhwa’s article illustrates very well the problem with many public intellectuals that shape public opinion on highly technical issues. They simply have no deep understanding of what they are talking about. They hear a couple of ideas here and there, form their own opinions and they put them in front of a large audience as if it were a matter of fact. It’s the problem Richard Feynman warned about 30+ years ago, but on steroids (more info here https://www.youtube.com/watch?v=tWr39Q9vBgo ).

    In my life, I have learned to be very skeptical of people who speak using the type of generic language Vivek Wadhwa used in his article when making far-reaching claims. I am happy that you took him to task.

  183. asdf Says:

    I just found there’s apparently a subject area called “Emergent Quantum Mechanics” which is about where QM actually comes from. It seems to contain considerable amounts of woo woo despite being done by Real Physicists(tm) and having lots of equations. But in some cases it does propose modifications to QM.

    Nelson’s Stochastic Mechanics is known to be physically incorrect (it makes experimental predictions that turn out to be wrong) but it’s still considered highly intriguing because it leads to a lot of QM from simple principles. I wish I understood it :). There’s a big literature about it by many authors. Here’s an overview by Nelson himself, describing some of its successes and failures:

    http://iopscience.iop.org/article/10.1088/1742-6596/361/1/012011/pdf

    As another approach, there’s a somewhat weird book by Gerard ‘t Hooft that you probably know about (warning, 3MB download):

    https://link.springer.com/content/pdf/10.1007%2F978-3-319-41285-6.pdf

    It explicitly (p. 87) says it’s incompatible with large-scale QC and that if such QC happens then the book’s proposed theory is falsified. So at least it says something concrete :).

    There’s a series of conferences with some interesting content: http://emqm17.org/

    I’m surprised there seems to be no Wikipedia article (anyone?).

    I keep hearing that QM has been confirmed by experiment to amazing accuracy, like to 1e-10. But beyond that, who’s to say what happens at the 1e-50 level? The idea of further adjustments being required (just like the quantum and relativistic adjustments that classical physics ended up needing) doesn’t seem outlandish.

  184. Gil Kalai (really asdf!!!) Says:

    Whoa, this is asdf and I just brought up your comment form and it was pre-filled with Gil Kalai’s name and email. Scott, something is wrong with your WordPress installation!

  185. Christopher Greene Says:

    @Scott,

    Maybe you should submit a correction to the Washington Post?

    https://helpcenter.washingtonpost.com/hc/en-us/articles/115003675928-Submit-a-correction

  186. Scott Says:

    Gil Kalai (really asdf!!!) #184: This is not a new bug. It’s something I’ve reported to WordPress over and over—and they kindly assigned a team to fix it free of charge—but then every time they say it’s fixed, it later reemerges like some horrible swamp monster.

    I will tell them again.

    In the meantime, don’t leave your real email address if you care about it being leaked.

  187. Scott Says:

    asdf #183: When there’s anything in “emergent QM” that can account for even Bell inequality violation—i.e., an already known fact—in an even slightly convincing way, maybe I’ll be more inclined to pay attention to what the emergentists say about quantum computation.

    I include Gerard ‘t Hooft’s ideas in that statement.

  188. Scott Says:

    James #181: At some point, it seems to me that the people working to realize quantum computation are licensed to say to the skeptics, “specific predictions or GTFO” 🙂

    It’s to Gil Kalai’s considerable credit, for example, that he makes such predictions—ones that might actually be falsified within the next year.

  189. Gerald Says:

    #182: Michelle Simmons claimed in no uncertain terms that the advent of QC will be like “massively paralleled computing” coming in and that that will allow the TSP to be “solved in real-time” (ZDNet link above). Vivek Wadhwa is right when he observes that “she clearly expresses opinions that are different than yours [the commentors on this blog].”

    Google “Michelle Simmons”! There’s not the slightest hint anywhere that she could be a Peter Duesberg-like crackpot. Why should a layman like Wadhwa then doubt her and believe a dozen commenters on a single blog instead? Of course he doubles down. It’s the reasonable and safe thing to do in such a case. He cannot know that Simmons is wrong unless he understands the technical details.

    Journalism is not at all to blame here; the embarrassment is fully on the side of QC science. There’s really something wrong if a renowned QC experimentalist is so profoundly confused about what QCs can do. An outsider must get the impression that the QC community does not know what the hell they are talking about. It also reeks of tribalism when all the harsh criticism is directed against a non-expert journalist while at the same time obvious inside disagreements (between theorists and experimentalists) are played down.

  190. Scott Says:

    Gerald #189: I’ve been very clear in giving Simmons a huge part of the blame for this, as soon as I understood her role (which I didn’t until about comment #66): specifically the fact that, while Wadhwa delivered the final coup de falsehood, it was Simmons who had led him there; he hadn’t just willfully misunderstood her.

    FWIW, I’ve found the following rule of thumb to be helpful, for fields where I’m not myself an expert: if two groups of experts disagree with each other, but one of them is extremely polished, paints in broad strokes, delivers TED talks, publishes in major newspapers, and declines to get into the weeds with annoying nerds on the Internet; while the other group is those annoying nerds on the Internet, and is socially oblivious and uses phrases like “well, actually…” and you wish they’d just shut up and give it a rest—it’s the second group that’s usually turned out to be right. 😀

  191. The problem with gatekeepers Says:

    Gerald #189:

    I don’t buy the excusing you are doing for Vivek Wadhwa. His Wikipedia page says “He graduated from the University of Canberra in 1974 with a Bachelor of Arts in Computing Studies”, and then in the career section it is made clear he is no stranger to the world of software development and computer science.

    Vivek Wadhwa is the one who chose to be a public voice on these matters, so he cannot claim “I was misled” as a valid excuse. When one chooses a career like this, he/she owns both the successes and the failures. Carl Sagan, for example, made the wrong prediction in the so-called “Kuwaiti oil fires” affair, and he acknowledged it later.

    It used to be that public intellectuals were happy to own both successes and failures. Now they only own the successes and are happy to blame the failures on somebody else. Then people ask how it is that public trust in science is at an all-time low. For the layman who doesn’t have scientific training, the public debates on science resemble a debate contest in which the loudest voice wins the argument more than a probing exchange of ideas in search of the truth.

  192. josh Says:

    @Scott #190 ‘I’ve been very clear in giving Simmons a huge part of the blame for this, as soon as I understood her role (which I didn’t until about comment #66)’

    Vivek already explained her role in comment #2.
    In your comment #10 you just ignore that she was mentioned.
    Vivek remarks on this and mentions her explicitly again in comment #23.
    Comment #66 is from you.

    I guess you wanted to refer to the comment where the transcript was posted. But if you had cared, you could easily have found that yourself.

  193. Sniffnoy Says:

    Gerald #189:

    To me, it looks like Wadhwa’s fundamental mistake is thinking that this can only be decided on the basis of authority, going by “one authority’s word vs. another’s,” rather than attempting to actually explore the details of what each person is saying. Where do they match up? Where do they contradict? Where do they appear to contradict but actually match up due to a difference in terminology? Where they contradict each other, does either piece contain responses, counterarguments to the other, or are they totally ignorant of what the other is writing? How does what each says mesh with other aspects of the outside world? Is either one of them saying things that just don’t make sense? And of course, as Scott points out, how does all this compare against Wikipedia? 🙂

    If you’re a reporter, you should be prepared to do this sort of reconciling of contradictory information! Hell, you even occasionally have to do this sort of thing in the much simpler world of mathematics…

  194. Scott Says:

    Sniffnoy #193: Thanks, and well said!

    josh #192: I believe in “innocent until proven guilty” in intellectual matters no less than in legal ones! Wadhwa proved himself to be intellectually negligent with nearly every sentence of his column. But with Simmons, I hadn’t seen her actual words and was too busy that afternoon to seek them out for myself, so I gave her some benefit of the doubt until the transcript removed all doubt.

  195. Mateus Araújo Says:

    This whole drama reminds me of a time when Philip Ball put up an article on Nature News about quantum teleportation that contained a serious mistake. Annoyed by the misrepresentation of my field, I bitched about it in my small and unknown blog. What happened? Philip Ball found out about my post, contacted me, understood the problem, and corrected his news piece within a few days.

    All journalists make mistakes. Only the serious ones correct them.

  196. Scott Says:

    Mateus #195: Glad to hear about your experience! I also got a positive impression from my interactions with Philip Ball.

  197. JimV Says:

    Some commenters seem to be missing the fact that VW didn’t just publish some misinformation – he has, so far, according to his comments here, refused to acknowledge the misinformation and publish corrections. (Of course some commenters also miss the fact that it was misinformation.)

    Based on my anecdotal experience, things said in TED talks that are supposed to awe the audience have about a 50-50 chance of being true – far less than Wikipedia. Also, from what I can tell from his comments here, TED talks seem to be a major source of VW’s material. I suppose that saves some of the effort of tracking down and interviewing experts and reading their papers.

  198. asdf Says:

    JimV #197: let’s ask Michael Bay.

    https://www.xkcd.com/748/

  199. Raoul Ohio Says:

    I have had the misfortune of listening to a couple of random TED talks. Very disappointing.

  200. bc_fan Says:

    I had previously encountered VW while watching a recording of the Intelligence Squared debate on open borders. He was supposed to be arguing in favor of open borders with Bryan Caplan but ended up mostly arguing against open borders. Bryan’s account of their agreement – http://econlog.econlib.org/archives/2013/11/metaphorical_vo.html – is, I think, useful in judging VW’s character.

  201. Mateus Araújo Says:

    What bothers me most is his Trumpian defence “I was given this information”. So what? It is still wrong! Take responsibility for what you write!

  202. Vincent Says:

    One takeaway of the above discussion (erring on the side of focussing on the good in everyone) is that before writing a popular article on something QC-related, it is always wise to check any claim you intend to make with the commenters here. I happen to be in this situation, so allow me to grab the opportunity.

    Case in point: I want to claim that one of the things physicists dream of using quantum computers for once these have become powerful and fault-tolerant enough (at some point in the future) is helping design materials that allow room temperature superconductivity.

    I know people dream of using quantum computers for helping understand the quantum properties of materials. I also know that a subset of the people working on understanding the quantum properties of materials using the methods currently available (classical simulation, trial and error, maybe something else I don’t know about) are working on (eventually) achieving room temperature superconductivity. So it seems tempting to add one and one and claim that people hope that quantum computers will be useful for this task. (Also I really want this specific example because of some other non-QC stuff happening in the article.)

    But… If (like with solving TSP) we can already say that this would be a stupid thing to hope for because from a TCS point of view there is no reason to think that QC will be helpful for this type of problem, then I will of course have to toss this example out and think of a different topic to write about.

    Any insights are welcome.

  203. Scott Says:

    Vincent #202: That is a totally reasonable thing to dream of using a QC to do (of course, with plenty of stress on the word “dream”).

    Maybe room-temperature superconductors simply don’t exist. Or maybe they exist but not using materials that are readily available on earth. Or finding them requires such a terrifying exponential search that not even a quantum simulation capability will get us there. Or maybe they exist, and can be found, and the breakthrough that’s needed will be totally orthogonal to having a QC.

    But there seems to be no doubt whatsoever that a large-scale QC would improve our understanding of high-temperature superconductivity, by letting us do many of the relevant simulations for the first time. So yes, along with the scenarios above, one can also dream of scenarios where QCs play an essential role in a revolution in that area.

    On a different topic, on Dana’s suggestion, I was thinking of doing a whole post on “So you want to write a popular article about QC…,” with tips, common mistakes to avoid, etc. Of course there’s an inherent irony, in that the mere fact that someone would feel enough self-doubt to visit this blog to double-check their statements, means they’re probably most of the way there already! But hopefully such a post could help people on the margins.

  204. Joshua Zelinsky Says:

    Vincent #202,

    Slightly off topic, but it is worth noting that although people really like the idea of room-temperature superconductors, there would be a lot of practical benefits simply in getting superconductors that require conventional refrigeration equipment rather than liquid nitrogen. If, for example, one could have a superconductor at around -25 C (248 Kelvin or so), one wouldn’t need any equipment more complicated than what you have in a household freezer. There’s also been work in the last few years with substances whose critical temperature goes up at high pressures. But in order for this to be significant one needs insanely high pressures: for example, the highest-temperature known superconductor is hydrogen sulfide, which is a superconductor at -70 C with a required pressure of around 150 gigapascals. But if you had something which was a superconductor at -40 C and a few gigapascals, that would also provide a lot of substantial practical applications. (I picked -40 C because that’s close to the current limit of commercial freezers that aren’t doing anything incredibly fancy and are using essentially the same technology.)

    We’re getting to the point now where we can already use superconductors for conventional electricity transmission, although they are still rare. For example, the Holbrook Project https://en.wikipedia.org/wiki/Holbrook_Superconductor_Project uses a superconducting set of lines for part of the electric grid on Long Island, and Tres Amigas is going to use superconducting lines to connect the three major US grids. South Korea has also implemented superconducting lines. Even a small improvement in temperature would make this cheaper and more practical.

    It is highly plausible that, even if practical room-temperature superconductors don’t exist, ones that function in much higher temperature ranges do.

  205. fred Says:

    Quantum Physics is never going to be something you can popularize.
    In the end, does it really matter? Proof is in the pudding, and whatever the misconceptions are, it’s really not going to affect the engineering.

    I’m sure there were a lot of misconceptions about heavier-than-air flight when the Wright Brothers became famous. Even today, what portion of the general public understands how a plane flies?

    What matters is whether the hype around a technology is justified or not. E.g. should we prioritize AI or QC?
    And it’s the job of the experts to educate politicians.

  206. fred Says:

    A funny illustration of this is the fact that almost everyone (including people who know physics) has the wrong idea about how tides work:

    https://www.youtube.com/watch?v=pwChk4S99i4

    Of course in this case it doesn’t matter because tides aren’t being hyped as the next big thing in computing.
    But the point is that the value of having the right understanding of something is very relative. You have to pick your battles.

  207. Ajit R. Jadhav Says:

    Scott # 203:

    “Classical MD—especially when calculating and tracking the kinematics of all atoms (all-atom MD) as opposed to aggregates of these (united-atom or coarse-grained MD)—require significant computational resources, meaning that the method is usually reserved for systems of relatively small sizes (less than a cubic micrometer) studied for a short time (less than a microsecond), even with today’s increased capabilities.” — http://imechanica.org/node/22143; https://www.sciencedirect.com/science/article/pii/S0301679X18300756

    [They are going to run out of the lowercase alphabets for authors of a paper, I suspect (each of whom jacks up his h-Index). [As usual, I guess, it’s the maths and physics which pave the way for engineering, as usual and in that order, all with good intentions.]]

    Me? … I passingly think of homeopathy—what it possibly could do to relieve humanity of its dis-eases. … Scratch that out. … I think of the persistence in the molecular structure of certain phases known or unknown as present in the ordinary water/alcohol and at ordinary temperatures, for a period of, say, 60 years, may be? [Oh! The British some Association for some Science, as They Understand It. … And the Common Law. [The tense is the present, not the past.]]

    The only competition to Americans, I suspect, comes from the Brits. As usual. … Or does it?

    Anyway, I am really, really, interested in none of those questions.

    –Ajit

  208. Scott Says:

    fred #205: As an academic, my life’s mission is to try to increase human understanding—whether that means experts’ (including my own) understanding of open research problems, or laypeople’s understanding of the basics, or any other level that anyone might be at. Sometimes I’ll succeed and sometimes I’ll fail, but I expect to be doing this from now until the grave (or debilitating brain injury or whatever). If, while trying to understand and explain the truth, I can also contribute to the development of useful technologies, that’s awesome—but it’s someone else’s life mission, not mine.

  209. Douglas Knight Says:

    Fred, does anyone actually make this mistake?

    I agree that the video you link to is clearer than the typical explanation, but I also think it’s obnoxious for the host (who?) to claim that everyone else gets it “wrong.” When I search YouTube for tides, the first and second videos give the correct explanation. (Your video is #3.) Maybe these videos confuse people, and probably people who are confused won’t get un-confused by watching them (but that’s a really difficult problem), but people who watch these videos looking for the answer to “why don’t lakes have tides?” can find the answer.

    I’m not even sure what error your host thinks people make. He seems to blame the metaphor of taffy stretching. I’m not sure, but I think he thinks that people think of taffy stretching as a volume-changing process. (Although most errors involve people not thinking coherent things at all.) But one of the videos I linked uses the phrase “stretched thin”, addressing this and drawing attention to low tide. Also, the complaining video has a clip of taffy, as if to demonstrate that it is a bad metaphor, but which seems to me to demonstrate that it is a good metaphor. (Of course, if people don’t visualize it correctly, it’s a bad metaphor, but including the clip seems to imply the wrong error.)

  210. Douglas Knight Says:

    The host of Fred’s tide video is Gabe Perez-Giz, the old host of the series PBS Spacetime, since replaced by Matt O’Dowd.

  211. Andrew Krause Says:

    Scott #208: “As an academic, my life’s mission is to try to increase human understanding—whether that means experts’ (including my own) understanding of open research problems, or laypeople’s understanding of the basics, or any other level that anyone might be at. Sometimes I’ll succeed and sometimes I’ll fail, but I expect to be doing this from now until the grave (or debilitating brain injury or whatever). If, while trying to understand and explain the truth, I can also contribute to the development of useful technologies, that’s awesome—but it’s someone else’s life mission, not mine.”

    This is exactly why I am an academic. 🙂

  212. Ed Says:

    Scott #190, you say “[you]’ve been very clear in giving Simmons a huge part of the blame for this,” but I don’t think this is very clear. I’m not looking to browbeat you, but Wadhwa’s name is in the title of the post, while Simmons’ name doesn’t even appear in the post’s body; it’s just buried several pages deep in the comments. Sure, the title and body were published before Simmons’ role was understood, but there are plenty of cases where you’ve updated posts in response to developing circumstances.

    I think Wadhwa’s article deserves a correction (or several), because it contains materially false statements. I also think your post should be updated, because it only tells part of the story – it criticizes the reporter for making errors without acknowledging that a very prominent QC expert is making the exact same error. I know facts and fairness aren’t the same (and ideally the WaPo would be held to a higher standard than your blog) but, you know, you’re the one criticizing publication standards. And I agree with Gerald #189 that it gives the appearance that you’re less willing to criticize members of one of your own tribes (physicists) than members of another tribe (reporters).

    By the way, I love your blog, and I don’t want to sound ungrateful. I’ve gotten a huge amount from your popular articles, and I really appreciate all the work you’ve done. I just disagree with you a little bit here. 🙂

  213. Scott Says:

    Ed #212: OK, you’re right. I posted an update on the OP summarizing what we learned from the comments section.

  214. Bob Strauss Says:

    Having some experience with this kind of thing, it’s very possible that VW broached the idea of a correction with the responsible WaPo editor, and the editor vetoed the idea (not wanting to dive into the weeds about the subtleties of quantum computing). But I agree, after defending him on this thread, that VW should have processed these comments in a more humble and receptive and much less defensive way.

  215. fred Says:

    Scott #208

    Your explanations certainly are a tremendous help to people like myself, with some engineering background and an interest in improving their understanding.

    But QM is particularly hard, if not impossible, to communicate correctly to the general public (with no math or physics background), because it touches on the fundamentals of what we mean by “understanding” something (in the sense of that famous Feynman video where he argues with a journalist about what it means to ask “how do magnets work?”).
    It’s no coincidence that the motto of QM has been “shut up and calculate!” 😛
    One can reduce QM to something like “it’s really just another way to do probabilities” (i.e. a special calculation), and while it’s super elegant and powerful, I don’t think the general public is able to derive much intuition from that.

    Then once you add CS to the mix, it’s even less obvious.
    For example I’m having a very tough time trying to get my wife to understand intuitively why video game graphics can’t look as good as movie CGIs.

    On the other hand, I’ve had some success getting her to improve her understanding of space physics/mechanics (rockets, orbits,…), without using any math.

    About the article that was at the center of this: it seems like the topic was way over the author’s head, but they didn’t realize it (and didn’t realize they didn’t understand their sources). They had to somehow report something about the QC hype and move on.
    Maybe they’ll be more careful in the future and know what to avoid, or who to ask 😛

  216. Atreat Says:

    Fred #215,

    “For example I’m having a very tough time trying to get my wife to understand intuitively why video game graphics can’t look as good as movie CGIs.”

    That’s easy. Show her a piece of paper filled with math problems. Tell her that her computer has to solve all of those math problems to render a single frame of your video game.

    Now, show her a book filled with math problems. Tell her that her computer has to solve all of those math problems to render a single frame of her movie CGI.

    Tell her computers are really good at solving math problems, but still, even for them, there is a big difference in the time it takes to solve a single page of math problems VS a whole book filled with them.

    In the case of the movie CGI, the computers are much more powerful than the one she uses to play her video games AND they have days/weeks/months to solve all of the problems for all those frames of the movie. It could take a day to solve all of the problems for a single frame and then that frame gets recorded.

    In the case of your video game, it has to solve all of them in real time on a much less powerful computer.

  217. Jr Says:

    “As John Preskill tweeted, an excellent lesson to draw from this affair is that everyone in our field needs to be careful to say things that are true when speaking to the public.”

    I take it you mean “only say things that are true”.

  218. fred Says:

    Atreat #216

    Yes, I’ve tried to reduce it to math analogies too, but I think the problem is more in terms of what computer “power” is.

    People who have never written any program (the vast majority, at least for the older generations 😛 ) don’t understand why/how computers get “better” with time; they just know they do get better (that’s the marketing pitch), and their perception is more about current internet limitations (more bandwidth, less lag for web applications) than raw local compute power.
    They also have no idea that *everything* a computer does can be broken down into shockingly basic arithmetic operations (even if you tell them it’s all just math, I don’t think the connection is obvious), so they can’t compare different machines or even different applications on a given machine.

    Btw, I’ve even seen many apparently smart people with a pure CS background who are totally confused about how computers work at the most fundamental level (e.g. how multi-threading happens at the CPU level, and that there’s no free lunch). I guess it’s a side effect of introducing CS with (too) high-level programming languages (sweeping all the gritty electronic engineering details under the rug).

    On top of that, graphics rendering (creating the illusion of a solid 3D world from a 2D image) is especially hard to turn into clear mental models. And there’s also the added difficulty of what it means for an application to have strict “real-time” constraints.
    Even people working in generic IT software (like, financial engineering, without any graphics background) have a tough time imagining how this is actually done (a rough idea of each step and their cost, and the various trade-offs).

  219. Scott Says:

    Jr #217: Yes. 🙂

  220. jonas Says:

    Vincent #202 and Joshua Zelinsky #204: Oh! Simulating high temperature semiconductors. That’s actually a nice potential application for QC that I don’t think was mentioned much on this blog. I wonder if there are other potential applications in material science research. Could we use quantum simulation to design electronic storage devices or integrated circuits with even smaller building blocks? Could we use it to design manufacturing process for extreme synthetic materials like ones based on carbon nanotubes? Could we use it to design quantum computer hardware?

    fred #206: For tides, what really hurts is that there’s a certain popular science book intended to educate children about physics that gets tides completely wrong. I read this book as a child, back when I didn’t realize this. Other children have probably read the same book too, and some of them won’t realize until later how wrong the explanation is, especially if they don’t live next to an ocean and so never actually see tides.

    Atreat re #216: I think that’s only the lesser part of the answer. CGI in a video game is harder not only because of computing power, but also because of human work. A video game is interactive, and so has to display different graphics each time someone plays it. For a film, animators can watch the one animation that actually appears in the film, and can do a lot of manual corrections to fix everything that looks wrong. Sometimes that work involves sitting at a computer with a digitizing tablet and pushing polygons for days on end until everything looks its best. Sometimes it involves a human actor acting out a character in the scene, with their movements captured as a source for an animated character in the film. For a video game, there’s no way the creators can try the exponentially large number of possibilities and fix each of them manually so that it’s both informative (about the video game state) and looks good.

  221. Vincent Says:

    Scott #203 and Joshua #204 thanks for the valuable insights!

    Scott #203, on the topic of doing a ‘so you want to write about quantum computing’: yes I think it is a good idea. All of a sudden lots of media outlets seem to want just that, probably due to the announcements of quantum computers actually being built soon. Many writers will look for information on the internet and so some of them will find you. It would be great for everybody if you have just the thing they need lying in wait. I have two slightly more detailed suggestions about this (supposing you are interested in advice from random readers.)

    1. This is obvious but I’m gonna say it anyway: the irony that only the people who least need it will find your post (or, more generally, the sadness of few people finding it) can be diminished (a lot, I hope) by making it easier to find. I don’t think you have the resources to bribe Google into redirecting all QC-related questions to you, but you could put a permalink to the blog post on the front page of the blog. Or create a new category ‘Resources for journalists’, which could serve as a de facto permalink to a whole bunch of past and future blog posts that are valuable for journalists (e.g. the one linking to John Preskill’s NISQ article). (Disclaimer: I am not a journalist, so I am not the best person to judge what would or would not be valuable to them, but I think you yourself would have a pretty good idea.)

    2. It would be really nice if your ‘so you want…’ blog post would distinguish between two types of article discussing quantum computing and contain separate advice for each of them. The first type is articles whose aim is to tell the audience what quantum computing is and why it is so cool. I have no doubt you know just what to tell the (future) writers of this type of article, so I won’t comment on it.

    The second type is articles whose objective is to tell the audience something else, but which still only get written because of the prospect of quantum computers being built. Vivek Wadhwa’s Washington Post column is a perfect example of this (the way I read it at least): his goal is not to tell us what quantum computers are; his goal is to urge the reader to update their crypto systems to quantum-computing-resistant ones. This difference in goals has some advantages. For instance, on future occasions when he wants to warn his readers about this again, he can easily leave out the TSP example, because it doesn’t affect the core of what he wants to say. (Also, I am quite sure that he will do that after the current discussion.)

    However, leaving out the traveling salesman won’t solve a problem that he and writers of other ‘type 2 articles’ still face. On the one hand, they still need to say *something* about what quantum computers are, and/or about how the existence of quantum computers would differ from just another day in the all too familiar world where computers get faster and faster every year anyway. On the other hand, this part cannot have substantial length, since it would distract the reader from their main topic and/or push them over the word limit set by their editor. So, what writers of these ‘type 2 articles’ need is, in the words of Vivek Wadhwa himself (comment #6), ‘1-2 sentences that your children could understand that explains the possibilities of quantum computing’. Unlike some commenters, I find this a perfectly reasonable request in this context.

    Now for the great news. (I was planning on telling Vivek Wadhwa this, but he already ‘checked out’ before I came here. But now I have an excuse to tell it to you instead.) There is already a great place on the internet for finding examples of just this and it is right here on this very blog! I am talking about the post where, inspired by the Canadian prime minister, you and various knowledgeable commenters try to explain quantum computing in 30 seconds. It is really great to have so many different short explanations together in one place. Not just from the perspective just discussed, but also (and more so) because together they create a feeling of understanding of the bigger picture in the reader in a way that is quite different (not necessarily better or worse) from what a longer version of any of the individual explanations would achieve.

    Anyway, this turned out much longer than it looked in my head, I hope you didn’t mind reading it.

  222. asdf Says:

    Scott, Stochastic Mechanics does predict the Bell inequality violations if I understand it right. This happens in SM by superluminal communication (remember that SM doesn’t claim to be physically accurate).

    ‘t Hooft’s thing also predicts it through what sounds a bit like woo woo. He says QC should work for small systems but break down for big ones, something like Gil K’s prediction.

    CAT (‘t Hooft’s theory) seems like a grand sweeping idea that hasn’t been discredited but doesn’t seem all that likely. SM on the other hand is maybe more interesting, a (by comparison) modest idea that -almost- works, giving the hope that it’s an approximation to something correct which hasn’t yet been figured out.

    QM doesn’t seem like a theory at all, in the sense that GR is a theory. QM instead is a set of very accurate formulas that predict experimental outcomes but don’t posit a mechanism. In the late 19th century, Lorentz and others found formulas that accounted for experimental findings like the Michelson-Morley result, but that created a new mystery of why the formulas worked. The explanation of course came in 1905 with special relativity. Is QM today something like that 19th-century situation with the Lorentz formulas?

  223. Vincent Says:

    Aha, I looked up and reread the post and comments on Trudeau and it turns out that over the years the part of my brain responsible for memories made it a bit more perfect than it actually was. (I guess native speakers have a more concise way of saying this?) Anyway, perhaps you can still use some aspects of it in your upcoming blog post.

  224. Tim May Says:

    I’m more sympathetic to Vivek Wadhwa and other journalists than many here are. I do like the writers at Quanta a lot, though.

    The Gell-Mann amnesia effect is only visible to _me_, of course, by definition, and only in some areas I have worked on. The howlers that came from engineering writers in my Intel chip days were memorable. Then, as I started working on crypto ideas, some other major errors. (Not as many, as few journalists were writing about crypto in the 80s and 90s. Still, Steven Levy, of “Hackers” fame in the mid-80s, got it quite right in his book on the “Crypto Wars”: NSA, Cypherpunks, key escrow, export controls, and so on.) A key issue seems to be how much time a publisher gives a writer to really do a deep look and talk to a lot of people.

    Also, the more controversial a topic is, or the more sexy, the more chance for howlers. For instance, Vivek is involved with the Singularity Institute or whatever its current name is. Now it happens that I have known many of those who were active early on: Eric Drexler, Vernor Vinge, Ralph Merkle, Keith Henson, Roger Gregory, Chris Peterson, others. Some of us overlapped through another mailing list, the Extropians List. Some of us were in the Cypherpunks group. And we all lived in the Bay Area and met each other many times, from the late 1970s on.

    I was skeptical of the “diamondoid bodies” and a famous 1992 slogan: “In twenty years Silicon Valley will be gone, replaced by nanotechnology” (of the diamondoid self-replicating kind, not 7 nanometer gate length CMOS!).

    I may’ve given one or two skeptical quotes about the difficulties involved (which this comment does not have space to outline), but I’m sure believers thought my comments were howlingly wrong. And nanotech was a sexy topic….concentrating on the work by Stan Williams, at H-P as I recall, would’ve put readers to sleep.

    (It’s not the job of writers to keep readers awake, but the best ones often do.)

    And of course the latest hype is about Bitcoin, AltCoins, tokens, and the various forms of distributed ledgers. I was pushing the work of Haber and Stornetta of Bellcore back in 1993. But it took the work of others (Adam Back, Wei Dai, Hal Finney, Nick Szabo, and, notably, “Satoshi Nakamoto”) to provide an elegant working model. And this is what made distributed ledgers, or blockchains, interesting. Without Bitcoin, blockchains or distributed ledgers would still be a backwater.

    About 80% of the articles are factually incorrect and misleading, a higher percentage than even in the QC area.

    So I sympathize with any writer who covers CRISPR one week, blockchain the next, colonies on Mars the same week, the Singularity is Near the following week. And I haven’t even mentioned AI. Or biohacking. Or…..

    As I have said, I think Scott’s “Quantum Computing Since Democritus” is a fine way to learn about the topic. I also like David Albert’s “Quantum Mechanics and Experience,” a nice little book which anyone who bothers to spend a few hours brushing up on vectors and linear algebra should have no problem with.

    P.S. Speaking of linear algebra… More people than just engineers and scientists should know some of it, and early on. The main ideas and tools are easier to pick up than calculus. It’s an irony that the “CryptoCoin mining chips” (graphics chips for a while, now ASICs doing the same algorithms) AND the “AI engines” doing TensorFlow and similar “deep learning” AND “machine vision” for self-driving vehicles are, at their core (a bad pun), basically just “linear algebra machines.” Ironically, all three hot areas are now competing for the limited supply of NVidia graphics chips.

    –Tim May

  225. Tim May Says:

    BTW, my favorite little book on linear algebra is “Finite Dimensional Vector Spaces,” the old elegant classic by Paul Halmos.

    And my favorite revelation in recent years has been that quantum things don’t just live in Hilbert space but instead apparently live here:

    “In physics, this would say that every state g of the combined system X ⊗ X′ is built by combining states of the systems X and X′. Bell’s theorem [20] says that is not true in quantum theory. The reason is that quantum theory uses the noncartesian monoidal category Hilb!

    “Also, in quantum theory we cannot freely duplicate or delete information. Wootters and Zurek [110] proved a precise theorem to this effect, focused on duplication: the “no-cloning theorem”. One can also prove a “no-deletion theorem”. Again, these results rely on the noncartesian tensor product in Hilb.”

    (Baez and Stay, “Rosetta Stone”)

  226. Atreat Says:

    Jonas #220,

    I’ve seen firsthand, at NVIDIA headquarters, a $100k system with movie-level CGI in a completely dynamic context, with the 3D meshes being created/manipulated at runtime; i.e., a video game with movie-level CGI. The problem is hardware-bound: limited by the raw compute power of the CPU/GPU.

    Open up any 3D content creation tool like Maya or Blender and you can render photorealistic 3D scenes on a very underpowered computer, but it might take days/weeks for each frame. In a video game, if you want it to run at 60fps, that means every frame gets a budget of ~16ms to render the 3D scene. The raw compute power of your hardware will then determine how many polygons and how much shader complexity you can afford in that 16ms time budget.
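
    To put rough numbers on that gap, here’s a minimal Python sketch (the one-hour-per-frame offline figure is a made-up illustration, not a measured render time):

        # Rough comparison of per-frame time budgets: real-time game vs. offline film render.
        GAME_FPS = 60
        frame_budget_s = 1.0 / GAME_FPS            # ~0.0167 s (~16.7 ms) per game frame

        OFFLINE_SECONDS_PER_FRAME = 3600.0         # hypothetical: one hour per film frame

        ratio = OFFLINE_SECONDS_PER_FRAME / frame_budget_s
        print(f"Game frame budget: {frame_budget_s * 1000:.1f} ms")
        print(f"The offline render can spend roughly {ratio:,.0f}x more time on each frame")

    Run it and you get a factor of about 200,000x more time per frame for the offline render, before even counting the render farm’s extra hardware.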

    fred #218, I don’t know. I seem to have had good success explaining this to computer neophytes, but perhaps they don’t really understand and are just saying they do. I’m also not shocked about recent CS grads. I’ve had many interns come out of university with a woeful understanding of hardware/software. Yes, some of it comes from less emphasis on fundamentals in favor of higher-level abstractions. Some of it is from over-reliance on fancy IDEs and WYSIWYG tooling. Some is from a lack of curiosity, or from people who are not really interested in computer science but just entered the field for the promise of a good salary.

  227. John Baez Says:

    Speaking of journalistic howlers, I like this one:

    In contrast with a regular bit that can only be set to 1 or 0, a qubit has three possible states. This can add up massively across an entire machine, to the point where future quantum computers may possess the ability to perform certain operations many orders of magnitude faster than classical computers.

    It’s from an article on Silicon Angle.

  228. Joshua Zelinsky Says:

    John Baez #227,

    Who knew we had quantum computers in the 1950s https://en.wikipedia.org/wiki/Ternary_computer !

  229. Raoul Ohio Says:

    Tim May #225:

    I totally agree that linear algebra is much simpler than calculus, particularly if you want to learn proofs. It should be taught widely.

    It is totally insane that the vast majority of calculus books do higher dimensional differential calculus WITHOUT matrices! The usual explanation for this is that “students would have to learn matrix multiplication first”. So what? Matrix multiplication is 100 times easier than calculus. Differential calc in all dimensions comes down to only one equation:

    f(x + h) = f(x) + f'(x) h + err(x,h)

    with conditions for err(x,h) being small. Here f'(x) is the Frechet derivative (matrix of partial derivatives).

    In this form, the implicit function theorem, chain rule, etc., are all simple to prove, and the one dimensional case can be illustrated with a sketch.
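
    For example, in this notation the multivariable chain rule really is just matrix multiplication: for f: R^n → R^m differentiable at x and g: R^m → R^k differentiable at f(x),

        (g \circ f)'(x) = g'(f(x)) \, f'(x),

    where g'(f(x)) is a k×m matrix, f'(x) is an m×n matrix, and the derivative of the composite is their k×n product.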

  230. Garnet Says:

    Vincent #202

    Regarding simulating physical systems (such as high temperature superconductors) – it’s not at all clear that quantum computers have an edge when it comes to simulating the things we care about in physical systems (e.g. low energy states, simple observables etc.)

    Simulation is so often described as a regime where quantum computers are expected to be superior without any caveats, that I sometimes feel the statement has taken on some of the flavour of “quantum computers can solve TSP in polynomial time”!

  231. fred Says:

    Deep learning meets QM/QC

    https://www.sciencedaily.com/releases/2018/02/180226122531.htm

    “New machine learning techniques can help experimentalists probe systems of particles exponentially faster than conventional, brute-force techniques”

  232. Jay Says:

    fred #231
    This has little to do with deep learning (stacked RBMs can form deep nets à la Hinton, but they’re using shallow versions) and nothing to do with reinforcement learning (“inspiration”).

  233. LK2 Says:

    Hi Scott, hi everybody. On an unrelated note, in case you did not see it:

    https://arxiv.org/pdf/1801.03897.pdf

    It is a very simple calculation but it is brand new for the field and maybe better stuff will come in the future.
    LK.

  234. John Sidles Says:

    Scott prescribes (in the OP)  “Wadhwa should obviously have followed the journalistic practice of checking incredible-sounding claims.”

    Commended in respect to Scott’s prescription is the Philipp Hauke et al survey article “Can one trust quantum simulators?” (2012)

    “We conclude that the answer to the question ‘Can we trust quantum simulators?’ is … to some extent.”

    Hauke et al survey the real-world QED-dominated dynamics that constrains incredible-sounding claims like

    “quantum computers will revolutionize large parts of science in a benevolent way”

    The claim is from Aaronson and Bacon, “Quantum Computing and the Ultimate Limits of Computation: The Case for a National Investment (a white paper prepared for the Computing Community Consortium committee of the Computing Research Association)”, 2008.

    Scott, plenty of quantum-interested folks (definitely including me) would appreciate a Shtetl Optimized essay that revisited, one decade later, the claims and hopes that were set forth in your and Dave Bacon’s “Visioning White Paper” of 2008.

    Needless to say, “motte and bailey” defense of the original Aaronson/Bacon 2008 quantum vision would be less interesting than an updated essay upon the general theme “here are some quantum lessons that researchers have learned, and here is an amended vision that reflects those quantum lessons”.

    Definitely a “one decade later” updated Shtetl Optimized vision would inspire me to revisit, one decade later, my own Shtetl Optimized comment of 2008 upon the Aaronson/Bacon quantum vision:

    “It’s past time to jettison the folk theorem that classical simulation of quantum systems is infeasible, because this theorem is true only under assumptions that are so restrictive, as to be inapplicable to real-world quantum systems.”

    Commended to both journalistic and scientific attention is the Computing Research Association (CRA) “Workshop on Quantum Computing”, May 22-23, 2018, Washington DC.

    — references —
    @article{cite-key,
      Author  = {Philipp Hauke and Fernando M Cucchietti and Luca Tagliacozzo and Ivan Deutsch and Maciej Lewenstein},
      Journal = {Reports on Progress in Physics},
      Number  = {8},
      Pages   = {082401},
      Title   = {Can one trust quantum simulators?},
      Volume  = {75},
      Year    = {2012}}

  235. Stella Says:

    @Tim May,

    I think you might be amused to learn that, in an AI course at Georgia Tech, my professors (who co-taught the class) had an extended dialogue about whether AI was “just applied linear algebra” and (assuming that was correct) about what is being swept under the rug by the word “just” in that sentence.

    @ The general topic,

    I saw this article independently of this post and sent a lengthy email to the corrections email at WaPo. No response yet (it was about a week ago) but I would encourage people here who feel like they have a strong understanding of the errors in the article to submit a corrections email or call. You can contact them at corrections@washpost.com or call 202-334-6000

  236. Stella Says:

    Also, on the topic of favorite journalism mistakes, I read an article on Laszlo Babai’s wonderful work on Graph Isomorphism that explained the problem, its context, and the rough idea of the solution fabulously. The only issue is that it spent just as much time talking (correctly) about P vs NP, and gave readers the distinct impression that this problem was NP-complete (I don’t recall whether it stated that or just implied it).

  237. Nicholas Teague Says:

    Hi Scott, this comment is unrelated to this specific post, but I was just working on an essay and in the process found that a link I had originally pulled from one of your posts now appears to be laden with likely malware. Specifically, I am referring to your link to view Turing’s paper “Computing Machinery and Intelligence” from your post “The Turing movie”. Just thought I’d let you know in case you wanted to scrub it or replace it with another source in your archive. Cheers.

  238. Rick Samborski Says:

    Scott,
    Are you going to comment on David Ignatius’s novel, The Quantum Spy?

    Rick

  239. adviceneeded Says:

    P=NP still would not preclude PKC. We could have polynomially-separated one-way functions, where the degree of the polynomial separation can be arbitrary. Why do theorists base PKC on an exponential separation? Don’t we know of any one-way function with, say, a quartic separation?

  240. John Sidles Says:

    The QuTech weblog Bits of Quantum recently published a retrospective essay by David DiVincenzo titled “Looking back at the DiVincenzo criteria” (February 22, 2018).

    A Shtetl Optimized essay presenting a historical perspective similar to that of DiVincenzo’s essay — perhaps having a Quantum Supremacy focus as contrasted with a Quantum Computing focus — would be of interest to many Shtetl Optimized readers (definitely including me).

  241. asdf Says:

    Quantum Supremacy?

    http://www.tomshardware.com/news/google-72-qubit-quantum-computer,36617.html

    https://research.googleblog.com/2018/03/a-preview-of-bristlecone-googles-new.html

    https://hardware.slashdot.org/story/18/03/05/2156247/google-unveils-72-qubit-quantum-computer-with-low-error-rates

  242. Vincent Says:

    Can I ask a question about a line in a different article (that I found when researching my claim above that lots of media outlets are writing about quantum computers lately)? I tried to ask the author but I didn’t get a reply (so far).

    This article is similar in many ways to the WaPo column so I will try to summarize it in a way where you don’t have to read it to answer my question. Nevertheless here is the link: http://fortune.com/2018/01/31/commentary-this-new-technology-will-crack-the-blockchain-like-an-egg/.

    As the title suggests, the main thesis of the article is that quantum computers will ‘crack the blockchain like an egg’ and that the crypto-community should start worrying about that more than they do now.

    When reading the article it is very tempting to infer that the author believes the following:

    a) Quantum computers can efficiently solve generic search problems, and
    b) Any hypothetical device that can efficiently solve generic search problems can break the blockchain.

    [I, on the other hand, as a reader of this blog, believe that while b) is true, a) is false and that quantum computers could only break the blockchain if the one-way-function used in the blockchain has some special property making it susceptible to a quantum attack. (Similarly, it may or may not have a special property making it susceptible to a classical attack other than brute force search, but neither possibility is discussed in the article.)]

    Now for interesting part: somewhere in the middle of the article there is suddenly a very concrete and verifiable prediction suggesting that perhaps the author does know more about the topic than what is shared with the readers:

    “A 4,000-qubit quantum computer, for instance, could break the blockchain.”

    So my question is: where does this very concrete number 4,000 come from? Have any of the readers of this blog seen it before? Is it referring to a specific algorithm?

    The article does not give any hints as to the answer; the ‘for instance’ in the quoted statement indicates solely that this sentence is meant to support the previous statement that quantum computers could have ‘far reaching consequences’. No argument there.

    But it seems unlikely the author would make up a random number. Is there a plausible way to reverse engineer the origin of the 4000?

    Thanks in advance!

  243. Craig Says:

    Scott,

    Google just announced that they are testing a 72 qubit machine. If they are successful, then I will admit that I was wrong when I said large-scale quantum computers cannot work. I am glad they are at the stage where they can actually test things which clearly demonstrate quantum supremacy.

    But I think it will fail.

  244. Bill Kaminsky Says:

    Hey! What’s this I see at the end of today’s Gizmodo article by Ryan Mandelbaum [1] about Google’s announcement of its 72 qubit “Bristlecone” chip?

    (Thanks to Scott Aaronson at UT Austin for helping me fact-check this post.)

    Just when you thought you were out, they pull you back in!

    A tad more seriously though, if you Scott had any part — even if indirect — in the title of the article “Google Unveils Largest Quantum Computer Yet, but So What?”, then you Sir have become even more of an intellectual hero to me than you already were! 😉

    [1] https://gizmodo.com/google-unveils-largest-quantum-computer-yet-but-so-wha-1823546420

  245. Tuesday Says:

    A bit late, but I’m absolutely fascinated by the following paragraph:

    Would you agree that the Washington Post has been a leader in investigative journalism exposing Trump’s malfeasance? Do you, like me, consider them one of the most important venues on earth for people to be able to trust right now? How does it happen that the Washington Post publishes a quantum computing piece filled with errors that would embarrass a high-school student doing a term project (and we won’t even count the reference to Stephen “Hawkings” (R.I.P.)—that’s a freebie)?

    What fascinates me is that you don’t pause to consider that perhaps the Washington Post and others are also, um, mistaken about Trump’s “malfeasance”. You’ve just spent 500 words pointing out how inaccurate they can be even in the absence of ideological pressure to distort the truth – and yet, you maintain full confidence in them when they are under such pressure?

    Let me give a concrete example. Remember all that “Donald Trump is anti-Semitic” stuff? You know, like this:

    https://www.washingtonpost.com/opinions/anti-semitism-is-no-longer-an-undertone-of-trumps-campaign-its-the-melody/2016/11/07/b1ad6e22-a50a-11e6-8042-f4d111c862d1_story.html

    You, too, spread that line once upon a time – despite obvious signs that it was nonsense, such as the fact that Trump has a Jewish daughter and grandchildren.

    Do you still believe in it (despite developments like the recognition of the capital of Israel)? If so, why? And if not, has this affected your confidence in your political beliefs?

    I was going to end this comment with an allusion to Gell-Mann amnesia and the Michael Crichton quote – you turn the page, and forget what you know – but I see others have already been there. So maybe I should not be writing this at all.

    But I think I have something to add, because this goes further than just the suggestion that WaPo may be mistaken on occasion. There is a visible breakdown of reason here.

    It is very hard for me to think of any standards by which the “Trump is anti-Semitic” line would not be some kind of mass delusion (and the WaPo piece is not at all an outlier) – and I was (and remain) dismayed that someone like you could fall prey to this kind of thing. It is quite possible to be opposed to Trump in a reasonable way; but neither WaPo nor you seem to be operating in this fashion.

  246. John Sidles Says:

    Younger Shtetl Optimized readers, especially, are invited to contrast the pro-Trump observations of commenter “Tuesday” (circa #245) with Alexander Grothendieck’s observations in his essay “The Responsibility of the Scientist Today” (Queen’s Papers in Pure and Applied Mathematics, 1971).

    According to Grothendieck, the Trump-approved practice of torture is a grotesquely malfeasant offense against enlightenment:

    To complete this disheartening “moral tableau” of our times, there is the systematic reappearance of torture — a practice which had virtually disappeared from the so-called civilized societies since the end of the eighteenth century, only to reappear in perfected form in Hitler’s Germany and Stalin’s Russia. It has been the inseparable companion of every war since then (notoriously in the Indochina, Algeria, and Vietnam wars), and the established policy in most police states (like Greece or Brazil).

    Grothendieck’s entire essay is well-worth reading, as is translator Gordon Edwards’ amusing account of how he came to be Grothendieck’s translator and collaborator, “My Encounter with Grothendieck“.

    Edwards attests that Grothendieck was “good friends” with Linus Pauling; this motivates me to share with Shtetl Optimized readers yet another vision of the modern scientific enlightenment: Pauling’s post-WWII white paper “The possibilities for progress in the fields of biology and biological chemistry” (1946, PDF here; this Rockefeller Foundation manuscript was communicated to me by the MIT historian Lily E. Kay).

    Pauling’s 1946 vision of enlightened science informing a scientific Enlightenment is not only Grothendieck-consilient, but also (in alphabetical order): Wendell Berry-consilient / Ted Chiang-consilient / Richard Feynman-consilient / Peter Fonagy-consilient / Jane Goodall-consilient / James Hansen-consilient / Stephen Hawking-consilient / Eric Kandel-consilient / Donald Knuth-consilient / Martha Nussbaum-consilient / Mary Oliver-consilient / Amartya Sen-consilient / Ed Wilson-consilient / Ken Wilson-consilient / Oprah Winfrey-consilient and Irvin Yalom-consilient.

    These consiliently Enlightened thinkers all are still living and writing, except now, sadly, the Enlightened voice of Stephen Hawking has fallen silent; yes Stephen will be missed.

    Conclusion  Does the name ‘Donald Trump’ belong on any listing of consiliently Enlightened thinkers? The plain answer is ‘no’.

  247. Scott Says:

    Bill #244: After this episode, my modified policy is that journalists can call me to fact-check an already-written article, just not for interviews or for explaining the basics of quantum computing. I was sure I’d announced that modified policy somewhere, but on searching just now I can’t find where, and I’m too tired to search more. It’s been a long day.

  248. Scott Says:

    Vincent #242: I’m not going to do hermeneutics on yet another random half-informed article—as I said, it’s been a long day and I’m exhausted. Instead, if you’re interested in the effect of QCs on Bitcoin, I’ll point you to this article by authors who actually know what they’re talking about. (As usual, when a serious analysis is just a google search away, there’s no need to pass around random rumors…)

    A one-sentence summary is that scalable QCs would completely break the signature aspect of Bitcoin (the way it’s currently implemented, using elliptic curve crypto), but as far as we know, would yield “merely” a quadratic speedup for the proof-of-work aspect, the part that most people have in mind when they talk about “breaking Bitcoin.” Both effects could be significant, but both could probably be completely fixed in principle, by simply migrating Bitcoin to a post-quantum signature scheme and quantumly harder proofs-of-work respectively. It’s just a question of doing that. Neither effect is a threat in the rapidly-approaching era of “noisy intermediate-scale” QCs; they’d become threats only once we get full, scalable fault-tolerant QCs.
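
    To spell out the “merely quadratic” part in symbols (a standard Grover-style estimate, ignoring constant factors and the overhead of fault tolerance): if the proof-of-work puzzle asks for any one of M valid nonces among N candidates, then

        \text{classical brute force: } \Theta(N/M) \text{ hash evaluations}, \qquad \text{Grover: } O\big(\sqrt{N/M}\big).

    In effect a quantum miner halves the difficulty exponent rather than wiping it out, which is why this part is fixable in principle by adjusting or replacing the proof-of-work.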

  249. Scott Says:

    Nicholas Teague #237: Thanks for the heads-up. Can you tell me where on my site you found that link to Turing’s essay? If anyone has suggestions for a non-malware-infested edition of Turing’s essay, that would be appreciated as well!

  250. Scott Says:

    asdf #241: As that Gizmodo article correctly explained, the 72-qubit thing is best seen as a “progress report” from Google. In the world of experimental QC, we should credit someone with having “built” something when, and only when, they have actual data to report about it (and from my conversations with the Google team, I think they fully agree with that statement!).

  251. Scott Says:

    Rick #238: Maybe, when and if I read it (I haven’t yet).

  252. asdf Says:

    Scott, one of the reasons P vs NP is considered so important is that P has been considered extremely robust under changes of computation model (Turing machine, random access machine, etc).

    Does the QC model weaken that sense of robustness and perhaps make P seem less important? Thanks.

  253. Scott Says:

    asdf #252: As far as anyone knows today, quantum computing is the only physically realizable model of computation that would let us efficiently compute things outside of P. (Classical probabilistic computation was once a candidate for such a model, but today we have very strong evidence that BPP=P.) So, you could view it as a kind of backhanded compliment to P, that it took something of the enormity of quantum mechanics to transcend it! 🙂

    Yes, you could argue that QC does slightly decrease the motivation for the specific question P vs. NP—since even if P≠NP, the question we now need to ask, if we want to know whether we get the world-changing consequences that flow from all NP problems being efficiently solvable, is whether NP is contained in BQP.

    On the other hand, even before QC, I personally feel like the right way to think about P vs. NP was simply as the flagship for dozens of questions about the limits of efficient computation, none of which we currently know how to answer, and all of which seem to be hard for similar reasons. For starters, there’s P vs. PSPACE, NP vs. coNP, NP vs. P/poly, P vs. P^#P, the existence of one-way functions…

    And I’d claim that, not only has QC not decreased the importance of this general class of questions, it’s significantly increased its importance, since now we know that these same questions are also bound up with the question of whether Nature can be efficiently simulated using digital computers.
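
    For reference, the relevant unconditional containments (all standard, and with NP vs. BQP open in both directions) are

        \mathsf{P} \subseteq \mathsf{BPP} \subseteq \mathsf{BQP} \subseteq \mathsf{PP} \subseteq \mathsf{P}^{\#\mathsf{P}} \subseteq \mathsf{PSPACE},

    and none of these containments is currently known to be strict.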

  254. asdf Says:

    Thanks! As another P vs NP matter, I notice in

    https://www.scottaaronson.com/democritus/lec9.html

    you mention “modeling the stock market” as an example of an NP-complete problem. But I thought the stock market is modelled very well by Brownian motion, aka the Wiener process. That’s completely stochastic, so if you meant -predicting- the stock market, that’s not computable much less in NP. Did you have something else in mind?

  255. Scott Says:

    asdf #254: In those off-the-cuff remarks, I was talking about modeling the stock market as an example of a problem that (after suitable formalization) is plausibly in NP—but not necessarily NP-complete. What I had in mind was the usual computational problem in any machine learning: that is, finding a model, within a given class of models, that does the best job that it’s possible to do among all models in the class at reproducing past data (in this case, past stock market data). Armed with such a model, one can then try to apply it to the future, although of course there’s no mathematical performance guarantee in that case.
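
    A minimal way to formalize that (a sketch, assuming each candidate model θ has a polynomial-size description and can be evaluated in polynomial time): given past data (x_1, y_1), \ldots, (x_T, y_T), a model class \{f_\theta\}, and a threshold k, decide whether

        \exists\, \theta : \ \sum_{t=1}^{T} \mathrm{loss}\big(f_\theta(x_t), y_t\big) \le k.

    A proposed θ is itself the certificate, verifiable in polynomial time by running the model on the data, so the decision problem is in NP; whether it’s NP-complete depends on the model class and the loss function.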

  256. asdf Says:

    Oh I see. That’s also called the Netflix problem ;). Thanks.

    Can you tell me if quantum state separability is in NP? I’d think it is since it’s like proving that a number is composite by showing the factors, but maybe I miss something. Wikipedia only says it’s NP-hard rather than NP-complete.

  257. Scott Says:

    asdf #256: Yes, separability is in NP. You do need to check something for that, which is that if a state is separable, then there exists a decomposition involving a number of product states that’s polynomial in the Hilbert space dimension D—but that turns out to be true (I think the number is D²). See this survey by Lawrence Ioannou for more.
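
    A sketch of the counting argument behind that bound (generic convexity reasoning, not specific to the survey): density matrices on a D-dimensional Hilbert space form a convex set of real affine dimension D² − 1, and the separable states are exactly the convex hull of the product states, so by Caratheodory’s theorem any separable ρ can be written as

        \rho = \sum_{i=1}^{r} p_i \, |a_i\rangle\langle a_i| \otimes |b_i\rangle\langle b_i|, \qquad r \le D^2.

    Such a decomposition has polynomial size and can be checked by summing it up and comparing with ρ, which (modulo the usual issues of numerical precision) is what puts separability in NP.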

  258. Vincent Says:

    Scott #248: thanks, the article you linked to is more than I could have hoped for!

  259. James Says:

    Another way to look at it might be that the Washington Post isn’t the shining beacon of journalistic excellence you consider it to be…

  260. JimV Says:

    I forgot to comment on your zeroth commandment, which I guess was the major point of your post.

    I have to say the two proposed forms at the start of the post seem uncharacteristically poor–sort of meaningless when you try to implement them. I would have thought your first try would have been Jesus’s “Do unto others as you would have them do unto you.” (Doesn’t work for sado-masochists though.)

    Personally, I vote for Bill & Ted’s “Be excellent to each other.”