“How Might Quantum Information Transform Our Future?”

So, the Templeton Foundation invited me to write a 1500-word essay on the above question.  It’s like a blog post, except they pay me to do it!  My essay is now live, here.  I hope you enjoy my attempt at techno-futurist prose.  You can comment on the essay either here or over at Templeton’s site.  Thanks very much to Ansley Roan for commissioning the piece.

53 Responses to ““How Might Quantum Information Transform Our Future?””

  1. Xiaotong Says:

    Relevant: http://www.smbc-comics.com/?id=1623

  2. Denver Bob Says:

    I’ve begun listening to your predictions about quantum computing and assuming they are anticorrelated with what will actually happen — this strategy seems to be working.

  3. lewikee Says:

    Regarding the second discussion question: (Do you think there’s something special about the subject of quantum computing that makes it hard for journalists to get right? Or is it just the usual difficulties of reporting on any new technology?)

    I think it is the former. First, there’s the whole ‘woo’ aspect of the popular interpretation of quantum physics, and how anything quantum is allowed to lead to magical conclusions. People figure that if even physicists announce they don’t have a good intuitive grasp of quantum mechanics, then what’s to say any interpretation is more wrong than another? This leads to the unfortunately very intuitive (and wrong) parallel-universes/parallel-solutions picture of quantum computing. “Finally,” people think, “I read something that makes sense about this quantum business!”

    Ultimately, it would take some learning (and thus effort and time) to properly understand what to expect from quantum computing. Understandably, that is a barrier, and journalists and readers take the path of least resistance.

  4. Scott Says:

    Denver Bob #2: What’s one example of where your strategy worked? (It would help if you could quote my prediction.)

  5. Shmi Nux Says:

    > Back around 1950, even the people working on the first electronic computers couldn’t imagine most of the applications of computers that fill our lives today.

    1950 is too far back, since you describe the future 25 years from now. Looking at 2014/15 from 25 years ago, 1990 would be somewhat more reasonable. Was there a popular but overhyped scientific idea back then, elusive but close to implementation, which is in the mainstream now? Wikipedia suggests a few, one standout being the Human Genome Project, which was hailed as leading to the future cure for most genetic diseases. We know how that went. This 5-year-old quote from the NYT is still valid:

    For biologists, the genome has yielded one insightful surprise after another. But the primary goal of the $3 billion Human Genome Project — to ferret out the genetic roots of common diseases like cancer and Alzheimer’s and then generate treatments — remains largely elusive.

    Is quantum computing like that? Or more like the GPS, which was launched the same year and is now accessible from everyone’s pocket?

    Let’s try another tack: Science’s “breakthrough of the year,” which included “the first quantum machine” in 2010, deservedly or not. Unfortunately, these only go back to 1996, but the 1990 molecule of the year was… synthetic diamonds, which had obvious potential to dethrone naturally occurring diamonds within a quarter century; that obviously did not happen.

    Other candidates: high-temperature superconductivity (1986), very promising but still a mystery; Bose-Einstein condensates (1995), steady progress but still no commercial applications.

  6. Gil Kalai Says:

    “..Marvel that all the confident predictions that quantum computers were a fantasy, impossible even in theory—that deep down, the universe is ‘really classical,’ or has ‘correlated noise’ that diabolically conspires to kill quantum computation—marvel that all those ideas are now strewn like corpses across the intellectual landscape. I mean, have you read the defenses by the erstwhile quantum computing skeptics: their lame protestations of how, when they’d said ‘impossible,’ they didn’t really mean ‘impossible’?”

    I find the article entertaining and enjoyable in spite of my being one of the “skeptics” who think that superior computation through quantum computers is not possible. (And I really mean “not possible” 🙂 .) Perhaps the best place to learn about my point of view is the videotaped lecture (and its abstract) at the Simons Institute for the Theory of Computing. See also my blog post, which also contains a shorter overview video and an additional lecture on topological quantum computing. (And a long discussion with some central researchers in the field.) You can also have a look at my year-long debate with Aram Harrow, which took place on Lipton and Regan’s blog. (Here is the first post, entitled “Perpetual motions of the 21st century.”)

    Of course, impossibility of computationally superior quantum computing is not in conflict with quantum mechanics.

    My highlight of Scott’s lovely article is:

    “It’s the year 2040… Scientists, you’re vaguely aware, are using the new quantum simulation capability to help them design drugs and high-efficiency solar cells, and to explore the properties of high-temperature superconductors… besides the applications to quantum simulation and cryptography, and the nebulous application to optimization and machine learning, the world of 2040 has one important additional application of quantum computers: one so astonishing that, back in 2014, no one could possibly have imagined it. Alas, because of that very fact, I can’t tell you what it is. ”

    It would certainly be fun to learn about the expected and unexpected use of operating quantum computers in a 2041 conference celebrating Scott’s 60th birthday.

  7. John Sidles Says:


    @techreport{Author = {Scott Aaronson and Dave Bacon}, Institution = {MIT and the University of Washington/Computing Community Consortium (CCC)}, Month = {December 12}, Title = {Quantum Computing and the Ultimate Limits of Computation: The Case for a National Investment}, Year = {2008}}

    (conclusion)
    A Call to Action

    Building a quantum computer is a daunting challenge. Current progress toward this goal is best described as piecemeal. However, we are currently approaching a tipping point for quantum computers: the most promising technologies have demonstrated all the necessary building blocks of a quantum computer.

    A large effort to carry these technologies forward and put these blocks together is now prudent, and, we believe, likely to produce future technological spin-offs.

    The national security consequences of delayed investment in quantum computing, the valuable return on this investment, and the long-term benefits of putting the US at the front of this 21st-century intellectual endeavor, all argue for support of a serious national initiative in quantum computing.

    Open Questions  Has the anticipated [in 2008] “tipping point” of quantum computing arrived? If so, what are its salient features? If not, when can we expect the tipping point’s arrival, and what features might it exhibit? Does the present lack of a national (or global) quantum computing initiative substantially impair national (or global) security?

    Most interestingly, what new insights relating to quantum computing have been gained in recent years [e.g., since 2008]?

    Sage Advice  “He [Lipmann-Muhlhausen] expresses the view that it is not at all desirable to calculate the end. The people will despair if the appointed time fails to materialize the Messiah.” (from Abba Hillel Silver’s A History of Messianic Speculation, 1927).

  8. ramsey Says:

    I think one source of confusion about quantum computing is that journalists classify it under “technology” rather than “science”. Thus they write in terms of new magical gadgets and what they will be able to do for us, rather than in terms of basic science research that at some point may underlie useful gadgets.

  9. rrtucci Says:

    Finally, a low brow and shallow blog post at Shtetl Optimized that I can participate in as an equal.

  10. Darrell Burgan Says:

    “Alas, because of that very fact, I can’t tell you what it is.”

    Absolutely awesome. A great article.

  11. Quentin Says:

    You didn’t mention the singularity; is that an omission?

  12. Scott Says:

    Quentin #11: Well, I said “no human-level AI.” I believe that implies no singularity. 🙂

  13. Jay Says:

    >the ground state energy of a complicated biomolecule

    Forgive the nitpick, but in line with your essay “low energy state(s)” seems better than “ground state”.

    > Do you think that the author is suffering from a similar myopia and shortsightedness with regard to quantum computing?

    Of course not, but the impact that QCs would have seems understated. Don’t you think it’s hard to expect less than scientific revolutions, in the same sense that it’s hard to expect less than P!=NP? QCD, QED, strings and LQG, chemistry and biochemistry, materials science, superconductivity, everything that goes under the name of proteomics… so many fields hit the QC wall and got stuck there; it’s amazing.

  14. Rahul Says:

    I read the article twice & am totally lost in the depths of its sarcasm.

  15. Scott Says:

    Jay #13: I hope you’re right, and you might be! But another factor is that, in general, I believe it’s best to underpromise and overdeliver, so I’m far less worried about making that mistake than about making the converse one.

  16. Scott Says:

    Rahul #14: LOL, then the article succeeded! (Am I being sarcastic right now? Who knows?)

  17. Alexander Vlasov Says:

    BTW, I have estimated the probability of a QC during the next 25 years by interpolating from a couple of recent estimations by Scott on this blog. I got ~15%. Did I miss something?

  18. Scott Says:

    Alexander #17: I don’t remember any of the “estimations” of mine that you’re “interpolating” to get that figure—could you give me some links? In any case, the figure that you come up with seems defensible. (Of course, the answer might depend strongly on exactly how you define “QC”: for example, will extremely useful quantum simulators that still aren’t universal QCs count?)

  19. John Sidles Says:

    Scott Aaronson imagines  “The world of 2040 has one important additional application of quantum computers: one so astonishing that, back in 2014 [i.e., 26 years earlier], no one could possibly have imagined it.”

    Gil Kalai imagines  “Quantum devices are subject to noise which systematically […] causes general-purpose quantum computers to fail, and also special devices demonstrating a computational speed-up.”

    Problem 1: Unimaginable Advances  It’s not so easy to specify any technology that we have today that is sufficiently exotic that (in Scott’s phrase) “no one could possibly have imagined it” back in 1988. So how likely is it that the next 26 years will witness “unimaginable” advances?

    Problem 2: Naturality and Universality  It’s not so easy to specify any dynamical framework that is consistent with Gil’s quantum no-go principle, and yet consistent too with existing quantum experiments.

    Problem 3: Desperate Speculations  It’s not so easy to muster the temerity even to attempt to reconcile Scott’s and Gil’s viewpoints — because any such attempt necessarily entails desperate speculations. So let’s tackle these problems in reverse order, beginning with “desperate speculations.”

    ————

    Answer 3: Desperate Speculations  John Preskill wrote a wonderful preprint back in 1983, “Fractional Charge and Magnetic Monopoles,” that authorizes us to engage in “desperate speculations,” both explicitly and by implicit example. Preskill’s authorizing passage reads (when lightly edited for concision)

    “I will assume without apology that Cabrera really has discovered a monopole with the Dirac magnetic charge, and that Fairbank and collaborators really have discovered objects carrying fractional electric charge.

    Then how can we explain the apparent violation of the Dirac consistency condition?

    This question leads us into some desperate speculations.”

    Note  it would be terrific if the full text of Preskill’s (inspiring and wholly enjoyable) preprint could be made available on-line! And for all you young whipper-snappers, neither Cabrera’s monopole nor Fairbank’s free quarks were ever seen again.

    ————

    Answer 2: Naturality and Universality  We turn to another desperately speculative — yet enjoyable — 20th century article: Ken Wilson’s “Renormalization Group Methods: to Stanislaw Ulam on the Occasion of His 65th Birthday” (1975; Google finds the full text). Ken Wilson speculates (desperately as it seemed at the time)

    “Even if one succeeds in formulating the renormalization group approach for a particular problem, one is likely to have to carry out a complicated computer calculation, which makes most theoretical physicists cringe.

    Especially in the case of strong interactions of elementary particles, most theorists hope to solve the problem [e.g., of a Yang-Mills mass gap] without turning to modern renormalization group methods.

    It will probably require several years of stagnation in elementary particle theory before theorists will accept the inevitability of the renormalization group approach despite its difficulties.”

    To address the naturality-and-universality problem, we invert Ken Wilson’s notion of a renormalization group by the following (desperately speculative) chain of reasoning. As is ubiquitous practice nowadays, we approximate nature’s state-space as a varietal manifold, upon which computer simulations of cringe-inducing intricacy — that none-the-less run in PTIME — unravel dynamical trajectories by the now-standard methods of Lindblad and Carmichael.

    However, varietal simulations cannot accommodate general Hamiltonians and general initial conditions (which are essential to quantum computing) without introducing cringe-inducing causal paradoxes (because some operator algebras resolve naturally; others not). Moreover, dynamical trajectories on varietal manifolds inescapably flow through and around cringe-inducing singularities. Thus varietal dynamics per se solves neither Gil’s naturality-and-universality problem nor Scott’s unimaginable-advances problem.

    To progress further, we speculate (desperately of course!) that “neither of these problems” — that is, Scott’s and Gil’s — “can be solved alone; they are directly related, one to the other” (in George Marshall’s celebrated phrase).

    M-theory upside-down is W-theory  With a view toward a natural unification of Scott’s and Gil’s worldviews, we invert Wilson’s renormalization group philosophy — which regards low-dimension state-spaces as natural approximations to Nature’s high-dimension state-spaces — and postulate the opposite: high-dimension Hilbert spaces are the (uniquely) seamless algebraic resolutions of Nature’s (unique) low-dimension varietal spaces by Nature’s (unique) Hamiltonian flows.

    In other words, we give up the large dimensions of Hilbert space by giving up also the freedom to specify arbitrary Hamiltonians; thus tailoring the dynamical state-space and the dynamical operator algebra each to fit the other (which of course simulationists do already).

    In the limit that separability, causality, and relativistic invariance all are fully respected, the resulting uniquely resolvable varietal dynamics — which provides a consistent model for Gil Kalai’s postulates — can only be “M” theory, which we now appreciate as “W”-theory (Wilson’s theory) turned upside-down. That varietal M-theory will resolve classically to general relativity, and resolve quantum-mechanically to the gauge field theories of the Standard Model, already is evident.

    And so we arrive at a hopeful future history — which is desperately speculative of course — encompassing the next 26 years, which answers Scott’s problem as follows.

    ————

    Answer 1: Unimaginable Advances (in 2040)  The incremental extension of PTIME varietal simulation methods to systems of ever-more-realistic, ever-more-general dynamics — extensions that presently are evolved for purely practical (even medical) reasons — will lead naturally to an appreciation (in 2040) of M-theory as the unique varietal dynamical system whose unravellings resolve properly to a unique (effective) quantum field theory that is causal, spatially separable, generally relativistic … and PTIME-simulable.

    Open Problems (in 2040)  In the year 2040 it will not be known — and will be vigorously debated! — whether PTIME-simulable varietal M-theory is the true theory of Nature.

    Engineering Capabilities (in 2040)  Technologies and enterprises that involve sensing and/or transport and/or control — which is pretty much all of them — will transformationally advance toward the performance bounds that are imposed by thermodynamics and standard quantum limits, as optimized by ubiquitous and efficient PTIME varietal dynamical simulations.

    Looking Backwards (from 2040)  “Yah, here in 2040 we’re starting-up Moon, Mars and even asteroid colonies, and we’ve got solid evidence for life beyond Earth, pretty good cures for cancer and schizophrenia (and even for obesity and depression), a carbon-neutral planetary energy economy, accelerating life extension, and we’ve made fair progress toward human-level AI.”

    What Ain’t We Got? (in 2040)  “What we haven’t got here in 2040 is jetpacks — which are impractically dangerous in any century — and we’ve got no quantum computers either.”

    Never Give Up! Never Surrender! (in 2040)  “Here in 2040, scientists are working harder-than-ever to build quantum computers, because the feasibility (or not) of quantum computation is still a crucial test of the century-old hypothesis that Nature’s state-space is a Hilbert space.”

    Conclusion  Best wishes for life-long speculations that are mutually respectful, good-natured, practically useful, and desperately fun, are extended to Scott, and to Gil, and to John Preskill, and to all readers of Shtetl Optimized.

  20. Alexander Vlasov Says:

    Scott #18: Just by Googling, I found the estimations in your comments that I remembered: 1/8 during the next 20 years and 1/2 during the next 100 years, on this blog in 2014 (“More ‘tweets’”). I think it is reasonable to set 15% for 25 years from such estimations; yet in 2012 you wrote about 1/2 during your lifetime, which I did not remember (“Whether or not God plays dice, I do”).
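
    Here is the trivial calculation, for concreteness (a straight-line interpolation between those two stated points; of course other interpolations are possible):

      # Straight-line interpolation between the two estimations as I read them:
      # P(QC within 20 years) ~ 1/8, P(QC within 100 years) ~ 1/2.
      def interpolate(years, p1=(20, 0.125), p2=(100, 0.5)):
          (t1, q1), (t2, q2) = p1, p2
          return q1 + (q2 - q1) * (years - t1) / (t2 - t1)

      print(round(interpolate(25), 3))  # 0.148, i.e. roughly 15%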

    And why should it need to be defensible at all, against whom, and why?

  21. Rahul Says:

    Scott #16:

    Infuriating really! 🙂

    First I go, hmmm he means *this*. Read it again and I’m convinced, he surely means *that*.

    Only blogger who might match your skill at this art is Tyler Cowen of Marginal Revolution.

  22. JollyJoker Says:

    One thing I’m pretty sure we will have in 2040 (unrelated to quantum computing) is lots of devices we use by talking to them, advanced enough that the average 2014 person would think of them as “people”. I don’t mean to imply we’d have human-level AI or anything near, but we’d have the kind of smart machines you can talk to that laymen think of when you say artificial intelligence.

  23. Juan Miguel Says:

    Scott,

    Great article! If I make it alive to 2040, I hope I will remember to look back at your predictions 🙂

    But no mention of quantum metrology? Remember, it’s already helping us do really cool stuff, like:

    Gravitational wave detection
    http://iopscience.iop.org/0264-9381/27/8/084006

    Magnetic sensing
    http://www.nature.com/nature/journal/v455/n7213/abs/nature07279.html

    To name a few!

  24. rrtucci Says:

    Rahul, my take-home from this blog post was that the Templeton Foundation is a tremendous fraud. How does paying for such a blog post from Scott help the causes of Science or Religion in any even mildly useful way? How does it address the half a dozen wars that are going on around the world right now (Syria, Iraq, Gaza, Ukraine…)? How does it address climate change and the extermination of species (e.g., rhinos, elephants, coral reefs)? How does this address our pressing need for new antibiotics? Clearly war is at the interface of science and religion (modern weapons made by scientists, and thou shalt not kill). Clearly God, if he exists, and common sense if he doesn’t, wants us to be good stewards of Nature and of our bodies. TEMPLETON FOUNDATION, YOU ARE SUCH A BANAL FRAUD

  25. Question Says:

    rrtucci@24,

    I know you must be very concerned about the fraudulent banality of the Templeton Foundation (and, I suppose by implication, about Scott’s pecuniary complicity as well), but rather than wasting precious time posting OUTRAGED comments, wouldn’t your time be better spent actually working to ameliorate the vast laundry list of horribles by which you apparently feel so overwhelmed? 😉

    John Sidles@19,

    Good to see you back, and from the length of your comment, it’s clear you’ve been storing things up for quite some time. 🙂

  26. Hanno Says:

    I was actually a bit surprised that you seem to see a real application for quantum key distribution. For a variety of practicality reasons I consider this highly unlikely. And there isn’t a real need, as even in the quantum computing age it seems likely that we’ll have alternative encryption algorithms to choose from.
    Another thing that made me want to disagree: I seriously hope that by 2040 browsers won’t have a padlock icon, because encrypted connections will be the default: they will either have removed support for unencrypted HTTP entirely or they will show a warning when a connection is unencrypted.

  27. fred Says:

    John #19
    “It’s not so easy to specify any technology that we have today, that is sufficiently exotic that (in Scott’s phrase) “no one could possibly have imagined it” back in 1988. So how likely is it that the next 26 years will witness “unimaginable” advances? ”

    The thing is, who cares?
    Certainly lots of things were “imagined” by some people in the 80s (e.g. just read cyberpunk scifi), but then:
    1) imagining things is easy, realizing them is hard. If the two were the same, “futurists” would all be billionaires and no entrepreneurs/inventors would be rich.
    2) the actual advances since the 80s are still pretty mindblowing even if theoretically someone could have predicted them.
    E.g. the fact that we’re walking around with instant access to the totality of humanity’s knowledge in our pockets.
    3) With most advances, from theory to practice there is usually enough time for everyone to “digest” it and take it for granted. It’s not like people woke up one morning in 1910 and suddenly you could book a plane ticket and fly around the planet in first class.
    The only exception to this was the atomic bomb… so much happened in the few decades spanning the birth of QM (certainly the timing of WW2 was also crucial).
    4) often progress isn’t just about engineering and physics but about reaching a wide audience. E.g. going from ENIAC to home computers was about thinking outside the box in terms of what people wanted/needed. That was the true revolution.

  28. Sid K Says:

    Regarding your question about whether we suffer from myopia because we are in the pre-quantum computing era:

    Robin Hanson once asked on his blog why our advances in (classical) algorithms are so well correlated with our advances in chip hardware. This is puzzling because algorithm design and hardware design seem to be generated by different kinds of processes—small groups of computer scientists in the case of algorithm design vs. large corporations using lots of manufacturing tech in the case of chip hardware.

    The most likely hypothesis to explain the correlation is that as we obtain improved hardware, we can now try new algorithm designs on it and identify better algorithms.

    Thus, we have reason to believe that as large-scale quantum computing hardware becomes more available and reliable, we will have improved ability and motivation to come up with better and better quantum algorithms. A major issue with quantum algorithm design right now is that we can only work with pen and paper and classical computers. We can’t code things up, run them, and see how they work. For example, it is hard to imagine how something similar in spirit to the simplex algorithm would be discovered by the quantum computing community now.

  29. Itai Says:

    rrtucci #24
    Is it a coincidence that you forgot to mention that Israel is in a state of war now, not only Gaza? Or is this what you get from your media?

    Anyway, about Scott’s post: I think we all have no clue how much, or whether at all, quantum computation will help humanity.
    Predicting the future for more than a few years is a fraud, and no one should seriously build anything on a 40-year prediction.
    I also think that most futurists are frauds, especially “Singularity” prophets.

    “prophecy has been taken from prophets and given to fools”
    Guess who said it, and when? 🙂

  30. rrtucci Says:

    Question, I want to save the rhinos and the elephants but I need the Templeton Foundation’s money. Here is what I would do:

    Create a sanctuary for the rhinos, like a giant zoo, surrounded by a perimeter city. Tell everyone in the city: I will pay you a good living if these animals are kept alive. I will give you a bonus if their number increases. I will reduce your pay every time a poacher gets one of them. In fact, your pay will be monotonic in the number of rhinos; if they all die, you lose all your income from me. Here are some Kalashnikovs and radars and SAMs to deal with the poachers.

  31. rrtucci Says:

    I would also flood the market with fake rhino horn powder at half the price.

  32. Scott Says:

    Sid K #28: I’m not sure if it’s true that advances in algorithms have paralleled advances in hardware that closely. For example, many fundamental problems had huge algorithmic improvements in the 60s and 70s, but extremely little since then (i.e., “most of the juice was sucked out”), even as hardware continued on an exponential trajectory. In any case, the simplex algorithm was invented in 1947, and if I’m not mistaken, Dantzig was basically just using pencil and paper. In that sense, it’s not so dissimilar from quantum algorithms like Shor’s and Grover’s, which were also discovered just fine using pencil and paper. On the other hand, I believe experimentation with actual computers was pretty important to the development of simulated annealing and other SAT-solving heuristics, and it seems likely to me that the same will be true for the adiabatic algorithm.

  33. Scott Says:

    Hanno #26: To be honest, I have no idea whether QKD will ever find a significant market or not. But at least the technology already exists (and “works,” over short enough distances), if a nontrivial market were ever to develop.

    It’s true that there are classical cryptosystems that are probably secure even against quantum computers. However, the trouble is that all such systems currently known are either
    (a) private-key, and hence cumbersome to use, or else
    (b) public-key systems like the lattice-based systems, which currently require key sizes and message sizes large enough to make them impractical for most applications.

    Of course, it’s possible that more practical quantum-secure public-key cryptosystems will eventually be discovered. Certainly lots of people have been thinking about that. But if no such systems are discovered, and if (on the other side) the technology of QKD were to improve so that it could handle much higher bit-rates and distances, then there really could be a good use case for QKD. And, in the hypothetical scenario of my story, it just so happens that both of those conditionals are true. 🙂

  34. Sid K Says:

    Scott #32:

    The simplex algorithm is probably not a very good example. The reason I invoked it was to illustrate that if we had simply done a worst-case analysis on that algorithm, then we might’ve thought that it wouldn’t be useful. Of course, you could say we wouldn’t have made that mistake because Dantzig was able to solve problems on pen and paper much faster than anyone else; but then he was using a classical computer—his own mind (along with pen and paper). I was trying to make the point that our current analytic techniques for quantum algorithms might miss a lot of in-practice speedups because we don’t have computers to test them on. And this effect need not be confined to adiabatic optimization.

    The claim that algorithmic progress has been on par with hardware development was made by Robin based, essentially, on small parts of this White House report (see for example, p.71). A money quote from that report:

    “In the field of numerical algorithms, however, the improvement can be quantified. Here is just one example, provided by Professor Martin Grötschel of Konrad-Zuse-Zentrum für Informationstechnik Berlin. Grötschel, an expert in optimization, observes that a benchmark production planning model solved using linear programming would have taken 82 years to solve in 1988, using the computers and the linear programming algorithms of the day. Fifteen years later – in 2003 – this same model could be solved in roughly 1 minute, an improvement by a factor of roughly 43 million. Of this, a factor of roughly 1,000 was due to increased processor speed, whereas a factor of roughly 43,000 was due to improvements in algorithms! Grötschel also cites an algorithmic improvement of roughly 30,000 for mixed integer programming between 1991 and 2008.”
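
    As a quick sanity check on those figures (rough arithmetic only; I’m assuming about 525,960 minutes per year and taking the quoted factors at face value):

      # Rough check of the quoted Groetschel numbers (illustrative arithmetic only).
      MINUTES_PER_YEAR = 365.25 * 24 * 60                # ~525,960

      overall = 82 * MINUTES_PER_YEAR / 1                # 82 years down to ~1 minute
      print(f"overall factor: ~{overall:.2e}")           # ~4.3e+07, i.e. ~43 million

      hardware, algorithms = 1_000, 43_000
      print(f"hardware * algorithms = {hardware * algorithms:,}")  # 43,000,000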

  35. Rahul Says:

    And, in the hypothetical scenario of my story, it just so happens that both of those conditionals are true

    Do you assume conditionals as true (in your article) based on perceived likelihood? Or storytelling convenience? Wishful thinking?

  36. Hanno Says:

    @Scott #33:
    If I think about what I find more likely – larger key sizes compensated by higher transmission speeds, or some complex physical installation in my computer that requires dedicated equipment and a particle transmission line – I find the larger key sizes more likely.
    With “large keys” for post-quantum algorithms we’re talking about keys in the range of hundreds of kilobytes, not more. That’s challenging, but doable.
    And there are even some post-quantum systems with reasonable key sizes and performance. The most popular and most advanced is NTRU. It is patented, which is why it’s rarely used, but by 2040 the patent will have expired 🙂

    Just to elaborate on why I think QKD is highly unlikely: I can’t think of any way to do it with wireless devices. One has to keep in mind that we’re quickly moving away from classic “desktop computers” to devices like tablets and mobile phones. While I don’t think the desktop computer will disappear, I can’t imagine that in 20 years we will use any encryption system that requires a dedicated cable.

    Another practical issue is that common QKD protocols require an additional authentication channel. And for that you’ll still need some kind of classical crypto.

  37. David Brown Says:

    In the Wikipedia entry
    https://en.wikipedia.org/wiki/Gil_Kalai
    are Kalai’s conjectures on quantum computing adequately presented? Are there “rebuttal” conjectures on quantum computing that give us an optimistic consensus on what quantum computing can achieve?

  38. Scott Says:

    Rahul #35:

      Do you assume conditionals as true (in your article) based on perceived likelihood? Or storytelling convenience? Wishful thinking?

    The question they asked was, “How might quantum information transform our future?” I decided to interpret that question as: “how do I imagine quantum information transforming our future, if the engineering aspects were to go along a very, very optimistic but still realistic trajectory, while theoretical understanding were to remain pretty much stuck in its 2014 state?”

    So, my treatment of QKD is perfectly consistent with that interpretation of the question. (And my joke about “for that very reason, I can’t tell you what it is” was pointing out why I had to hold theory constant: because it’s almost impossible to predict along which dimensions our theoretical understanding will change.)

  39. Scott Says:

    David Brown #37: Gil should speak to whether his conjectures as presented on Wikipedia are adequate representations of what he believes. The “rebuttal conjectures,” which imply quantum computing is possible, would basically just be:

    (a) Quantum mechanics is exactly true.

    (b) It is possible to prepare a collection of qubits in the all-0 state, then apply 1- and 2-qubit gates to qubits of one’s choice, then measure a qubit.

    (c) Nature has no “correlated noise source” that diabolically (and unavoidably) conspires to kill quantum computation.

    Gil himself doesn’t dispute (a), and (b) I’d say has already been experimentally demonstrated, so the dispute is about (c). And the position of most QC researchers would be that the burden is on the skeptics at this point to derive from the laws of physics what this diabolical noise source is, and show why it kills QC. It’s not enough just to postulate noise models out of thin air—especially if it’s not even clear whether the models you’ve postulated actually would kill QC!

  40. John Sidles Says:

    Juan Miguel remarks “No mention of quantum metrology? Remember, it’s already helping us do really cool stuff, like [list follows …]”

    To Juan Miguel’s fine list we can add quantum metrology triangles (QMTs). Outstanding reviews include:

    • “Metrology and microscopic picture of the integer quantum Hall effect” (Weis and von Klitzing, 2011)

    • “Quantum metrology triangle experiments: a status review” (Gallop, 2012)

    QMT naturality  QMT comprises a quantum dynamical model in which many elements of scalable error-corrected computing already are substantially fulfilled: topological protection, robustness against noise, invariant reproducibility, and unbounded precision.

    QMT centrality  QMT also provides a wonderful social model, in that its strategic implications are sufficiently important as to be regulated by one of the longest-standing and most successful intergovernmental treaties: the Conférence générale des poids et mesures (CGPM, 1875).

    QMT openness  QMT technologies — as codified in mises en pratique — are substantially free of trade-secrets, state-secrets, cryptolects, and paywalls. This openness is good news for students especially.

    QMT puzzles  Furthermore, Vladimir Arnold’s critique of thermodynamics applies particularly to QMTs

    Every mathematician knows that it is impossible to understand any elementary course in thermodynamics.

    Just because we use QMTs, doesn’t mean that we adequately understand them! QMTs promise unbounded precision in principle, but we are far from achieving unbounded precision in practice; the (many) open problems associated to QMT metrology are good news for young quantum researchers.

    Prediction 1  It is reasonable to foresee — because the CGPM is taking concrete steps to ensure it — that by 2040 the entirety of the world’s metrology system will be entirely based upon QMT technologies; this programme is the so-called “new” Système International d’Unités (new SI).

    Prediction 2  If we are really optimistic, then we can foresee that we will understand in 2040 — both individually and as a STEM community — how QMT devices work. Here “understand” is to be realized in the full mathematical and social senses of Bill Thurston’s great essay “On Proof and Progress in Mathematics” (1994). Perhaps we will even be able to simulate QMT device performance, both in principle and in practice, with sufficient accuracy to help guide experiments and optimize mises en pratique.

    Prediction 3  Fuller understanding of QMTs in coming decades will carry us far toward an appreciation of the feasibility (or not) of scalable quantum computing and the feasibility (or not) of simulating quantum systems with PTIME resources.

  41. fred Says:

    Scott #32

    “I’m not sure if it’s true that advances in algorithms have paralleled advances in hardware that closely.”

    That’s a really complex question.
    If you look closely at the frameworks that support many of the current revolutions, like being able to query the internet in seconds or do any sort of massive data processing, it’s clear that algorithms and hardware go hand in hand. And advances in hardware are all about cost as well: usually the cost of doing the processing locally (put RAM/CPU on the cellphone) vs. the cost of decentralizing the processing, which depends on network cost and the cost of running server farms (cloud).
    Like this tech article from Facebook about working on graphs with a trillion edges: https://www.facebook.com/notes/facebook-engineering/scaling-apache-giraph-to-a-trillion-edges/10151617006153920 (itself relying on services like MapReduce, which takes advantage of distributed processing).
    Most big advances in data processing are about combining the right building blocks (like some recent big improvement in image classification/matching from Google).

  42. fred Says:

    Scott #32
    Another area where algos and hardware have been clearly feeding off each other is computer graphics.
    In the 80s, in the case of off-line rendering, people could afford to come up with more “theoretical” algos to compute things like global illumination all in software (using ray tracing and whatnot).
    But in the field of real-time rendering (e.g. video games), once hardware acceleration appeared, very interesting algorithms were developed specifically for the hardware available at the time. For example, “Carmack’s reverse,” which was very specific to the hardware capabilities: http://en.wikipedia.org/wiki/Shadow_volume (there were even patent disputes about it).
    Now the GPUs are so flexible that the two domains have pretty much fused, and everything is coded in terms of very generic and more physically correct shaders.

  43. Gil Kalai Says:

    David (#37) Scott (#39 )

    A simple way to state my thesis is that the noise rate scales up for complicated quantum evolutions. (The “diabolic” correlations that Scott mentioned are indeed part of the picture, but the “first-order term” is simply the noise rate.)

    A few more remarks:

    (1) To a large extent (as far as I know), the assumption of a fixed error rate that can be pushed below the threshold allowing quantum fault-tolerance is also taken out of “thin air,” and not derived from some fundamental laws of physics.

    (2) The asymptotic behavior of models is very important, but ultimately, when we ask if computationally superior quantum computation can be demonstrated, the question is largely about constants. (Asymptotic considerations give us strong insights about the behavior of these constants. The point of view that every constant can be pushed arbitrarily close to zero by some appropriate engineering is not correct and not useful.)

    (3) You can divide my view into two parts:

    (A) Quantum computational speed-up requires quantum fault tolerance.

    (B) Quantum fault tolerance is not possible.

    A stronger statement is that the only mechanism for computation in nature (alas, only classical) is the “repetition mechanism,” which allows robust classical information to emerge.

  44. Gil Kalai Says:

    On a lighter note, let me mention that Video II of my Simons Institute videotaped lectures contains a short response to Scott’s $100,000 challenge (49:30-50:30). The twenty minutes of Video II from 40:00 or so give some pictures related to the quantum debate and describe some of its entertaining moments…

  45. Itai Says:

    Something that I think most people do not notice about quantum computation is that it presumes a one-to-one correspondence between physical QM and the mathematical model of QM.
    It is possible that every physical QM state can be described well in the mathematical model we all think is good.
    However, some mathematical states may not correspond to any physically possible state, thus making some sorts of theoretical QC computations impossible in practice.

    It is a bit like what some people told me here: that it is not possible to have a wave function with no variance or mean, or for which the strong law of large numbers does not hold.

    Another question is how Grover’s algorithm can work in practice: for it to be effective versus a classical computer, we need a huge quantum database, if I am not wrong.
    Something like a 1M-qubit database does not seem practical in the next 40 years.

  46. Vlad Says:

    I apologize if this question was asked and answered above – it’s a bit hard to read everything in detail.

    My question is simply this: having a universal quantum computer would allow us to simulate a lot of quantum processes that presumably are intractable with classical computers today, no?

    The article seems to say as much. I’d expect that alone to have huge and visible effects on technology, no? The article seems to really downplay this, though, and I couldn’t understand why.

  47. Scott Says:

    Vlad #46: My view is that, if QC has a major practical effect, then quantum simulation will probably be the main reason why. It’s certainly something of practical importance, and certainly something that QCs could plausibly help with. On the other hand, there are also two difficulties:

    – Very often, for the quantum systems that people actually care about, it’s possible to make simplifying assumptions that make them tractable to simulate classically—at least if you only want to know basic things like the ground state energy, as is often the case. (E.g., a lot can be treated using Quantum Monte Carlo and DMRG.) Furthermore, even in a world where QCs become practical, we have to expect that classical simulation algorithms will continue to improve, as they have for decades.

    – Recent studies by Matthias Troyer and others suggested that there will be very large polynomial overheads in simulating quantum chemistry using a qubit-based quantum computer. I’m sure the simulation algorithms will be improved (indeed, they already have been to some extent), but the ones we know right now look like they could require millions of qubits (and even more gate operations) before they’d start giving any useful information about chemistry.

    Despite the above points, I’m a guarded optimist about quantum simulation, and I tried to convey my optimism several times in the article. 🙂

  48. rrtucci Says:

    Vlad, Scott doesn’t discuss the use of QCs to do MCMC and learning the structure of Bayesian networks. Those would give only quadratic speedups over classical methods, but that is good enough for me.

  49. Ben Standeven Says:

    Itai #45: The way you use Grover’s algorithm “in practice” is to replace the database with a one-way function; the difficulties then are that while the classical machine needs to evaluate the function more often, it can also do so more quickly, and that there may be some clever method to invert the function classically without a brute-force search.
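
    To make the query-count gap concrete, here is a toy comparison (purely illustrative: it counts oracle queries only, and ignores constants, per-query evaluation cost, and the clever classical attacks mentioned above):

      import math

      # Toy oracle-query counts for inverting a black-box function on n-bit inputs:
      # classical exhaustive search vs. Grover's ~(pi/4)*sqrt(N) iterations.
      for n in (40, 64, 128):
          N = 2 ** n
          classical = N / 2                       # expected queries, brute force
          grover = (math.pi / 4) * math.sqrt(N)   # quadratic speedup in query count
          print(f"n = {n:3d}: classical ~ {classical:.2e}, Grover ~ {grover:.2e}")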

  50. Ben Standeven Says:

    Gil Kalai #43: I think that’s what he meant by “diabolic correlations”: The error rate for each individual piece of a complicated process might be very low, but when the pieces are combined the error rate is much higher.

    I don’t see why this would block error correction, though. Surely the error rate can only grow linearly with the length of the process (so that the total amount of error grows quadratically)? You could compensate by doing more error correction.
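
    To spell out the toy arithmetic I have in mind (an illustrative model only, assuming independent errors whose per-step rate is either constant or grows linearly with the length of the process):

      # Toy model of error accumulation over n steps (illustrative only).
      # Constant per-step rate p0       -> total error ~ n * p0        (linear).
      # Per-step rate growing as c * k  -> total error ~ c * n**2 / 2  (quadratic).
      def total_error(n, rate):
          # Probability of at least one error in n steps, given per-step rates rate(k).
          p_ok = 1.0
          for k in range(1, n + 1):
              p_ok *= 1.0 - rate(k)
          return 1.0 - p_ok

      n = 1000
      print(total_error(n, lambda k: 1e-4))      # ~0.095, roughly n * p0
      print(total_error(n, lambda k: 1e-7 * k))  # ~0.049, roughly c * n**2 / 2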

  51. John Sidles Says:

    Ben Standeven wonders “Surely the error rate can only grow linearly with the length of the process (so that the total amount of error grows quadratically)?”

    One answer is that long-range gauge theories (i.e., QED) generically exhibit superradiance.

    For reasons that are not presently understood — and which plausibly can be understood in many ways, including Gil Kalai’s way — Nature inflexibly requires that the Hamiltonian flows of all quantum computers (and BosonSamplers too) be described by long-range gauge field theories.

    The ubiquity of superradiant dynamics (broadly defined) has turned out to pose considerable challenges in the design of fault-tolerant universal quantum computers and scalable BosonSamplers … but are these “call to action” 20th century objectives infeasible in principle (given Nature’s gauge-field constraint)?

    Ingenious arguments can be advanced to support both sides of this question, and inarguably the conservative answer is: no one knows. And that is the best reason to try.

  52. Gil Kalai Says:

    Dear John, I certainly agree with your last paragraph, and surely find your proposed connection with gauge theories and superradiance very interesting.

  53. John Sidles Says:

    Gil Kalai says “the connection [of the Kalai postulates; per arXiv:quant-ph/0607021] with gauge theories and superradiance is very interesting.”

    Shtetl Optimized readers might also enjoy reflecting upon two (much-cited) articles by Howard Georgi: “Unparticle Physics” (arXiv:hep-ph/0703260) and “Another Odd Thing About Unparticle Physics” (arXiv:0704.2457 [hep-ph]), together with the more recent preprint by James LeBlanc and Adolfo Grushin titled “Unparticle mediated superconductivity” (arXiv:1407.8492 [cond-mat.str-el]).

    The LeBlanc/Grushin preprint in particular is interesting because (in their model) unparticle dynamics is emergent from a superradiant gauge field theory, namely the usual QED-interacting condensed-matter lattice.

    Plausibly it’s only a matter of time until the first articles on the theme of “quantum correction of unparticle/nonparticle dynamics” appear… these noise mechanisms quite plausibly (as it seems to me) can instantiate the Kalai Postulates.

    Moreover, unparticle physics — or perhaps “nonparticle” physics would be better — emerges geometrically in the context of varietal dynamics, as those elements of dynamics that reflect varietal geometry rather than Hamiltonian flow. That’s why we quantum simulationists take an interest in this literature.

    Three Possibilities 

    #1  The dynamics of Nature is strictly varietal, such that the unparticle/nonparticle dynamical sector strictly accords with the Kalai Postulates.

    #2  The dynamics of Nature is effectively varietal, such that the unparticle/nonparticle dynamical sector effectively accords with the Kalai Postulates.

    #3  The dynamics of Nature is strictly Hilbert, such that the Kalai Postulates fail, and students can continue to worship devoutly at the Church of the Larger Hilbert Space, without much apprehension of missing out on a trendy Kalai-respecting unparticle/nonparticle physics bandwagon.

    Conclusion  Without regard for which possibility is true, it seems to me that the literature of #1-2 is exceedingly interesting and vigorous nowadays, and moreover the implications for mathematics, science, and engineering are sufficiently unexplored, such that the #1-2 literature can be recommended in good conscience to STEM students generally.