Archive for the ‘Speaking Truth to Parallelism’ Category

Google, D-Wave, and the case of the factor-10^8 speedup for WHAT?

Wednesday, December 9th, 2015

Update (Dec. 16):  If you’re still following this, please check out an important comment by Alex Selby, the discoverer of Selby’s algorithm, which I discussed in the post.  Selby queries a few points in the Google paper: among other things, he disagrees with their explanation of why his classical algorithm works so well on D-Wave’s Chimera graph (and with their prediction that it should stop working for larger graphs), and he explains that Karmarkar-Karp is not the best known classical algorithm for the Number Partitioning problem.  He also questions whether simulated annealing is the benchmark against which everything should be compared (on the grounds that “everything else requires fine-tuning”), pointing out that SA itself typically requires lots of tuning to get it to work well.

Update (Dec. 11): MIT News now has a Q&A with me about the new Google paper. I’m really happy with how the Q&A turned out; people who had trouble understanding this blog post might find the Q&A easier. Thanks very much to Larry Hardesty for arranging it.

Meanwhile, I feel good that there seems to have been actual progress in the D-Wave debate! In previous rounds, I had disagreed vehemently with some of my MIT colleagues (like Ed Farhi and Peter Shor) about the best way to respond to D-Wave’s announcements. Today, though, at our weekly group meeting, there was almost no daylight between any of us. Partly, I’m sure, it’s that I’ve learned to express myself better; partly it’s that the “trigger” this time was a serious research paper by a group separate from D-Wave, rather than some trash-talking statement from Geordie Rose. But mostly it’s that, thanks to the Google group’s careful investigations, this time pretty much anyone who knows anything agrees about all the basic facts, as I laid them out in this blog post and in the Q&A. All that remains are some small differences in emotional attitude: e.g., how much of your time do you want to spend on a speculative, “dirty” approach to quantum computing (which is far ahead of everyone else in terms of engineering and systems integration, but which still shows no signs of an asymptotic speedup over the best classical algorithms, which is pretty unsurprising given theoretical expectations), at a time when the “clean” approaches might finally be closing in on the long-sought asymptotic quantum speedup?

Another Update: Daniel Lidar was nice enough to email me an important observation, and to give me permission to share it here.  Namely, the D-Wave 2X has a minimum annealing time of 20 microseconds.  Because of this, the observed running times for small instance sizes are artificially forced upward, making the growth rate in the machine’s running time look milder than it really is.  (Regular readers might remember that exactly the same issue plagued previous D-Wave vs. classical performance comparisons.)  Correcting this would certainly decrease the D-Wave 2X’s predicted speedup over simulated annealing, in extrapolations to larger numbers of qubits than have been tested so far (although Daniel doesn’t know by how much).  Daniel stresses that he’s not criticizing the Google paper, which explicitly mentions the minimum annealing time—just calling attention to something that deserves emphasis.


In retrospect, I should’ve been suspicious, when more than a year went by with no major D-Wave announcements that everyone wanted me to react to immediately. Could it really be that this debate was over—or not “over,” but where it always should’ve been, in the hands of experts who might disagree vehemently but are always careful to qualify speedup claims—thereby freeing up the erstwhile Chief D-Wave Skeptic for more “””rewarding””” projects, like charting a middle path through the Internet’s endless social justice wars?

Nope.

As many of you will have seen by now, on Monday a team at Google put out a major paper reporting new experiments on the D-Wave 2X machine.  (See also Hartmut Neven’s blog post about this.)  The predictable popularized version of the results—see for example here and here—is that the D-Wave 2X has now demonstrated a factor-of-100-million speedup over standard classical chips, thereby conclusively putting to rest the question of whether the device is “truly a quantum computer.”  In the comment section of one of my previous posts, D-Wave investor Steve Jurvetson even tried to erect a victory stele, by quoting Karl Popper about falsification.

In situations like this, the first thing I do is turn to Matthias Troyer, who’s arguably the planet’s most balanced, knowledgeable, trustworthy interpreter of quantum annealing experiments. Happily, in collaboration with Ilia Zintchenko and Ethan Brown, Matthias was generous enough to write a clear 3-page document putting the new results into context, and to give me permission to share it on this blog. From a purely scientific standpoint, my post could end right here, with a link to their document.

Then again, from a purely scientific standpoint, the post could’ve ended even earlier, with the link to the Google paper itself!  For this is not a case where the paper hides some crucial issue that the skeptics then need to ferret out.  On the contrary, the paper’s authors include some of the most careful people in the business, and the paper explains the caveats as clearly as one could ask.  In some sense, then, all that’s left for me or Matthias to do is to tell you what you’d learn if you read the paper!

So, OK, has the D-Wave 2X demonstrated a factor-10^8 speedup or not?  Here’s the shortest answer that I think is non-misleading:

Yes, there’s a factor-10^8 speedup that looks clearly asymptotic in nature, and there’s also a factor-10^8 speedup over Quantum Monte Carlo. But the asymptotic speedup is only if you compare against simulated annealing, while the speedup over Quantum Monte Carlo is only constant-factor, not asymptotic. And in any case, both speedups disappear if you compare against other classical algorithms, like that of Alex Selby. Also, the constant-factor speedup probably has less to do with quantum mechanics than with the fact that D-Wave built extremely specialized hardware, which was then compared against a classical chip on the problem of simulating the specialized hardware itself (i.e., on Ising spin minimization instances with the topology of D-Wave’s Chimera graph). Thus, while there’s been genuine, interesting progress, it remains uncertain whether D-Wave’s approach will lead to speedups over the best known classical algorithms, let alone to speedups over the best known classical algorithms that are also asymptotic or also of practical importance. Indeed, all of these points also remain uncertain for quantum annealing as a whole.
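(Since “simulated annealing” is the classical baseline that everything above is measured against, here is a minimal sketch of what that baseline actually is, run on a toy Ising spin-minimization instance. The couplings, fields, and cooling schedule below are made up purely for illustration, and this is nothing like the tuned implementations used in the actual comparisons.)

```python
import math
import random

def ising_energy(J, h, s):
    """E(s) = sum over coupled pairs J[i,j]*s_i*s_j + sum_i h[i]*s_i, with s_i in {-1,+1}."""
    return (sum(Jij * s[i] * s[j] for (i, j), Jij in J.items())
            + sum(hi * si for hi, si in zip(h, s)))

def simulated_annealing(J, h, n, sweeps=2000, T_hot=3.0, T_cold=0.05):
    """Single-spin-flip Metropolis updates while the temperature is lowered geometrically."""
    s = [random.choice([-1, 1]) for _ in range(n)]
    neighbors = {i: [] for i in range(n)}
    for (i, j), Jij in J.items():
        neighbors[i].append((j, Jij))
        neighbors[j].append((i, Jij))
    for sweep in range(sweeps):
        T = T_hot * (T_cold / T_hot) ** (sweep / (sweeps - 1))
        for i in range(n):
            field = h[i] + sum(Jij * s[j] for j, Jij in neighbors[i])
            dE = -2.0 * s[i] * field                 # energy change if spin i is flipped
            if dE <= 0 or random.random() < math.exp(-dE / T):
                s[i] = -s[i]
    return s, ising_energy(J, h, s)

# Toy instance: a ring of 8 ferromagnetically coupled spins plus one bias field.
n = 8
J = {(i, (i + 1) % n): -1.0 for i in range(n)}       # J = -1 rewards aligned neighbors
h = [0.0] * n
h[0] = 0.5                                           # pushes spin 0 (and hence the ring) toward -1
spins, energy = simulated_annealing(J, h, n)
print(spins, energy)                                 # should reach the ground state, energy -8.5
```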

To expand a bit, there are really three separate results in the Google paper:

  1. The authors create Chimera instances with tall, thin energy barriers blocking the way to the global minimum, by exploiting the 8-qubit “clusters” that play such a central role in the Chimera graph.  In line with a 2002 theoretical prediction by Farhi, Goldstone, and Gutmann (a prediction we’ve often discussed on this blog), they then find that on these special instances, quantum annealing reaches the global minimum exponentially faster than classical simulated annealing, and that the D-Wave machine realizes this advantage.  As far as I’m concerned, this completely nails down the case for computationally-relevant collective quantum tunneling in the D-Wave machine, at least within the 8-qubit clusters.  On the other hand, the authors point out that there are other classical algorithms, like that of Selby (building on Hamze and de Freitas), which group together the 8-bit clusters into 256-valued mega-variables, and thereby get rid of the energy barrier that kills simulated annealing.  These classical algorithms are found empirically to outperform the D-Wave machine.  The authors also match the D-Wave machine’s asymptotic performance (though not the leading constant) using Quantum Monte Carlo, which (despite its name) is a classical algorithm often used to find quantum-mechanical ground states.
  2. The authors make a case that the ability to tunnel past tall, thin energy barriers—i.e., the central advantage that quantum annealing has been shown to have over classical annealing—might be relevant to at least some real-world optimization problems.  They do this by studying a classic NP-hard problem called Number Partitioning, where you’re given a list of N positive integers, and your goal is to partition the integers into two subsets whose sums differ from each other by as little as possible (a small worked example follows this list).  Through numerical studies on classical computers, they find that quantum annealing (in the ideal case) and Quantum Monte Carlo should both outperform simulated annealing, by roughly equal amounts, on random instances of Number Partitioning.  Note that this part of the paper doesn’t involve any experiments on the D-Wave machine itself, so we don’t know whether calibration errors, encoding loss, etc. will kill the theoretical advantage over simulated annealing.  But even if not, this still wouldn’t yield a “true quantum speedup,” since (again) Quantum Monte Carlo is a perfectly-good classical algorithm, whose asymptotics match those of quantum annealing on these instances.
  3. Finally, on the special Chimera instances with the tall, thin energy barriers, the authors find that the D-Wave 2X reaches the global optimum about 10^8 times faster than Quantum Monte Carlo running on a single-core classical computer.  But, extremely interestingly, they also find that this speedup does not grow with problem size; instead it simply saturates at ~10^8.  In other words, this is a constant-factor speedup rather than an asymptotic one.  Now, obviously, solving a problem “only” 100 million times faster (rather than asymptotically faster) can still have practical value!  But it’s crucial to remember that this constant-factor speedup is only observed for the Chimera instances—or in essence, for “the problem of simulating the D-Wave machine itself”!  If you wanted to solve something of practical importance, you’d first need to embed it into the Chimera graph, and it remains unclear whether any of the constant-factor speedup would survive that embedding.  In any case, while the paper isn’t explicit about this, I gather that the constant-factor speedup disappears when one compares against (e.g.) the Selby algorithm, rather than against QMC.
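To make the Number Partitioning problem in point 2 concrete, here is a minimal sketch of the classical Karmarkar-Karp differencing heuristic (the algorithm mentioned in the Dec. 16 update at the top, which Alex Selby notes is not even the best classical method known). The instance is made up; the heuristic repeatedly commits the two largest remaining numbers to opposite sides and keeps only their difference, so the last number standing is the achieved gap between the two subset sums.

```python
import heapq

def karmarkar_karp(numbers):
    """Differencing heuristic for Number Partitioning.

    Returns the achieved |sum(side A) - sum(side B)|, which is not
    necessarily the optimal gap."""
    heap = [-x for x in numbers]          # max-heap via negation
    heapq.heapify(heap)
    while len(heap) > 1:
        a = -heapq.heappop(heap)          # largest remaining number
        b = -heapq.heappop(heap)          # second largest
        heapq.heappush(heap, -(a - b))    # commit a and b to opposite sides
    return -heap[0] if heap else 0

print(karmarkar_karp([8, 7, 6, 5, 4]))    # prints 2, though the true optimum is 0 ({8,7} vs {6,5,4})
```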

So then, what do I say to Steve Jurvetson?  I say—happily, not grudgingly!—that the new Google paper provides the clearest demonstration so far of a D-Wave device’s capabilities.  But then I remind him of all the worries the QC researchers had from the beginning about D-Wave’s whole approach: the absence of error-correction; the restriction to finite-temperature quantum annealing (moreover, using “stoquastic Hamiltonians”), for which we lack clear evidence for a quantum speedup; the rush for more qubits rather than better qubits.  And I say: not only do all these worries remain in force, they’ve been thrown into sharper relief than ever, now that many of the side issues have been dealt with.  The D-Wave 2X is a remarkable piece of engineering.  If it’s still not showing an asymptotic speedup over the best known classical algorithms—as the new Google paper clearly explains that it isn’t—then the reasons are not boring or trivial ones.  Rather, they seem related to fundamental design choices that D-Wave made over a decade ago.

The obvious question now is: can D-Wave improve its design, in order to get a speedup that’s asymptotic, and that holds against all classical algorithms (including QMC and Selby’s algorithm), and that survives the encoding of a “real-world” problem into the Chimera graph?  Well, maybe or maybe not.  The Google paper returns again and again to the subject of planned future improvements to the machine, and how they might clear the path to a “true” quantum speedup. Roughly speaking, if we rule out radical alterations to D-Wave’s approach, there are four main things one would want to try, to see if they helped:

  1. Lower temperatures (and thus, longer qubit lifetimes, and smaller spectral gaps that can be safely gotten across without jumping up to an excited state).
  2. Better calibration of the qubits and couplings (and thus, ability to encode a problem of interest, like the Number Partitioning problem mentioned earlier, to greater precision).
  3. The ability to apply “non-stoquastic” Hamiltonians.  (D-Wave’s existing machines are all limited to stoquastic Hamiltonians, defined as Hamiltonians all of whose off-diagonal entries are real and non-positive.  While stoquastic Hamiltonians are easier from an engineering standpoint, they’re also the easiest kind to simulate classically, using algorithms like QMC—so much so that there’s no consensus on whether it’s even theoretically possible to get a true quantum speedup using stoquastic quantum annealing.  This is a subject of active research; a tiny numerical illustration of the stoquastic condition follows this list.)
  4. Better connectivity among the qubits (thereby reducing the huge loss that comes from taking problems of practical interest, and encoding them in the Chimera graph).
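For point 3, here is a tiny numerical illustration of the stoquastic condition (my own example, not taken from the paper): the two-qubit transverse-field Ising Hamiltonian H = -J Z⊗Z - Γ(X⊗I + I⊗X), essentially the kind of Hamiltonian D-Wave’s machines anneal with, has every off-diagonal entry equal to either 0 or -Γ, hence real and non-positive whenever Γ ≥ 0. Adding a term like Y⊗Y, by contrast, generically introduces positive off-diagonal entries and breaks the condition.

```python
import numpy as np

I2 = np.eye(2)
X = np.array([[0.0, 1.0], [1.0, 0.0]])
Z = np.array([[1.0, 0.0], [0.0, -1.0]])

J, Gamma = 1.0, 0.7                      # made-up coupling and transverse-field strengths

# H = -J Z(x)Z - Gamma (X(x)I + I(x)X), written out in the computational basis
H = -J * np.kron(Z, Z) - Gamma * (np.kron(X, I2) + np.kron(I2, X))

off_diagonal = H[~np.eye(4, dtype=bool)]
print(H)
print("stoquastic (all off-diagonal entries real and <= 0)?", bool(np.all(off_diagonal <= 0)))
```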

(Note that “more qubits” is not on this list: if a “true quantum speedup” is possible at all with D-Wave’s approach, then the 1000+ qubits that they already have seem like more than enough to notice it.)

Anyway, these are all, of course, things D-Wave knows about and will be working on in the near future. As well they should! But to repeat: even if D-Wave makes all four of these improvements, we still have no idea whether they’ll see a true, asymptotic, Selby-resistant, encoding-resistant quantum speedup. We just can’t say for sure that they won’t see one.

In the meantime, while it’s sometimes easy to forget during blog-discussions, the field of experimental quantum computing is a proper superset of D-Wave, and things have gotten tremendously more exciting on many fronts within the last year or two.  In particular, the group of John Martinis at Google (Martinis is one of the coauthors of the Google paper) now has superconducting qubits with orders of magnitude better coherence times than D-Wave’s qubits, and has demonstrated rudimentary quantum error-correction on 9 of them.  They’re now talking about scaling up to ~40 super-high-quality qubits with controllable couplings—not in the remote future, but in, like, the next few years.  If and when they achieve that, I’m extremely optimistic that they’ll be able to show a clear quantum advantage for something (e.g., some BosonSampling-like sampling task), if not necessarily something of practical importance.  IBM Yorktown Heights, which I visited last week, is also working (with IARPA funding) on integrating superconducting qubits with many-microsecond coherence times.  Meanwhile, some of the top ion-trap groups, like Chris Monroe’s at the University of Maryland, are talking similarly big about what they expect to be able to do soon. The “academic approach” to QC—which one could summarize as “understand the qubits, control them, keep them alive, and only then try to scale them up”—is finally bearing some juicy fruit.

(At last week’s IBM conference, there was plenty of D-Wave discussion; how could there not be? But the physicists in attendance—I was almost the only computer scientist there—seemed much more interested in approaches that aim for longer-lasting qubits, fault-tolerance, and a clear asymptotic speedup.)

I still have no idea when and if we’ll have a practical, universal, fault-tolerant QC, capable of factoring 10,000-digit numbers and so on.  But it’s now looking like only a matter of years until Gil Kalai, and the other quantum computing skeptics, will be forced to admit they were wrong—which was always the main application I cared about anyway!

So yeah, it’s a heady time for QC, with many things coming together faster than I’d expected (then again, it was always my personal rule to err on the side of caution, and thereby avoid contributing to runaway spirals of hype).  As we stagger ahead into this new world of computing—bravely, coherently, hopefully non-stoquastically, possibly fault-tolerantly—my goal on this blog will remain what it’s been for a decade: not to prognosticate, not to pick winners, but merely to try to understand and explain what has and hasn’t already been shown.


Update (Dec. 10): Some readers might be interested in an economic analysis of the D-Wave speedup by commenter Carl Shulman.

Another Update: Since apparently some people didn’t understand this post, here are some comments from a Y-Combinator thread about the post that might be helpful:

(1) [T]he conclusion of the Google paper is that we have probable evidence that with enough qubits and a big enough problem it will be faster for a very specific problem compared to a non-optimal classical algorithm (we have ones that are for sure better).

This probably sounds like a somewhat useless result (quantum computer beats B-team classical algorithm), but it is in fact interesting because D-Wave’s computers are designed to perform quantum annealing and they are comparing it to simulated annealing (the somewhat analogous classical algorithm). However they only found evidence of a constant (i.e. one that 4000 qubits wouldn’t help with) speed up (though a large one) compared to a somewhat better algorithm (Quantum Monte Carlo, which is ironically not a quantum algorithm), and they still can’t beat an even better classical algorithm (Selby’s) at all, even in a way that won’t scale.

Scott’s central thesis is that although it is possible there could be a turning point past 2000 qubits where the D-Wave will beat our best classical alternative, none of the data collected so far suggests that. So it’s possible that a 4000 qubit D-Wave machine will exhibit this trend, but there is no evidence of it (yet) from examining a 2000 qubit machine. Scott’s central gripe with D-Wave’s approach is that they don’t have any even pie-in-the-sky theoretical reason to expect this to happen, and scaling up quantum computers without breaking the entire process is much harder than for classical computers so making them even bigger doesn’t seem like a solution.

(2) DWave machines are NOT gate quantum computers; they call their machine quantum annealing machines. It is not known the complexity class of problems that can be solved efficiently by quantum annealing machines, or if that class is equivalent to classical machines.

The result shows that the DWave machine is asymptotically faster than the Simulated Annealing algorithm (yay!), which suggests that it is executing the Quantum Annealing algorithm. However, the paper also explicitly states that this does not mean that the Dwave machine is exhibiting a ‘quantum speedup’. To do this, they would need to show it to outperform the best known classical algorithm, which as the paper acknowledges, it does not.

What the paper does seem to be showing is that the machine in question is actually fundamentally quantum in nature; it’s just not clear yet that that the type of quantum computer it is is an improvement over classical ones.

(3) [I]t isn’t called out in the linked blog since by now Scott probably considers it basic background information, but D-Wave only solves a very particular problem, and it is both not entirely clear that it has a superior solution to that problem than a classical algorithm can obtain and it is not clear that encoding real problems into that problem will not end up costing you all of the gains itself. Really pragmatic applications are still a ways into the future. It’s hard to imagine what they might be when we’re still so early in the process, and still have no good idea what either the practical or theoretical limits are.

(4) The popular perception of quantum computers as “doing things in parallel” is very misleading. A quantum computer lets you perform computation on a superposed state while maintaining that superposition. But that only helps if the structure of the problem lets you somehow “cancel out” the incorrect results leaving you with the single correct one. [There’s hope for the world! –SA]

D-Wave Open Thread

Wednesday, August 26th, 2015

A bunch of people have asked me to comment on D-Wave’s release of its 1000-qubit processor, and a paper by a group including Cathy McGeoch saying that the machine is 1 or 2 orders of magnitude faster (in annealing time, not wall-clock time) than simulated annealing running on a single-core classical computer.  It’s even been suggested that the “Scott-signal” has been shining brightly for a week above Quantham City, but that Scott-man has been too lazy and out-of-shape even to change into his tights.

Scientifically, it’s not clear if much has changed.  D-Wave now has a chip with twice as many qubits as the last one.  That chip continues to be pretty effective at finding its own low-energy states: indeed, depending on various details of definition, the machine can even find its own low-energy states “faster” than some implementation of simulated annealing running on a single-core chip.  Of course, it’s entirely possible that Matthias Troyer or Sergei Isakov or Troels Ronnow or someone like that will be able to find a better implementation of simulated annealing that closes even the modest gap—as happened the last time—but I’ll give the authors the benefit of the doubt that they put good-faith effort into optimizing the classical code.

More importantly, I’d say it remains unclear whether any of the machine’s performance on the instances tested here can be attributed to quantum tunneling effects.  In fact, the paper explicitly states (see page 3) that it’s not going to consider such questions, and I think the authors would agree that you could very well see results like theirs, even if what was going on was fundamentally classical annealing.  Also, of course, it’s still true that, if you wanted to solve a practical optimization problem, you’d first need to encode it into the Chimera graph, and that reduction entails a loss that could hand a decisive advantage to simulated annealing, even without the need to go to multiple cores.  (This is what I’ve described elsewhere as essentially all of these performance comparisons taking place on “the D-Wave machine’s home turf”: that is, on binary constraint satisfaction problems that have precisely the topology of D-Wave’s Chimera graph.)

But, I dunno, I’m just not feeling the urge to analyze this in more detail.  Part of the reason is that I think the press might be getting less hyper-excitable these days, thereby reducing the need for a Chief D-Wave Skeptic.  By this point, there may have been enough D-Wave announcements that papers realize they no longer need to cover each one like an extraterrestrial landing.  And there are more hats in the ring now, with John Martinis at Google seeking to build superconducting quantum annealing machines but with ~10,000x longer coherence times than D-Wave’s, and with IBM Research and some others also trying to scale superconducting QC.  The realization has set in, I think, that both D-Wave and the others are in this for the long haul, with D-Wave currently having lots of qubits, but with very short coherence times and unclear prospects for any quantum speedup, and Martinis and some others having qubits of far higher quality, but not yet able to couple enough of them.

The other issue is that, on my flight from Seoul back to Newark, I watched two recent kids’ movies that were almost defiant in their simple, unironic, 1950s-style messages of hope and optimism.  One was Disney’s new live-action Cinderella; the other was Brad Bird’s Tomorrowland.  And seeing these back-to-back filled me with such positivity and good will that, at least for these few hours, it’s hard to summon my usual crusty self.  I say, let’s invent the future together, and build flying cars and jetpacks in our garages!  Let a thousand creative ideas bloom for how to tackle climate change and the other crises facing civilization!  (Admittedly, mass-market flying cars and jetpacks are probably not a step forward on climate change … but, see, there’s that negativity coming back.)  And let another thousand ideas bloom for how to build scalable quantum computers—sure, including D-Wave’s!  Have courage and be kind!

So yeah, if readers would like to discuss the recent D-Wave paper further (especially those who know something about it), they’re more than welcome to do so in the comments section.  But I’ve been away from Dana and Lily for two weeks, and will endeavor to spend time with them rather than obsessively reloading the comments (let’s see if I succeed).

As a small token of my goodwill, I enclose two photos from my last visit to a D-Wave machine, which occurred when I met with some grad students in Waterloo this past spring.  As you can see, I even personally certified that the machine was operating as expected.  But more than that: surpassing all reasonable expectations for quantum AI, this model could actually converse intelligently, through a protruding head resembling that of IQC grad student Sarah Kaiser.

[Two photos: dwave, dwave2]

Memrefuting

Wednesday, February 11th, 2015

(in which I bring this blog back to the “safe, uncontroversial” territory of arguing with people who think they can solve NP-complete problems in polynomial time)

A few people have asked my opinion about “memcomputing”: a computing paradigm that’s being advertised, by its developers, as a way to solve NP-complete problems in polynomial time.  According to the paper Memcomputing NP-complete problems in polynomial time using polynomial resources and collective states, memcomputing “is based on the brain-like notion that one can process and store information within the same units (memprocessors) by means of their mutual interactions.”  The authors are explicit that, in their view, this idea allows the Subset Sum problem to be solved with polynomial resources, by exploring all 2^n possible subsets in parallel, and that this refutes the Extended Church-Turing Thesis.  They’ve actually built ‘memcomputers’ that solve small instances of Subset Sum, and they hope to scale them up, though they mention hardware limitations that have made doing so difficult—more about that later.

A bunch of people (on Hacker News, Reddit, and elsewhere) tried to explain the problems with the Subset Sum claim when the above preprint was posted to the arXiv last year.  However, an overlapping set of authors has now simply repeated the claim, unmodified, in a feature article in this month’s Scientific American.  Unfortunately the SciAm article is behind a paywall, but here’s the relevant passage:

Memcomputing really shows advantages when applied to one of the most difficult types of problems we know of in computer science: calculating all the properties of a large series of integers. This is the kind of challenge a computer faces when trying to decipher complex codes. For instance, give the computer 100 integers and then ask it to find at least one subset that adds up to zero. The computer would have to check all possible subsets and then sum all numbers in each subset. It would plow through each possible combination, one by one, which is an exponentially huge increase in processing time. If checking 10 integers took one second, 100 integers would take 10^27 seconds—millions of trillions of years … [in contrast,] a memcomputer can calculate all subsets and sums in just one step, in true parallel fashion, because it does not have to shuttle them back and forth to a processor (or several processors) in a series of sequential steps. The single-step approach would take just a single second.

For those tuning in from home: in the Subset Sum problem, we’re given n integers a_1,…,a_n, and we want to know whether there exists a subset of them that sums to a target integer k.  (To avoid trivializing the problem, either k should be nonzero or else the subset should be required to be nonempty—a detail the passage quoted above gets wrong, since the empty subset trivially sums to zero.)

To solve Subset Sum in polynomial time, the basic idea of “memcomputing” is to generate waves at frequencies that encode the sums of all possible subsets of the a_i’s, and then measure the resulting signal to see if there’s a frequency there that corresponds to k.
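Here is a toy classical emulation of that encoding, on a made-up five-integer instance. To be clear, this is only my illustration of the idea: the loop below itself enumerates all 2^n subsets (so it demonstrates nothing about speed), and a real device would produce a continuous analog signal rather than a sampled one. The thing to notice is the division by the number of subsets: each subset-sum frequency gets only an exponentially small share of the total signal, which is exactly the attenuation discussed below.

```python
import numpy as np
from itertools import combinations

a = [3, 5, 9, 14, 22]     # made-up Subset Sum instance
k = 17                    # target: 3 + 14 = 17, so a solution exists

# All nonempty subset sums (already exponential work for a classical emulation)
sums = [sum(c) for r in range(1, len(a) + 1) for c in combinations(a, r)]

fs, T = 256.0, 8.0                             # sample rate (Hz) and observation time (s)
t = np.arange(0, T, 1.0 / fs)
signal = sum(np.cos(2 * np.pi * s * t) for s in sums) / len(sums)

freqs = np.fft.rfftfreq(len(t), 1.0 / fs)
amps = 2.0 * np.abs(np.fft.rfft(signal)) / len(t)
print("number of subsets sharing the signal:", len(sums))
print("amplitude at frequency k =", k, ":", amps[np.argmin(np.abs(freqs - k))])
```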

Alas, there’s a clear scalability problem that seems to me to completely kill this proposal, as a practical way of solving NP-complete problems.  The problem is that the signal being measured is (in principle!) a sum of waves of exponentially many different frequencies.  By measuring this wave and taking a Fourier transform, one will not be able to make out the individual frequencies until one has monitored the signal for an exponential amount of time.  There are actually two issues here:

(1) Even if there were just a single frequency, measuring the frequency to exponential precision will take exponential time. This can be easily seen by contemplating even a moderately large n.  Thus, suppose n=1000.  Then we would need to measure a frequency to a precision of one part in ~2^1000. If the lowest frequency were (say) 1 Hz, then we would be trying to distinguish frequencies that differ by far less than the Planck scale.  But distinguishing frequencies that close would require so much energy that one would exceed the Schwarzschild limit and create a black hole!  The alternative is to make the lowest frequency slower than the lifetime of the universe, causing an exponential blowup in the amount of time we need to run the experiment.  (A rough quantitative version of this time blowup appears after point (2) below.)

(2) Because there are exponentially many frequencies, the amplitude of each frequency will get attenuated by an exponential amount.  Again, suppose that n=1000, so that we’re talking about attenuation by a ~2^-1000 factor.  Then given any amount of input radiation that could be gathered in the physical universe, the expected amount of amplitude on each frequency would correspond to a microscopically small fraction of 1 photon — so again, it would take exponential time for us to notice any radiation at all on the frequency that interests us (unless we used an insensitive test that was liable to confuse that frequency with many other nearby frequencies).
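To put the first point in rough numbers (a back-of-the-envelope bound of mine, not a calculation from the memcomputing paper): resolving two frequencies separated by Δf requires observing the signal for a time of order 1/Δf, and a precision of one part in 2^n relative to a lowest frequency f_min means Δf ≈ f_min/2^n, so

$$ T_{\mathrm{obs}} \;\gtrsim\; \frac{1}{\Delta f} \;\approx\; \frac{2^{n}}{f_{\min}} \;\approx\; 2^{1000}\ \mathrm{s} \;\approx\; 10^{301}\ \mathrm{s} \qquad (n = 1000,\ f_{\min} = 1\ \mathrm{Hz}), $$

compared to an age of the universe of roughly 4×10^17 seconds.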

What do the authors have to say about these issues?  Here are the key passages from the above-mentioned paper:

all frequencies involved in the collective state (1) are dampened by the factor 2^-n.  In the case of the ideal machine, i.e., a noiseless machine, this would not represent an issue because no information is lost.  On the contrary, when noise is accounted for, the exponential factor represents the hardest limitation of the experimentally fabricated machine, which we reiterate is a technological limit for this particular realization of a memcomputing machine but not for all of them …

In conclusion we have demonstrated experimentally a deterministic memcomputing machine that is able to solve an NP-complete problem in polynomial time (actually in one step) using only polynomial resources.  The actual machine we built clearly suffers from technological limitations due to unavoidable noise that impair [sic] the scalability.  This issue can, however, be overcome in other UMMs [universal memcomputing machines] using other ways to encode such information.

The trouble is that no other way to encode such information is ever mentioned.  And that’s not an accident: as explained above, when n becomes even moderately large, this is no longer a hardware issue; it’s a fundamental physics issue.

It’s important to realize that the idea of solving NP-complete problems in polynomial time using an analog device is far from new: computer scientists discussed such ideas extensively in the 1960s and 1970s.  Indeed, the whole point of my NP-complete Problems and Physical Reality paper was to survey the history of such attempts, and (hopefully!) to serve as a prophylactic against people making more such attempts without understanding the history.  For computer scientists ultimately came to realize that all proposals along these lines simply “smuggle the exponentiality” somewhere that isn’t being explicitly considered, exactly like all proposals for perpetual-motion machines smuggle the entropy increase somewhere that isn’t being explicitly considered.  The problem isn’t a practical one; it’s one of principle.  And I find it unfortunate that the recent memcomputing papers show no awareness of this story.

(Incidentally, quantum computing is interesting precisely because, out of all “post-Extended-Church-Turing” computing proposals, it’s the only one for which we can’t articulate a clear physical reason why it won’t scale, analogous to the reasons given above for memcomputing.  With quantum computing the tables are turned, with the skeptics forced to handwave about present-day practicalities, while the proponents wield the sharp steel of accepted physical law.  But as readers of this blog well know, quantum computing doesn’t seem to promise the polynomial-time solution of NP-complete problems, only of more specialized problems.)

Quantum Machine Learning Algorithms: Read the Fine Print

Monday, February 2nd, 2015

So, I’ve written a 4-page essay of that title, which examines the recent spate of quantum algorithms for clustering, classification, support vector machines, and other “Big Data” problems that grew out of a 2008 breakthrough on solving linear systems by Harrow, Hassidim, and Lloyd, as well as the challenges in applying these algorithms to get genuine exponential speedups over the best classical algorithms.  An edited version of the essay will be published as a Commentary in Nature Physics.  Thanks so much to Iulia Georgescu at Nature for suggesting that I write this.

Update (April 4, 2015): The piece has now been published.

Der Quantencomputer

Friday, November 14th, 2014

Those of you who read German (I don’t) might enjoy a joint interview of me and Seth Lloyd about quantum computing, which was conducted in Seth’s office by the journalist Christian Meier, and published in the Swiss newspaper Neue Zürcher Zeitung.  Even if you don’t read German, you can just feed the interview into Google Translate, like I did.  While the interview covers ground that will be forehead-bangingly familiar to regular readers of this blog, I’m happy with how it turned out; even the slightly-garbled Google Translate output is much better than most quantum computing articles in the English-language press.  (And while Christian hoped to provoke spirited debate between me and Seth by interviewing us together, we surprised ourselves by finding very little that we actually disagreed about.)  I noticed only one error, when I’m quoted talking about “the discovery of the transistor in the 1960s.”  I might have said something about the widespread commercialization of transistors (and integrated circuits) in the 1960s, but I know full well that the transistor was invented at Bell Labs in 1947.

Speaking Truth to Parallelism at Cornell

Friday, October 3rd, 2014

This week I was at my alma mater, Cornell, to give a talk at the 50th anniversary celebration of its computer science department.  You can watch the streaming video here; my talk runs from roughly 1:17:30 to 1:56 (though if you’ve seen other complexity/physics/humor shows by me, this one is pretty similar, except for the riff about Cornell at the beginning).

The other two things in that video—a talk by Tom Henzinger about IST Austria, a bold new basic research institute that he leads, closely modeled after the Weizmann Institute in Israel; and a discussion panel about the future of programming languages—are also really interesting and worth watching.  There was lots of other good stuff at this workshop, including a talk about Google Glass and its applications to photography (by, not surprisingly, a guy wearing a Google Glass—Marc Levoy); a panel discussion with three Turing Award winners, Juris Hartmanis, John Hopcroft, and Ed Clarke, about the early days of Cornell’s CS department; a talk by Amit Singhal, Google’s director of search; a talk about differential privacy by Cynthia Dwork, one of the leading researchers at the recently-closed Microsoft SVC lab (with a poignant and emotional ending); and a talk by my own lab director at MIT, Daniela Rus, about her research in robotics.

Along with the 50th anniversary celebration, Bill Gates was also on campus to dedicate Bill and Melinda Gates Hall, the new home of Cornell’s CS department.  Click here for streaming video of a Q&A that Gates did with Cornell students, where I thought he acquitted himself quite well, saying many sensible things about education, the developing world, etc. that other smart people could also say, but that have extra gravitas coming from him.  Gates has also become extremely effective at wrapping barbs of fact inside a soft mesh of politically-unthreatening platitudes—but listen carefully and you’ll hear the barbs.  The amount of pomp and preparation around Gates’s visit reminded me of when President Obama visited MIT, befitting the two men’s approximately equal power.  (Obama has nuclear weapons, but then again, he also has Congress.)

And no, I didn’t get to meet Gates or shake his hand, though I did get to stand about ten feet from him at the Gates Hall dedication.  (He apparently spent most of his time at Cornell meeting with plant breeders, and other people doing things relevant to the Gates Foundation’s interests.)

Thanks so much to Bobby and Jon Kleinberg, and everyone else who invited me to this fantastic event and helped make it happen.  May Cornell’s CS department have a great next 50 years.

One last remark before I close this post.  Several readers have expressed disapproval and befuddlement over the proposed title of my next book, “Speaking Truth to Parallelism.”  In the words of commenter TonyK:

That has got to be the worst title in the history of publishing! “Speaking Truth to Parallelism”? It doesn’t even make sense! I count myself as one of your fans, Scott, but you’re going to have to do better than that if you want anybody else to buy your book. I know you can do better — witness “Quantum Computing Since Democritus”.

However, my experiences at Cornell this week helped to convince me that, not only does “Speaking Truth to Parallelism” make perfect sense, it’s an activity that’s needed now more than ever.  What it means, of course, is fighting a certain naïve, long-ago-debunked view of quantum computers—namely, that they would achieve exponential speedups by simply “trying every possible answer in parallel”—that’s become so entrenched in the minds of many journalists, laypeople, and even scientists from other fields that it feels like nothing you say can possibly dislodge it.  The words out of your mouth will literally be ignored, misheard, or even contorted to the opposite of what they mean, if that’s what it takes to preserve the listener’s misconception about quantum computers being able to solve NP-hard optimization problems by sheer magic.  (Much like in the Simpsons-visit-Australia episode, where Marge’s request for “coffee” is misheard over and over as “beer.”)  You probably think I’m exaggerating, and I’d agree with you—if I hadn’t experienced this phenomenon hundreds of times over the last decade.

So, to take one example: after my talk at Cornell, an audience member came up to me to say that it was a wonderful talk, but that what he really wanted to know was whether I thought quantum computers could solve problems in the “NP space” in linear time, by trying all the possible solutions at once.  He didn’t seem to realize that I’d spent the entire previous half hour answering that exact question, explaining why the answer was “no.”  Coincidentally, this week I also got an email from a longtime reader of this blog, saying that he read and loved Quantum Computing Since Democritus, and wanted my feedback on a popular article he’d written about quantum computing.  What was the gist of the article?  You guessed it: “quantum computing = generic exponential speedups for optimization, machine learning, and Big Data problems, by trying all the possible answers at once.”

These people’s enthusiasm for quantum computing tends to be so genuine, so sincere, that I find myself unable to blame them—even when they’ve done the equivalent of going up to Richard Dawkins and thanking him for having taught them that evolution works for the good of the entire species, just as its wise Designer intended.  I do blame the media and other careless or unscrupulous parties for misleading people about quantum computing, but most of all I blame myself, for not making my explanations clear enough.  In the end, then, meeting the “NP space” folks only makes me want to redouble my efforts to Speak Truth to Parallelism: eventually, I feel, the nerd world will get this point.


Update (Oct. 4): I had regarded this (perhaps wrongly) as too obvious to state, but particularly for non-native English speakers, I’d better clarify: “speaking truth to parallelism” is a deliberate pun on the left-wing protester phrase “speaking truth to power.”  So whatever linguistic oddness there is in my phrase, I’d say it simply inherits from the original.

Another Update (Oct. 7): See this comment for my short summary of what’s known about the actual technical question (can quantum computers solve NP-complete problems in polynomial time, or not?).

Another Update (Oct. 8): Many commenters wrote to point out that the video of my talk at Cornell is now password-protected, and no longer publicly available.  I wrote to my contacts at Cornell to ask about this, and they said they’re planning to release lightly-edited versions of the videos soon, but will look into the matter in the meantime.

Speaking Truth to Parallelism: The Book

Monday, September 22nd, 2014

A few months ago, I signed a contract with MIT Press to publish a new book: an edited anthology of selected posts from this blog, along with all-new updates and commentary.  The book’s tentative title (open to better suggestions) is Speaking Truth to Parallelism: Dispatches from the Frontier of Quantum Computing Theory.  The new book should be more broadly accessible than Quantum Computing Since Democritus, although still far from your typical pop-science book.  My goal is to have STTP out by next fall, to coincide with Shtetl-Optimized‘s tenth anniversary.

If you’ve been a regular reader, then this book is my way of thanking you for … oops, that doesn’t sound right.  If it were a gift, I should give it away for free, shouldn’t I?  So let me rephrase: buying this reasonably-priced book can be your way of thanking me, if you’ve enjoyed my blog all these years.  But it will also (I hope) be a value-added proposition: not only will you be able to put the book on your coffee table to impress an extremely nerdy subset of your friends, you’ll also get “exclusive content” unavailable on the blog.

To be clear, the posts that make it into the book will be ruthlessly selected: nothing that’s pure procrastination, politics, current events, venting, or travelogue, only the choice fillets that could plausibly be claimed to advance the public understanding of science.  Even for those, I’ll add additional background material, and take out digs unworthy of a book (making exceptions for anything that really cracks me up on a second reading).

If I had to pick a unifying theme for the book, I’d sigh and then say: it’s about a certain attitude toward the so-called “deepest questions,” like the nature of quantum mechanics or the ultimate limits of computation or the mind/body problem or the objectivity of mathematics or whether our universe is a computer simulation.  It’s an attitude that I wish more popular articles managed to get across, and at any rate, that people ought to adopt when reading those articles.  The attitude combines an openness to extraordinary claims, with an unceasing demand for clarity about the nature of those claims, and an impatience whenever that demand is met with evasion, obfuscation, or a “let’s not get into technicalities right now.”  It’s an attitude that constantly asks questions like:

“OK, so what can you actually do that’s different?”
“Why doesn’t that produce an absurd result when applied to simple cases?”
“Why isn’t that just a fancy way of saying what I could’ve said in simpler language?”
“Why couldn’t you have achieved the same thing without your ‘magic ingredient’?”
“So what’s your alternative account for how that happens?”
“Why isn’t that obvious?”
“What’s really at stake here?”
“What’s the catch?”

It’s an attitude that accepts the possibility that such questions might have satisfying answers—in which case, a change in worldview will be in order.  But not before answers are offered, openly debated, and understood by the community of interested people.

Of all the phrases I use on this blog, I felt “Speaking Truth to Parallelism” best captured the attitude in question.  I coined the phrase back in 2007, when D-Wave’s claims to be solving Sudoku puzzles with a quantum computer unleashed a tsunami of journalism about QCs—what they are, how they would work, what they could do—that (in my opinion) perfectly illustrated how not to approach a metaphysically-confusing new technology.  Having said that, the endless debate around D-Wave won’t by any means be the focus of this book: it will surface, of course, but only when it helps to illustrate some broader point.

In planning this book, the trickiest issue was what to do with comments.  Ultimately, I decided that the comments make Shtetl-Optimized what it is—so for each post I include, I’ll include a brief selection of the most interesting comments, together with my responses to them.  My policy will be this: by default, I’ll consider any comments on this blog to be fair game for quoting in the book, in whole or in part, and attributed to whatever handle the commenter used.  However, if you’d like to “opt out” of having your comments quoted, I now offer you a three-month window in which to do so: just email me, or leave a comment (!) on this thread.  You can also request that certain specific comments of yours not be quoted, or that your handle be removed from your comments, or your full name added to them—whatever you want.

Update (9/24): After hearing from several of you, I’ve decided on the following modified policy.  In all cases where I have an email address, I will contact the commenters about any of their comments that I’m thinking of using, to request explicit permission to use them.  In the hopefully-rare cases where I can’t reach a given commenter, but where their comment raised what seems like a crucial point requiring a response in the book, I might quote from the comment anyway—but in those cases, I’ll be careful not to reproduce very long passages, in a way that might run afoul of the fair use exception.

Raise a martini glass for Google and Martinis!

Saturday, September 6th, 2014

We’ve already been discussing this in the comments section of my previous post, but a few people emailed me to ask when I’d devote a separate blog post to the news.

OK, so for those who haven’t yet heard: this week Google’s Quantum AI Lab announced that it’s teaming up with John Martinis, of the University of California, Santa Barbara, to accelerate the Martinis group‘s already-amazing efforts in superconducting quantum computing.  (See here for the MIT Tech‘s article, here for Wired‘s, and here for the WSJ‘s.)  Besides building some of the best (if not the best) superconducting qubits in the world, Martinis, along with Matthias Troyer, was also one of the coauthors of two important papers that found no evidence for any speedup in the D-Wave machines.  (However, in addition to working with the Martinis group, Google says it will also continue its partnership with D-Wave, in an apparent effort to keep reality more soap-operatically interesting than any hypothetical scenario one could make up on a blog.)

I have the great honor of knowing John Martinis, even once sharing the stage with him at a “Physics Cafe” in Aspen.  Like everyone else in our field, I profoundly admire the accomplishments of his group: they’ve achieved coherence times in the tens of microseconds, demonstrated some of the building blocks of quantum error-correction, and gotten tantalizingly close to the fault-tolerance threshold for the surface code.  (When, in D-Wave threads, people have challenged me: “OK Scott, so then which experimental quantum computing groups should be supported more?,” my answer has always been some variant of: “groups like John Martinis’s.”)

So I’m excited about this partnership, and I wish it the very best.

But I know people will ask: apart from the support and well-wishes, do I have any predictions?  Alright, here’s one.  I predict that, regardless of what happens, commenters here will somehow make it out that I was wrong.  So for example, if the Martinis group, supported by Google, ultimately succeeds in building a useful, scalable quantum computer—by emphasizing error-correction, long coherence times (measured in the conventional way), “gate-model” quantum algorithms, universality, and all the other things that D-Wave founder Geordie Rose has pooh-poohed from the beginning—commenters will claim that still most of the credit belongs to D-Wave, for whetting Google’s appetite, and for getting it involved in superconducting QC in the first place.  (The unstated implication being that, even if there were little or no evidence that D-Wave’s approach would ever lead to a genuine speedup, we skeptics still would’ve been wrong to state that truth in public.)  Conversely, if this venture doesn’t live up to the initial hopes, commenters will claim that that just proves Google’s mistake: rather than “selling out to appease the ivory-tower skeptics,” they should’ve doubled down on D-Wave.  Even if something completely different happens—let’s say, Google, UCSB, and D-Wave jointly abandon their quantum computing ambitions, and instead partner with ISIS to establish the world’s first “Qualiphate,” ruling with a niobium fist over California and parts of Oregon—I would’ve been wrong for having failed to foresee that.  (Even if I did sort of foresee it in the last sentence…)

Yet, while I’ll never live to see the blog-commentariat acknowledge the fundamental reasonableness of my views, I might live to see scalable quantum computers become a reality, and that would surely be some consolation.  For that reason, even if for no others, I once again wish the Martinis group and Google’s Quantum AI Lab the best in their new partnership.


Unrelated Announcement: Check out a lovely (very basic) introductory video on quantum computing and information, narrated by John Preskill and Spiros Michalakis, and illustrated by Jorge Cham of PhD Comics.

“How Might Quantum Information Transform Our Future?”

Tuesday, July 22nd, 2014

So, the Templeton Foundation invited me to write a 1500-word essay on the above question.  It’s like a blog post, except they pay me to do it!  My essay is now live, here.  I hope you enjoy my attempt at techno-futurist prose.  You can comment on the essay either here or over at Templeton’s site.  Thanks very much to Ansley Roan for commissioning the piece.

Waiting for BQP Fever

Tuesday, April 1st, 2014

Update (April 5): By now, three or four people have written in asking for my reaction to the preprint “Computational solution to quantum foundational problems” by Arkady Bolotin.  (See here for the inevitable Slashdot discussion, entitled “P vs. NP Problem Linked to the Quantum Nature of the Universe.”)  It gives me no pleasure to respond to this sort of thing—it would be far better to let papers this gobsmackingly uninformed about the relevant issues fade away in quiet obscurity—but since that no longer seems to be possible in the age of social media, my brief response is here.


(note: sorry, no April Fools post, just a post that happens to have gone up on April Fools)

This weekend, Dana and I celebrated our third anniversary by going out to your typical sappy romantic movie: Particle Fever, a documentary about the Large Hadron Collider.  As it turns out, the movie was spectacularly good; anyone who reads this blog should go see it.  Or, to offer even higher praise:

If watching Particle Fever doesn’t cause you to feel in your bones the value of fundamental science—the thrill of discovery, unmotivated by any application—then you are not truly human.  You are a barnyard animal who happens to walk on its hind legs.

Indeed, I regard Particle Fever as one of the finest advertisements for science itself ever created.  It’s effective precisely because it doesn’t try to tell you why science is important (except for one scene, where an economist asks a physicist after a public talk about the “return on investment” of the LHC, and is given the standard correct answer, about “what was the return on investment of radio waves when they were first discovered?”).  Instead, the movie simply shows you the lives of particle physicists, of people who take for granted the urgency of knowing the truth about the basic constituents of reality.  And in showing you the scientists’ quest, it makes you feel as they feel.  Incidentally, the movie also shows footage of Congressmen ridiculing the uselessness of the Superconducting Supercollider, during the debates that led to the SSC’s cancellation.  So, gently, implicitly, you’re invited to choose: whose side are you on?

I do have a few, not quite criticisms of the movie, but points that any viewer should bear in mind while watching it.

First, it’s important not to come away with the impression that Particle Fever shows “what science is usually like.”  Sure, there are plenty of scenes that any scientist would find familiar: sleep-deprived postdocs; boisterous theorists correcting each other’s statements over Chinese food; a harried lab manager walking to the office oblivious to traffic.  On the other hand, the decades-long quest to find the Higgs boson, the agonizing drought of new data before the one big money shot, the need for an entire field to coalesce around a single machine, the whole careers hitched to specific speculative scenarios that this one machine could favor or disfavor—all of that is a profoundly abnormal situation in the history of science.  Particle physics didn’t used to be that way, and other parts of science are not that way today.  Of course, the fact that particle physics became that way makes it unusually suited for a suspenseful movie—a fact that the creators of Particle Fever understood perfectly and exploited to the hilt.

Second, the movie frames the importance of the Higgs search as follows: if the Higgs boson turned out to be relatively light, like 115 GeV, then that would favor supersymmetry, and hence an “elegant, orderly universe.”  If, on the other hand, the Higgs turned out to be relatively heavy, like 140 GeV, then that would favor anthropic multiverse scenarios (and hence a “messy, random universe”).  So the fact that the Higgs ended up being 125 GeV means the universe is coyly refusing to tell us whether it’s orderly or random, and more research is needed.

In my view, it’s entirely appropriate for a movie like this one to relate its subject matter to big, metaphysical questions, to the kinds of questions anyone can get curious about (in contrast to, say, “what is the mechanism of electroweak symmetry breaking?”) and that the scientists themselves talk about anyway.  But caution is needed here.  My lay understanding, which might be wrong, is as follows: while it’s true that a lighter Higgs would tend to favor supersymmetric models, the only way to argue that a heavier Higgs would “favor the multiverse,” is if you believe that a multiverse is automatically favored by a lack of better explanations.  More broadly, I wish the film had made clearer that the explanation for (some) apparent “fine-tunings” in the Standard Model might be neither supersymmetry, nor the multiverse, nor “it’s just an inexplicable accident,” but simply some other explanation that no one has thought of yet, but that would emerge from a better understanding of quantum field theory.  As one example, on reading up on the subject after watching the film, I was surprised to learn that a very conservative-sounding idea—that of “asymptotically safe gravity”—was used in 2009 to predict the Higgs mass right on the nose, at 126.3 ± 2.2 GeV.  Of course, it’s possible that this was just a lucky guess (there were, after all, lots of Higgs mass predictions).  But as an outsider, I’d love to understand why possibilities like this don’t seem to get discussed more (there might, of course, be perfectly good reasons that I don’t know).

Third, for understandable dramatic reasons, the movie focuses almost entirely on the “younger generation,” from postdocs working on the ATLAS and CMS detectors, to theorists like Nima Arkani-Hamed who are excited about the LHC because of its ability to test scenarios like supersymmetry.  From the movie’s perspective, the creation of the Standard Model itself, in the 1960s and 70s, might as well be ancient history.  Indeed, when Peter Higgs finally appears near the end of the film, it’s as if Isaac Newton has walked onstage.  At several points, I found myself wishing that some of the original architects of the Standard Model, like Steven Weinberg or Sheldon Glashow, had been interviewed to provide their perspectives.  After all, their model is really the one that’s been vindicated at the LHC, not (so far) any of the newer ideas like supersymmetry or large extra dimensions.

OK, but let me come to the main point of this post.  I confess that my overwhelming emotion on watching Particle Fever was one of regret—regret that my own field, quantum computing, has never managed to make the case for itself the way particle physics and cosmology have, in terms of the human urge to explore the unknown.

See, from my perspective, there’s a lot to envy about the high-energy physicists.  Most importantly, they don’t perceive any need to justify what they do in terms of practical applications.  Sure, they happily point to “spinoffs,” like the fact that the Web was invented at CERN.  But any time they try to justify what they do, the unstated message is that if you don’t see the inherent value of understanding the universe, then the problem lies with you.

Now, no marketing consultant would ever in a trillion years endorse such an out-of-touch, elitist sales pitch.  But the remarkable fact is that the message has more-or-less worked.  While the cancellation of the SSC was a setback, the high-energy physicists did succeed in persuading the world to pony up the $11 billion needed to build the LHC, and to gain the information that the mass of the Higgs boson is about 125 GeV.

Now contrast that with quantum computing.  To hear the media tell it, a quantum computer would be a powerful new gizmo, sort of like existing computers except faster.  (Why would it be faster?  Something to do with trying both 0 and 1 at the same time.)  The reasons to build quantum computers are things that could make any buzzword-spouting dullard nod in recognition: cracking uncrackable encryption, finding bugs in aviation software, sifting through massive data sets, maybe even curing cancer, predicting the weather, or finding aliens.  And all of this could be yours in a few short years—or some say it’s even commercially available today.  So, if you check back in a few years and it’s still not on store shelves, probably it went the way of flying cars or moving sidewalks: another technological marvel that just failed to materialize for some reason.

Foolishly, shortsightedly, many academics in quantum computing have played along with this stunted vision of their field—because saying this sort of thing is the easiest way to get funding, because everyone else says the same stuff, and because after you’ve repeated something on enough grant applications you start to believe it yourself.  All in all, then, it’s just easier to go along with the “gizmo vision” of quantum computing than to ask pointed questions like:

What happens when it turns out that some of the most-hyped applications of quantum computers (e.g., optimization, machine learning, and Big Data) were based on wildly inflated hopes—that there simply isn’t much quantum speedup to be had for typical problems of that kind, that yes, quantum algorithms exist, but they aren’t much faster than the best classical randomized algorithms?  What happens when it turns out that the real applications of quantum computing—like breaking RSA and simulating quantum systems—are nice, but not important enough by themselves to justify the cost?  (E.g., when the imminent risk of a quantum computer simply causes people to switch from RSA to other cryptographic codes?  Or when the large polynomial overheads of quantum simulation algorithms limit their usefulness?)  Finally, what happens when it turns out that the promises of useful quantum computers in 5-10 years were wildly unrealistic?
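To make that first worry a bit more concrete, here’s a toy calculation of my own—not anything from the movie, and not a real benchmark.  Grover’s algorithm searches an unstructured list of n items with roughly (π/4)·√n oracle queries, versus about n/2 classically: a provable quadratic speedup.  But if each quantum operation carries a large overhead relative to a classical one (error correction, slower clock, whatever), the crossover point where the quantum machine actually wins can be pushed out enormously.  The overhead factor of a million below is a made-up placeholder, chosen only to illustrate the shape of the problem.

```python
# Toy comparison: classical brute-force search vs. Grover's algorithm,
# with a hypothetical per-operation overhead for the quantum machine.
# All numbers here are illustrative placeholders, not real benchmarks.

import math

def classical_queries(n):
    # Expected number of queries to find one marked item among n, classically.
    return n / 2

def grover_queries(n):
    # Grover's algorithm needs roughly (pi/4) * sqrt(n) oracle queries.
    return (math.pi / 4) * math.sqrt(n)

# Hypothetical factor by which each quantum operation is slower than a
# classical one (error correction, slower clock, etc.) -- a pure guess.
OVERHEAD = 1e6

for n in (10**6, 10**9, 10**12, 10**15):
    classical_cost = classical_queries(n)         # in "classical steps"
    quantum_cost = OVERHEAD * grover_queries(n)   # converted to the same units
    winner = "quantum" if quantum_cost < classical_cost else "classical"
    print(f"n = {n:.0e}: classical ~ {classical_cost:.2e}, "
          f"quantum ~ {quantum_cost:.2e} -> {winner} wins")
```

With that (hypothetical) million-fold overhead, the quantum machine doesn’t pull ahead until n is in the trillions—which is the sense in which a speedup that’s real on paper can still fail to be a speedup anyone would pay for.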

I’ll tell you: when this happens, the spigots of funding that once flowed freely will dry up, and the techno-journalists and pointy-haired bosses who once sang our praises will turn to the next craze.  And they’re unlikely to be impressed when we protest, “no, look, the reasons we told you before for why you should support quantum computing were never the real reasons!  and the real reasons remain as valid as ever!”

In my view, we as a community have failed to make the honest case for quantum computing—the case based on basic science—because we’ve underestimated the public.  We’ve falsely believed that people would never support us if we told them the truth: that while the potential applications are wonderful cherries on the sundae, they’re not and have never been the main reason to build a quantum computer.  The main reason is that we want to make absolutely manifest what quantum mechanics says about the nature of reality.  We want to lift the enormity of Hilbert space out of the textbooks, and rub its full, linear, unmodified truth in the face of anyone who denies it.  Or if it isn’t the truth, then we want to discover what is the truth.

Many people would say it’s impossible to make the latter pitch, that funders and laypeople would never understand it or buy it.  But there’s an $11-billion, 17-mile ring under Geneva that speaks against their cynicism.

Anyway, let me end this “movie review” with an anecdote.  The other day a respected colleague of mine—someone who doesn’t normally follow such matters—asked me what I thought about D-Wave.  After I’d given my usual spiel, he smiled and said:

“See Scott, but you could imagine scientists of the 1400s saying the same things about Columbus!  He had no plan that could survive academic scrutiny.  He raised money under the false belief that he could reach India by sailing due west.  And he didn’t understand what he’d found even after he’d found it.  Yet for all that, it was Columbus, and not some academic critic on the sidelines, who discovered the new world.”

With this one analogy, my colleague had eloquently summarized the case for D-Wave, a case often leveled against me much more verbosely.  But I had an answer.

“I accept your analogy!” I replied.  “But to me, Columbus and the other conquerors of the Americas weren’t heroes to be admired or emulated.  Motivated by gold and spices rather than knowledge, they spread disease, killed and enslaved millions in one of history’s greatest holocausts, and burned the priceless records of the Maya and Inca civilizations so that the world would never even understand what was lost.  I submit that, had it been undertaken by curious and careful scientists—or at least people with a scientific mindset—rather than by swashbucklers funded by greedy kings, the European exploration and colonization of the Americas could have been incalculably less tragic.”

The trouble is, when I say things like that, people just laugh at me knowingly.  There he goes again, the pie-in-the-sky complexity theorist, who has no idea what it takes to get anything done in the real world.  What an amusingly contrary perspective he has.

And that, in the end, is why I think Particle Fever is such an important movie.  Through the stories of the people who built the LHC, you’ll see how it really is possible to reach a new continent without the promise of gold or the allure of lies.