Archive for the ‘Quantum’ Category

Me interviewed by John Horgan (the author of “The End of Science”)

Thursday, April 21st, 2016

You can read it here.

It’s long (~12,000 words).  Rather than listing what this interview covers, it would be easier to list what it doesn’t cover.  (My favorite soda flavors?)

If you read this blog, much of what I say there will be old hat, but some of it will be new.  I predict that you’ll enjoy the interview iff you enjoy the blog.  Comments welcome.

Grading Trudeau on quantum computing

Sunday, April 17th, 2016

Update (4/19): Inspired by Trudeau’s performance (which they clocked at 35 seconds), Maclean’s magazine asked seven quantum computing researchers—me, Krysta Svore, Aephraim Steinberg, Barry Sanders, Davide Venturelli, Martin Laforest, and Murray Thom—to also explain quantum computing in 35 seconds or fewer.  You can see all the results here (here’s the audio from my entry).


The emails started hitting me like … a hail of maple syrup from the icy north.  Had I seen the news?  Justin Trudeau, the dreamy young Prime Minister of Canada, visited the Perimeter Institute for Theoretical Physics in Waterloo, one of my favorite old haunts.  At a news conference at PI, as Trudeau stood in front of a math-filled blackboard, a reporter said to him: “I was going to ask you to explain quantum computing, but — when do you expect Canada’s ISIL mission to begin again, and are we not doing anything in the interim?”

Rather than answering immediately about ISIL, Trudeau took the opportunity to explain quantum computing:

“Okay, very simply, normal computers work, uh, by [laughter, applause] … no no no, don’t interrupt me.  When you walk out of here, you will know more … no, some of you will know far less about quantum computing, but most of you … normal computers work, either there’s power going through a wire, or not.  It’s 1, or a 0, they’re binary systems.  Uh, what quantum states allow for is much more complex information to be encoded into a single bit.  Regular computer bit is either a 1 or a 0, on or off.  A quantum state can be much more complex than that, because as we know [speeding up dramatically] things can be both particle and wave at the same times and the uncertainty around quantum states [laughter] allows us to encode more information into a much smaller computer.  So, that’s what exciting about quantum computing and that’s… [huge applause] don’t get me going on this or we’ll be here all day, trust me.”

What marks does Trudeau get for this?  On the one hand, the widespread praise for this reply surely says more about how low the usual standards for politicians are, and about Trudeau’s fine comic delivery, than about anything intrinsic to what he said.  Trudeau doesn’t really assert much here: basically, he just says that normal computers work using 1’s and 0’s, and that quantum computers are more complicated than that in some hard-to-explain way.  He gestures toward the uncertainty principle and wave/particle duality, but he doesn’t say anything about the aspects of QM most directly relevant to quantum computing—superposition or interference or the exponential size of Hilbert space—nor does he mention what quantum computers would or wouldn’t be used for.

On the other hand, I’d grade Trudeau’s explanation as substantially more accurate than what you’d get from a typical popular article.  For, pay close attention to what the Prime Minister never says: he never says that a qubit would be “both 0 and 1 at the same time,” or any equivalent formulation.  (He does say that quantum states would let us “encode more information into a much smaller computer,” but while Holevo’s Theorem says that’s false for a common interpretation of “information,” it’s true for other reasonable interpretations.)  The humorous speeding up as he mentions particle/wave duality and the uncertainty principle clearly suggests that he knows it’s more subtle than just “0 and 1 at the same time,” and he also knows that he doesn’t really get it and that the journalists in the audience don’t either.  When I’m grading exams, I always give generous partial credit for honest admissions of ignorance.  B+.
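
To make the bit/qubit distinction above a little more concrete, here is a toy numpy sketch (mine, and obviously not anything Trudeau or Holevo said) of the point that a qubit's state is a pair of complex amplitudes, yet a standard-basis measurement still hands you only a single bit:

```python
import numpy as np

rng = np.random.default_rng(0)

# An arbitrary qubit state alpha|0> + beta|1>, described by two complex amplitudes.
alpha, beta = 0.6, 0.8j
state = np.array([alpha, beta])
assert np.isclose(np.linalg.norm(state), 1.0)   # normalization

# Measuring in the {|0>, |1>} basis yields a single bit, with probabilities |alpha|^2 and |beta|^2.
probs = np.abs(state) ** 2
print(probs)                                    # [0.36 0.64]
print(rng.choice([0, 1], size=10, p=probs))     # ten independent one-bit outcomes
```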

Anyway, I’d be curious to know who at PI prepped Trudeau for this, and what they said.  Those with inside info, feel free to share in the comments (anonymously if you want!).

(One could also compare against Obama’s 2008 answer about bubblesort, which was just a mention of a keyword by comparison.)

Update: See also a Motherboard article where Romain Alléaume, Amr Helmy, Michele Mosca, and Aephraim Steinberg rate Trudeau’s answer, giving it 7/10, no score, 9/10, and 7/10 respectively.

Quantum. Crypto. Things happen. I blog.

Sunday, March 6th, 2016

1. A bunch of people emailed me to ask about the paper “Realization of a scalable Shor algorithm”: a joint effort by the groups of my MIT colleague Ike Chuang and of Innsbruck’s Rainer Blatt.  The paper has been on the arXiv since July, but last week everyone suddenly noticed it because it appeared in Science.  See also the articles in MIT News and IEEE Spectrum.

Briefly, the new work uses Kitaev’s version of Shor’s factoring algorithm, running on an ion-trap quantum computer with five calcium ions, to prove that, with at least 90% confidence, 15 equals 3×5.  Now, one might object that the “15=3×5 theorem” has by now been demonstrated many times using quantum computing (indeed, Chuang himself was involved in the historic first such demonstration, using liquid NMR, back in 2001).  Furthermore, if one counts demonstrations not based on quantum computing, some people have claimed even earlier precedents for that theorem.

Nevertheless, as far as I can tell, the new work is a genuine milestone in experimental QC, because it dispenses with most of the precompilation tricks that previous demonstrations of Shor’s algorithm used.  “Precompilation tricks” are a fancier term for “cheating”: i.e., optimizing a quantum circuit in ways that would only make sense if you already assumed that 15 was, indeed, 3×5.  So, what’s new is that a QC has now factored 15 “scalably”: that is, with much less cheating than before.
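
For readers who want to see exactly which step the quantum hardware is needed for, here is a toy Python sketch of the textbook recipe behind Shor's algorithm for N=15, with the period-finding step (the only quantum part) done by brute force.  To be clear, this is my illustration of the standard reduction, not the Kitaev-style circuit that the MIT/Innsbruck experiment actually ran:

```python
from math import gcd

def order(a, N):
    """Smallest r > 0 with a^r = 1 (mod N).  Found here by brute force; this is the
    step a quantum computer does via period-finding, and the only step that doesn't
    scale classically."""
    r, x = 1, a % N
    while x != 1:
        x = (x * a) % N
        r += 1
    return r

N, a = 15, 7                   # a must be coprime to N; a = 7 is the usual textbook choice
r = order(a, N)                # r = 4
assert r % 2 == 0 and pow(a, r // 2, N) != N - 1    # the cases where the reduction works
print(gcd(pow(a, r // 2) - 1, N), gcd(pow(a, r // 2) + 1, N))   # 3 5, i.e. 15 = 3 x 5
```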

Of course, as I’m sure the authors would acknowledge, the word “scalable” in their title admits multiple interpretations, rather like the word “possible.”  (It’s possible to buy strawberry Mentos, and it’s also possible to convert the Sun into computronium, but for different senses of “possible.”)  As I wrote in the comments section of my last post:

There are still all the difficulties of integrating a huge number of qubits—which, in ion-trap implementations, would almost certainly mean having many traps that can communicate with each other using gate teleportation—as well as implementing quantum fault-tolerance (meaning: doing 2-qubit gates at the fault-tolerance threshold, moving qubits around to the right places, pumping in fresh qubits, pumping out dirty ones, etc).  Those all remain major engineering problems for the future.

See also this comment by Vaughan Pratt, who remarks: “the MIT press release … would appear to have translated [‘scalable’] to mean that RSA was now approaching its best-by date, although the paper itself makes no such claim.”

In any case, regardless of how long it takes until we can factor enormous numbers like 91, congratulations to the MIT and Innsbruck groups on what’s certainly progress toward scalable ion-trap QC!

2. Other people wrote to ask about a striking recent preprint of Kaplan, Leurent, Leverrier, and Naya-Plasencia, which points out how Simon’s algorithm—i.e., the forerunner of Shor’s algorithm—can be used to break all sorts of practical private-key authentication schemes in quantum polynomial time, assuming the adversary can query the scheme being attacked on a coherent superposition of inputs.  In practice, this assumption is unlikely to hold, unless the adversary gets the actual obfuscated code of the scheme being attacked (in which case it holds).  Also, this is not the first time Simon’s algorithm has been used to attack cryptography; previous work in the same spirit by Kuwakado and Morii showed how to use Simon’s algorithm to break the 3-round Feistel scheme and the Even-Mansour scheme, again if we assume superposition queries.

Even so, Kaplan et al. seem to pretty dramatically expand the range of “practical” cryptosystems that are known to be vulnerable to Simon attacks in the superposed-query model.  I suspect this will force a revision in how we talk about Simon’s algorithm: from “useless, but theoretically important, and historically important because it led to Shor’s algorithm” to “actually maybe not that useless.”  (See here for a previous attempt of mine to give an interesting “explicit” problem that Simon’s algorithm solves in polynomial time, but that’s classically hard.  Alas, my candidate problem turned out to be classically easy.)  This is analogous to the revision that “Einstein-certified randomness” and the RUV theorem recently forced in how we talk about Bell’s inequality: we can no longer tell students that Bell’s work was important because of the conceptual point it proved about local hidden variables, and because of all the other stuff it led to, even though it obviously has no applications in and of itself.  Now it does have applications in and of itself.
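
For intuition about why superposition queries are so deadly here, recall what Simon's algorithm actually delivers: each run of the quantum circuit spits out a random n-bit string y orthogonal (mod 2) to the hidden period s, after which recovering s is just linear algebra.  Here is a little sketch of that classical post-processing, with the quantum step faked by sampling the right distribution directly; it illustrates the structure only, and is not an attack on any real cipher:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 6
s = np.array([1, 0, 1, 1, 0, 1])      # the hidden period (e.g., the XOR of two secret values)

def quantum_sample():
    """Stand-in for one run of Simon's circuit: a uniformly random y with y.s = 0 (mod 2)."""
    while True:
        y = rng.integers(0, 2, size=n)
        if (y @ s) % 2 == 0:
            return y

# Collect O(n) samples; with high probability they span the hyperplane orthogonal to s,
# so s is the unique nonzero vector orthogonal (mod 2) to all of them.  (Brute force over
# the 2^n - 1 candidates below; Gaussian elimination over GF(2) would do the same job.)
rows = [quantum_sample() for _ in range(3 * n)]
candidates = [np.array([(m >> i) & 1 for i in range(n)]) for m in range(1, 2 ** n)]
print([c for c in candidates if all((r @ c) % 2 == 0 for r in rows)])
# with overwhelming probability, just [s] = [array([1, 0, 1, 1, 0, 1])]
```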

To a quantum complexity theorist like me, who doesn’t know nearly as much applied crypto as he should, the real news in the Kaplan et al. paper is not that Simon’s algorithm can break the sorts of systems they study.  Rather, it’s that so many systems that are vulnerable to Simon attack exist and are used in the first place!  Once people understand the problem, I doubt it will be hard to design schemes of similar efficiency that remain quantum-secure even in the superposed-query model (under some plausible assumption, like that an underlying one-way function is quantum-secure).  Indeed, recent work of Boneh and Zhandry, among others, has already taken significant steps in that direction.  So the situation doesn’t seem “as bad” as it was with public-key crypto, where once Shor’s algorithm comes along, the plausibly quantum-secure alternatives that we currently know (like lattice-based crypto and quantum key distribution) are either much less efficient than RSA and Diffie-Hellman, or else require new hardware.  Still, the new observations about Simon’s algorithm show us how the history of quantum computing could have unfolded differently: rather than Simon → Shor → everyone gets excited (because their crypto is now vulnerable), people could’ve gotten cryptographically excited immediately after Simon.

3. Speaking of Diffie-Hellman, belated congratulations to Whitfield Diffie and Martin Hellman for an extremely well-deserved Turing Award!

4. At MIT’s weekly quantum information group meeting, Aram Harrow spoke about his new paper with Ed Farhi, “Quantum Supremacy through the Quantum Approximate Optimization Algorithm.”  Using the same arguments developed around 2010 by me and Alex Arkhipov, and (independently) by Bremner, Jozsa, and Shepherd, this paper shows that, even though the recently-developed QAOA/Quinoa quantum optimization algorithm turns out not to beat the best classical algorithms on the Max E3LIN2 problem (see here and here)—still, whatever that algorithm does do, at least there’s no polynomial-time classical algorithm that samples from the same distribution over outputs, unless the polynomial hierarchy collapses.

In other words: even if the algorithm fails at its original goal, it’s still hard for a classical computer to reproduce its exact pattern of failure!  Hence: Quantum Supremacy.
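
For anyone who hasn't seen it, Max E3LIN2 (the problem the QAOA analysis was originally aimed at) is easy to state: you get linear equations mod 2, each involving exactly three variables, and you want to satisfy as many as you can.  A uniformly random assignment already satisfies half the equations in expectation, which is the baseline everything here gets measured against.  A toy illustration (mine, and nothing to do with the actual analysis in the Farhi-Harrow paper):

```python
import random

random.seed(0)
n, m = 30, 200
# Each constraint: x_i XOR x_j XOR x_k == b, over n Boolean variables.
constraints = [(random.sample(range(n), 3), random.randint(0, 1)) for _ in range(m)]

def satisfied(assignment):
    return sum((assignment[i] ^ assignment[j] ^ assignment[k]) == b
               for (i, j, k), b in constraints)

scores = [satisfied([random.randint(0, 1) for _ in range(n)]) for _ in range(50)]
print(sum(scores) / len(scores) / m)   # hovers around 0.5, the random-assignment baseline
```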

A secondary goal of Aram and Eddie’s paper is to make the Aaronson-Arkhipov and Bremner et al. arguments more accessible to physicists, by decreasing the amount of “weird complexity theory” invoked.  (I suppose I’ve asked for this—for physicists to de-complexify complexity theory—by telling everyone for years how easy quantum mechanics becomes once you take away the physics!)  I’ll leave it to physicists to judge how well Aram and Eddie succeed at their pedagogical goal, but I’m thrilled by any such effort to communicate across fields.  Aram’s talk would surely have served that same educational purpose, had it not gotten derailed partway through by Donald Trump jokes from the audience.  (My contribution: “Aram, will you disavow support from quantum supremacists?”)


Unrelated Update: Some people might be interested in this brief interview with Michael Cerullo, who read The Ghost in the Quantum Turing Machine and wanted to ask me about “the relevance of quantum mechanics to brain preservation, uploading, and identity.”

Edging in: the biggest science news of 2015

Sunday, January 3rd, 2016

For years, I was forced to endure life with my nose up against the glass of the Annual Edge Question.  What are you optimistic about?  Ooh! ooh! Call on me!  I’m optimistic about someday being able to prove my pessimistic beliefs (like P≠NP).  How is the Internet changing the way you think?  Ooh, ooh! I know! Google and MathOverflow are saving me from having to think at all!  So then why are they only asking Steven Pinker, Freeman Dyson, Richard Dawkins, David Deutsch, some random other people like that?

But all that has changed.  This year, I was invited to participate in Edge for the first time.  So, OK, here’s the question:

What do you consider the most interesting recent [scientific] news?  What makes it important?

My response is here.  I wasn’t in love with the question, because of what I saw as an inherent ambiguity in it: the news that’s most interesting to me, that I have a comparative advantage in talking about, and that people probably want to hear me talk about (e.g., progress in quantum computing), is not necessarily what I’d regard as the most important in any objective sense (e.g., climate change).  So, I decided to write my answer precisely about my internal tension in what I should consider most interesting: should it be the recent progress by John Martinis and others toward building a quantum computer?  Or should it be the melting glaciers, or something else that I’m confident will affect the future of the world?  Or possibly the mainstream attention now being paid to the AI-risk movement?  But if I really want to nerd out, then why not Babai’s graph isomorphism algorithm?  Or if I actually want to be honest about what excited me, then why not the superquadratic separations between classical and quantum query complexities for a total Boolean function, by Ambainis et al. and my student Shalev Ben-David?  On the other hand, how can I justify even caring about such things while the glaciers are melting?

So, yeah, my response tries to meditate on all those things.  My original title was “How nerdy do you want it?,” but John Brockman of Edge had me change it to something blander (“How widely should we draw the circle?”), and made a bunch of other changes from my usual style.  Initially I chafed at having an editor for what basically amounted to a blog post; on the other hand, I’m sure I would’ve gotten in trouble much less often on this blog had I had someone to filter my words for me.

Anyway, of course I wasn’t the only person to write about the climate crisis.  Robert Trivers, Laurence Smith, and Milford Wolpoff all wrote about it as well (Trivers most chillingly and concisely), while Max Tegmark wrote about the mainstreaming of AI risk.  John Naughton even wrote about Babai’s graph isomorphism breakthrough (though he seems unaware that the existing GI algorithms were already extremely fast in practice, and therefore makes misleading claims about the new algorithm’s practical applications).  Unsurprisingly, no one else wrote about breakthroughs in quantum query complexity: you’ll need to go to my essay for that!  A bit more surprisingly, no one besides me wrote about progress in quantum computing at all (if we don’t count the loophole-free Bell test).

Anyway, on reflection, 2015 actually was a pretty awesome year for science, no matter how nerdy you want it or how widely you draw the circle.  Below are other advances that I easily could’ve written about but didn’t.

I’ve now read all (more or less) of this year’s Edge responses.  Even though some of the respondents pushed personal hobbyhorses like I’d feared, I was impressed by how easy it was to discern themes: advances that kept cropping up in one answer after another and that one might therefore guess are actually important (or at least, are currently perceived to be important).

Probably at the top of the list was a new gene-editing technique called CRISPR: Randolph Nesse, Paul Dolan, Eric Topol, Mark Pagel, and Stuart Firestein among others all wrote about this, and about its implications for creating designer humans.

Also widely-discussed was the discovery that most psychology studies fail to replicate (I’d long assumed as much, but apparently this was big news in psychology!): Nicholas Humphrey, Stephen Kosslyn, Jonathan Schooler, Ellen Winner, Judith Rich Harris, and Philip Tetlock all wrote about that.

Then there was the Pluto flyby, which Juan Enriquez, Roger Highfield, and Nicholas Christakis all wrote about.  (As Christakis, Master of Silliman College at Yale, was so recently a victim of a social-justice mob, I found it moving how he simply ignored those baying for his head and turned his attention heavenward in his Edge answer.)

Then there was progress in deep learning, including Google’s Deep Dream (those images of dogs in nebulae that filled your Facebook wall) and DeepMind (the program that taught itself how to play dozens of classic video games).  Steve Omohundro, Andy Clark, Jamshed Bharucha, Kevin Kelly, David Dalrymple, and Alexander Wissner-Gross all wrote about different aspects of this story.

And recent progress in SETI, which Yuri Milner (who’s given $100 million for it) and Mario Livio wrote about.

Unsurprisingly, a bunch of high-energy physicists wrote about high-energy physics at the LHC: how the Higgs boson was found (still news?), how nothing other than the Higgs boson was found (the biggest news?), but how there’s now the slightest hint of a new particle at 750 GeV.  See Lee Smolin, Garrett Lisi, Sean Carroll, and Sarah Demers.

Finally, way out on the Pareto frontier of importance and disgustingness was the recently-discovered therapeutic value of transplanting one person’s poop into another person’s intestines, which Joichi Ito, Pamela Rosenkranz, and Alan Alda all wrote about (it also, predictably, featured in a recent South Park episode).

Without further ado, here are 27 other answers that struck me in one way or another:

  • Steven Pinker on the happy news that things are getting better (and we can measure it)
  • Freeman Dyson on the Dragonfly astronomical observatory
  • Jonathan Haidt on how prejudice against people of differing political opinions was discovered to have surpassed racial, gender, and religious prejudice
  • S. Abbas Raza on Piketty’s r>g
  • Rebecca Newberger Goldstein, thoughtful as usual, on the recent study that said it’s too simple to say female participation is lower in STEM fields—rather, female participation is lower in all and only those fields, STEM or non-STEM, whose participants believe (rightly or wrongly) that “genius” is required rather than just conscientious effort
  • Bill Joy on recent advances on reducing CO2 emissions
  • Paul Steinhardt on recent observations saying that, not only were the previous “B-modes from inflation” just galactic dust, but there are no real B-modes to within the current detection limits, and this poses a problem for inflation (I hadn’t heard about this last part)
  • Aubrey de Grey on new antibiotics that are grown in the soil rather than in lab cultures
  • John Tooby on the evolutionary rationale for germline engineering
  • W. Tecumseh Fitch on the coming reality of the “Jurassic Park program” (bringing back extinct species through DNA splicing—though probably not dinosaurs, whose DNA is too degraded)
  • Keith Devlin on the new prospect of using massive datasets (from MOOCs, for example) to actually figure out how students learn
  • Richard Muller on how air pollution in China has become one of the world’s worst problems (imagine every child in Beijing being force-fed two packs of cigarettes per day)
  • Ara Norenzayan on the demographic trends in religious belief
  • James Croak on amazing advances in battery technology (which were news to me)
  • Buddhini Samarasinghe on (among other things) the power of aspirin to possibly prevent cancer
  • Todd Sacktor on a new treatment for Parkinson’s
  • Charles Seife on the imminent availability of data about pretty much everything in our lives
  • Susan Blackmore on “that dress” and what it revealed about the human visual system
  • Brian Keating on experiments that should soon tell us the neutrinos’ masses (again, I hadn’t heard about these)
  • Michael McCullough on something called “reproductive religiosity theory,” which posits that the central purpose of religions is to enforce social norms around mating and reproduction (for what it’s worth, I’d always regarded that as obvious; it’s even expounded in the last chapter of Quantum Computing Since Democritus)
  • Greg Cochran on the origin of Europeans
  • David Buss on the “mating crisis among educated women”
  • Ed Regis on how high-fat diets are better (except, isn’t this the principle behind Atkins, and isn’t this pretty old news by now?)
  • Melanie Swan on blockchain-based cryptography, such as Bitcoin (though it wasn’t entirely clear to me what point Swan was making about it)
  • Paul Davies on LIGO getting ready to detect its first gravitational waves
  • Samuel Arbesman on how weather prediction has gotten steadily better (rendering our culture’s jokes about the perpetually-wrong weatherman outdated, with hardly anyone noticing)
  • Alison Gopnik on how the ubiquity of touchscreen devices like the iPad means that toddlers can now master computers, and this is something genuinely new under the sun (I can testify from personal experience that she’s onto something)

Then there were three answers for which the “progress” being celebrated seemed to me to be progress racing faster into WrongVille:

  • Frank Tipler on how one can conclude a priori that there must be a Big Crunch to our future (and hence, the arena for Tiplerian theology) in order to prevent the black hole information paradox from arising, all recent cosmological evidence to the contrary be damned.
  • Ross Anderson on an exciting conference whose participants aim to replace quantum mechanics with local realistic theories.  (Anderson, in particular, is totally wrong that you can get Bell inequality violation from “a combination of local action and global correlation,” unless the global correlation goes as far as a ’t Hooft-like superdeterministic conspiracy.)
  • Gordon Kane on how the big news is that the LHC should soon see superparticles.  (This would actually be fine except that Kane omits the crucial context, that he’s been predicting superparticles just around the corner again and again for the past twenty years and they’ve never shown up)

Finally, two responses by old friends that amused me.  The science-fiction writer Rudy Rucker just became aware of the 1998 discovery of dark energy, and considers that to be exciting scientific news (yes, Rudy, so it was!).  And Michael Vassar—the Kevin Bacon or Paul Erdős of the rationalist world, the guy who everyone’s connected to somehow—writes something about a global breakdown of economic rationality, $20 bills on the sidewalk getting ignored, that I had trouble understanding (though the fault is probably mine).

6.S899 Student Project Showcase!

Tuesday, December 22nd, 2015

As 2015 winds down, I thought I’d continue my tradition of using this blog to showcase some awesome student projects from my graduate class.  (For the previous project showcases from Quantum Complexity Theory, see here, here, and here.  Also see here for the showcase from Philosophy and Theoretical Computer Science.)

This fall, I taught 6.S899, a one-time “Seminar on Physics and Computation” that focused on BosonSampling, complexity and quantum gravity, and universality of physical systems.  There were also lots of guest lectures and student presentations.  Unfortunately, we didn’t produce any notes or recordings.

Fortunately, though, the students did do projects, which were literature reviews, some of which ventured into original research, and all nine have agreed to share their project reports here!  So enjoy, thanks so much to the students for making it a great class, and happy holidays.


Update (Dec. 23): Here are two conference announcements that I’ve been asked to make: Innovations in Theoretical Computer Science (ITCS) 2016, January 14-16 in Cambridge MA, and the Fifth Women in Theory Workshop, at the Simons Institute in Berkeley, May 22-25, 2016.

Google, D-Wave, and the case of the factor-10^8 speedup for WHAT?

Wednesday, December 9th, 2015

Update (Dec. 16):  If you’re still following this, please check out an important comment by Alex Selby, the discoverer of Selby’s algorithm, which I discussed in the post.  Selby queries a few points in the Google paper: among other things, he disagrees with their explanation of why his classical algorithm works so well on D-Wave’s Chimera graph (and with their prediction that it should stop working for larger graphs), and he explains that Karmarkar-Karp is not the best known classical algorithm for the Number Partitioning problem.  He also questions whether simulated annealing is the benchmark against which everything should be compared (on the grounds that “everything else requires fine-tuning”), pointing out that SA itself typically requires lots of tuning to get it to work well.
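
(For readers who haven't met it: Karmarkar-Karp is the classic "largest differencing" heuristic for Number Partitioning, the problem of splitting a list of positive integers into two sets whose sums are as close as possible.  Here is a bare-bones sketch, purely for orientation, and not anything from Selby's comment or the Google paper:)

```python
import heapq

def karmarkar_karp(nums):
    """Partition-sum difference achieved by the largest-differencing heuristic."""
    heap = [-x for x in nums]              # max-heap via negation
    heapq.heapify(heap)
    while len(heap) > 1:
        a = -heapq.heappop(heap)           # largest remaining number
        b = -heapq.heappop(heap)           # second largest
        heapq.heappush(heap, -(a - b))     # commit them to opposite sides; only the gap matters
    return -heap[0] if heap else 0

print(karmarkar_karp([8, 7, 6, 5, 4]))     # 2, while the true optimum is 0 ({8,7} vs {6,5,4}),
                                           # showing that KK is a heuristic, not an exact algorithm
```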

Update (Dec. 11): MIT News now has a Q&A with me about the new Google paper. I’m really happy with how the Q&A turned out; people who had trouble understanding this blog post might find the Q&A easier. Thanks very much to Larry Hardesty for arranging it.

Meanwhile, I feel good that there seems to have been actual progress in the D-Wave debate! In previous rounds, I had disagreed vehemently with some of my MIT colleagues (like Ed Farhi and Peter Shor) about the best way to respond to D-Wave’s announcements. Today, though, at our weekly group meeting, there was almost no daylight between any of us. Partly, I’m sure, it’s that I’ve learned to express myself better; partly it’s that the “trigger” this time was a serious research paper by a group separate from D-Wave, rather than some trash-talking statement from Geordie Rose. But mostly it’s that, thanks to the Google group’s careful investigations, this time pretty much anyone who knows anything agrees about all the basic facts, as I laid them out in this blog post and in the Q&A. All that remains are some small differences in emotional attitude: e.g., how much of your time do you want to spend on a speculative, “dirty” approach to quantum computing (which is far ahead of everyone else in terms of engineering and systems integration, but which still shows no signs of an asymptotic speedup over the best classical algorithms, which is pretty unsurprising given theoretical expectations), at a time when the “clean” approaches might finally be closing in on the long-sought asymptotic quantum speedup?

Another Update: Daniel Lidar was nice enough to email me an important observation, and to give me permission to share it here.  Namely, the D-Wave 2X has a minimum annealing time of 20 microseconds.  Because of this, the observed running times for small instance sizes are artificially forced upward, making the growth rate in the machine’s running time look milder than it really is.  (Regular readers might remember that exactly the same issue plagued previous D-Wave vs. classical performance comparisons.)  Correcting this would certainly decrease the D-Wave 2X’s predicted speedup over simulated annealing, in extrapolations to larger numbers of qubits than have been tested so far (although Daniel doesn’t know by how much).  Daniel stresses that he’s not criticizing the Google paper, which explicitly mentions the minimum annealing time—just calling attention to something that deserves emphasis.


In retrospect, I should’ve been suspicious, when more than a year went by with no major D-Wave announcements that everyone wanted me to react to immediately. Could it really be that this debate was over—or not “over,” but where it always should’ve been, in the hands of experts who might disagree vehemently but are always careful to qualify speedup claims—thereby freeing up the erstwhile Chief D-Wave Skeptic for more “””rewarding””” projects, like charting a middle path through the Internet’s endless social justice wars?

Nope.

As many of you will have seen by now, on Monday a team at Google put out a major paper reporting new experiments on the D-Wave 2X machine.  (See also Hartmut Neven’s blog post about this.)  The predictable popularized version of the results—see for example here and here—is that the D-Wave 2X has now demonstrated a factor-of-100-million speedup over standard classical chips, thereby conclusively putting to rest the question of whether the device is “truly a quantum computer.”  In the comment sections of one of my previous posts, D-Wave investor Steve Jurvetson even tried to erect a victory stele, by quoting Karl Popper about falsification.

In situations like this, the first thing I do is turn to Matthias Troyer, who’s arguably the planet’s most balanced, knowledgeable, trustworthy interpreter of quantum annealing experiments. Happily, in collaboration with Ilia Zintchenko and Ethan Brown, Matthias was generous enough to write a clear 3-page document putting the new results into context, and to give me permission to share it on this blog. From a purely scientific standpoint, my post could end right here, with a link to their document.

Then again, from a purely scientific standpoint, the post could’ve ended even earlier, with the link to the Google paper itself!  For this is not a case where the paper hides some crucial issue that the skeptics then need to ferret out.  On the contrary, the paper’s authors include some of the most careful people in the business, and the paper explains the caveats as clearly as one could ask.  In some sense, then, all that’s left for me or Matthias to do is to tell you what you’d learn if you read the paper!

So, OK, has the D-Wave 2X demonstrated a factor-10^8 speedup or not?  Here’s the shortest answer that I think is non-misleading:

Yes, there’s a factor-10^8 speedup that looks clearly asymptotic in nature, and there’s also a factor-10^8 speedup over Quantum Monte Carlo. But the asymptotic speedup is only if you compare against simulated annealing, while the speedup over Quantum Monte Carlo is only constant-factor, not asymptotic. And in any case, both speedups disappear if you compare against other classical algorithms, like that of Alex Selby. Also, the constant-factor speedup probably has less to do with quantum mechanics than with the fact that D-Wave built extremely specialized hardware, which was then compared against a classical chip on the problem of simulating the specialized hardware itself (i.e., on Ising spin minimization instances with the topology of D-Wave’s Chimera graph). Thus, while there’s been genuine, interesting progress, it remains uncertain whether D-Wave’s approach will lead to speedups over the best known classical algorithms, let alone to speedups over the best known classical algorithms that are also asymptotic or also of practical importance. Indeed, all of these points also remain uncertain for quantum annealing as a whole.

To expand a bit, there are really three separate results in the Google paper:

  1. The authors create Chimera instances with tall, thin energy barriers blocking the way to the global minimum, by exploiting the 8-qubit “clusters” that play such a central role in the Chimera graph.  In line with a 2002 theoretical prediction by Farhi, Goldstone, and Gutmann (a prediction we’ve often discussed on this blog), they then find that on these special instances, quantum annealing reaches the global minimum exponentially faster than classical simulated annealing, and that the D-Wave machine realizes this advantage.  As far as I’m concerned, this completely nails down the case for computationally-relevant collective quantum tunneling in the D-Wave machine, at least within the 8-qubit clusters.  On the other hand, the authors point out that there are other classical algorithms, like that of Selby (building on Hamze and de Freitas), which group together the 8-bit clusters into 256-valued mega-variables, and thereby get rid of the energy barrier that kills simulated annealing.  These classical algorithms are found empirically to outperform the D-Wave machine.  The authors also match the D-Wave machine’s asymptotic performance (though not the leading constant) using Quantum Monte Carlo, which (despite its name) is a classical algorithm often used to find quantum-mechanical ground states.
  2. The authors make a case that the ability to tunnel past tall, thin energy barriers—i.e., the central advantage that quantum annealing has been shown to have over classical annealing—might be relevant to at least some real-world optimization problems.  They do this by studying a classic NP-hard problem called Number Partitioning, where you’re given a list of N positive integers, and your goal is to partition the integers into two subsets whose sums differ from each other by as little as possible.  Through numerical studies on classical computers, they find that quantum annealing (in the ideal case) and Quantum Monte Carlo should both outperform simulated annealing, by roughly equal amounts, on random instances of Number Partitioning.  Note that this part of the paper doesn’t involve any experiments on the D-Wave machine itself, so we don’t know whether calibration errors, encoding loss, etc. will kill the theoretical advantage over simulated annealing.  But even if not, this still wouldn’t yield a “true quantum speedup,” since (again) Quantum Monte Carlo is a perfectly-good classical algorithm, whose asymptotics match those of quantum annealing on these instances.  (A bare-bones sketch of the simulated-annealing baseline for this problem appears right after this list.)
  3. Finally, on the special Chimera instances with the tall, thin energy barriers, the authors find that the D-Wave 2X reaches the global optimum about 10^8 times faster than Quantum Monte Carlo running on a single-core classical computer.  But, extremely interestingly, they also find that this speedup does not grow with problem size; instead it simply saturates at ~10^8.  In other words, this is a constant-factor speedup rather than an asymptotic one.  Now, obviously, solving a problem “only” 100 million times faster (rather than asymptotically faster) can still have practical value!  But it’s crucial to remember that this constant-factor speedup is only observed for the Chimera instances—or in essence, for “the problem of simulating the D-Wave machine itself”!  If you wanted to solve something of practical importance, you’d first need to embed it into the Chimera graph, and it remains unclear whether any of the constant-factor speedup would survive that embedding.  In any case, while the paper isn’t explicit about this, I gather that the constant-factor speedup disappears when one compares against (e.g.) the Selby algorithm, rather than against QMC.
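
As promised in item 2, here is a bare-bones sketch of the simulated-annealing baseline for Number Partitioning (my own toy version, not the Google team's code, and without any of the schedule tuning a real comparison would need).  Each number gets a spin of +1 or -1; the cost is the absolute difference between the two subset sums:

```python
import math
import random

random.seed(0)
nums = [random.randint(1, 10**6) for _ in range(40)]

def cost(spins):
    """|difference of the two subset sums| for a +-1 assignment of the numbers."""
    return abs(sum(s * a for s, a in zip(spins, nums)))

def simulated_annealing(steps=200000, t_start=1e6, t_end=1.0):
    spins = [random.choice([-1, 1]) for _ in nums]
    current = best = cost(spins)
    for step in range(steps):
        t = t_start * (t_end / t_start) ** (step / steps)    # geometric cooling schedule
        i = random.randrange(len(nums))
        spins[i] *= -1                                       # propose flipping one spin
        new = cost(spins)
        if new <= current or random.random() < math.exp((current - new) / t):
            current = new                                    # accept the move
            best = min(best, new)
        else:
            spins[i] *= -1                                   # reject: undo the flip
    return best

print(simulated_annealing())   # typically orders of magnitude below a random split's imbalance
```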

So then, what do I say to Steve Jurvetson?  I say—happily, not grudgingly!—that the new Google paper provides the clearest demonstration so far of a D-Wave device’s capabilities.  But then I remind him of all the worries the QC researchers had from the beginning about D-Wave’s whole approach: the absence of error-correction; the restriction to finite-temperature quantum annealing (moreover, using “stoquastic Hamiltonians”), for which we lack clear evidence for a quantum speedup; the rush for more qubits rather than better qubits.  And I say: not only do all these worries remain in force, they’ve been thrown into sharper relief than ever, now that many of the side issues have been dealt with.  The D-Wave 2X is a remarkable piece of engineering.  If it’s still not showing an asymptotic speedup over the best known classical algorithms—as the new Google paper clearly explains that it isn’t—then the reasons are not boring or trivial ones.  Rather, they seem related to fundamental design choices that D-Wave made over a decade ago.

The obvious question now is: can D-Wave improve its design, in order to get a speedup that’s asymptotic, and that holds against all classical algorithms (including QMC and Selby’s algorithm), and that survives the encoding of a “real-world” problem into the Chimera graph?  Well, maybe or maybe not.  The Google paper returns again and again to the subject of planned future improvements to the machine, and how they might clear the path to a “true” quantum speedup. Roughly speaking, if we rule out radical alterations to D-Wave’s approach, there are four main things one would want to try, to see if they helped:

  1. Lower temperatures (and thus, longer qubit lifetimes, and smaller spectral gaps that can be safely gotten across without jumping up to an excited state).
  2. Better calibration of the qubits and couplings (and thus, ability to encode a problem of interest, like the Number Partitioning problem mentioned earlier, to greater precision).
  3. The ability to apply “non-stoquastic” Hamiltonians.  (D-Wave’s existing machines are all limited to stoquastic Hamiltonians, defined as Hamiltonians all of whose off-diagonal entries are real and non-positive.  While stoquastic Hamiltonians are easier from an engineering standpoint, they’re also the easiest kind to simulate classically, using algorithms like QMC—so much so that there’s no consensus on whether it’s even theoretically possible to get a true quantum speedup using stoquastic quantum annealing.  This is a subject of active research.)  A literal numerical check of the stoquasticity definition appears right after this list.
  4. Better connectivity among the qubits (thereby reducing the huge loss that comes from taking problems of practical interest, and encoding them in the Chimera graph).
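
Since item 3 turns entirely on the definition of "stoquastic," here is that definition turned into a literal numerical check (nothing D-Wave-specific, just numpy applied to the sentence above):

```python
import numpy as np

def is_stoquastic(H, tol=1e-12):
    """True iff every off-diagonal entry of H is real and non-positive (in the given basis)."""
    H = np.asarray(H)
    off_diag = H[~np.eye(H.shape[0], dtype=bool)]
    return bool(np.all(np.abs(off_diag.imag) < tol) and np.all(off_diag.real <= tol))

# The transverse-field Ising Hamiltonians used in quantum annealing (with the usual sign
# convention) are the canonical stoquastic example: the -X driver has off-diagonal entries -1.
X = np.array([[0, 1], [1, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)
print(is_stoquastic(-X))             # True
print(is_stoquastic(np.kron(Z, Z)))  # True (diagonal, hence trivially stoquastic)
print(is_stoquastic(X))              # False: positive off-diagonal entries
```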

(Note that “more qubits” is not on this list: if a “true quantum speedup” is possible at all with D-Wave’s approach, then the 1000+ qubits that they already have seem like more than enough to notice it.)

Anyway, these are all, of course, things D-Wave knows about and will be working on in the near future. As well they should! But to repeat: even if D-Wave makes all four of these improvements, we still have no idea whether they’ll see a true, asymptotic, Selby-resistant, encoding-resistant quantum speedup. We just can’t say for sure that they won’t see one.

In the meantime, while it’s sometimes easy to forget during blog-discussions, the field of experimental quantum computing is a proper superset of D-Wave, and things have gotten tremendously more exciting on many fronts within the last year or two.  In particular, the group of John Martinis at Google (Martinis is one of the coauthors of the Google paper) now has superconducting qubits with orders of magnitude better coherence times than D-Wave’s qubits, and has demonstrated rudimentary quantum error-correction on 9 of them.  They’re now talking about scaling up to ~40 super-high-quality qubits with controllable couplings—not in the remote future, but in, like, the next few years.  If and when they achieve that, I’m extremely optimistic that they’ll be able to show a clear quantum advantage for something (e.g., some BosonSampling-like sampling task), if not necessarily something of practical importance.  IBM Yorktown Heights, which I visited last week, is also working (with IARPA funding) on integrating superconducting qubits with many-microsecond coherence times.  Meanwhile, some of the top ion-trap groups, like Chris Monroe’s at the University of Maryland, are talking similarly big about what they expect to be able to do soon. The “academic approach” to QC—which one could summarize as “understand the qubits, control them, keep them alive, and only then try to scale them up”—is finally bearing some juicy fruit.

(At last week’s IBM conference, there was plenty of D-Wave discussion; how could there not be? But the physicists in attendance—I was almost the only computer scientist there—seemed much more interested in approaches that aim for longer-lasting qubits, fault-tolerance, and a clear asymptotic speedup.)

I still have no idea when and if we’ll have a practical, universal, fault-tolerant QC, capable of factoring 10,000-digit numbers and so on.  But it’s now looking like only a matter of years until Gil Kalai, and the other quantum computing skeptics, will be forced to admit they were wrong—which was always the main application I cared about anyway!

So yeah, it’s a heady time for QC, with many things coming together faster than I’d expected (then again, it was always my personal rule to err on the side of caution, and thereby avoid contributing to runaway spirals of hype).  As we stagger ahead into this new world of computing—bravely, coherently, hopefully non-stoquastically, possibly fault-tolerantly—my goal on this blog will remain what it’s been for a decade: not to prognosticate, not to pick winners, but merely to try to understand and explain what has and hasn’t already been shown.


Update (Dec. 10): Some readers might be interested in an economic analysis of the D-Wave speedup by commenter Carl Shulman.

Another Update: Since apparently some people didn’t understand this post, here are some comments from a Y-Combinator thread about the post that might be helpful:

(1) [T]he conclusion of the Google paper is that we have probable evidence that with enough qubits and a big enough problem it will be faster for a very specific problem compared to a non-optimal classical algorithm (we have ones that are for sure better).

This probably sounds like a somewhat useless result (quantum computer beats B-team classical algorithm), but it is in fact interesting because D-Wave’s computers are designed to perform quantum annealing and they are comparing it to simulated annealing (the somewhat analogous classical algorithm). However they only found evidence of a constant (i.e. one that 4000 qubits wouldn’t help with) speed up (though a large one) compared to a somewhat better algorithm (Quantum Monte Carlo, which is ironically not a quantum algorithm), and they still can’t beat an even better classical algorithm (Selby’s) at all, even in a way that won’t scale.

Scott’s central thesis is that although it is possible there could be a turning point past 2000 qubits where the D-Wave will beat our best classical alternative, none of the data collected so far suggests that. So it’s possible that a 4000 qubit D-Wave machine will exhibit this trend, but there is no evidence of it (yet) from examining a 2000 qubit machine. Scott’s central gripe with D-Wave’s approach is that they don’t have any even pie-in-the-sky theoretical reason to expect this to happen, and scaling up quantum computers without breaking the entire process is much harder than for classical computers so making them even bigger doesn’t seem like a solution.

(2) DWave machines are NOT gate quantum computers; they call their machine quantum annealing machines. It is not known the complexity class of problems that can be solved efficiently by quantum annealing machines, or if that class is equivalent to classical machines.

The result shows that the DWave machine is asymptotically faster than the Simulated Annealing algorithm (yay!), which suggests that it is executing the Quantum Annealing algorithm. However, the paper also explicitly states that this does not mean that the Dwave machine is exhibiting a ‘quantum speedup’. To do this, they would need to show it to outperform the best known classical algorithm, which as the paper acknowledges, it does not.

What the paper does seem to be showing is that the machine in question is actually fundamentally quantum in nature; it’s just not clear yet that the type of quantum computer it is is an improvement over classical ones.

(3) [I]t isn’t called out in the linked blog since by now Scott probably considers it basic background information, but D-Wave only solves a very particular problem, and it is both not entirely clear that it has a superior solution to that problem than a classical algorithm can obtain and it is not clear that encoding real problems into that problem will not end up costing you all of the gains itself. Really pragmatic applications are still a ways into the future. It’s hard to imagine what they might be when we’re still so early in the process, and still have no good idea what either the practical or theoretical limits are.

(4) The popular perception of quantum computers as “doing things in parallel” is very misleading. A quantum computer lets you perform computation on a superposed state while maintaining that superposition. But that only helps if the structure of the problem lets you somehow “cancel out” the incorrect results leaving you with the single correct one. [There’s hope for the world! –SA]

A breakthrough on QMA(2)?

Friday, October 30th, 2015

Last night, Martin Schwarz posted a preprint to the arXiv that claims to show the new complexity class containment QMA(2) ⊆ EXP.  (See also his brief blog post about this result.)  Here QMA(2) means Quantum Merlin-Arthur with two Merlins—i.e., the set of languages for which a “yes” answer can be witnessed by two unentangled quantum states, |ψ〉⊗|φ〉, on polynomially many qubits each, which are checked by a polynomial-time quantum algorithm—while EXP means deterministic exponential time.  Previously, the best upper bound we had was the trivial QMA(2) ⊆ NEXP (Nondeterministic Exponential Time), which comes from guessing exponential-size classical descriptions of the two quantum proofs.

Whether QMA(2) is contained in EXP is a problem that had fascinated me for a decade.  With Salman Beigi, Andy Drucker, Bill Fefferman, and Peter Shor, we discussed this problem in our 2008 paper The Power of Unentanglement.  That paper (with an additional ingredient supplied by Harrow and Montanaro) shows how to prove that a 3SAT instance of size n is satisfiable, using two unentangled quantum proofs with only Õ(√n) qubits each.  This implies that searching over all n-qubit unentangled proofs must take at least exp(n^2) time, unless 3SAT is solvable in 2^o(n) time (i.e., unless the Exponential Time Hypothesis is false).  However, since EXP is defined as the set of problems solvable in 2^p(n) time, for any polynomial p, this is no barrier to QMA(2) ⊆ EXP being true—it merely constrains the possible techniques that could prove such a containment.

In trying to prove QMA(2) ⊆ EXP, the fundamental difficulty is that you need to optimize over unentangled quantum states only.  That might sound easier than optimizing over all states (including the entangled ones), but ironically, it’s harder!  The reason why it’s harder is that optimizing over all quantum states (say, to find the one that’s accepted by some measurement with the maximum probability) is a convex optimization problem: in fact, it boils down to finding the principal eigenvector of a Hermitian matrix.  By contrast, optimizing over only the separable states is a non-convex optimization problem, which is NP-hard to solve exactly (treating the dimension of the Hilbert space as the input size n)—meaning that the question shifts to what sorts of approximations are possible.

Last week, I had the pleasure of speaking with Martin in person, when I visited Vienna, Austria to give a public lecture at the wonderful new research institute IST.  Martin was then ironing out some final wrinkles in his proof, and I got to watch him in action—in particular, to see the care and detachment with which he examined the possibility that his proof might imply too much (e.g., that NP-complete problems are solvable in quasipolynomial time).  Fortunately, his proof turned out not to imply anything of the kind.

The reason why it didn’t is directly related to the most striking feature of Martin’s proof—namely, that it’s non-relativizing, leaving completely open the question of whether QMA(2)^A ⊆ EXP^A relative to all oracles A.  To explain how this is possible requires saying a bit about how the proof works.  The obvious way to prove QMA(2) ⊆ EXP—what I had assumed from the beginning was the only realistic way—would be to give a quasipolynomial-time approximation algorithm for the so-called Best Separable State or BSS problem.  The BSS problem, as defined in this paper by Russell Impagliazzo, Dana Moshkovitz, and myself (see also this one by Barak et al.), is as follows: you’re given as input an n^2×n^2 Hermitian matrix A, with all its eigenvalues in [0,1].  Your goal is to find length-n unit vectors, u and w, that maximize

(u^T ⊗ w^T) A (u ⊗ w),

to within an additive error of ±ε, for some constant ε.

Of course, if we just asked for a length-n^2 unit vector v that maximized v^T A v, we’d be asking for the principal eigenvector of A, which is easy to find in polynomial time.  By contrast, from the ABDFS and Harrow-Montanaro results, it follows that the BSS problem, for constant ε, cannot be solved in poly(n) time, unless 3SAT is solvable in 2^o(n) time.  But this still leaves the possibility that BSS is solvable in n^log(n) time—and that possibility would immediately imply QMA(2) ⊆ EXP.  So, as I and others saw it, the real challenge here was to find a quasipolynomial-time approximation algorithm for BSS—something that remained elusive, although Brandao-Christandl-Yard made partial progress towards it.
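
To make the convex/non-convex contrast concrete, here is a toy numpy comparison: the principal-eigenvector computation that solves the unrestricted problem, next to a simple alternating-maximization heuristic for the product-state (BSS) version.  The heuristic is my own illustration; it only finds a local optimum, comes with no approximation guarantee, and has nothing to do with the techniques in Martin's paper:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 6
A = rng.standard_normal((n * n, n * n))
A = (A + A.T) / 2                        # a symmetric stand-in for the BSS input matrix
T = A.reshape(n, n, n, n)                # T[i, k, j, l]: row index i*n + k, column index j*n + l

# Unrestricted (convex) problem: max over all unit v of v^T A v = the largest eigenvalue of A.
global_opt = np.linalg.eigvalsh(A)[-1]

# Separable (non-convex) problem: max over unit u, w of (u x w)^T A (u x w), attacked by
# alternating between the best u for fixed w and the best w for fixed u (each an eigenproblem).
u = rng.standard_normal(n); u /= np.linalg.norm(u)
w = rng.standard_normal(n); w /= np.linalg.norm(w)
for _ in range(100):
    M = np.einsum('ikjl,k,l->ij', T, w, w)      # quadratic form in u once w is fixed
    u = np.linalg.eigh((M + M.T) / 2)[1][:, -1]
    N = np.einsum('ikjl,i,j->kl', T, u, u)      # quadratic form in w once u is fixed
    w = np.linalg.eigh((N + N.T) / 2)[1][:, -1]

separable_val = np.kron(u, w) @ A @ np.kron(u, w)
print(separable_val, "<=", global_opt)          # the product-state value never exceeds the global one
```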

But now Martin comes along, and proves QMA(2) ⊆ EXP in a way that sidesteps the BSS problem.  The way he does it is by using the fact that, if a problem is in QMA(2), then we don’t merely know a Hermitian operator A corresponding to the measurement of |ψ〉⊗|φ〉: rather, we know an actual polynomial-size sequence of quantum gates that get multiplied together to produce A.  Using that fact, Chailloux and Sattath showed that a natural variant of the QMA-complete Local Hamiltonians problem, which they call Separable Sparse Hamiltonians, is complete for QMA(2).  Thus, it suffices for Martin to show how to solve the Separable Sparse Hamiltonians problem in singly-exponential time.  This he does by using perturbation theory gadgets to reduce Separable Sparse Hamiltonians to Separable Local Hamiltonians with an exponentially-small promise gap, and then using a result of Shi and Wu to solve the latter problem in singly-exponential time.  All in all, given the significance of the advance, Martin’s paper is remarkably short; a surprising amount of it boils down to deeply understanding some not-especially-well-known results that were already in the literature.

One obvious problem left open is whether the full BSS problem—rather than just the special case of it corresponding to QMA(2)—is solvable in quasipolynomial time after all.  A second obvious problem is whether the containment QMA(2) ⊆ EXP can be improved to QMA(2) ⊆ PSPACE, or even (say) QMA(2) ⊆ PP.  (By comparison, note that QMA ⊆ PP, by a result of Kitaev and Watrous.)


Update (Nov. 10): I thought I should let people know that a serious concern has been raised by an expert about the correctness of the proof—and in particular, about the use of perturbation theory gadgets. Martin tells me that he’s working on a fix, and I very much hope he’ll succeed, but not much to do for now except let the scientific process trundle along (which doesn’t happen at blog-speed).

Six announcements

Monday, September 21st, 2015
  1. I did a podcast interview with Julia Galef for her series “Rationally Speaking.”  See also here for the transcript (which I read rather than having to listen to myself stutter).  The interview is all about Aumann’s Theorem, and whether rational people can agree to disagree.  It covers a lot of the same ground as my recent post on the same topic, except with less technical detail about agreement theory and more … well, agreement.  At Julia’s suggestion, we’re planning to do a follow-up podcast about the particular intractability of online disagreements.  I feel confident that we’ll solve that problem once and for all.  (Update: Also check out this YouTube video, where Julia offers additional thoughts about what we discussed.)
  2. When Julia asked me to recommend a book at the end of the interview, I picked probably my favorite contemporary novel: The Mind-Body Problem by Rebecca Newberger Goldstein.  Embarrassingly, I hadn’t realized that Rebecca had already been on Julia’s show twice as a guest!  Anyway, one of the thrills of my life over the last year has been to get to know Rebecca a little, as well as her husband, who’s some guy named Steve Pinker.  Like, they both live right here in Boston!  You can talk to them!  I was especially pleased two weeks ago to learn that Rebecca won the National Humanities Medal—as I told Julia, Rebecca Goldstein getting a medal at the White House is the sort of thing I imagine happening in my ideal fantasy world, making it a pleasant surprise that it happened in this one.  Huge congratulations to Rebecca!
  3. The NSA has released probably its most explicit public statement so far about its plans to move to quantum-resistant cryptography.  For more see Bruce Schneier’s Crypto-Gram.  Hat tip for this item goes to reader Ole Aamot, one of the only people I’ve ever encountered whose name alphabetically precedes mine.
  4. Last Tuesday, I got to hear Ayaan Hirsi Ali speak at MIT about her new book, Heretic, and then spend almost an hour talking to students who had come to argue with her.  I found her clear, articulate, and courageous (as I guess one has to be in her line of work, even with armed cops on either side of the lecture hall).  After Brandeis’s shameful decision to cave in to pressure and cancel Hirsi Ali’s commencement speech, I thought it spoke well of MIT that they let her speak at all.  The bar shouldn’t be that low, but it is.
  5. From far away on the political spectrum, I also heard Noam Chomsky talk last week (my first time hearing him live), about the current state of linguistics.  Much of the talk, it struck me, could have been given in the 1950s with essentially zero change (and I suspect Chomsky would agree), though a few parts of it were newer, such as the speculation that human languages have many of the features they do in order to minimize the amount of computation that the speaker needs to perform.  The talk was full of declarations that there had been no useful work whatsoever on various questions (e.g., about the evolutionary function of language), that they were total mysteries and would perhaps remain total mysteries forever.
  6. Many of you have surely heard by now that Terry Tao solved the Erdős Discrepancy Problem, by showing that for every infinite sequence of heads and tails and every positive integer C, there’s a positive integer k such that, if you look at the subsequence formed by every kth flip, there comes a point where the heads outnumber tails or vice versa by at least C.  This resolves a problem that’s been open for more than 80 years.  For more details, see this post by Timothy Gowers.  Notably, Tao’s proof builds, in part, on a recent Polymath collaborative online effort.  It was a big deal last year when Konev and Lisitsa used a SAT-solver to prove that there’s always a subsequence with discrepancy at least 3; Tao’s result now improves on that bound by ∞.  (For what “discrepancy” means concretely, see the little sketch right after this list.)
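
Here is the promised sketch of what "discrepancy" means for a finite stretch of the sequence: write heads and tails as +1 and -1, look at every arithmetic progression k, 2k, 3k, …, and record the largest imbalance among its partial sums.  (This brute-force check is just for intuition; it has nothing to do with Tao's proof, or with the Konev-Lisitsa SAT-solver computation.)

```python
def discrepancy(x):
    """x is a list of +-1 values; returns max over k, m of |x[k-1] + x[2k-1] + ... + x[mk-1]|."""
    n, best = len(x), 0
    for k in range(1, n + 1):
        partial = 0
        for i in range(k, n + 1, k):
            partial += x[i - 1]
            best = max(best, abs(partial))
    return best

# The alternating sequence -1, +1, -1, +1, ... keeps every odd-step subsequence balanced,
# but its even-step subsequences are constant, so its discrepancy grows linearly.
print(discrepancy([(-1) ** i for i in range(1, 41)]))   # 20
```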

Bell inequality violation finally done right

Tuesday, September 15th, 2015

A few weeks ago, Hensen et al., of the Delft University of Technology and Barcelona, Spain, put out a paper reporting the first experiment that violates the Bell inequality in a way that closes off the two main loopholes simultaneously: the locality and detection loopholes.  Well, at least with ~96% confidence.  This is big news, not only because of the result itself, but because of the advances in experimental technique needed to achieve it.  Last Friday, two renowned experimentalists—Chris Monroe of U. of Maryland and Jungsang Kim of Duke—visited MIT, and in addition to talking about their own exciting ion-trap work, they did a huge amount to help me understand the new Bell test experiment.  So OK, let me try to explain this.

While some people like to make it more complicated, the Bell inequality is the following statement. Alice and Bob are cooperating with each other to win a certain game (the “CHSH game“) with the highest possible probability. They can agree on a strategy and share information and particles in advance, but then they can’t communicate once the game starts. Alice gets a uniform random bit x, and Bob gets a uniform random bit y (independent of x).  Their goal is to output bits, a and b respectively, such that a XOR b = x AND y: in other words, such that a and b are different if and only if x and y are both 1.  The Bell inequality says that, in any universe that satisfies the property of local realism, no matter which strategy they use, Alice and Bob can win the game at most 75% of the time (for example, by always outputting a=b=0).

What does local realism mean?  It means that, after she receives her input x, any experiment Alice can perform in her lab has a definite result that might depend on x, on the state of her lab, and on whatever information she pre-shared with Bob, but at any rate, not on Bob’s input y.  If you like: a=a(x,w) is a function of x and of the information w available before the game started, but is not a function of y.  Likewise, b=b(y,w) is a function of y and w, but not of x.  Perhaps the best way to explain local realism is that it’s the thing you believe in, if you believe all the physicists babbling about “quantum entanglement” just missed something completely obvious.  Clearly, at the moment two “entangled” particles are created, but before they separate, one of them flips a tiny coin and then says to the other, “listen, if anyone asks, I’ll be spinning up and you’ll be spinning down.”  Then the naïve, doofus physicists measure one particle, find it spinning down, and wonder how the other particle instantly “knows” to be spinning up—oooh, spooky! mysterious!  Anyway, if that’s how you think it has to work, then you believe in local realism, and you must predict that Alice and Bob can win the CHSH game with probability at most 3/4.
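If you want to convince yourself of the 3/4 bound, here's a quick Python sketch (purely illustrative): a deterministic local strategy is just a lookup table a(x) for Alice and b(y) for Bob, there are only 16 such pairs, and shared randomness can only mix them—so it's enough to check that no pair wins more than 3 of the 4 possible inputs.

```python
from itertools import product

def chsh_win_prob(a, b):
    """Fraction of the four (x, y) input pairs on which a(x) XOR b(y) = x AND y."""
    wins = sum((a[x] ^ b[y]) == (x & y) for x in (0, 1) for y in (0, 1))
    return wins / 4

# Each deterministic local strategy is a pair of lookup tables (a[0], a[1]) and (b[0], b[1]).
best = max(chsh_win_prob(a, b)
           for a in product((0, 1), repeat=2)
           for b in product((0, 1), repeat=2))
print(best)   # 0.75 -- and shared randomness, being a mixture of these, can't do better
```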

What Bell observed in 1964 is that, even though quantum mechanics doesn’t let Alice send a signal to Bob (or vice versa) faster than the speed of light, it still makes a prediction about the CHSH game that conflicts with local realism.  (And thus, quantum mechanics exhibits what one might not have realized beforehand was even a logical possibility: it doesn’t allow communication faster than light, but simulating the predictions of quantum mechanics in a classical universe would require faster-than-light communication.)  In particular, if Alice and Bob share entangled qubits, say $$\frac{\left| 00 \right\rangle + \left| 11 \right\rangle}{\sqrt{2}},$$ then there’s a simple protocol that lets them violate the Bell inequality, winning the CHSH game ~85% of the time (with probability (1+1/√2)/2 > 3/4).  Starting in the 1970s, people did experiments that vindicated the prediction of quantum mechanics, and falsified local realism—or so the story goes.
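Here's the quantum counterpart of the sketch above (again purely illustrative, just to check the arithmetic): with the entangled state above and one standard choice of measurement angles, the winning probability comes out to (1+1/√2)/2 ≈ 0.854 on every input pair.

```python
import numpy as np

phi_plus = np.array([1, 0, 0, 1]) / np.sqrt(2)     # (|00> + |11>)/sqrt(2)

def basis_vec(theta, outcome):
    """Orthonormal measurement basis at angle theta: outcome 0 -> (cos, sin), 1 -> (-sin, cos)."""
    if outcome == 0:
        return np.array([np.cos(theta), np.sin(theta)])
    return np.array([-np.sin(theta), np.cos(theta)])

# One standard set of CHSH measurement angles that achieves the quantum optimum.
alice_angle = {0: 0.0, 1: np.pi / 4}
bob_angle   = {0: np.pi / 8, 1: -np.pi / 8}

total = 0.0
for x in (0, 1):
    for y in (0, 1):
        for a in (0, 1):
            for b in (0, 1):
                if (a ^ b) == (x & y):             # the winning condition
                    v = np.kron(basis_vec(alice_angle[x], a),
                                basis_vec(bob_angle[y], b))
                    total += abs(v @ phi_plus) ** 2
print(total / 4)   # averaged over x, y: (1 + 1/sqrt(2))/2 ~ 0.854
```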

The violation of the Bell inequality has a schizophrenic status in physics.  To many of the physicists I know, Nature’s violating the Bell inequality is so trivial and obvious that it’s barely even worth doing the experiment: if people had just understood and believed Bohr and Heisenberg back in 1925, there would’ve been no need for this whole tiresome discussion.  To others, however, the Bell inequality violation remains so unacceptable that some way must be found around it—from casting doubt on the experiments that have been done, to overthrowing basic presuppositions of science (e.g., our own “freedom” to generate random bits x and y to send to Alice and Bob respectively).

For several decades, there was a relatively conservative way out for local realist diehards, and that was to point to “loopholes”: imperfections in the existing experiments which meant that local realism was still theoretically compatible with the results, at least if one was willing to assume a sufficiently strange conspiracy.

Fine, you interject, but surely no one literally believed these little experimental imperfections would be the thing that would rescue local realism?  Not so fast.  Right here, on this blog, I’ve had people point to the loopholes as a reason to accept local realism and reject the reality of quantum entanglement.  See, for example, the numerous comments by Teresa Mendes in my Whether Or Not God Plays Dice, I Do post.  Arguing with Mendes back in 2012, I predicted that the two main loopholes would both be closed in a single experiment—and not merely eventually, but in, like, a decade.  I was wrong: achieving this milestone took only a few years.

Before going further, let’s understand what the two main loopholes are (or rather, were).

The locality loophole arises because the measuring process takes time and Alice and Bob are not infinitely far apart.  Thus, suppose that, the instant Alice starts measuring her particle, a secret signal starts flying toward Bob’s particle at the speed of light, revealing her choice of measurement setting (i.e., the value of x).  Likewise, the instant Bob starts measuring his particle, his doing so sends a secret signal flying toward Alice’s particle, revealing the value of y.  By the time the measurements are finished, a few microseconds later, there’s been plenty of time for the two particles to coordinate their responses to the measurements, despite being “classical under the hood.”

Meanwhile, the detection loophole arises because in practice, measurements of entangled particles—especially of photons—don’t always succeed in finding the particles, let alone ascertaining their properties.  So one needs to select those runs of the experiment where Alice and Bob both find the particles, and discard all the “bad” runs where they don’t.  This by itself wouldn’t be a problem, if not for the fact that the very same measurement that reveals whether the particles are there, is also the one that “counts” (i.e., where Alice and Bob feed x and y and get out a and b)!

To someone with a conspiratorial mind, this opens up the possibility that the measurement’s success or failure is somehow correlated with its result, in a way that could violate the Bell inequality despite there being no real entanglement.  To illustrate, suppose that at the instant they’re created, one entangled particle says to the other: “listen, if Alice measures me in the x=0 basis, I’ll give the a=1 result.  If Bob measures you in the y=1 basis, you give the b=1 result.  In any other case, we’ll just evade detection and count this run as a loss.”  In such a case, Alice and Bob will win the game with certainty, whenever it gets played at all—but that’s only because of the particles’ freedom to choose which rounds will count.  Indeed, by randomly varying their “acceptable” x and y values from one round to the next, the particles can even make it look like x and y have no effect on the probability of a round’s succeeding.
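To see how much mischief that freedom allows, here's a toy simulation (purely classical, and of course not a model of any real experiment): the "particles" refuse to be detected except on the one input pair they pre-agreed to answer, and the post-selected runs then win the CHSH game 100% of the time, while the detection rate looks innocently independent of x and y.

```python
import random

def conspiratorial_source():
    """A purely classical pair: at creation it picks which inputs it will 'accept'
    and pre-agrees on outputs that win CHSH for exactly those inputs."""
    xs, ys = random.randint(0, 1), random.randint(0, 1)
    a = random.randint(0, 1)
    b = a ^ (xs & ys)                 # guarantees a XOR b = x AND y on the accepted inputs
    return xs, ys, a, b

random.seed(0)
trials, detected, wins = 100_000, 0, 0
for _ in range(trials):
    x, y = random.randint(0, 1), random.randint(0, 1)   # the referee's inputs
    xs, ys, a, b = conspiratorial_source()
    if (x, y) != (xs, ys):
        continue                      # "no detection": the run is silently discarded
    detected += 1
    wins += ((a ^ b) == (x & y))

print(detected / trials)              # ~1/4, with no visible dependence on x or y
print(wins / detected)                # 1.0 on the post-selected runs, with zero entanglement
```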

Until a month ago, the state-of-the-art was that there were experiments that closed the locality loophole, and other experiments that closed the detection loophole, but there was no single experiment that closed both of them.

To close the locality loophole, “all you need” is a fast enough measurement on photons that are far enough apart.  That way, even if the vast Einsteinian conspiracy is trying to send signals between Alice’s and Bob’s particles at the speed of light, to coordinate the answers classically, the whole experiment will be done before the signals can possibly have reached their destinations.  Admittedly, as Nicolas Gisin once pointed out to me, there’s a philosophical difficulty in defining what we mean by the experiment being “done.”  To some purists, a Bell experiment might only be “done” once the results (i.e., the values of a and b) are registered in human experimenters’ brains!  And given the slowness of human reaction times, this might imply that a real Bell experiment ought to be carried out with astronauts on faraway space stations, or with Alice on the moon and Bob on earth (which, OK, would be cool).  If we’re being reasonable, however, we can grant that the experiment is “done” once a and b are safely recorded in classical, macroscopic computer memories—in which case, given the speed of modern computer memories, separating Alice and Bob by half a kilometer can be enough.  And indeed, experiments starting in 1998 (see for example here) have done exactly that; the current record, unless I’m mistaken, is 18 kilometers.  (Update: I was mistaken; it’s 144 kilometers.)  Alas, since these experiments used hard-to-measure photons, they were still open to the detection loophole.

To close the detection loophole, the simplest approach is to use entangled qubits that (unlike photons) are slow and heavy and can be measured with success probability approaching 1.  That’s exactly what various groups did starting in 2001 (see for example here), with trapped ions, superconducting qubits, and other systems.  Alas, given current technology, these sorts of qubits are virtually impossible to move miles apart from each other without decohering them.  So the experiments used qubits that were close together, leaving the locality loophole wide open.

So the problem boils down to: how do you create long-lasting, reliably-measurable entanglement between particles that are very far apart (e.g., in separate labs)?  There are three basic ideas in Hensen et al.’s solution to this problem.

The first idea is to use a hybrid system.  Ultimately, Hensen et al. create entanglement between electron spins in nitrogen vacancy centers in diamond (one of the hottest—or coolest?—experimental quantum information platforms today), in two labs that are about a mile away from each other.  To get these faraway electron spins to talk to each other, they make them communicate via photons.  If you stimulate an electron, it’ll sometimes emit a photon with which it’s entangled.  Very occasionally, the two electrons you care about will even emit photons at the same time.  In those cases, by routing those photons into optical fibers and then measuring the photons, it’s possible to entangle the electrons.

Wait, what?  How does measuring the photons entangle the electrons from whence they came?  This brings us to the second idea, entanglement swapping.  The latter is a famous procedure to create entanglement between two particles A and B that have never interacted, by “merely” entangling A with another particle A’, entangling B with another particle B’, and then performing an entangled measurement on A’ and B’ and conditioning on its result.  To illustrate, consider the state

$$ \frac{\left| 00 \right\rangle + \left| 11 \right\rangle}{\sqrt{2}} \otimes \frac{\left| 00 \right\rangle + \left| 11 \right\rangle}{\sqrt{2}} $$

and now imagine that we project the first and third qubits onto the state $$\frac{\left| 00 \right\rangle + \left| 11 \right\rangle}{\sqrt{2}}.$$

If the measurement succeeds, you can check that we’ll be left with the state $$\frac{\left| 00 \right\rangle + \left| 11 \right\rangle}{\sqrt{2}}$$ in the second and fourth qubits, even though those qubits were not entangled before.
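Here's a short numpy sketch (just to verify the algebra): build the two Bell pairs, contract qubits 1 and 3 with the Bell state, and check that the surviving branch leaves qubits 2 and 4 Bell-entangled.

```python
import numpy as np

bell = np.array([1, 0, 0, 1]) / np.sqrt(2)         # (|00> + |11>)/sqrt(2), real amplitudes

# Qubits 1,2 share one Bell pair and qubits 3,4 share another; axis order (q1, q2, q3, q4).
state = np.kron(bell, bell).reshape(2, 2, 2, 2)

proj = bell.reshape(2, 2)                          # <Bell| on qubits 1 and 3 (real, so no conjugate needed)
post = np.einsum('ac,abcd->bd', proj, state)       # remaining axes: (q2, q4)

prob = np.sum(np.abs(post) ** 2)                   # probability that this projection outcome occurs
post = post / np.sqrt(prob)

print(prob)                                        # 0.25
print(np.allclose(post.reshape(4), bell))          # True: qubits 2 and 4 are now Bell-paired
```

This particular outcome occurs with probability 1/4; the other three outcomes of a full Bell measurement on qubits 1 and 3 leave qubits 2 and 4 in one of the other Bell states, which is just as good once you condition on (or correct for) the measurement result.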

So to recap: these two electron spins, in labs a mile away from each other, both have some probability of producing a photon.  The photons, if produced, are routed to a third site, where if they’re both there, then an entangled measurement on both of them (and a conditioning on the results of that measurement) has some nonzero probability of causing the original electron spins to become entangled.

But there’s a problem: if you’ve been paying attention, all we’ve done is cause the electron spins to become entangled with some tiny, nonzero probability (something like 6.4×10-9 in the actual experiment).  So then, why is this any improvement over the previous experiments, which just directly measured faraway entangled photons, and also had some small but nonzero probability of detecting them?

This leads to the third idea.  The new setup is an improvement because, whenever the photon measurement succeeds, we know that the electron spins are there and that they’re entangled, without having to measure the electron spins to tell us that.  In other words, we’ve decoupled the measurement that tells us whether we succeeded in creating an entangled pair, from the measurement that uses the entangled pair to violate the Bell inequality.  And because of that decoupling, we can now just condition on the runs of the experiment where the entangled pair was there, without worrying that that will open up the detection loophole, biasing the results via some bizarre correlated conspiracy.  It’s as if the whole experiment were simply switched off, except for those rare lucky occasions when an entangled spin pair gets created (with its creation heralded by the photons).  On those rare occasions, Alice and Bob swing into action, measuring their respective spins within the brief window of time—about 4 microseconds—allowed by the locality loophole, seeking an additional morsel of evidence that entanglement is real.  (Well, actually, Alice and Bob swing into action regardless; they only find out later whether this was one of the runs that “counted.”)
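As a sanity check on that number (illustrative figures only—the post above says "about a mile," and the separation reported by Hensen et al. is roughly 1.3 km): light takes a bit over 4 microseconds to cross between the labs, so both spin measurements have to be completed within about that window.

```python
# Back-of-the-envelope check of the locality window (rough, illustrative numbers).
c = 299_792_458.0          # speed of light, m/s
separation_m = 1_300.0     # approximate distance between the two labs
print(separation_m / c * 1e6, "microseconds")   # ~4.3 us for light to cross
```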

So, those are the main ideas (as well as I understand them); then there’s lots of engineering.  In their setup, Hensen et al. were able to create just a few heralded entangled pairs per hour.  This allowed them to produce 245 CHSH games for Alice and Bob to play, and to reject the hypothesis of local realism at ~96% confidence.  Jungsang Kim explained to me that existing technologies could have produced many more events per hour, and hence, in a similar amount of time, “particle physics” (5σ or more) rather than “psychology” (2σ) levels of confidence that local realism is false.  But in this type of experiment, everything is a tradeoff.  Building not one but two labs for manipulating NV centers in diamond is extremely onerous, and Hensen et al. did what they had to do to get a significant result.
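For intuition about how 245 games translate into ~96% confidence, here's a deliberately crude sketch (not the statistical analysis Hensen et al. actually perform—their test doesn't assume independent, identically distributed trials): just compute the binomial tail probability of local realists, capped at a 3/4 win rate per game, doing at least as well as some hypothetical observed win count.

```python
from math import comb

def p_value(n_games, n_wins, p_win=0.75):
    """One-sided binomial tail: probability of at least n_wins wins out of n_games,
    if each game were won independently with probability p_win."""
    return sum(comb(n_games, k) * p_win**k * (1 - p_win)**(n_games - k)
               for k in range(n_wins, n_games + 1))

# Hypothetical illustration (not the paper's reported count): winning 196 of 245 games
# is a win rate of about 0.80, which a local-realist cap of 0.75 struggles to explain.
print(p_value(245, 196))   # comes out to a few percent
```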

The basic idea here, of using photons to entangle longer-lasting qubits, is useful for more than pulverizing local realism.  In particular, the idea is a major part of current proposals for how to build a scalable ion-trap quantum computer.  Because of cross-talk, you can’t feasibly put more than 10 or so ions in the same trap while keeping all of them coherent and controllable.  So the current ideas for scaling up involve having lots of separate traps—but in that case, one will sometimes need to perform a Controlled-NOT, or some other 2-qubit gate, between a qubit in one trap and a qubit in another.  This can be achieved using the Gottesman-Chuang technique of gate teleportation, provided you have reliable entanglement between the traps.  But how do you create such entanglement?  Aha: the current idea is to entangle the ions by using photons as intermediaries, very similar in spirit to what Hensen et al. do.
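Here's a self-contained numpy sketch of the gate-teleportation step (a toy simulation, not anyone's actual control software, and with the bookkeeping arranged a bit differently than in Gottesman and Chuang's original presentation): a CNOT between a qubit in trap 1 and a qubit in trap 2, consuming one shared Bell pair and two classical bits of communication.

```python
import numpy as np

rng = np.random.default_rng(1)

X = np.array([[0, 1], [1, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)
H = np.array([[1, 1], [1, -1]], dtype=complex) / np.sqrt(2)
CNOT = np.array([[1, 0, 0, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1],
                 [0, 0, 1, 0]], dtype=complex)

def apply(state, gate, targets, n=4):
    """Apply a 1- or 2-qubit gate to the listed qubit positions of an n-qubit state vector."""
    k = len(targets)
    psi = np.moveaxis(state.reshape([2] * n), targets, list(range(k)))
    psi = (gate @ psi.reshape(2 ** k, -1)).reshape([2] * n)
    return np.moveaxis(psi, list(range(k)), targets).reshape(-1)

def measure_z(state, qubit, n=4):
    """Projective Z-basis measurement of one qubit; returns (outcome, collapsed state)."""
    psi = np.moveaxis(state.reshape([2] * n), qubit, 0).reshape(2, -1).copy()
    p0 = np.sum(np.abs(psi[0]) ** 2)
    outcome = 0 if rng.random() < p0 else 1
    psi[1 - outcome] = 0.0
    psi /= np.linalg.norm(psi)
    return outcome, np.moveaxis(psi.reshape([2] * n), 0, qubit).reshape(-1)

def random_qubit():
    v = rng.normal(size=2) + 1j * rng.normal(size=2)
    return v / np.linalg.norm(v)

# Qubit layout: 0 = control (trap 1), 1 = e1 (trap 1), 2 = e2 (trap 2), 3 = target (trap 2).
control, target = random_qubit(), random_qubit()
bell = np.array([1, 0, 0, 1], dtype=complex) / np.sqrt(2)   # the pre-shared pair (e1, e2)
state = np.kron(np.kron(control, bell), target)

state = apply(state, CNOT, [0, 1])       # local CNOT inside trap 1: control -> e1
m1, state = measure_z(state, 1)          # measure e1; send the classical bit to trap 2
if m1:
    state = apply(state, X, [2])         # correction on e2
state = apply(state, CNOT, [2, 3])       # local CNOT inside trap 2: e2 -> target
state = apply(state, H, [2])             # X-basis measurement of e2 ...
m2, state = measure_z(state, 2)          # ... = Hadamard, then a Z measurement
if m2:
    state = apply(state, Z, [0])         # correction back on the control in trap 1

# The control and target should now be related by a CNOT, as if they'd sat in one trap.
expected = CNOT @ np.kron(control, target)
final_ct = state.reshape(2, 2, 2, 2)[:, m1, m2, :].reshape(-1)
print(np.allclose(final_ct, expected))   # True
```

The point, for scaling, is that the only inter-trap resource consumed is the heralded Bell pair plus two classical bits—exactly the resource that the photon-mediated entangling scheme is designed to supply.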

At a more fundamental level, will this experiment finally convince everyone that local realism is dead, and that quantum mechanics might indeed be the operating system of reality?  Alas, I predict that those who confidently predicted that a loophole-free Bell test could never be done, will simply find some new way to wiggle out, without admitting the slightest problem for their previous view.  This prediction, you might say, is based on a different kind of realism.

D-Wave Open Thread

Wednesday, August 26th, 2015

A bunch of people have asked me to comment on D-Wave's release of its 1000-qubit processor, and a paper by a group including Cathy McGeoch saying that the machine is 1 or 2 orders of magnitude faster (in annealing time, not wall-clock time) than simulated annealing running on a single-core classical computer.  It's even been suggested that the "Scott-signal" has been shining brightly for a week above Quantham City, but that Scott-man has been too lazy and out-of-shape even to change into his tights.

Scientifically, it’s not clear if much has changed.  D-Wave now has a chip with twice as many qubits as the last one.  That chip continues to be pretty effective at finding its own low-energy states: indeed, depending on various details of definition, the machine can even find its own low-energy states “faster” than some implementation of simulated annealing running on a single-core chip.  Of course, it’s entirely possible that Matthias Troyer or Sergei Isakov or Troels Ronnow or someone like that will be able to find a better implementation of simulated annealing that closes even the modest gap—as happened the last time—but I’ll give the authors the benefit of the doubt that they put good-faith effort into optimizing the classical code.

More importantly, I’d say it remains unclear whether any of the machine’s performance on the instances tested here can be attributed to quantum tunneling effects.  In fact, the paper explicitly states (see page 3) that it’s not going to consider such questions, and I think the authors would agree that you could very well see results like theirs, even if what was going on was fundamentally classical annealing.  Also, of course, it’s still true that, if you wanted to solve a practical optimization problem, you’d first need to encode it into the Chimera graph, and that reduction entails a loss that could hand a decisive advantage to simulated annealing, even without the need to go to multiple cores.  (This is what I’ve described elsewhere as essentially all of these performance comparisons taking place on “the D-Wave machine’s home turf”: that is, on binary constraint satisfaction problems that have precisely the topology of D-Wave’s Chimera graph.)

But, I dunno, I’m just not feeling the urge to analyze this in more detail.  Part of the reason is that I think the press might be getting less hyper-excitable these days, thereby reducing the need for a Chief D-Wave Skeptic.  By this point, there may have been enough D-Wave announcements that papers realize they no longer need to cover each one like an extraterrestrial landing.  And there are more hats in the ring now, with John Martinis at Google seeking to build superconducting quantum annealing machines but with ~10,000x longer coherence times than D-Wave’s, and with IBM Research and some others also trying to scale superconducting QC.  The realization has set in, I think, that both D-Wave and the others are in this for the long haul, with D-Wave currently having lots of qubits, but with very short coherence times and unclear prospects for any quantum speedup, and Martinis and some others having qubits of far higher quality, but not yet able to couple enough of them.

The other issue is that, on my flight from Seoul back to Newark, I watched two recent kids’ movies that were almost defiant in their simple, unironic, 1950s-style messages of hope and optimism.  One was Disney’s new live-action Cinderella; the other was Brad Bird’s Tomorrowland.  And seeing these back-to-back filled me with such positivity and good will that, at least for these few hours, it’s hard to summon my usual crusty self.  I say, let’s invent the future together, and build flying cars and jetpacks in our garages!  Let a thousand creative ideas bloom for how to tackle climate change and the other crises facing civilization!  (Admittedly, mass-market flying cars and jetpacks are probably not a step forward on climate change … but, see, there’s that negativity coming back.)  And let another thousand ideas bloom for how to build scalable quantum computers—sure, including D-Wave’s!  Have courage and be kind!

So yeah, if readers would like to discuss the recent D-Wave paper further (especially those who know something about it), they’re more than welcome to do so in the comments section.  But I’ve been away from Dana and Lily for two weeks, and will endeavor to spend time with them rather than obsessively reloading the comments (let’s see if I succeed).

As a small token of my goodwill, I enclose two photos from my last visit to a D-Wave machine, which occurred when I met with some grad students in Waterloo this past spring.  As you can see, I even personally certified that the machine was operating as expected.  But more than that: surpassing all reasonable expectations for quantum AI, this model could actually converse intelligently, through a protruding head resembling that of IQC grad student Sarah Kaiser.
