Archive for the ‘Complexity’ Category

The Collision Lower Bound After 12 Years

Sunday, July 7th, 2013

Streaming video is now available for the talks at the QStart conference, a couple weeks ago at Hebrew University in Jerusalem.  If you’re the sort of person who likes watching quantum information talks, then check out the excellent ones by Ray Laflamme, John Martinis, Umesh Vazirani, Thomas Vidick, Jacob Bekenstein, and many others.

My own contribution—the first “backwards-facing, crusty, retrospective” talk I’ve ever given—was called The Collision Lower Bound After 12 Years (click here for the slides—and to answer the inevitable question, no, I have no idea how to open PowerPoint files in your favorite free-range, organic computing platform).  Briefly, the collision lower bound is the theorem that even a quantum computer needs at least ~n^{1/3} steps to find a duplicate in a long list of random numbers between 1 and n, even assuming the list is long enough that there are many, many duplicates to be found.  (Moreover, ~n^{1/3} steps are known to suffice, by the BHT algorithm, a clever adaptation of Grover’s search algorithm.  Also, for simplicity a “step” means a single access to the list, though of course a quantum algorithm can access multiple list elements in superposition and it still counts as one step.)

By comparison, for classical algorithms, ~√n steps are necessary and sufficient to find a collision, by the famous Birthday Paradox.  So, just like for Grover’s search problem, a quantum computer could give you a modest speedup over classical for the collision problem, but only a modest one.  The reason this is interesting is that, because of the abundance of collisions to be found, the collision problem has a great deal more structure than Grover’s search problem (though it has less structure than Shor’s period-finding problem, where there famously is an exponential quantum speedup).
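For concreteness, here’s a minimal Python sketch of the classical birthday method (an illustration only—the function names and setup are invented for this post, not anything from the talk): build a random 2-to-1 list, then query uniformly random positions until two distinct positions hold the same value.

    import math
    import random

    def random_two_to_one_list(n):
        """Return a length-n list in which every value appears exactly twice
        (the 'plenty of duplicates' regime of the collision problem)."""
        assert n % 2 == 0
        values = list(range(1, n // 2 + 1)) * 2
        random.shuffle(values)
        return values

    def birthday_collision(lst):
        """Classical birthday attack: query random positions until two distinct
        positions hold the same value.  On a 2-to-1 list this takes ~sqrt(n)
        queries in expectation."""
        seen = {}                      # value -> first position where it appeared
        queries = 0
        while True:
            i = random.randrange(len(lst))
            queries += 1               # one "step" = one access to the list
            v = lst[i]
            if v in seen and seen[v] != i:
                return seen[v], i, queries
            seen[v] = i

    if __name__ == "__main__":
        n = 10**6
        lst = random_two_to_one_list(n)
        i, j, q = birthday_collision(lst)
        print(f"duplicate value {lst[i]} at positions {i} and {j} "
              f"after {q} queries (sqrt(n) is about {int(math.sqrt(n))})")

The BHT algorithm beats this quadratically by querying ~n^{1/3} positions classically and then Grover-searching the rest of the list for a partner of one of the stored values, which is where the ~n^{1/3} total comes from.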

One “obvious” motivation for the collision problem is that it models the problem of breaking collision-resistant hash functions (like SHA-256) in cryptography.  In particular, if there were a superfast (e.g., log(n)-time) quantum algorithm for the collision problem, then there could be no CRHFs secure against quantum attack.  So the fact that there’s no such algorithm at least opens up the possibility of quantum-secure CRHFs.  However, there are many other motivations.  For example, the collision lower bound rules out the most “simpleminded” approach to a polynomial-time quantum algorithm for the Graph Isomorphism problem (though, I hasten to add, it says nothing about more sophisticated approaches).  The collision problem is also closely related to Statistical Zero Knowledge (SZK) proof protocols, so that the collision lower bound leads to an oracle relative to which SZK is not in BQP.

Probably the most bizarre motivation to other people, but for some reason the most important one to me back in 2001, is that the collision problem is closely related to the problem of sampling the entire trajectories of hidden variables, in hidden-variable theories such as Bohmian mechanics.  The collision lower bound provides strong evidence that this trajectory-sampling problem is hard even for a quantum computer—intuitively because a QC can’t keep track of the correlations between the hidden-variable positions at different times.  The way I like to put it is that if, at the moment of your death, your entire life history flashed before you in an instant (and if a suitable hidden-variable theory were true, and if you’d performed an appropriate quantum interference experiment on your own brain during your life), then you really could solve the collision problem in only O(1) steps.  Interestingly, you still might not be able to solve NP-complete problems—I don’t know!  But you could at least do something that we think is hard for a quantum computer.

I proved the first collision lower bound in 2001 (actually, a week or so after the 9/11 attacks), after four months of sleepless nights and failed attempts.  (Well actually, I only got the weaker lower bound of ~n^{1/5}; the ~n^{1/3} was a subsequent improvement due to Yaoyun Shi.  Before ~n^{1/5}, no one could even rule out that a quantum computer could solve the collision problem with a constant number of steps (!!), independent of n—say, 4 steps.)  It was the first thing I’d proved of any significance, and probably the most important thing I did while in grad school.  I knew it was one of the favorite problems of my adviser, Umesh Vazirani, so I didn’t even tell Umesh I was working on it until I’d already spent the whole summer on it.  I figured he’d think I was nuts.


Bonus Proof Explanation!

The technique that ultimately worked was the polynomial method, which was introduced to quantum computing four years prior in a seminal paper of Beals et al.  In this technique, you first suppose by contradiction that a quantum algorithm exists to solve your problem that makes very few accesses to the input bits—say, T.  Then you write out the quantum algorithm’s acceptance probability (e.g., the probability that the algorithm outputs “yes, I found what I was looking for”) as a multivariate polynomial p in the input bits.  It’s not hard to prove that p has degree at most 2T, since the amplitudes in the quantum algorithm can be written as degree-T polynomials (each input access increases the degree by at most 1, and unitary transformations in between input accesses don’t increase the degree at all); then squaring the amplitudes to get probabilities doubles the degree.  (This is the only part of the method that uses anything specific to quantum mechanics!)
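In symbols (a condensed restatement of the standard bookkeeping, using one standard Boolean encoding of the input—nothing here goes beyond what’s said above):

    % Polynomial-method bookkeeping (sketch).  Encode the list X via Boolean
    % variables x_{i,j} = 1 iff the i-th entry equals j.  After t queries,
    % each amplitude is a polynomial of degree at most t in the x_{i,j}:
    % queries raise the degree by at most 1, and the unitaries in between
    % don't raise it at all.  Hence
    \[
       p(X) \;=\; \sum_{z \,\in\, \mathrm{accept}} \bigl|\alpha_z^{(T)}(X)\bigr|^{2}
       \qquad \text{has degree at most } 2T,
    \]
    % since squaring the amplitudes doubles the degree.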

Next, you choose some parameter k related to the problem of interest, and you let q(k) be the expectation of p(X) over all inputs X with the parameter equal to k.  For example, with the collision problem, it turns out that the “right” choice to make is to set k=1 if each number appears exactly once in your input list, k=2 if each number appears exactly twice, k=3 if each number appears exactly three times, and so on.  Then—here comes the “magic” part—you show that q(k) itself is a univariate polynomial in k, again of degree at most 2T.  This magical step is called “symmetrization”; it can be traced at least as far back as the famous 1969 book Perceptrons by Marvin Minsky and Seymour Papert.  In the case of the collision problem, I still have no explanation, 12 years later, for why symmetrization works: all I can say is that you do the calculation, and you cancel lots of things from both the numerator and the denominator, and what comes out at the end is a low-degree polynomial in k.  (It’s precisely because I would never have predicted such a “zany coincidence,” that I had to stumble around in the dark for 4 months before I finally discovered by chance that the polynomial method worked.)
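Written out (again just a paraphrase of the setup above), symmetrization averages p over all inputs with the given parameter:

    % Symmetrization: average the acceptance probability over all inputs in
    % which every value that appears, appears exactly k times.
    \[
       q(k) \;=\; \mathop{\mathbb{E}}_{X\ k\text{-to-one}}\bigl[\,p(X)\,\bigr],
    \]
    % and the "zany coincidence" is that q extends to a univariate polynomial
    % in k of degree at most 2T.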

Anyway, after applying symmetrization, you’re left with a low-degree univariate polynomial q with some very interesting properties: for example, you need 0≤q(k)≤1 for positive integers k, since then q(k) represents an averaged probability that your quantum algorithm does something.  You also need q(1) to be close to 0, since if k=1 then there are no collisions to be found, and you need q(2) to be close to 1, since if k=2 then there are lots of collisions and you’d like your algorithm to find one.  But now, you can appeal to a theorem of A. A. Markov from the 1890s, which implies that no low-degree polynomial exists with those properties!  Hence your original efficient quantum algorithm can’t have existed either: indeed, you get a quantitative lower bound (a tight one, if you’re careful) on the number of input accesses your algorithm must have made.  And that, modulo some nasty technicalities (e.g., what if k doesn’t evenly divide the size of your list?), is how the collision lower bound works.
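For reference, here is the inequality in question and the rough shape of how it enters—a schematic version only, since the real proof has to cope with q being constrained only at integer points, which is exactly where the nasty technicalities live:

    % A. A. Markov's inequality: if q is a real polynomial of degree d with
    % |q(x)| <= c throughout the interval [a, b], then its derivative obeys
    \[
       \max_{x \in [a,b]} \bigl|q'(x)\bigr| \;\le\; \frac{2\,c\,d^{2}}{\,b-a\,}.
    \]
    % Schematically: q stays in [0,1] (at the allowed integer values of k,
    % which range up to ~n), yet it must climb from q(1) ~ 0 to q(2) ~ 1,
    % so |q'| >= Omega(1) somewhere near k = 1.  Balancing the two
    % constraints forces d -- and hence the query count T -- to grow as a
    % power of n.

The careful version of this balancing act is what gave the original ~n^{1/5} bound and, with Shi’s refinements, the tight ~n^{1/3}.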


So, in the first half of my QStart talk, I explain the collision lower bound and its original motivations (and a little about the proof, but no more than what I said above).  Then in the second half, I survey lots of extensions and applications between 2002 and the present, as well as the many remaining open problems.  For example, I discuss the tight lower bound of Ambainis et al. for the “index erasure” problem, Belovs’s proof of the element distinctness lower bound using the adversary method, and my and Ambainis’s generalization of the collision lower bound to arbitrary symmetric problems.  I also talk about Mark Zhandry’s recent breakthrough (sorry, am I not allowed to use that word?) showing that the GGM construction of pseudorandom functions is secure against quantum adversaries, and how Zhandry’s result can be seen—in retrospect, anyway—as yet another application of the collision lower bound.

Probably of the most general interest, I discuss how Daniel Harlow and Patrick Hayden invoked the collision lower bound in their striking recent paper on the AMPS black hole “firewall” paradox.  In particular they argued that, in order to uncover the apparent violation of local quantum field theory at the heart of the paradox, an observer falling into a black hole would probably need to solve a QSZK-complete computational problem.  And of course, the collision lower bound furnishes our main piece of evidence that QSZK-complete problems really should require exponential time even for quantum computers.  So, Harlow and Hayden argue, the black hole would already have evaporated before the observer had even made a dent in the requisite computation.

Now, the Harlow-Hayden paper, and the AMPS paradox more generally, really deserve posts of their own—just as soon as I learn enough to decide what I think about them.  For now, I’ll simply say that, regardless of how convinced you are by Harlow and Hayden’s argument (and, a bit like with my free-will essay, it’s not clear how convinced the authors themselves are!), it’s one of the most ambitious syntheses of computational complexity and physics I’ve ever seen.  You can disagree with it, but to read the paper (or watch the talk, streaming video from Strings’2013 here) is to experience the thrill of seeing black hole physics related to complexity theory by authors who really know both.

(In my own talk on the collision lower bound, the short segment about Harlow-Hayden generated more questions and discussion than the rest of the talk combined—with me being challenged to defend their argument, even with Patrick Hayden right there in the audience!  I remarked later that that portion of the talk was itself a black hole for audience interest.)

In totally unrelated news, Quantum Computing Since Democritus made Scientific American’s list of best summer books!  I can’t think of a more appropriate honor, since if there’s any phrase that captures what QCSD is all about, “sizzling summer beach read” would be it.  Apparently there will even be an online poll soon, where y’all can go and vote for QCSD as your favorite.  Vote early and often, and from multiple IP addresses!

“Closer to Truth”

Wednesday, May 1st, 2013

Two years ago, when I attended the FQXi conference on a ship from Norway to Denmark, I (along with many other conference participants) was interviewed by Robert Lawrence Kuhn, who produces a late-night TV program called “Closer to Truth.”  I’m pleased to announce (hat tip: Sean Carroll) that four videos from my interview are finally available online:

  • Is the Universe a Computer? (like a politician, I steer the question toward “what kind of computer is the universe?,” then start talking about P vs. NP, quantum computing, and the holographic principle)

  • What Does Quantum Theory Mean? (here I mostly talk about the idea of computational intractability as a principle of physics)

  • Quantum Computing Mysteries (basics of quantum mechanics and quantum computing)

  • Setting Time Aright (about the differences between time and space, the P vs. PSPACE problem, and computing with closed timelike curves)

(No, I didn’t choose the titles!)

For regular readers of this blog, there’s probably nothing new in these videos, but for those who are “just tuning in,” they provide an extremely simple and concise introduction to what I care about and why.  I’m pretty happy with how they came out.

Once you’re finished with me (or maybe even before then…), click here for the full list of interviewees, which includes David Albert, Raphael Bousso, Sean Carroll, David Deutsch, Rebecca Goldstein, Seth Lloyd, Marvin Minsky, Roger Penrose, Lenny Susskind, Steven Weinberg, and many, many others who might be of interest to Shtetl-Optimized readers.

“So You Think Quantum Computing Is Bunk?”

Friday, April 12th, 2013

On Wednesday, I gave a fun talk with that title down the street at Microsoft Research New England.  Disappointingly, no one in the audience seemed to think quantum computing was bunk (or if they did, they didn’t speak up): I was basically preaching to the choir.  My PowerPoint slides are here.  There’s also a streaming video here, but watch it at your own risk—my stuttering and other nerdy mannerisms seemed particularly bad, at least in the short initial segment that I listened to.  I really need media training.  Anyway, thanks very much to Boaz Barak for inviting me.

Two P vs. NP updates (neither of them technical)

Tuesday, April 2nd, 2013

“Meme” courtesy of my brother David

First news item: it’s come to my attention that yesterday, an MIT professor abused his power over students for a cruel April Fools’ Day prank involving the P vs. NP problem.  His email to the students is below.

I assume most of you already heard the news that a Caltech grad student, April Felsen, announced a 400-page proof of P≠NP last week.  While I haven’t yet completely digested the argument, it’s already clear that Felsen (who I actually knew back when she was an MIT undergrad) has changed theoretical computer science forever, bringing in new tools from K-theory to higher topos theory to solve the biggest problem there was.

Alas, Felsen’s proof has the “short-term” effect of making the existing 6.045 seem badly outdated.  So, after long reflection, I’ve made a decision that not all of you are going to like, but that I believe is the right one intellectually.  I’ve decided to reorient the entire course to focus on Felsen’s result, starting with tomorrow’s lecture.

And further, I decided to rewrite Thursday’s midterm to focus almost entirely on this new material.  That means that, yes, you’re going to have THREE DAYS to learn at least the basics of algebraic topology and operator algebras, as used in Felsen’s proof.  To do that, you might need to drop everything else (including sleep, unfortunately), and this might prove to be the most strenuous and intense thing you’ve ever done.  But it will also be an experience that will enrich your minds and ennoble your souls, and that you’ll be proud to tell your grandchildren about.  And of course we’ll be there to help out.  So let’s get started!

All the best,
Scott


Second news item: many of you have probably heard that Lance Fortnow’s The Golden Ticket—the first popular book about the P vs. NP problem—is now out.  (The title refers to Roald Dahl’s Charlie and the Chocolate Factory, which involved a few chocolate bars that had coveted golden tickets inside the wrappers, along with millions of chocolate bars that didn’t.)  I read it last week, and I think it’s excellent: a book I’ll happily recommend to family and friends who want the gentlest introduction to complexity theory that exists.

Some context: for more than a decade, people have been telling me that I should write a popular book about P vs. NP, and I never did, and now Lance has.  So I’m delighted to say that reading Lance’s book quickly cured me of any regrets I might have felt.  For not only is The Golden Ticket a great book, but better yet, it’s not a book that I ever could’ve written.

Here’s why: every time I would have succumbed to the temptation to explain something too complicated for the world’s journalists, literary humanists, and pointy-haired bosses—something like relativization, or natural proofs, or arithmetization, or Shannon’s counting argument, or Ladner’s Theorem, or coNP, or the reasons to focus on polynomial time—every time, Lance somehow manages to resist the temptation, and to stick to cute stories, anecdotes, and practical applications.  This is really, truly a popular book: as Lance points out himself, in 162 pages of discussing the P vs. NP question, he never even formally defines P and NP!

But it goes beyond that: in the world of The Golden Ticket, P vs. NP is important because, if P=NP, then people could design more effective cancer therapies, solve more crimes, and better predict which baseball games would be closely-matched and exciting (yes, really).  P vs. NP is also important because it provides a unifying framework for understanding current technological trends, like massively-parallel computing, cloud computing, big data, and the Internet of things.  Meanwhile, quantum computing might or might not be possible in principle, but either way, it’s probably not that relevant because it won’t be practical for a long time.

In short, Lance has written precisely the book about P vs. NP that the interested layperson or IT professional wants and needs, and precisely the book that I couldn’t have written.  I would’ve lost patience by around page 20, and exclaimed:

“You want me to justify the P vs. NP problem by its relevance to baseball??  Why shouldn’t baseball have to justify itself by its relevance to P vs. NP?  Pshaw!  Begone from the house of study, you cretinous fools, and never return!”

My favorite aspect of The Golden Ticket was its carefully-researched treatment of the history of the P vs. NP problem in the 50s, 60s, and 70s, both in the West and in the Soviet Union (where it was called the “perebor” problem).  Even complexity theorists will learn countless tidbits—like how Leonid Levin was “discovered” at age 15, and how the powerful Sergey Yablonsky stalled Soviet perebor research by claiming to have solved the problem when he’d done nothing of the kind.  The historical chapter (Chapter 5) is alone worth the price of the book.

I have two quibbles.  First, throughout the book, Lance refers to a hypothetical world where P=NP as the “Beautiful World.”  I would’ve called that world the “Hideous World”!  For it’s a world where technical creativity is mostly worthless, and where the mathematical universe is boring, flat, and incomprehensibly comprehensible.  Here’s an analogy: suppose a video game turned out to have a bug that let you accumulate unlimited points just by holding down a certain button.  Would anyone call that game the “Beautiful Game”?

My second disagreement concerns quantum computing.  Overall, Lance gives an admirably-accurate summary, and I was happy to see him throw cold water on breathless predictions about QC and other quantum-information technologies finding practical applications in the near future.  However, I think he goes beyond the truth when he writes:

[W]e do not know how to create a significant amount of entanglement in more than a handful of quantum bits.  It might be some fundamental rule of nature that prevents significant entanglement for any reasonable length of time.  Or it could just be a tricky engineering problem.  We’ll have to let the physicists sort that out.

The thing is, physicists do know how to create entanglement among many thousands or even millions of qubits—for example, in condensed-matter systems like spin lattices, and in superconducting Josephson junctions.  The problem is “merely” that they don’t know how to control the entanglement in the precise ways needed for quantum computing.  But as with much quantum computing skepticism, the passage above doesn’t seem to grapple with just how hard it is to kill off scalable QC.  How do you cook up a theory that can account for the massively-entangled states that have already been demonstrated, but that doesn’t give you all of BQP?

But let me not harp on these minor points, since The Golden Ticket has so many pleasant features.  One of them is its corny humor: even in Lance’s fantasy world where a proof of P=NP has led to a cure for cancer, it still hasn’t led to a cure for the common cold.  Another nice feature is the book’s refreshing matter-of-factness: Lance makes it clear that he believes that

(a) P≠NP,
(b) the conjecture is provable but won’t be proven in the near future, and
(c) if we ever meet an advanced extraterrestrial civilization, they’ll also have asked the P vs. NP question or something similar to it.

Of course we can’t currently prove any of the above statements, just like we can’t prove the nonexistence of Bigfoot.  But Lance refuses to patronize his readers by pretending to harbor doubts that he quite reasonably doesn’t.

In summary, if you’re the sort of person who stops me in elevators to say that you like my blog even though you never actually understand anything in it, then stop reading Shtetl-Optimized right now and go read Lance’s book.  You’ll understand it and you’ll enjoy it.

And now it’s off to class, to apologize for my April Fools’ prank and to teach the Cook-Levin Theorem.

Quantum Computing Since Democritus: The Buzz Intensifies

Thursday, March 21st, 2013

Update (March 22): The Kindle edition of Quantum Computing Since Democritus is now available, for the low price of $15.40!  (Not factorial.)  Click here to get it from amazon.com, or here to get it from amazon.co.uk.  And let me know how it looks (I haven’t seen it yet).  Another Update: Just saw the Kindle edition, and the figures and formulas came out great!  It’s a product I stand behind with pride.

In the meantime, I regret to say that the marketing for this book is getting crasser and more exploitative by the day.

[images: lily-qcsd, lily-qcsd2]


It seems like wherever I go these days, all anyone wants to talk about is Quantum Computing Since Democritus—the sprawling new book by Scott Aaronson, published by Cambridge University Press and available for order now.  Among leading figures in quantum information science—many of them well-known to Shtetl-Optimized readers—the book is garnering the sort of hyperbolic praise that would make Shakespeare or Tolstoy blush:

“I laughed, I cried, I fell off my chair – and that was just reading the chapter on Computational Complexity.  Aaronson is a tornado of intellectual activity: he rips our brains from their intellectual foundations; twists them through a tour of physics, mathematics, computer science, and philosophy; stuffs them full of facts and theorems; tickles them until they cry ‘Uncle’; and then drops them, quivering, back into our skulls.  Aaronson raises deep questions of how the physical universe is put together and why it is put together the way it is.  While we read his lucid explanations we can believe – at least while we hold the book in our hands – that we understand the answers, too.” –Seth Lloyd

“Scott Aaronson has written a beautiful and highly original synthesis of what we know about some of the most fundamental questions in science: What is information? What does it mean to compute? What is the nature of mind and of free will?” –Michael Nielsen

“Not since Richard Feynman’s Lectures on Physics has there been a set of lecture notes as brilliant and as entertaining.  Aaronson leads the reader on a wild romp through the most important intellectual achievements in computing and physics, weaving these seemingly disparate fields into a captivating narrative for our modern age of information.  Aaronson wildly runs through the fields of physics and computers, showing us how they are connected, how to understand our computational universe, and what questions exist on the borders of these fields that we still don’t understand.   This book is a poem disguised as a set of lecture notes.  The lectures are on computing and physics, complexity theory and mathematical logic and quantum physics.  The poem is made up of proofs, jokes, stories, and revelations, synthesizing the two towering fields of computer science and physics into a coherent tapestry of sheer intellectual awesomeness.” –Dave Bacon

After months of overhearing people saying things like the above—in the halls of MIT, the checkout line at Trader Joe’s, the bathroom, anywhere—I finally had to ask in annoyance: “is all this buzz justified?  I mean, I’m sure the book is as deep, hilarious, and worldview-changing as everyone says it is.  But, after all, it’s based off lecture notes that have long been available for free on the web.  And Aaronson, being the magnanimous, open-access-loving saint that he is, has no plans to remove the online notes, even though he could really use the royalties from book sales to feed his growing family.  Nor does Cambridge University Press object to his principled decision.”

“No, you don’t understand,” they told me.  “Word on the street has it that the book is extensively updated for 2013—that it’s packed with new discussions of things like algebrization, lattice-based cryptography, the QIP=PSPACE theorem, the ‘quantum time travel controversy,’ BosonSampling, black-hole firewalls, and even the Australian models episode.  They say it took years of painstaking work, by Aaronson and his student Alex Arkhipov, to get the notes into book form: fixing mistakes, clarifying difficult points, smoothing out rough edges, all while leaving intact the original’s inimitable humor.  I even heard Aaronson reveals he’s changed his mind about certain things since 2006.  How could you not want such a labor of love on your bookshelf?”

Exasperated, I finally exclaimed: “But the book isn’t even out yet in North America!  Amazon.com says it won’t ship until April 30.”

“Sure,” one gas-station attendant replied to me, “but the secret is, it’s available now from Amazon.co.uk.  Personally, I couldn’t wait a month, so I ordered it shipped to me from across the pond.  But if you’re a less hardcore quantum complexity theory fan, and you live in North America, you can also preorder the book from Amazon.com, and they’ll send it to you when it arrives.”

Much as the hype still grated, I had to admit that I’d run out of counterarguments, so I looked into ordering a copy for myself.

Silvio and Shafi win Turing Award

Wednesday, March 13th, 2013

Today I break long radio silence to deliver some phenomenal news.  Two of the people who I eat lunch with every week—my MIT CSAIL colleagues Silvio Micali and Shafi Goldwasser—have won a well-deserved Turing Award, for their fundamental contributions to cryptography from the 1980s till today.  (I see that Lance just now beat me to a blog post about this.  Dammit, Lance!)

I won’t have to tell many readers of this blog that the names Goldwasser and Micali—or more often, the initials “G” and “M”—are as ubiquitous as Alice and Bob in modern cryptography, from the GGM construction of pseudorandom functions (discussed before on this blog), to the classic GMR paper that introduced the world to interactive proofs.  Besides that, Shafi and Silvio are known as two of the more opinionated and colorful characters of theoretical computer science—and as I learned last week, Silvio is also an awesome party host, who has perfect taste in sushi (as well as furniture and many other things).

I wish I could go on right now talking about Shafi and Silvio—and even more, that I could join the celebration that will happen at MIT this afternoon.  But I’m about to board a flight to LAX, to attend the 60th birthday symposium of longtime friend, extraordinary physicist, and sometime Shtetl-Optimized commenter John Preskill.  (I’ll also be bringing you coverage of that symposium, including slides from my talk there on hidden variables.)  So, leave your congratulations, etc. in the comments section, and I’ll see them when I land!

TCS+ online seminars

Tuesday, January 29th, 2013

Good news, everyone!  Anindya De, Oded Regev, and my postdoc Thomas Vidick are launching an online theoretical computer science seminar series called TCS+, modeled after the successful Q+ quantum information seminars run by Daniel Burgarth and Matt Leifer.  The inaugural TCS+ lecture will be on Wednesday Feb. 6, at noon Eastern Standard Time.  Ronald de Wolf, longtime friend both of this blog and of its author, will be speaking on Exponential Lower Bounds for Polytopes in Combinatorial Optimization, his STOC’2012 Best Paper with Samuel Fiorini, Serge Massar, Sebastian Pokutta and Hans Raj Tiwary.  This is the paper that used ideas originally from quantum communication complexity to solve a 20-year-old problem in classical optimization: namely, to rule out the possibility of proving P=NP by reducing the Traveling Salesman Problem to certain kinds of linear programs.  Ronald previously gave the talk at MIT, and it rocked.  See Thomas’s blog for details about how to watch.

“Quantum Information and the Brain”

Thursday, January 24th, 2013

A month and a half ago, I gave a 45-minute lecture / attempted standup act with the intentionally-nutty title above, for my invited talk at the wonderful NIPS (Neural Information Processing Systems) conference at Lake Tahoe.  Video of the talk is now available at VideoLectures.net.  That site also did a short written interview with me, where they asked about the “message” of my talk (which is unfortunately hard to summarize, though I tried!), as well as the Aaron Swartz case and various other things.  If you just want the PowerPoint slides from my talk, you can get those here.

Now, I could’ve just given my usual talk on quantum computing and complexity.  But besides increasing boredom with that talk, one reason for my unusual topic was that, when I sent in the abstract, I was under the mistaken impression that NIPS was at least half a “neuroscience” conference.  So, I felt a responsibility to address how quantum information science might intersect the study of the brain, even if the intersection ultimately turned out to be the empty set!  (As I say in the talk, the fact that people have speculated about connections between the two, and have sometimes been wrong but for interesting reasons, could easily give me 45 minutes’ worth of material.)

Anyway, it turned out that, while NIPS was founded by people interested in modeling the brain, these days it’s more of a straight machine learning conference.  Still, I hope the audience there at least found my talk an amusing appetizer to their hearty meal of kernels, sparsity, and Bayesian nonparametric regression.  I certainly learned a lot from them; while this was my first machine learning conference, I’ll try to make sure it isn’t my last.

(Incidentally, the full set of NIPS videos is here; it includes great talks by Terry Sejnowski, Stanislas Dehaene, Geoffrey Hinton, and many others.  It was a weird honor to be in such distinguished company — I wouldn’t have invited myself!)

Zork’s bloogorithm

Wednesday, January 9th, 2013

If you have opinions about quantum computing, and haven’t yet read through the discussion following my “response to Dyakonov” post, you’re missing out.  The comments—by QC researchers (Preskill, Kuperberg, Gottesman, Fitzsimons…), skeptics (Dyakonov, Kalai, …), and interested outsiders alike—are some of the most interesting I’ve seen in this two-decade-old debate.

At the risk of crass immodesty, I just posted a comment whose ending amused me so much, I had to promote it to its own post.  My starting point was an idea that several skeptics, including Dyakonov, have articulated in this debate, and which I’ll paraphrase as follows:

Sure, quantum computing might be “possible in principle.”  But only in the same sense that teaching a donkey how to read, transmuting large amounts of lead into gold, or doing a classical computation in the center of the Sun are “possible in principle.”  In other words, the task is at the same time phenomenally difficult, and fundamentally arbitrary and quixotic even if you did somehow achieve it.

Since I considered this argument an important one, I wrote a response, which stressed how quantum computing is different both because it strives to solve problems that flat-out can’t feasibly be solved any other way if standard complexity conjectures are correct, and because the goal—namely, expanding the human race’s computational powers beyond classical polynomial time—is not at all an arbitrary one.  However, I then felt the need to expand on the last point, since it occurred to me that it’s both central to this debate and almost never discussed explicitly.

How do I know that the desire for computational power isn’t just an arbitrary human quirk?

Well, the reason I know is that math isn’t arbitrary, and computation is nothing more or less than the mechanizable part of solving math problems.

Let me put it this way: if we ever make contact with an advanced extraterrestrial civilization, they might have three sexes and five heads. But they, too, will have encountered the problem of factoring integers into primes. Indeed, because they’ll inhabit the same physical universe as we do, they’ll even have encountered the problem of simulating quantum physics. And therefore, putting the two together, they’ll almost certainly have discovered something like Shor’s algorithm — though they’ll call it “Zork’s bloogorithm” or whatever.

Happy New Year! My response to M. I. Dyakonov

Wednesday, January 2nd, 2013

A couple weeks ago M. I. Dyakonov, a longtime quantum computing skeptic, published a new paper setting out his arguments (maybe “grievances” is a more accurate word) against quantum computing research.  Looking for a way to procrastinate from other work I have to do, I decided to offer some thoughts in response.

To me, perhaps the most striking aspect of Dyakonov’s paper is what it doesn’t claim.  Unlike Leonid Levin, Oded Goldreich, and several other quantum computing skeptics I’ve engaged, Dyakonov never seriously entertains the possibility of a general principle that would explain why scalable quantum computing is not possible.  (Thus, my $100K prize presumably isn’t relevant to him.)  He even ridicules discussion of such a principle (see the end of this post).  The unwillingness to say that scalable QC can’t work, or to articulate a reason why, saves Dyakonov from the need to explore what else would need to be true about the physical world if scalable QC were impossible.  For example, would there then be an efficient algorithm to simulate arbitrary quantum systems on a classical computer—or at least, all quantum systems that can plausibly arise in Nature?  Dyakonov need not, and does not, evince any curiosity about such questions.  In his game, it’s only the quantum computing proponents who are on trial; there’s no need for examination of the other side.

That being so, Dyakonov focuses on what he sees as unrealistic assumptions in known versions of the Quantum Fault-Tolerance Theorem, covering well-trodden ground but with some strange twists.  He accuses quantum computing researchers of a “widespread belief that the |0〉 and |1〉 states ‘in the computational basis’ are something absolute, akin to the on/off states of an electrical switch, or of a transistor in a digital computer.”  He then follows with a somewhat-patronizing discussion of how no continuous quantity can be manipulated perfectly, and how |0〉 and |1〉 are just arbitrary labels whose meanings could change over time due to drift in the preparation and measurement devices.  Well, yes, it’s obvious that |0〉 and |1〉 don’t have absolute meanings, but is it not equally obvious that we can give them meanings, through suitable choices of initial states, gates, and measurement settings?  And if the meanings of |0〉 and |1〉 drift over time, due to the imprecision of our devices … well, if the amount of drift is upper-bounded by some sufficiently small constant, then we can regard it as simply yet another source of noise, and apply standard fault-tolerance methods to correct it.  If the drift is unbounded, then we do need better devices.

(Fault-tolerance mavens: please use the comments for more detailed discussion!  To my inexpert eyes, Dyakonov doesn’t seem to engage the generality of the already-known fault-tolerance theorems—a generality traceable to the fact that what powers those results is ultimately just the linearity of quantum mechanics, not some fragile coincidence that one expects to disappear with the slightest change in assumptions.  But I’m sure others can say more.)

Anyway, from his discussion of fault-tolerance, Dyakonov concludes only that the possibility of scalable quantum computing in the real world should be considered an open question.

Surprisingly—since many QC skeptics wouldn’t be caught dead making such an argument—Dyakonov next turns around and says that, well, OK, fine, even if scalable QCs can be built, they still won’t be good for much.  Shor’s factoring algorithm is irrelevant, since people would simply switch to other public-key cryptosystems that appear secure even against quantum attack.  Simulating quantum physics “would be an interesting and useful achievement, but hardly revolutionary, unless we understand this term in some very narrow sense.”  And what about Grover’s algorithm?  In an endnote, Dyakonov writes:

Quantum algorithms that provide (with an ideal quantum computer!) only polynomial speed-up compared to digital computing, like the Grover algorithm, became obsolete due to the polynomial slow-down imposed by error correction.

The above is flat-out mistaken.  The slowdown imposed by quantum error-correction is polylogarithmic, not polynomial, so it doesn’t come close to wiping out the Grover speedup (or the subexponential speedups that might be achievable, e.g., with the adiabatic algorithm, which Dyakonov doesn’t mention).
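To spell out the arithmetic behind that claim (my gloss, not anything from Dyakonov’s paper):

    % Grover search over N items uses ~ sqrt(N) queries.  Fault-tolerant
    % encoding multiplies the cost of each logical operation by a factor
    % that grows only polylogarithmically with the size of the computation, so
    \[
       \underbrace{\sqrt{N}\cdot \mathrm{polylog}(N)}_{\text{error-corrected Grover}}
       \;=\; o(N) \;=\; o\bigl(\text{classical brute-force search}\bigr)
       \qquad \text{as } N \to \infty,
    \]
    % whereas a genuinely polynomial slowdown -- say an extra factor of
    % sqrt(N) -- really would bring the total back up to ~ N and erase the
    % quadratic advantage.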

But disregarding the polylog/polynomial confusion (which recurs elsewhere in the article), and other technical issues about fault-tolerance, up to this point many quantum computing researchers could happily agree with Dyakonov—and have said similar things many times themselves.  Dyakonov even quotes Dorit Aharonov, one of the discoverers of quantum fault-tolerance, writing, “In a sense, the question of noisy quantum computation is theoretically closed. But a question still ponders our minds: Are the assumptions on the noise correct?”

(And as for QC researchers coming clean about limitations of quantum computers?  This is just hearsay, but I’m told there’s a QC researcher who actually chose “Quantum computers are not known to be able to solve NP-complete problems in polynomial time” as the tagline for his blog!)

Dyakonov fumes about how popular articles, funding agency reports, and so forth have overhyped progress in quantum computing, leaving the conditions out of theorems and presenting incremental advances as breakthroughs.  Here I sadly agree.  As readers of Shtetl-Optimized can hopefully attest, I’ve seen it as my professional duty to spend part of my life battling cringeworthy quantum computing claims.  Every week, it feels like I talk to another journalist who tries to get me to say that this or that QC result will lead to huge practical applications in the near future, since that’s what the editor is looking for.  And every week I refuse to say it, and try to steer the conversation toward “deeper” scientific questions.  Sometimes I succeed and sometimes not, but at least I never hang up the phone feeling dirty.

On the other hand, it would be interesting to know whether, in the history of science, there’s ever been a rapidly-developing field, of interest to large numbers of scientists and laypeople alike, that wasn’t surrounded by noxious clouds of exaggeration, incomprehension, and BS.  I can imagine that, when Isaac Newton published his Principia, a Cambridge University publicist was there to explain to reporters that the new work proved that the Moon was basically an apple.

But none of that is where Dyakonov loses me.  Here’s where he does: from the statements

A) The feasibility of scalable quantum computing in the physical world remains open, and

B) The applications of quantum computing would probably be real but specialized,

he somehow, unaided by argument, arrives at the conclusion

C) Quantum computing is a failed, pathological research program, which will soon die out and be of interest only to sociologists.

Let me quote from his conclusion at length:

I believe that, in spite of appearances, the quantum computing story is nearing its end, not because somebody proves that it is impossible, but rather because 20 years is a typical lifetime of any big bubble in science, because too many unfounded promises have been made, because people get tired and annoyed by almost daily announcements of new “breakthroughs”, because all the tenure positions in quantum computing are already occupied, and because the proponents are growing older and less zealous, while the younger generation seeks for something new …

In fact, quantum computing is not so much a scientific, as a sociological problem which has expanded out of all proportion due to the US system of funding scientific research (which is now being copied all over the world). While having some positive sides, this system is unstable against spontaneous formation of bubbles and mafia-like structures. It pushes the average researcher to wild exaggerations on the border of fraud and sometimes beyond. Also, it is much easier to understand the workings of the funding system, than the workings of Nature, and these two skills only rarely come together.

The QC story says a lot about human nature, the scientific community, and the society as a whole, so it deserves profound psycho-sociological studies, which should begin right now, while the main actors are still alive and can be questioned.

In case the message isn’t yet clear enough, Dyakonov ends by comparing quantum computing to the legend of Nasreddin, who promised the Sultan that he could teach a donkey how to read.

Had he [Nasreddin] the modern degree of sophistication, he could say, first, that there is no theorem forbidding donkeys to read. And, since this does not contradict any known fundamental principles, the failure to achieve this goal would reveal new laws of Nature.  So, it is a win-win strategy: either the donkey learns to read, or new laws will be discovered.

Second, he could say that his research may, with some modifications, be generalized to other animals, like goats and sheep, as well as to insects, like ants, gnats, and flies, and this will have a tremendous potential for improving national security: these beasts could easily cross the enemy lines, read the secret plans, and report them back to us.

Dyakonov chose his example carefully.  Turnabout: consider the first person who had the idea of domesticating a wild donkey, teaching the beast to haul people’s stuff on its back.  If you’d never seen a domestic animal before, that idea would sound every bit as insane as donkey literacy.  And indeed, it probably took hundreds of years of selective breeding before it worked well.

In general, if there’s no general principle saying that X can’t work, the truth might be that X can probably never work, but the reasons are too messy to articulate.   Or the truth might be that X can work.  How can you ever find out, except by, y’know, science?  Try doing X.  If you fail, try to figure out why.  If you figure it out, share the lessons with others.  Look for an easier related problem Y that you can solve.  Think about whether X is impossible; if you could show its impossibility, that might advance human knowledge even more than X itself would have.  If the methods you invented for X don’t work, see if they work for some other, unrelated problem Z.  Congratulations!  You’ve just reinvented quantum computing research.  Or really, any kind of research.

But there’s something else that bothers me about Dyakonov’s donkey story: its specificity.  Why fixate on teaching a donkey, only a donkey, how to read?  Earlier in his article, Dyakonov ridicules the diversity of physical systems that have been considered as qubits—electron spin qubits, nuclear spin qubits, Josephson superconducting qubits, cavity photon qubits, etc.—seeing the long list as symptomatic of some deep pathology in the field.  Yet he never notices the tension with his donkey story.  Isn’t it obvious that, if Nasreddin had been a quantum computing experimentalist, then after failing to get good results with donkeys, he’d simply turn his attention to teaching cows, parrots, snakes, elephants, dolphins, or gorillas how to read?  Furthermore, while going through the zoo, Nasreddin might discover that he could teach gorillas how to recognize dozens of pictorial symbols: surely a nice partial result.  But maybe he’d have an even better idea: why not build his own reading machine?  The machine could use a camera to photograph the pages of a book, and a computer chip to decode the letters.  If one wanted, the machine could even be the size and shape of a donkey, and could emit braying sounds.  Now, maybe Nasreddin would fail to build this reading machine, but even then, we know today that it would have been a noble failure, like those of Charles Babbage or Ted Nelson.  Nasreddin would’ve failed only by being too far ahead of his time.

Update (Jan. 7): See Dyakonov’s response to this post, and my response to his response.