Archive for the ‘Embarrassing Myself’ Category

More Wrong Things I Said in Papers

Friday, July 29th, 2016

Two years ago, I wrote a blog post entitled PostBQP Postscripts, owning up to not one but four substantive mathematical errors that I’d made over the years in my published papers, and which my students and colleagues later brought to my sheepish attention.  Fortunately, none of these errors affected the papers’ main messages; they just added interesting new twists to the story.  Even so, I remember feeling at the time like undergoing this public repentance was soul-cleansing intellectual hygiene.  I also felt like writing one big “post of shame” was easier than writing a bunch of separate errata and submitting them to journals, while also reaching a wider audience (and, therefore, doing an even better soul-cleansing job).

So I resolved that, anytime I’d saved up enough errata, I’d do another sackcloth-and-ashes post.  Which brings us to today.  Without further ado:


I. Quantum Money Falling Down

My and Paul Christiano’s explicit public-key quantum money scheme—the one based on low-degree polynomials—has now been fully broken.  To clarify, our abstract hidden-subspace scheme—the one that uses a classical black box to test membership in the subspaces—remains totally fine.  Indeed, we unconditionally proved the security of the black-box scheme, and our security proof stands.  In the paper, though, we also stuck our necks out further, and conjectured that you could instantiate the black box by publishing random low-degree polynomials that vanish on the subspaces you want to hide.  At Paul’s insistence (though I considered it superfluous), we also recommended adding completely-random “noise polynomials” for extra security.

Our scheme was broken in two stages.  First, in 2014, Pena et al. broke the noiseless version of our scheme, using Gröbner-basis methods, over fields of characteristic greater than 2.  Over F_2—the field we happened to use in our scheme—Pena et al. couldn’t quite prove that their attack worked, but they gave numerical evidence that at least it finds the subspaces in n^{O(log n)} time.  Note that nothing in Pena et al.’s attack is specific to quantum money: indeed, their attack consists of a purely classical algorithm, which efficiently solves the general classical problem of recovering large subspaces from polynomials that hide them.

At that point, at least the noisy version of our scheme—the one Paul had insisted we include—was still standing!  Indeed, the Gröbner-basis attack seemed to break down entirely when some of the polynomials were random garbage.

Later, though, Paul and Or Sattath realized that a quantum trick—basically, the single-copy tomography of Farhi et al.—can identify which polynomials are the noisy ones, provided we’re given a legitimate quantum money state to start with.  As a consequence, the problem of breaking the noisy scheme can be reduced to the problem of breaking the noiseless scheme—i.e., the problem that Pena et al. already essentially solved.

As bad as this sounds, it has an interesting positive consequence.  In our paper, Paul and I had actually given a security reduction for our money scheme based on low-degree polynomials.  In particular, we showed that there’s no polynomial-time quantum algorithm to counterfeit our money states, unless there’s a polynomial-time quantum algorithm that finds a basis for a subspace S ≤ F_2^n of dimension n/2 with Ω(2^{-n/2}) success probability, given a collection of low-degree polynomials p_1,…,p_m and q_1,…,q_m (m = O(n)), most of which vanish on S and its dual subspace respectively, but that are otherwise random.  So, running our reduction backwards, the only possible conclusion from the break is that there is such a quantum algorithm!  Yet we would’ve had no idea how to find that quantum algorithm without going through quantum money—nor do we know a classical algorithm for the problem, or even a quantum algorithm with Ω(1) success probability.

In the meantime, the problem of designing a public-key quantum money scheme, with good cryptographic evidence for its security, remains open.  It’s plausible that there’s some other, more secure way to instantiate my and Paul’s hidden subspace scheme, for example using lattices.  And even before we’ve found such a way, we can use indistinguishability obfuscation as a stopgap.  We could also seek cryptographic evidence for the security of other kinds of public-key quantum money, like Farhi et al.’s based on knot invariants.

A paper about all this is on our to-do stack. In the meantime, for further details, see Lecture 9 in my Barbados lecture notes.


II. A De-Merlinization Mistake

In my 2006 paper QMA/qpoly ⊆ PSPACE/poly: De-Merlinizing Quantum Protocols, the technical core of the complexity result was a new quantum information lemma that I called the “Quantum OR Bound” (Lemma 14 in the paper).

Basically, the Quantum OR Bound says that, if we have an unknown quantum state ρ, as well as a collection of measurements M_1,…,M_n that we might want to make on ρ, then we can distinguish the case that (a) every M_i rejects ρ with overwhelming probability, from the case that (b) at least one M_i accepts ρ with high probability.  And we can do this despite having only one copy of ρ, and despite the fact that earlier measurements might corrupt ρ, thereby compromising the later measurements.  The intuition is simply that, if the earlier measurements corrupted ρ substantially, that could only be because some of them had a decent probability of accepting ρ, meaning that at any rate, we’re not in case (a).

I’ve since reused the Quantum OR Bound for other problems—most notably, a proof that private-key quantum money requires either a computational assumption or a huge database maintained by the bank (see Theorem 8.3.1 in my Barbados lecture notes).

Alas, Aram Harrow and Ashley Montanaro recently discovered that my proof of the Quantum OR Bound is wrong.  It’s wrong because I neglected the possibility of “Zeno-like behavior,” in which repeated measurements on a quantum state would gradually shift the state far away from its starting point, without ever having a significant probability of rejecting the state.  For some reason, I assumed without any adequate argument that choosing the measurements at random, rather than in a predetermined order, would solve that problem.

Now, I might actually be right that randomizing the measurements is enough to solve the Zeno problem!  That remains a plausible conjecture, which Harrow and Montanaro could neither confirm nor refute.  In the meantime, though, Harrow and Montanaro were able to recover my QMA/qpoly ⊆ PSPACE/poly theorem, and all the other conclusions known to follow from the Quantum OR Bound (including some new ones that they discovered), by designing a new measurement procedure whose soundness they can prove.

Their new procedure is based on an elegant, obvious-in-retrospect idea that somehow never occurred to me.  Namely, instead of just applying the M_i’s to ρ, one can first put a control qubit into an equal superposition of the |0〉 and |1〉 states, and then apply the M_i’s conditioned on the control qubit being in the |1〉 state.  While doing this, one can periodically measure the control qubit in the {|+〉,|-〉} basis, in order to check directly whether applying the M_i’s has substantially corrupted ρ.  (If it hasn’t, one will always get the outcome |+〉; if it has, one might get |-〉.)  Substantial corruption, if detected, then tells us that some M_i’s must have had non-negligible probabilities of accepting ρ.
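Here is a minimal single-qubit sketch of that idea (my own toy formalization, not Harrow and Montanaro’s actual procedure): each projective test is applied only on the branch where the control qubit is |1〉, and we report how likely a {|+〉,|-〉} measurement of the control would be to return |-〉, without actually performing that measurement.

    import numpy as np

    # Toy illustration of the control-qubit trick: a two-outcome test {P, I-P}
    # is applied to the target only on the branch where the control is |1>,
    # and we track the probability that measuring the control in {|+>,|->}
    # would yield |->, i.e. would flag that the target has been disturbed.
    rng = np.random.default_rng(0)
    I2 = np.eye(2)

    def controlled_test(state, P):
        """Test {P, I-P} on the target, conditioned on the control being |1>."""
        acc = np.kron(np.diag([0.0, 1.0]), P)                  # control=|1> and "accept"
        rej = np.kron(np.diag([1.0, 0.0]), I2) + np.kron(np.diag([0.0, 1.0]), I2 - P)
        p_acc = np.vdot(state, acc @ state).real
        if rng.random() < p_acc:
            return True, acc @ state / np.sqrt(p_acc)
        return False, rej @ state / np.sqrt(1.0 - p_acc)

    def p_minus(state):
        """Probability that measuring the control in {|+>,|->} gives |->."""
        m = np.array([1.0, -1.0]) / np.sqrt(2)
        return np.vdot(state, np.kron(np.outer(m, m), I2) @ state).real

    # control in |+>, target in |0>
    state = np.kron(np.array([1.0, 1.0]) / np.sqrt(2), np.array([1.0, 0.0]))

    # two tests that almost never accept |0>, then one that often accepts it
    for theta in [0.01, 0.02, 1.2]:
        v = np.array([np.sin(theta), np.cos(theta)])           # accept projector |v><v|
        accepted, state = controlled_test(state, np.outer(v, v))
        print(f"accept prob on |0>: {np.sin(theta)**2:.3f}, "
              f"accepted: {accepted}, P[control reads |->]: {p_minus(state):.3f}")

As long as the tests have only a tiny probability of accepting the target, the control stays very close to |+〉; once a test with a decent acceptance probability is applied, the chance of seeing |-〉 jumps, which is exactly the “corruption detector” described above.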


III. Almost As Good As True

One lemma that I’ve used even more than the Quantum OR Bound is what I’ve called the “Almost As Good As New Lemma,” and what others in the field have called the “Gentle Measurement Lemma.”

I claimed a proof of the AAGANL in my 2004 paper Limitations of Quantum Advice and One-Way Communication (Lemma 2.2 there), and have used the lemma in like half a dozen later papers.  Alas, when I lectured at Barbados, Sasha Razborov and others discovered that my proof of the AAGANL was missing a crucial step!  More concretely, the proof I gave there works for pure states but not for mixed states.  For mixed states, the trouble is that I take a purification of the mixed state—something that always exists mathematically—but then illegally assume that the measurement I’m analyzing acts on the particular purification I’ve conjured up.

Fortunately, one can easily fix this problem by decomposing the state ρ into a mixture of pure states, then applying my earlier argument to each pure state separately, and finally using Cauchy-Schwarz (or just the convexity of the square-root function) to recombine the results.  Moreover, this is exactly what other people’s proofs of the Gentle Measurement Lemma did do, though I’d never noticed it before Barbados—I just idly wondered why those other proofs took twice as long as mine to do the same work!  For a correct proof, see Lemma 1.3.1 in the Barbados lecture notes.
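Schematically, the recombination step goes as follows (my notation, and glossing over exactly how the post-measurement states are defined): write ρ = Σ_i p_i |ψ_i〉〈ψ_i|, let ρ̃_i be the post-measurement state of the i-th pure component and ρ̃ their p_i-weighted mixture, and let ε_i be the probability that the measurement damages |ψ_i〉, so that Σ_i p_i ε_i = ε.  Then

$$ \left\| \rho - \tilde{\rho} \right\|_{\mathrm{tr}} \;\le\; \sum_i p_i \left\| \, |\psi_i\rangle\langle\psi_i| - \tilde{\rho}_i \right\|_{\mathrm{tr}} \;\le\; \sum_i p_i \sqrt{\varepsilon_i} \;\le\; \sqrt{\sum_i p_i \varepsilon_i} \;=\; \sqrt{\varepsilon}, $$

where the first inequality is the triangle inequality, the second is the pure-state case, and the third is Cauchy-Schwarz (equivalently, the concavity of the square root).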


IV. Oracle Woes

In my 2010 paper BQP and the Polynomial Hierarchy, I claimed to construct oracles A relative to which BQP ⊄ BPP_path and BQP ⊄ SZK, even while making only partial progress toward the big prize, which would’ve been an oracle relative to which BQP ⊄ PH.  Not only that: I claimed to show that any problem with a property called “almost k-wise independence”—one example being the Forrelation (or Fourier Checking) problem that I introduced in that paper—was neither in BPP_path nor in SZK.  Since I also showed that Forrelation is in BQP, this yielded the claimed separations.

Alas, this past spring Lijie Chen, who was my superb visiting student from Tsinghua University, realized that my proofs of these particular separations were wrong.  Not only that, they were wrong because I implicitly substituted a ratio of expectations for an expectation of ratios (!).  Again, it might still be true that almost k-wise independent problems can be neither in BPP_path nor in SZK: that remains an interesting conjecture, which Lijie was unable to resolve one way or the other.  (On the other hand, I showed here that almost k-wise independent problems can be in PH.)

But never fear!  In a recent arXiv preprint, Lijie has supplied correct proofs for the BQP ⊄ BPP_path and BQP ⊄ SZK oracle separations—using the same Forrelation problem that I studied, but exploiting additional properties of Forrelation besides its almost k-wise independence.  Lijie notes that my proofs, had they worked, would also have yielded an oracle relative to which BQP ⊄ AM, which would’ve been a spectacular result, and nontrivial progress toward BQP ⊄ PH.  His proofs, by contrast, apply only to worst-case decision problems rather than problems of distinguishing two probability distributions, and therefore don’t imply anything about BQP vs. AM.  Anyway, there’s other cool stuff in his paper too.


V. We Needed More Coffee

This is one I’ve already written about on this blog, but just in case anyone missed it … in my, Sean Carroll, and Lauren Ouellette’s original draft paper on the coffee automaton, the specific rule we discuss doesn’t generate any significant amount of complexity (in the sense of coarse-grained entropy).  We wrongly thought it did, because of a misinterpretation of our simulation data.  But as Brent Werness brought to our attention, not only does a corrected simulation not show any complexity bump, one can rigorously prove there’s no complexity bump.  And we could’ve realized all this from the beginning, by reflecting that pure random diffusion (e.g., what cream does in coffee when you don’t stir it with a spoon) doesn’t actually produce interesting tendril patterns.

On the other hand, Brent proposed a different rule—one that involves “shearing” whole regions of cream and coffee across each other—that does generate significant complexity, basically because of all the long-range correlations it induces.  And not only do we clearly see this in simulations, but the growth of complexity can be rigorously proven!  Anyway, we have a long-delayed revision of the paper that will explain all this in more detail, with Brent as well as MIT student Varun Mohan now added as coauthors.


If any of my colleagues feel inspired to write up their own “litanies of mathematical error,” they’re welcome to do so in the comments!  Just remember: you don’t earn any epistemic virtue points unless the errors you reveal actually embarrass you.  No humblebragging about how you once left out a minus sign in your paper that won the Fields Medal.

Me interviewed by John Horgan (the author of “The End of Science”)

Thursday, April 21st, 2016

You can read it here.

It’s long (~12,000 words).  Rather than listing what this interview covers, it would be easier to list what it doesn’t cover.  (My favorite soda flavors?)

If you read this blog, much of what I say there will be old hat, but some of it will be new.  I predict that you’ll enjoy the interview iff you enjoy the blog.  Comments welcome.

The universe has a high (but not infinite) Sleep Number

Friday, February 12th, 2016

As everyone knows, this was a momentous week in the history of science.  And I don’t need to tell you why: the STOC and CCC accepted paper lists finally came out.

Haha, kidding!  I meant, we learned this week that gravitational waves were directly detected for the first time, a hundred years after Einstein first predicted them (he then reneged on the prediction, then reinstated it, then reneged again, then reinstated it a second time—see Daniel Kennefick’s article for some of the fascinating story).

By now, we all know some of the basic parameters here: a merger of two black holes, ~1.3 billion light-years away, weighing ~36 and ~29 solar masses respectively, which (when they merged) gave off 3 solar masses’ worth of energy in the form of gravitational waves—in those brief 0.2 seconds, radiating more watts of power than all the stars in the observable universe combined.  By the time the waves reached earth, they were only stretching and compressing space by 1 part in 4×10^21—thus, changing the lengths of the 4-kilometer arms of LIGO by 10^{-18} meters (1/1000 the diameter of a proton).  But this was detected, in possibly the highest-precision measurement ever made.

As I read the historic news, there’s one question that kept gnawing at me: how close would you need to have been to the merging black holes before you could, you know, feel the distortion of space?  I made a guess, assuming the strength of gravitational waves fell off with distance as 1/r^2.  Then I checked Wikipedia and learned that the strength falls off only as 1/r, which completely changes the situation, and implies that the answer to my question is: you’d need to be very close.  Even if you were only as far from the black-hole cataclysm as the earth is from the sun, I get that you’d be stretched and squished by a mere ~50 nanometers (this interview with Jennifer Ouellette and Amber Stuver says 165 nanometers, but as a theoretical computer scientist, I try not to sweat factors of 3).  Even if you were 3000 miles from the black holes—New-York/LA distance—I get that the gravitational waves would only stretch and squish you by around a millimeter.  Would you feel that?  Not sure.  At 300 miles, it would be maybe a centimeter—though presumably the linearized approximation is breaking down by that point.  (See also this Physics StackExchange answer, which reaches similar conclusions, though again off from mine by factors of 3 or 4.)  Now, the black holes themselves were orbiting about 200 miles from each other before they merged.  So, the distance at which you could safely feel their gravitational waves isn’t too far from the distance at which they’d rip you to shreds and swallow you!
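If you want to check the arithmetic, here’s a rough script (my own back-of-the-envelope assumptions: a peak strain of 1 part in 4×10^21 at Earth, a strict 1/r falloff, and a ~1.8-meter-tall person; the outputs are only good to the same factor-of-a-few accuracy as the estimates above):

    # Back-of-the-envelope check of the numbers above (assumed inputs:
    # peak strain 1/(4e21) at Earth, 1/r falloff, a ~1.8 m tall person).
    LY_M = 9.46e15               # meters per light-year
    r_earth = 1.3e9 * LY_M       # distance from Earth to the merger
    h_earth = 1 / 4e21           # strain at Earth

    print("LIGO arm stretch:", h_earth * 4000, "meters")        # ~1e-18 m

    def strain(r):
        """Strain at distance r (in meters) from the source, scaling as 1/r."""
        return h_earth * r_earth / r

    for label, r in [("1 AU", 1.5e11), ("3000 miles", 4.8e6), ("300 miles", 4.8e5)]:
        print(f"{label}: a person is stretched by ~{strain(r) * 1.8:.1e} m")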

In summary, to stretch and squeeze spacetime by just a few hundred nanometers per meter, along the surface of a sphere whose radius equals our orbit around the sun, requires more watts of power than all the stars in the observable universe give off as starlight.  People often say that the message of general relativity is that matter bends spacetime “as if it were a mattress.”  But they should add that the reason it took so long for humans to notice this, is that it’s a really friggin’ firm mattress, one that you need to bounce up and down on unbelievably hard before it quivers, and would probably never want to sleep on.

As if I needed to say it, this post is an invitation for experts to correct whatever I got wrong.  Public humiliation, I’ve found, is a very fast and effective way to learn an unfamiliar field.

PostBQP Postscripts: A Confession of Mathematical Errors

Sunday, November 30th, 2014

tl;dr: This post reveals two errors in one of my most-cited papers, and also explains how to fix them.  Thanks to Piotr Achinger, Michael Cohen, Greg Kuperberg, Ciaran Lee, Ryan O’Donnell, Julian Rosen, Will Sawin, Cem Say, and others for their contributions to this post.


If you look at my Wikipedia page, apparently one of the two things in the world that I’m “known for” (along with algebrization) is “quantum Turing machines with postselection.”  By this, Wikipedia means my 2004 definition of the complexity class PostBQP—that is, the class of decision problems solvable in bounded-error quantum polynomial time, assuming the ability to postselect (or condition) on certain measurement outcomes—and my proof that PostBQP coincides with the classical complexity class PP (that is, the class of decision problems expressible in terms of whether the number of inputs that cause a given polynomial-time Turing machine to accept does or doesn’t exceed some threshold).

To explain this a bit: even without quantum mechanics, it’s pretty obvious that, if you could “postselect” on exponentially-unlikely events, then you’d get huge, unrealistic amounts of computational power.  For example (and apologies in advance for the macabre imagery), you could “solve” NP-complete problems in polynomial time by simply guessing a random solution, then checking whether the solution is right, and shooting yourself if it happened to be wrong!  Conditioned on still being alive (and if you like, appealing to the “anthropic principle”), you must find yourself having guessed a valid solution—assuming, of course, that there were any valid solutions to be found.  If there weren’t any, then you’d seem to be out of luck!  (Exercise for the reader: generalize this “algorithm,” so that it still works even if you don’t know in advance whether your NP-complete problem instance has any valid solutions.)
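As a purely classical caricature of the above (a toy example of my own, not anything from the paper): without the macabre imagery, “postselecting” on a lucky guess is just rejection sampling, whose expected running time is exponential for hard instances—which is exactly why the conditioning isn’t free.

    import random

    # "Solve" a tiny 3-SAT instance by guessing a random assignment and
    # postselecting on its being satisfying; classically this is just
    # rejection sampling, with exponential expected running time in general.
    clauses = [(1, -2, 3), (-1, 2, 4), (2, -3, -4)]   # literal k: variable |k| should equal (k > 0)
    n = 4

    def satisfies(assignment):
        return all(any(assignment[abs(k) - 1] == (k > 0) for k in clause)
                   for clause in clauses)

    tries = 0
    while True:                    # condition on "still being alive"
        tries += 1
        guess = [random.random() < 0.5 for _ in range(n)]
        if satisfies(guess):
            print(f"satisfying assignment {guess} found after {tries} tries")
            break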

So with the PostBQP=PP theorem, the surprise was not that postselection gives you lots of computational power, but rather that postselection combined with quantum mechanics gives you much more power even than postselection by itself (or quantum mechanics by itself, for that matter).  Since P^PP = P^{#P}, the class PP basically captures the full difficulty of #P-complete counting problems—that is, not just solving an NP-complete problem, but counting how many solutions it has.  It’s not obvious that a quantum computer with postselection can solve counting problems, but that’s what the theorem shows.  That, in turn, has implications for other things: for example, I showed it can be used to prove classical facts about PP, like the fact that PP is closed under intersection (the Beigel-Reingold-Spielman Theorem), in a straightforward way; and it’s also used to show the hardness of quantum sampling problems, in the work of Bremner-Jozsa-Shepherd as well as my BosonSampling work with Arkhipov.

I’m diffident about being “known for” something so simple; once I had asked the question, the proof of PostBQP=PP took me all of an hour to work out.  Yet PostBQP ended up being a hundred times more influential for quantum computing theory than things on which I expended a thousand times more effort.  So on balance, I guess I’m happy to call PostBQP my own.

That’s why today’s post comes with a special sense of intellectual responsibility.  Within the last month, it’s come to my attention that there are at least two embarrassing oversights in my PostBQP paper from a decade ago, one of them concerning the very definition of PostBQP.  I hasten to clarify: once one fixes up the definition, the PostBQP=PP theorem remains perfectly valid, and all the applications of PostBQP that I mentioned above—for example, to reproving Beigel-Reingold-Spielman, and to the hardness of quantum sampling problems—go through just fine.  But if you think I have nothing to be embarrassed about: well, read on.


The definitional subtlety came clearly to my attention a few weeks ago, when I was lecturing about PostBQP in my 6.845 Quantum Complexity Theory graduate class.  I defined PostBQP as the class of languages L⊆{0,1}* for which there exists a polynomial-time quantum Turing machine M such that, for all inputs x∈{0,1}*,

  • M(x) “succeeds” (determined, say, by measuring its first output qubit in the {|0>,|1>} basis) with nonzero probability.
  • If x∈L, then conditioned on M(x) succeeding, M(x) “accepts” (determined, say, by measuring its second output qubit in the {|0>,|1>} basis) with probability at least 2/3.
  • If x∉L, then conditioned on M(x) succeeding, M(x) accepts with probability at most 1/3.

I then had to reassure the students that PostBQP, so defined, was a “robust” class: that is, that the definition doesn’t depend on stupid things like which set of quantum gates we allow. I argued that, even though we’re postselecting on exponentially-unlikely events, it’s still OK, because the Solovay-Kitaev Theorem lets us approximate any desired unitary to within exponentially-small error, with only a polynomial increase in the size of our quantum circuit. (Here we actually need the full power of the Solovay-Kitaev Theorem, in contrast to ordinary BQP, where we only need part of the power.)

A student in the class, Michael Cohen, immediately jumped in with a difficulty: what if M(x) succeeded, not with exponentially-small probability, but with doubly-exponentially-small probability—say, exp(-2^n)?  In that case, one could no longer use the Solovay-Kitaev Theorem to show the irrelevance of the gate set.  It would no longer even be clear that PostBQP⊆PP, since the PP simulation might not be able to keep track of such tiny probabilities.

Thinking on my feet, I replied that we could presumably choose a set of gates—for example, gates involving rational numbers only—for which doubly-exponentially-small probabilities would never arise.  Or if all else failed, we could simply add to the definition of PostBQP that M(x) had to “succeed” with probability at least 1/exp(n): after all, that was the only situation I ever cared about anyway, and the only one that ever arose in the applications of PostBQP.

But the question still gnawed at me: was there a problem with my original, unamended definition of PostBQP?  If we weren’t careful in choosing our gate set, could we have cancellations that produced doubly-exponentially-small probabilities?  I promised I’d think about it more.

By a funny coincidence, just a couple weeks later, Ciaran Lee, a student at Oxford, emailed me the exact same question.  So on a train ride from Princeton to Boston, I decided to think about it for real.  It wasn’t hard to show that, if the gates involved square roots of rational numbers only—for example, if we’re dealing with the Hadamard and Toffoli gates, or the cos(π/8) and CNOT gates, or other standard gate sets—then every measurement outcome has at least 1/exp(n) probability, so there’s no problem with the definition of PostBQP.  But I didn’t know what might happen with stranger gate sets.

As is my wont these days—when parenting, teaching, and so forth leave me with almost no time to concentrate on math—I posted the problem to MathOverflow.  Almost immediately, I got incisive responses.  First, Piotr Achinger pointed out that, if we allow arbitrary gates, then it’s easy to get massive cancellations.  In more detail, let {a_n} be an extremely rapidly growing sequence of integers, say with a_{n+1} > exp(a_n).  Then define

$$ \alpha = \sum_{n=1}^{\infty} 0.1^{a_n}. $$

If we write out α in decimal notation, it will consist of mostly 0’s, but with 1’s spaced further and further apart, like so: 0.1101000000000001000….  Now consider a gate set that involves α as well as 0.1 and -0.1 as matrix entries.  Given n qubits, it’s not hard to see that we can set up an interference experiment in which one of the paths leading to a given outcome E has amplitude α, and the other paths have amplitudes $$ -(0.1^{a_1}), -(0.1^{a_2}), \ldots, -(0.1^{a_k}), $$ where k is the largest integer such that a_k ≤ n. In that case, the total amplitude of E will be about $$0.1^{a_{k+1}},$$ which for most values of n is doubly-exponentially small in n. Of course, by simply choosing a faster-growing sequence {a_n}, we can cause an even more severe cancellation.
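To see the cancellation concretely with exact arithmetic, here’s a toy truncated version (made-up values a_1 = 1, a_2 = 3, a_3 = 21, chosen so that a_{n+1} > exp(a_n)):

    from fractions import Fraction

    # Truncated toy version of the cancellation, with made-up a_n values.
    a = [1, 3, 21]
    alpha = sum(Fraction(1, 10) ** k for k in a)    # stand-in for the infinite sum

    # One path has amplitude alpha; the others contribute -0.1^{a_1} and -0.1^{a_2}.
    total = alpha - Fraction(1, 10) ** a[0] - Fraction(1, 10) ** a[1]
    print(total == Fraction(1, 10) ** a[2])         # True: only the tiny 0.1^{a_3} term survives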

Furthermore, by modifying the above construction to involve two crazy transcendental numbers α and β, I claim that we can set up a PostBQP computation such that deciding what happens is arbitrarily harder than PP (though still computable)—say, outside of exponential space, or even triple-exponential space. Moreover, we can do this despite the fact that the first n digits of α and β remain computable in O(n) time. The details are left as an exercise for the interested reader.

Yet even though we can engineer massive cancellations with crazy gates, I still conjectured that nothing would go wrong with “normal” gates: for example, gates involving algebraic amplitudes only. More formally, I conjectured that any finite set A=(a_1,…,a_k) of algebraic numbers is “tame,” in the sense that, if p is any degree-n polynomial with integer coefficients at most exp(n) in absolute value, then p(a_1,…,a_k) ≠ 0 implies |p(a_1,…,a_k)| ≥ 1/exp(n). And indeed, Julian Rosen on MathOverflow found an elegant proof of this fact. I’ll let you read it over there if you’re interested, but briefly, it interprets the amplitude we want as one particular Archimedean valuation of a certain element of a number field, and then lower-bounds the amplitude by considering the product of all Archimedean and non-Archimedean valuations (the latter of which involves the p-adic numbers). Since this was a bit heavy-duty for me, I was grateful when Will Sawin reformulated the proof in linear-algebraic terms that I understood.

And then came the embarrassing part. A few days ago, I was chatting with Greg Kuperberg, the renowned mathematician and author of our climate-change parable. I thought he’d be interested in this PostBQP progress, so I mentioned it to him. Delicately, Greg let me know that he had recently proved the exact same results, for the exact same reason (namely, fixing the definition of PostBQP), for the latest revision of his paper How Hard Is It to Approximate the Jones Polynomial?. Moreover, he actually wrote to me in June to tell me about this! At the time, however, I regarded it as “pointless mathematical hairsplitting” (who cares about these low-level gate-set issues anyway?). So I didn’t pay it any attention—and then I’d completely forgotten about Greg’s work when the question resurfaced a few months later. This is truly a just punishment for looking down on “mathematical hairsplitting,” and not a lesson I’ll soon forget.

Anyway, Greg’s paper provides yet a third proof that the algebraic numbers are tame, this one using Galois conjugates (though it turns out that, from a sufficiently refined perspective, Greg’s proof is equivalent to the other two).

There remains one obvious open problem here, one that I noted in the MathOverflow post and in which Greg is also extremely interested. Namely, we now know that it’s possible to screw up PostBQP using gates with amplitudes that are crazy transcendental numbers (closely related to the Liouville numbers). And we also know that, if the gates have algebraic amplitudes, then everything is fine: all events have at least 1/exp(n) probability. But what if the gates have not-so-crazy transcendental amplitudes, like 1/e, or (a bit more realistically) cos(2)?  I conjecture that everything is still fine, but the proof techniques that worked for the algebraic case seem useless here.

Stepping back, how great are the consequences of all this for our understanding of PostBQP? Fortunately, I claim that they’re not that great, for the following reason. As Adleman, DeMarrais, and Huang already noted in 1997—in the same paper that proved BQP⊆PP—we can screw up the definition even of BQP, let alone PostBQP, using a bizarre enough gate set. For example, suppose we had a gate G that mapped |0> to x|0>+y|1>, where y was a real number whose binary expansion encoded the halting problem (for example, y might equal Chaitin’s Ω).  Then by applying G more and more times, we could learn more and more bits of y, and thereby solve an uncomputable problem in the limit n→∞.

Faced with this observation, most quantum computing experts would say something like: “OK, but this is silly! It has no physical relevance, since we’ll never come across a magical gate like G—if only we did! And at any rate, it has nothing to do with quantum computing specifically: even classically, one could imagine a coin that landed heads with probability equal to Chaitin’s Ω. Therefore, the right way to deal with this is simply to define BQP in such a way as to disallow such absurd gates.” And indeed, that is what’s done today—usually without even remarking on it.

Now, it turns out that even gates that are “perfectly safe” for defining BQP, can turn “unsafe” when it comes to defining PostBQP. To screw up the definition of PostBQP, it’s not necessary that a gate involve uncomputable (or extremely hard-to-compute) amplitudes: the amplitudes could all be easily computable, but they could still be “unsafe” because of massive cancellations, as in the example above involving α. But one could think of this as a difference of degree, rather than of kind. It’s still true that there’s a large set of gates, including virtually all the gates anyone has ever cared about in practice (Toffoli, Hadamard, π/8, etc. etc.), that are perfectly safe for defining the complexity class; it’s just that the set is slightly smaller than it was for BQP.


The other issue with the PostBQP=PP paper was discovered by Ryan O’Donnell and Cem Say.  In Proposition 3 of the paper, I claim that PostBQP = BQP^{PostBQP}_{||,classical}, where the latter is the class of problems solvable by a BQP machine that’s allowed to make poly(n) parallel, classical queries to a PostBQP oracle.  As Ryan pointed out to me, nothing in my brief argument for this depended on quantum mechanics, so it would equally well show that PostBPP = BPP^{PostBPP}_{||}, where PostBPP (also known as BPP_path) is the classical analogue of PostBQP, and BPP^{PostBPP}_{||} is the class of problems solvable by a BPP machine that can make poly(n) parallel queries to a PostBPP oracle.  But BPP^{PostBPP}_{||} clearly contains BPP^{NP}_{||}, which in turn contains AM—so we would get AM in PostBPP, and therefore AM in PostBQP=PP.  But Vereshchagin gave an oracle relative to which AM is not contained in PP.  Since there was no nonrelativizing ingredient anywhere in my argument, the only possible conclusion is that my argument was wrong.  (This, incidentally, provides a nice illustration of the value of oracle results.)

In retrospect, it’s easy to pinpoint what went wrong.  If we try to simulate BPP^{PostBPP}_{||} in PostBPP, our random bits will be playing a dual role: in choosing the queries to be submitted to the PostBPP oracle, and in providing the “raw material for postselection,” in computing the responses to those queries.  But in PostBPP, we only get to postselect once.  When we do, the two sets of random bits that we’d wanted to keep separate will get hopelessly mixed up, with the postselection acting on the “BPP” random bits, not just on the “PostBPP” ones.

How can we fix this problem?  Well, when defining the class BQP^{PostBQP}_{||,classical}, suppose we require the queries to the PostBQP oracle to be not only “classical,” but deterministic: that is, they have to be generated in advance by a P machine, and can’t depend on any random bits whatsoever.  And suppose we define BPP^{PostBPP}_{||,classical} similarly.  In that case, it’s not hard to see that the equalities BQP^{PostBQP}_{||,classical} = PostBQP and BPP^{PostBPP}_{||,classical} = PostBPP both go through.  You don’t actually care about this, do you?  But Ryan O’Donnell and Cem Say did, and that’s good enough for me.


I wish I could say that these are the only cases of mistakes recently being found in decade-old papers of mine, but alas, such is not the case.  In the near future, my student Adam Bouland, MIT undergrad Mitchell Lee, and Singapore’s Joe Fitzsimons will post to the arXiv a paper that grew out of an error in my 2005 paper Quantum Computing and Hidden Variables. In that paper, I introduced a hypothetical generalization of the quantum computing model, in which one gets to see the entire trajectory of a hidden variable, rather than just a single measurement outcome. I showed that this generalization would let us solve problems somewhat beyond what we think we can do with a “standard” quantum computer. In particular, we could solve the collision problem in O(1) queries, efficiently solve Graph Isomorphism (and all other problems in the Statistical Zero-Knowledge class), and search an N-element list in only ~N^{1/3} steps, rather than the ~N^{1/2} steps of Grover’s search algorithm. That part of the paper remains fine!

On the other hand, at the end of the paper, I also gave a brief argument to show that, even in the hidden-variable model, ~N^{1/3} steps are required to search an N-element list. But Mitchell Lee and Adam Bouland discovered that that argument is wrong: it fails to account for all the possible ways that an algorithm could exploit the correlations between the hidden variable’s values at different moments in time. (I’ve previously discussed this error in other blog posts, as well as in the latest edition of Quantum Computing Since Democritus.)

If we suitably restrict the hidden-variable theory, then we can correctly prove a lower bound of ~N^{1/4}, or even (with strong enough assumptions) ~N^{1/3}; and we do that in the forthcoming paper. Even with no restrictions, as far as we know an ~N^{1/3} lower bound for search with hidden variables remains true. But it now looks like proving it will require a major advance in our understanding of hidden-variable theories: for example, a proof that the “Schrödinger theory” is robust to small perturbations, which I’d given as the main open problem in my 2005 paper.

As if that weren’t enough, in my 2003 paper Quantum Certificate Complexity, I claimed (as a side remark) that one could get a recursive Boolean function f with an asymptotic gap between the block sensitivity bs(f) and the randomized certificate complexity RC(f). However, two and a half years ago, Avishay Tal discovered that this didn’t work, because block sensitivity doesn’t behave nicely under composition.  (In assuming it did, I was propagating an error introduced earlier by Wegener and Zádori.)  More broadly, Avishay showed that there is no recursively-defined Boolean function with an asymptotic gap between bs(f) and RC(f). On the other hand, if we just want some Boolean function with an asymptotic gap between bs(f) and RC(f), then Raghav Kulkarni observed that we can use a non-recursive function introduced by Xiaoming Sun, which yields bs(f) ≈ N^{3/7} and RC(f) ≈ N^{4/7}. This is actually a larger separation than the one I’d wrongly claimed.

Now that I’ve come clean about all these things, hopefully the healing can begin at last.

The Ghost in the Quantum Turing Machine

Saturday, June 15th, 2013

I’ve been traveling this past week (in Israel and the French Riviera), heavily distracted by real life from my blogging career.  But by popular request, let me now provide a link to my very first post-tenure publication: The Ghost in the Quantum Turing Machine.

Here’s the abstract:

In honor of Alan Turing’s hundredth birthday, I unwisely set out some thoughts about one of Turing’s obsessions throughout his life, the question of physics and free will. I focus relatively narrowly on a notion that I call “Knightian freedom”: a certain kind of in-principle physical unpredictability that goes beyond probabilistic unpredictability. Other, more metaphysical aspects of free will I regard as possibly outside the scope of science. I examine a viewpoint, suggested independently by Carl Hoefer, Cristi Stoica, and even Turing himself, that tries to find scope for “freedom” in the universe’s boundary conditions rather than in the dynamical laws. Taking this viewpoint seriously leads to many interesting conceptual problems. I investigate how far one can go toward solving those problems, and along the way, encounter (among other things) the No-Cloning Theorem, the measurement problem, decoherence, chaos, the arrow of time, the holographic principle, Newcomb’s paradox, Boltzmann brains, algorithmic information theory, and the Common Prior Assumption. I also compare the viewpoint explored here to the more radical speculations of Roger Penrose. The result of all this is an unusual perspective on time, quantum mechanics, and causation, of which I myself remain skeptical, but which has several appealing features. Among other things, it suggests interesting empirical questions in neuroscience, physics, and cosmology; and takes a millennia-old philosophical debate into some underexplored territory.

See here (and also here) for interesting discussions over on Less Wrong.  I welcome further discussion in the comments section of this post, and will jump in myself after a few days to address questions (update: eh, already have).  There are three reasons for the self-imposed delay: first, general busyness.  Second, inspired by the McGeoch affair, I’m trying out a new experiment, in which I strive not to be on such an emotional hair-trigger about the comments people leave on my blog.  And third, based on past experience, I anticipate comments like the following:

“Hey Scott, I didn’t have time to read this 85-page essay that you labored over for two years.  So, can you please just summarize your argument in the space of a blog comment?  Also, based on the other comments here, I have an objection that I’m sure never occurred to you.  Oh, wait, just now scanning the table of contents…”

So, I decided to leave some time for people to RTFM (Read The Free-Will Manuscript) before I entered the fray.

For now, just one remark: some people might wonder whether this essay marks a new “research direction” for me.  While it’s difficult to predict the future (even probabilistically 🙂 ), I can say that my own motivations were exactly the opposite: I wanted to set out my thoughts about various mammoth philosophical issues once and for all, so that then I could get back to complexity, quantum computing, and just general complaining about the state of the world.

Superiority of the Latke: The Unexpected Convergence of Quantum Mechanics and Common Sense

Friday, April 26th, 2013

latke

Back in February, I gave a talk with the above title at the Annual MIT Latke-Hamentaschen Debate.  I’m pleased to announce that streaming video of my talk is now available!  (My segment starts about 10 minutes into the video, and lasts for 10 minutes.)  You can also download my PowerPoint slides here.

Out of hundreds of talks I’ve given in my life, on five continents, this is the single talk of which I’m the proudest.

Of course, before you form an opinion about the issue at hand, you should also check out the contributions of my fellow debaters.  On the sadly-mistaken hamentasch side, my favorite presentation was that of mathematician Arthur Mattuck, which starts at the 56-minute mark and lasts for a full half hour (!! – the allotted time was only 8 minutes).  Mattuck relates the shapes of latkes and hamentaschen to the famous Kakeya problem in measure theory—though strangely, his final conclusions seem to provide no support whatsoever for the hamentaschen, even on Mattuck’s own terms.

Finally, what if you’re a reader for whom the very words “latke” and “hamentaschen” are just as incomprehensible as the title of this blog?  OK, here are some Cliff Notes:

  • Latkes are fried potato pancakes, traditionally eaten by Jews on Hannukah.
  • Hamentaschen are triangular fruit-filled cookies, traditionally eaten by Jews on Purim.
  • Beginning at the University of Chicago in 1946, many universities around the world have held farcical annual “debates” between faculty members (both Jewish and non-Jewish) about which of those two foods is better.  (The reason I say “farcical” is simply that, as I explain in my talk, the truth has always been overwhelmingly on one side.)  The debaters have invoked everything from feminist theory to particle physics to bolster their case.

Thanks very much to Dean of Admissions Stu Schmill for moderating, and to MIT Hillel for organizing the debate.

Update: Luboš has a new blog post announcing that he finally found a chapter in Quantum Computing Since Democritus that he likes!  Woohoo!  Whether coincidentally or not, the chapter he likes makes exactly the same points about quantum mechanics that I also make in my pro-latke presentation.

My fortune-cookie wisdom for the day

Thursday, April 18th, 2013

On Sunday afternoon, Dana, Lily, and I were in Copley Square in Boston for a brunch with friends, at the Mandarin Oriental hotel on Boylston Street.  As I now recall, I was complaining bitterly about a number of things.  First, I’d lost my passport (it’s since been found).  Second, we hadn’t correctly timed Lily’s feedings, making us extremely late for the brunch, and causing Lily to scream hysterically the entire car ride.  Third, parking (and later, locating) our car at the Prudential Center was a logistical nightmare.  Fourth, I’d recently received by email a profoundly silly paper, claiming that one of my results was wrong based on a trivial misunderstanding.  Fifth … well, there were other things that were bothering me, but I don’t remember what they were.

Then the next day, maybe 50 feet from where we’d been, the bombs went off; three innocent human beings lost their lives, and many more were rendered permanently disabled.

Drawing appropriate morals is left as an exercise for the reader.


Update (Friday, 7AM): Maybe the moral is that you shouldn’t philosophize while the suspects are still on the loose. Last night (as you can read anywhere else on the web) an MIT police officer was tragically shot and killed in the line of duty, right outside the Stata Center, by one of the marathon bombers (who turn out to be brothers from Chechnya). After a busy night—which also included robbing a 7-Eleven (visiting a 7-Eleven that was coincidentally also robbed—no novelist could make this stuff up), carjacking a Mercedes two blocks from my apartment, and randomly throwing some more pressure-cooker bombs—one of the brothers was killed; the other one escaped to Watertown. A massive hunt for him is now underway. MIT is completely closed today, as is Harvard and pretty much every other university in the area—and now, it seems, all stores and businesses in the entire Boston area. The streets are mostly deserted except for police vehicles. As for us, we heard the sirens through much of the night, but didn’t know what they were about until this morning. Here’s hoping they catch the second asshole soon.

Another Update (Friday, 9AM): As the sorry details emerge about these Tsarnaev brothers, it occurs to me that there’s another moral we can draw: namely, we can remind ourselves that the Hollywood image of the evil criminal genius is almost entirely a myth. Yes, evil and genius have occasionally been found in the same person (as with a few of the Nazi scientists), but it’s evil and stupidity that are the far more natural allies. Which is the most optimistic statement I can think to make right now about the future of the human race.

Yet More Updates (Friday, 3PM): The whole Boston area is basically a ghost town now, with the streets empty on a beautiful spring day and the sound of helicopters filling the air.  I was just up on my roofdeck to watch, and never saw anything like it.  I can’t help thinking that it sets a terrible precedent to give a couple doofus amateur terrorists the power to shut down an entire metropolitan area.  Meanwhile, Andrew Sullivan points to a spectacularly stupid tweet by one Nate Bell:

I wonder how many Boston liberals spent the night cowering in their homes wishing they had an AR-15 with a hi-capacity magazine?

This sounds like a gun nut projecting his own disturbed psychology onto other people.  I’m not actually scared, but if I was, owning a gun would do nothing whatsoever to make me less scared (quite the contrary).  What would make me think I could win a gunfight against a frothing lunatic—or that I’d want to find out?  When it comes to violence, the only thing that calms my nerves is a democratic state having a near-monopoly on it.

What else?  It was chilling to watch the Tsarnaev brothers’ aunt, the one in Toronto, babble incoherently on TV about how wonderful her nephews were (a striking contrast to the remorseful uncle in Maryland).  If it emerges that anyone else in this family (including the parents, or the older brother’s wife) had any foreknowledge about the killing spree, then I very much hope they’ll face justice as well.

In other news, Lily had an eventful day too: she finally figured out how to squeeze her toy ball with her hands.

The Territory Around BQP

Monday, May 16th, 2011

A commenter named Blake Stacey pointed me to a talk entitled The Territory Around BQP: Results and Open Problems, which was given at the Perimeter Institute this past Friday, and which I’d had no idea was available on streaming video.  This talk was part of a fantastic workshop called Conceptual Foundations and Foils for Quantum Information Processing, which was about ways of changing the laws of quantum mechanics to get alternative theories that still make some sort of sense, and that might shed new light on the “tried-and-true original.”  In this particular talk, the speaker discusses a large number of ways to make the complexity class BQP (Bounded-Error Quantum Polynomial-Time) “slightly” bigger or smaller.  I’m embarrassed to admit that I watched this particular talk transfixed to the computer screen: I genuinely couldn’t predict how BQP was going to get mutilated next, and I looked forward to finding out.

My painful lesson for the week

Saturday, December 18th, 2010

Years ago, Sasha Razborov taught me one of my all-time favorite jokes.

In the 1960s, a man starts handing out leaflets in Moscow’s Red Square. Needless to say, he’s immediately apprehended by the KGB. On examining the leaflets, however, the KGB agents discover that they’re just blank pieces of paper. “What is the meaning of this?” the agents demand.

“What could I write?” exclaims the man. “It’s so obvious!”

The lesson I’ve learned this week is that the man was wrong. In politics, nothing is ever too obvious.

Physics for Doofuses: Why Beds Exist

Friday, September 3rd, 2010

I promised to blog more about research, and I will.  Unfortunately, in the one week between my world tour and the start of the fall semester, I’ve been spending less time on quantum complexity research than on sleeping on a new mattress that I bought.  This has provided ample time to ponder the following question, which I’ve decided to add to the Shtetl-Optimized Physics for Doofuses series:

Why is a soft bed more comfortable than a hard one?

At first glance, this question seems too doofusy even for a series such as this, which makes its target audience clear.  The trouble is that, while perfectly reasonable-sounding answers immediately suggest themselves, several of those answers can be shown to be wrong.

Let’s start with the most common answer: a soft bed is more comfortable than a hard bed because it molds to your shape.   The inadequacy of this answer can be seen by the following thought experiment: lie on a soft bed, and let it mold to your body.  Then imagine that the bed retains exactly the same molded shape, but is replaced by ceramic.  No longer so comfortable!

Ah, you reply, but that’s because a ceramic bed doesn’t change its shape as you shift positions throughout the night.  But this reply is still inadequate—since even if you’re lying as still as possible, it still seems clear that a soft bed is more comfortable than a hard one.

So it seems any answer needs to start from the observation that, even when you’re lying still, you’re not really lying still: you’re breathing in and out, there are tiny vibrations, etc.  The real point of a soft bed is to create a gentler potential well, which absorbs the shocks that would otherwise be caused by those sorts of small movements.
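To make the “gentler potential well” claim slightly more quantitative (a crude model I’m making up here: treat the bed as an undamped spring of stiffness k, and a small movement as imparting speed v to a body part of mass m), conservation of energy gives

$$ \tfrac{1}{2} m v^2 = \tfrac{1}{2} k x_{\max}^2 \;\;\Longrightarrow\;\; F_{\max} = k\, x_{\max} = v \sqrt{k m}, $$

so the peak force the bed pushes back with grows like the square root of its stiffness: a mattress four times stiffer transmits twice the peak force for the same small movement.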

(I was tempted to say the point is to damp the movements, but that can’t be right: trampolines are designed for minimal damping, yet sleeping on a trampoline could actually be pretty comfortable.  So the essential thing a bed needs to do is simply to make way in response to small movements and vibrations.  How hard the bed tries to spring back to its original shape is a secondary question—the answer to which presumably influences, for example, whether you prefer an innerspring or a memory-foam mattress.)

So then why aren’t beds even softer than they are?  Well, the limit of infinite softness would be a bed that immediately collapsed to nothing when you lay on it, dropping you to the floor.  But even before that limit, a bed that was too soft would give you too much freedom to shift into awkward positions and thereby cause yourself back problems.  This suggests an answer to a question raised by a colleague: is the purpose of a bed to approximate, as well as possible on the earth’s surface, the experience of sleeping in zero gravity?  Unless I’m mistaken, the answer is no.  Sleeping in space would be like sleeping on a bed that was too soft, with the same potential for back problems and so forth.

Given that lying in bed is normally the least active thing we do, I find it ironic that the only reasons we lie in bed in the first place (as opposed to, say, on steel beams) are dynamical: they involve the way the bed responds to continual vibrations and movements.

I’ll be grateful if knowledgeable physicists, physiologists, or sleepers can correct any errors in the above account.  Meantime, the next time your spouse, partner, roommate, parent, etc. accuses you of lounging in bed all afternoon like a comatose dog, you can reply that nothing could be further from the truth: rather, inspired by a post on Shtetl-Optimized, you’re struggling to reconcile your modern understanding of the physics and biology of lying in bed with the prescientific, phenomenal experience of lying in bed, and thereby make yourself into a more enlightened human being.