## More Wrong Things I Said in Papers

Two years ago, I wrote a blog post entitled PostBQP Postscripts, owning up to not one but *four* substantive mathematical errors that I’d made over the years in my published papers, and which my students and colleagues later brought to my sheepish attention. Fortunately, none of these errors affected the papers’ main messages; they just added interesting new twists to the story. Even so, I remember feeling at the time like undergoing this public repentance was soul-cleansing intellectual hygiene. I also felt like writing one big “post of shame” was *easier* than writing a bunch of separate errata and submitting them to journals, while also reaching a wider audience (and, therefore, doing an even better soul-cleansing job).

So I resolved that, anytime I’d saved up enough errata, I’d do another sackcloth-and-ashes post. Which brings us to today. Without further ado:

**I. Quantum Money Falling Down**

My and Paul Christiano’s explicit public-key quantum money scheme—the one based on low-degree polynomials—has now been fully broken. To clarify, our abstract hidden-subspace scheme—the one that uses a classical black box to test membership in the subspaces—remains totally fine. Indeed, we unconditionally proved the security of the black-box scheme, and our security proof stands. In the paper, though, we also stuck our necks out further, and conjectured that you could instantiate the black box by publishing random low-degree polynomials that vanish on the subspaces you want to hide. At Paul’s insistence, we also recommended adding completely random “noise polynomials” for extra security, though I considered them superfluous.
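As a toy sketch of what "instantiating the black box" means (with made-up, non-cryptographic parameters, not the paper's actual ones): one publishes polynomials over F_2 that vanish on the hidden subspace S, so anyone can test candidate vectors for membership.

```python
import itertools

# Toy sketch of the polynomial instantiation (hypothetical tiny
# parameters, nothing like cryptographic size): publish polynomials
# over F_2 that vanish on a hidden subspace S <= F_2^n, so membership
# in S can be tested without handing out a basis for S directly.

n = 4
# Hidden subspace S = span{e1, e2}, of dimension n/2.
basis = [(1, 0, 0, 0), (0, 1, 0, 0)]

def span(basis):
    """All vectors in the F_2-span of the given basis."""
    vecs = set()
    for coeffs in itertools.product([0, 1], repeat=len(basis)):
        v = tuple(sum(c * b[i] for c, b in zip(coeffs, basis)) % 2
                  for i in range(n))
        vecs.add(v)
    return vecs

# Two degree-2 polynomials over F_2 that vanish on S: they involve only
# x2 and x3 (0-indexed), which are zero on every element of S.
polys = [
    lambda x: (x[2] * x[3] + x[2]) % 2,
    lambda x: (x[2] * x[3] + x[3]) % 2,
]

# The membership test: a vector passes iff every published polynomial
# vanishes on it. (With only two polynomials, vectors outside S can
# also pass; the real scheme publishes enough polynomials that,
# heuristically, only S does.)
print(all(p(v) == 0 for p in polys for v in span(basis)))  # True
```

The attack surface, of course, is exactly that the published polynomials carry algebraic information about S, which is what the Gröbner-basis methods exploit.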

Our scheme was broken in two stages. First, in 2014, Pena et al. broke the noiseless version of our scheme, using Gröbner-basis methods, over fields of characteristic greater than 2. Over F_{2}—the field we happened to use in our scheme—Pena et al. couldn’t quite prove that their attack worked, but they gave numerical evidence that at least it finds the subspaces in n^{O(log n)} time. Note that nothing in Pena et al.’s attack is specific to quantum money: indeed, their attack consists of a purely classical algorithm, which efficiently solves the general classical problem of recovering large subspaces from polynomials that hide them.

At that point, at least the *noisy* version of our scheme—the one Paul had insisted we include—was still standing! Indeed, the Gröbner-basis attack seemed to break down entirely when some of the polynomials were random garbage.

Later, though, Paul and Or Sattath realized that a quantum trick—basically, the single-copy tomography of Farhi et al.—can identify which polynomials are the noisy ones, provided we’re given a legitimate quantum money state to start with. As a consequence, the problem of breaking the noisy scheme can be reduced to the problem of breaking the noiseless scheme—i.e., the problem that Pena et al. already essentially solved.

As bad as this sounds, it has an interesting positive consequence. In our paper, Paul and I had actually given a security reduction for our money scheme based on low-degree polynomials. In particular, we showed that there’s no polynomial-time quantum algorithm to counterfeit our money states, *unless* there’s a polynomial-time quantum algorithm that finds a basis for a subspace S≤F_{2}^{n} of dimension n/2 with Ω(2^{-n/2}) success probability, given a collection of low-degree polynomials p_{1},…,p_{m} and q_{1},…,q_{m} (m=O(n)) most of which vanish on S and its dual subspace respectively, but that are otherwise random. So, running our reduction backwards, the only possible conclusion from the break is that there *is* such a quantum algorithm! Yet we would’ve had no idea how to find that quantum algorithm without going through quantum money—nor do we know a classical algorithm for the problem, or even a quantum algorithm with Ω(1) success probability.

In the meantime, the problem of designing a public-key quantum money scheme, with good cryptographic evidence for its security, remains open. It’s plausible that there’s some other, more secure way to instantiate my and Paul’s hidden subspace scheme, for example using lattices. And even before we’ve found such a way, we can use indistinguishability obfuscation as a stopgap. We could also seek cryptographic evidence for the security of other kinds of public-key quantum money, like Farhi et al.’s based on knot invariants.

A paper about all this is on our to-do stack. In the meantime, for further details, see Lecture 9 in my Barbados lecture notes.

**II. A De-Merlinization Mistake**

In my 2006 paper QMA/qpoly ⊆ PSPACE/poly: De-Merlinizing Quantum Protocols, the technical core of the complexity result was a new quantum information lemma that I called the “Quantum OR Bound” (Lemma 14 in the paper).

Basically, the Quantum OR Bound says that, if we have an unknown quantum state ρ, as well as a collection of measurements M_{1},…,M_{n} that we might want to make on ρ, then we can distinguish the case that (a) every M_{i} rejects ρ with overwhelming probability, from the case that (b) at least one M_{i} accepts ρ with high probability. And we can do this *despite* having only one copy of ρ, and despite the fact that earlier measurements might corrupt ρ, thereby compromising the later measurements. The intuition is simply that, if the earlier measurements corrupted ρ substantially, that could only be because some of them had a decent probability of accepting ρ, meaning that at any rate, we’re not in case (a).
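In symbols (stating it loosely; the exact constants in Lemma 14 differ), the task is:

```latex
% Loose statement of the Quantum OR Bound (illustrative constants, not
% the exact ones from Lemma 14 of the paper): given a single copy of
% \rho and two-outcome measurements M_1,\dots,M_n, distinguish
\text{(a)}\;\; \max_i \operatorname{Tr}(M_i\rho) \le \varepsilon
\qquad\text{vs.}\qquad
\text{(b)}\;\; \max_i \operatorname{Tr}(M_i\rho) \ge 1-\delta ,
% succeeding with constant bias for sufficiently small \varepsilon,\delta.
```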

I’ve since reused the Quantum OR Bound for other problems—most notably, a proof that private-key quantum money requires either a computational assumption or a huge database maintained by the bank (see Theorem 8.3.1 in my Barbados lecture notes).

Alas, Aram Harrow and Ashley Montanaro recently discovered that my proof of the Quantum OR Bound is wrong. It’s wrong because I neglected the possibility of “Zeno-like behavior,” in which repeated measurements on a quantum state would gradually shift the state far away from its starting point, without ever having a significant probability of rejecting the state. For some reason, I assumed without any adequate argument that choosing the measurements at random, rather than in a predetermined order, would solve that problem.

Now, I might actually be *right* that randomizing the measurements is enough to solve the Zeno problem! That remains a plausible conjecture, which Harrow and Montanaro could neither confirm nor refute. In the meantime, though, Harrow and Montanaro were able to recover my QMA/qpoly⊆PSPACE/poly theorem, and all the other conclusions known to follow from the Quantum OR Bound (including some new ones that they discovered), by designing a *new* measurement procedure whose soundness they can prove.

Their new procedure is based on an elegant, obvious-in-retrospect idea that somehow never occurred to me. Namely, instead of just applying M_{i}’s to ρ, one can first put a control qubit into an equal superposition of the |0〉 and |1〉 states, and then apply M_{i}’s *conditioned* on the control qubit being in the |1〉 state. While doing this, one can periodically measure the control qubit in the {|+〉,|-〉} basis, in order to check directly whether applying the M_{i}’s has substantially corrupted ρ. (If it hasn’t, one will always get the outcome |+〉; if it has, one might get |-〉.) Substantial corruption, if detected, then tells us that some M_{i}’s must have had non-negligible probabilities of accepting ρ.
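Here is a minimal numerical sketch of just the control-qubit trick (a toy single-qubit unitary stands in for the conditioned operations; this is not Harrow and Montanaro's full procedure): if the conditioned operation barely disturbs the state, the control is always found in |+〉; if it disturbs the state a lot, |-〉 shows up with noticeable probability.

```python
import numpy as np

# Toy illustration of the control-qubit corruption check: a control
# qubit in |+> conditions a unitary U on a system qubit; measuring the
# control in the {|+>,|->} basis reveals how much U disturbed the
# system state. If U acts nearly as the identity on psi, |-> is never
# seen; the probability of |-> is (1 - Re<psi|U|psi>)/2.

def minus_probability(theta, psi):
    """P(control reads |->) after controlled-U, with U = R_y(theta)."""
    # Joint state: control (first tensor factor) in |+>, system in psi.
    plus = np.array([1.0, 1.0]) / np.sqrt(2)
    state = np.kron(plus, psi)
    # Controlled-U: identity block for control=0, U block for control=1.
    U = np.array([[np.cos(theta / 2), -np.sin(theta / 2)],
                  [np.sin(theta / 2),  np.cos(theta / 2)]])
    CU = np.block([[np.eye(2), np.zeros((2, 2))],
                   [np.zeros((2, 2)), U]])
    state = CU @ state
    # Project the control onto |-> = (|0> - |1>)/sqrt(2).
    minus = np.array([1.0, -1.0]) / np.sqrt(2)
    P_minus = np.kron(np.outer(minus, minus), np.eye(2))
    return float(np.vdot(state, P_minus @ state).real)

psi = np.array([1.0, 0.0])
print(minus_probability(0.0, psi))    # U = I: |-> never observed
print(minus_probability(np.pi, psi))  # U flips psi: |-> seen with prob ~1/2
```

Seeing |-〉 with non-negligible probability is then direct evidence that the conditioned operations had a decent chance of "accepting," which is exactly the witness the OR procedure needs.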

**III. Almost As Good As True**

One lemma that I’ve used even *more* than the Quantum OR Bound is what I’ve called the “Almost As Good As New Lemma,” and what others in the field have called the “Gentle Measurement Lemma.”

I claimed a proof of the AAGANL in my 2004 paper Limitations of Quantum Advice and One-Way Communication (Lemma 2.2 there), and have used the lemma in like half a dozen later papers. Alas, when I lectured at Barbados, Sasha Razborov and others discovered that my proof of the AAGANL was missing a crucial step! More concretely, the proof I gave there works for pure states but not for mixed states. For mixed states, the trouble is that I take a purification of the mixed state—something that always exists mathematically—but then illegally assume that the measurement I’m analyzing acts on the particular purification I’ve conjured up.

Fortunately, one can easily fix this problem by decomposing the state ρ into a mixture of pure states, then applying my earlier argument to each pure state separately, and finally using Cauchy-Schwarz (or just the convexity of the square-root function) to recombine the results. Moreover, this is exactly what other people’s proofs of the Gentle Measurement Lemma *did* do, though I’d never noticed it before Barbados—I just idly wondered why those other proofs took twice as long as mine to do the same work! For a correct proof, see Lemma 1.3.1 in the Barbados lecture notes.
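For reference, the statement in question is usually quoted in roughly the following form (constant factors vary across versions in the literature, and this need not match the paper's Lemma 2.2 verbatim):

```latex
% Almost As Good As New / Gentle Measurement Lemma (commonly quoted
% form; constants differ by source). If a two-outcome measurement
% accepts the mixed state \rho with probability at least 1-\varepsilon,
% then the post-measurement state \tilde\rho (outcome forgotten)
% is close to \rho in trace distance:
\Pr[\text{accept } \rho] \ge 1 - \varepsilon
\;\Longrightarrow\;
\tfrac{1}{2}\,\bigl\|\rho - \tilde\rho\bigr\|_{1} \le \sqrt{\varepsilon}.
```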

**IV. Oracle Woes**

In my 2010 paper BQP and the Polynomial Hierarchy, I claimed to construct oracles A relative to which BQP⊄BPP_{path} and BQP⊄SZK, even while making only partial progress toward the big prize, which would’ve been an oracle relative to which BQP⊄PH. Not only that: I claimed to show that *any* problem with a property called “almost k-wise independence”—one example being the Forrelation (or Fourier Checking) problem that I introduced in that paper—was neither in BPP_{path} nor in SZK. But I showed that Forrelation *is* in BQP, thus yielding the separations.

Alas, this past spring Lijie Chen, who was my superb visiting student from Tsinghua University, realized that my proofs of these particular separations were wrong. Not only that, they were wrong *because I implicitly substituted a ratio of expectations for an expectation of ratios* (!). Again, it might still be *true* that almost k-wise independent problems can be neither in BPP_{path} nor in SZK: that remains an interesting conjecture, which Lijie was unable to resolve one way or the other. (On the other hand, I showed here that almost k-wise independent problems *can* be in PH.)
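For what it's worth, the underlying slip is easy to reproduce numerically (with toy distributions that have nothing to do with Forrelation): a ratio of expectations and an expectation of ratios can differ substantially.

```python
import random

# A minimal numeric reminder that E[X]/E[Y] is not E[X/Y] -- the swap
# behind the flawed oracle-separation proofs. X, Y i.i.d. uniform on
# {1,...,10}: E[X]/E[Y] = 1, but E[X/Y] = E[X]*E[1/Y] ~ 1.61.
random.seed(0)
xs = [random.randint(1, 10) for _ in range(10000)]
ys = [random.randint(1, 10) for _ in range(10000)]

ratio_of_expectations = (sum(xs) / len(xs)) / (sum(ys) / len(ys))
expectation_of_ratios = sum(x / y for x, y in zip(xs, ys)) / len(xs)

print(ratio_of_expectations)  # ~ 1.0
print(expectation_of_ratios)  # ~ 1.6
```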

But never fear! In a recent arXiv preprint, Lijie has supplied correct proofs for the BQP⊄BPP_{path} and BQP⊄SZK oracle separations—using the same Forrelation problem that I studied, but exploiting additional properties of Forrelation besides its almost k-wise independence. Lijie notes that my proofs, had they worked, would also have yielded an oracle relative to which BQP⊄AM, which would’ve been a spectacular result, and nontrivial progress toward BQP⊄PH. His proofs, by contrast, apply only to worst-case decision problems rather than problems of distinguishing two probability distributions, and therefore don’t imply anything about BQP vs. AM. Anyway, there’s other cool stuff in his paper too.

**V. We Needed More Coffee**

This is one I’ve already written about on this blog, but just in case anyone missed it … in my, Sean Carroll, and Lauren Ouellette’s original draft paper on the coffee automaton, the specific rule we discuss *doesn’t* generate any significant amount of complexity (in the sense of coarse-grained entropy). We wrongly thought it did, because of a misinterpretation of our simulation data. But as Brent Werness brought to our attention, not only does a corrected simulation not show any complexity bump, one can rigorously *prove* there’s no complexity bump. And we could’ve realized all this from the beginning, by reflecting that pure random diffusion (e.g., what cream does in coffee when you don’t stir it with a spoon) *doesn’t* actually produce interesting tendril patterns.

On the other hand, Brent proposed a different rule—one that involves “shearing” whole regions of cream and coffee across each other—that *does* generate significant complexity, basically because of all the long-range correlations it induces. And not only do we clearly see this in simulations, but the growth of complexity can be rigorously proven! Anyway, we have a long-delayed revision of the paper that will explain all this in more detail, with Brent as well as MIT student Varun Mohan now added as coauthors.

If any of my colleagues feel inspired to write up their own “litanies of mathematical error,” they’re welcome to do so in the comments! Just remember: you don’t earn any epistemic virtue points unless the errors you reveal *actually* embarrass you. No humblebragging about how you once left out a minus sign in your paper that won the Fields Medal.

Comment #1 July 29th, 2016 at 7:50 pm

This seems like a good place to put this question. A few years ago, in one of your papers https://arxiv.org/abs/1412.6507 you seemed to imply that you believed that graph isomorphism is not in BQP. Does the almost-polynomial-time classical algorithm for graph isomorphism change your view on this matter at all?

Comment #2 July 29th, 2016 at 8:21 pm

Reading this post makes me curious about how it’s possible to rigorously verify all of the important math work that’s out there (whether it’s math, computer science, physics, etc.). So many times, people have told me that when they review such technical papers, they read just enough to get the idea, but they don’t rigorously verify the proofs. Then how do we really know what is right and what is not?

On a related note, this now means I can quote a relevant title from one of our favorite authors:

🙂

Comment #3 July 29th, 2016 at 9:31 pm

Joshua #1: Gasp! I didn’t even notice that language about graph isomorphism in the “Space Above BQP” paper, and would’ve insisted on rewording it if I had. (The broader point in the paper is fine. It’s just that the problem we really mean to talk about, the thing that DQP and other such models can solve but that’s believed to be intractable for quantum computers, is a much more general problem—e.g., isomorphisms of general structures, SZK, or collision-finding—of which GI is an extremely special case.)

Personally, I’ve always thought that GI is most likely in BQP, simply because GI is most likely in P. Babai’s breakthrough result was, of course, consistent with that belief, though Laci himself is the first to stress that major new ideas will be needed to get GI (or even the easier Group Isomorphism) down to P. I could also live with these problems being inherently quasipolynomial, classically and perhaps also quantumly.

Comment #4 July 29th, 2016 at 9:53 pm

Daniel #2: The short answer is that, if people care enough about a math/CS/physics paper—enough that they want to build on it, generalize it, teach it in courses, etc.—they’ll inevitably discover whatever significant errors are in the parts of the paper they care about. Or, taking the contrapositive: if major errors sit for decades in a published math paper with no one finding them, it’s because no one really cared about that paper (or possibly, about the erroneous parts of the paper). And once errors are found, they’re usually very unambiguous, and there’s a strong social norm for the authors to acknowledge them. This is how the system continues working even in the presence of a constant rate of human fallibility.

The other point, of course, is that the majority of errors are fixable, or affect only one part of a paper while leaving the rest intact. It’s rarer, though it certainly also happens, for a whole paper to be irreparably flawed. (Of the 80 or so accepted papers at STOC/FOCS, I think typically 0 or 1 have to be completely withdrawn. Of the 300 or so submissions, maybe 2 or 3 are typically withdrawn.)

If we take the five errors in this post as a small sample, we find:

– Three flawed proofs which turned out to be fixable with an additional idea, fully recovering the claimed main results.

– One proposed cryptographic construction which turned out to be insecure (but of course, we never *claimed* to prove its security!). Our security reduction was perfectly valid; it just turned out to mean the opposite of what we thought it did. 🙂

– One conjecture based on numerical data, which turned out to be false and based on a misinterpretation of the data. This was in an early preprint that was never peer-reviewed, let alone published (and which we put online in the hope of getting useful feedback, a hope that was realized!).

Comment #5 July 30th, 2016 at 1:42 am

One day it will be possible to run a paper through presubmit checks that validate the proof.

And once a paper is submitted there will be continuous validation checks that run on every further submission.

You’ll have a little dashboard showing your papers. The green ones are good, the red ones are bad.

And you’ll occasionally submit a paper that gets through the presubmit checks and breaks stuff. You’ll have several people messaging you about how you broke their papers and how you’re blocking their latest one.

Have you rolled it back already? I’m getting an error saying NP ⊄ BQP.

Comment #6 July 30th, 2016 at 11:50 am

The quantum money one seems like it’s not a mistake; it seems like you made an interesting conjecture and it was false, and the refutation of it was interesting. This seems very different from making a mistake.

I’m sure I’ve had mistakes in papers, though I work in obscure areas so they may not have been discovered. Here is a story from a talk I gave:

Bill: The proof of the poly vdw theorem gives Ackermann-like bounds on the poly vdw numbers. No better bounds are known.

Audience member: Shelah proved prim rec bounds on the poly vdw numbers five years ago.

Bill: Whoops.

Comment #7 July 30th, 2016 at 12:44 pm

But Job — what if the presubmit checks end up having unchecked errors?

“Silly, it’s errors all the way down!”

Comment #8 July 30th, 2016 at 2:53 pm

All the mistakes that have gone uncaught (some going as far back as 2010) show that the review process in theory is BS.

Would it be so?

Comment #9 July 30th, 2016 at 3:15 pm

ItisallBS #8: Of the five things I discussed, only one (“Almost As Good As True”) involved an actual mistake in a paper published in a journal, and that one was by far the most minor and easiest to fix. The other four included two conjectures that turned out to be false (as Gasarch pointed out), and two mistakes in conference papers that I never bothered to submit to journals, but probably should have (but it’s moot now, since by now other people have written papers that fully resolve the problems, though in both cases, they leave interestingly open the question of whether my original approach could also work).

One of the most important lessons I learned from David Deutsch is that errors are a normal, healthy part of how science operates, and not something to fret about too much … as long as there’s an effective system in place to correct them.

Comment #10 July 31st, 2016 at 11:31 pm

Long ago, as I recall, a physics prof stated something like “According to Eugene Wigner, half of all math mistakes are screwing up minus signs”. I think this is probably true.

For a little entertainment in discrete math for CS major classes, I convey that story the first time I make a sign error. Then I put a chalk mark on the wall above the board to keep a running total, and estimate the likelihood of finishing the semester in single digits.

Comment #11 July 31st, 2016 at 11:50 pm

Hey Scott,

Regarding the paper that you discuss in point 5) in the post, I had sent you an email, dated 11/27/2015, asking for your thoughts. 🙂

Not sure if it missed your attention or it basically didn’t merit attention at all! 😛

Anyway, could you tell me if what follows makes any sense in the context of that paper?

The lower the entropy of a system is, the more (localized) symmetry it exhibits. I think it is this symmetry (“patterns” in the system, if you will) that helps simplify the description, thus keeping its complexity low.

However, as the entropy of a system grows substantially, another symmetry appears: the probability distribution function of the system becomes more and more symmetric, due to increasing randomness of the system.

As long as a system has any form of symmetry associated with it, that symmetry helps reduce the complexity of its description.

Thus, in between the highly “space-symmetric” low-entropy states and the highly “probability-symmetric” high-entropy states (near a maximally randomized state), there will be intermediate states that are neither so ordered as to exhibit spatial symmetry nor so disordered as to exhibit probabilistic symmetry. These states should exhibit considerable complexity.

Again, what do you think?

Comment #12 August 1st, 2016 at 2:21 am

Arko #11: Yes, that’s a good qualitative way to describe what’s going on in the coffee-cup paper—that you lose symmetries as you start mixing, but then regain the symmetries (macroscopically, anyway) once the system is fully mixed. The interesting part is to prove that enough symmetry gets lost, at intermediate times, for the coarse-grained entropy *ever* to become large.

Comment #13 August 1st, 2016 at 4:34 am

There are two kinds of scientists: those that make mistakes and admit to them, and those that make mistakes and do not admit to them.

Unfortunately there are plenty of scientists that think they belong to the non-existent category of scientists that do not make mistakes.

Congratulations to Scott for putting himself in the first category!

Comment #14 August 1st, 2016 at 9:09 am

I wonder if the reportedly large number of errors in math publications is due (at least in part) to people trying to prove too much too quickly?

Gauss and Gödel published very little, and I don’t think there was a single error in their works. It would be interesting to know if, by contrast, the extremely prolific great mathematicians (Euler, Erdős) paid the price for their torrential output with (initially) undetected errors…

Comment #15 August 1st, 2016 at 11:40 am

Off-topic comment: Since discussions in the comments of this blog tend to become long and tangled with cross-references, I wrote a script to make following them much easier. Maybe others can find it useful too.

(Of course, if everyone had just used `<a href="#comment-NNNNN">` to link to comments, I wouldn’t have had to write this, but who has the time for that?)

Comment #16 August 1st, 2016 at 11:41 am

[Direct installation link: here]

Comment #17 August 1st, 2016 at 11:57 am

Vadim,

The errors will be detected eventually, so what is the problem? Here are how it went for three great mathematicians:

Euler applied Newtonian physics and calculus to every problem that walked, hopped, or crawled down the road, showing everyone how to attack things. He worked harder and faster as he got older and weaker and blinder. Supposedly he died from not stopping writing to eat. (The last is likely an E.T. Bell fiction).

Gauss published little, and often only after other people published something, and Gauss dug out a paper he had not quite polished yet, with a better version. All of his published stuff was pretty much perfect. So how much did Gauss take to the grave with him? Probably plenty. Who knows what he was working on for fun. Maybe Fermat’s proof that was a bit too big to fit on the margin? It is a big loss that Gauss did not publish everything he thought about but wasn’t sure of. Probably plenty of it remains undiscovered to this day.

Riemann published a lot in his brief life, much of it revolutionary and starting new subject areas in math. For example, much of non-Euclidean geometry started with Riemann’s unprepared, ad hoc comments in answering a question from Gauss at his Ph.D. comps. Supposedly Riemann never wrote a correct proof in his life. I think most or all of what he published was basically correct, but his proofs were not solid. Riemann’s errors provided plenty of material to work on for generations of mathematicians. So what is wrong with that state of affairs?

Comment #18 August 1st, 2016 at 1:28 pm

It’s interesting that the conclusion based on simulation data is the one that ends up being really wrong.

The proof isn’t always in the pudding I guess?!

Comment #19 August 1st, 2016 at 1:42 pm

fred #18: I’ve actually found that to be a common occurrence. It’s *ridiculously* hard to extract reliable conclusions from data; a few experiences trying to do empirical studies gave me enormous respect for the people who do that for their day job. Rather than simulation data leading to understanding, often it’s understanding (sometimes, though not always, in the form of a proof) that leads to realizing why your faulty simulation data was misleading you—and also why you never even needed to collect the data in the first place, had you been thinking clearly enough! 🙂 On the other hand, without going to the trouble to get the data, your brain is also much less likely to have been primed for understanding—so the two things really do feed off each other.

Comment #20 August 1st, 2016 at 4:58 pm

Please allow me to join in the appreciation of yet another fine *Shtetl Optimized* essay, which brings several mathematical anecdotes and essays to mind.

Simplest are the many stories relating to the algebraic geometer Solomon Lefschetz, of whom it was said: “Never an incorrect theorem; never a correct proof”. Lefschetz himself would tell the following story to illustrate (what Lefschetz regarded as) a proper disdain for excessive mathematical rigor:

A broader literature is illuminated when we consider mathematical mistakes in light of Jacobi’s maxim *“man muss immer umkehren”* (commonly translated as “invert, always invert”).

Applying Jacobi’s inversion maxim leads us to inquire whether realms of mathematics exist in which mistakes *never* (or almost never) are seen. One such realm is the formal verification (by machine) of the theorems of mathematics that are regarded as fundamentally important and/or exceptionally beautiful. The Dutch mathematician Freek Wiedijk maintains a list of such proofs, and the remarkable “dog in the night” phenomenon is that (up to the present time) no widely accepted mathematical theorem has ever been found to be in error.

How is it, we are led to wonder, that the demonstrably fallible reasoning of mathematicians like Lefschetz has reliably led humanity to results that later are proved to be correct? How is it that mathematicians can be right in their theorems very much more often than they are right in their proofs?

Terry Tao’s essay “What is good mathematics?” (2007, arXiv:math/0702396) lists twenty-one elements of good mathematics (emphasizing that no such listing can be complete); the list grounds mathematics in “insight”, “discovery”, “vision”, “taste”, “beauty”, “elegance”, and “intuition”. These are parcellations of cognition at which human mathematicians like Lefschetz (and Tao) excel, but that escape machine-checkable formalization (by any methods that we presently know).

For concrete examples of “insight”, “discovery”, “vision”, “taste”, “beauty”, “elegance”, and “intuition”, a good exemplar (as it seems to me) is Tao’s recent preprint “Finite time blowup for Lagrangian modifications of the three-dimensional Euler equation” (2016, arXiv:1606.08481v1). The gratifying feature (for me) regarding this preprint is that Tao begins by rewriting the fluid dynamical equations entirely in (as Tao calls it) “the language of differential geometry.” This language, which privileges cognitive naturality and universality relative to physicality, is sufficiently conducive to Lefschetz-style “insight”, “discovery”, “vision”, “taste”, “beauty”, “elegance”, and “intuition”, while retaining compatibility with rigor, that with it Tao can concretely aim to solve the Clay Institute’s Millennium Prize Navier-Stokes Problem:

So there’s not much doubt that soft notions of “insight”, “beauty” (etc.) are entirely compatible with hard-nosed programs for tackling even the hardest open problems in mathematics.

What relevance can Tao’s methods have for folks of undistinguished mathematical talent (that would be me) whose interests are quantum dynamical rather than classical? Two considerations are relevant (as it seems to me).

The first consideration is that Tao is agnostic regarding the blow-up (or not) of Navier Stokes flows; the natural/universal modern mathematical formalism of differential and algebraic geometry is helpful either way. Similarly in regard to the achievability of Quantum Supremacy (or not), modern geometric formalisms help focus our “insight”, “discovery”, “vision”, “taste”, “beauty”, “elegance”, and “intuition” in reading Terry Rudolph’s Supremacy-positive “Why I am optimistic about the silicon-photonic route to quantum computing” (arXiv:1607.08535), and equally Gil Kalai’s Metameracy-positive “The Quantum Computer Puzzle” (arXiv:1605.00992).

Here I have borrowed from Aram Harrow (and from my wife Connie, who works in printing) the notion of “metameracy”, here regarded as the systematic description by Tao-style geometric formalisms of quantum dynamical systems that look effectively like finite-dimensional Hilbert spaces, while at a deeper level evolving on tensor-product spaces whose algebraic structures are thermodynamically and informatically compatible with the Hilbert-space resolution of those structures (QED being a bountiful source of such systems).

Fortunately the translation of traditional quantum “bra-ket” language into Tao-style geometric language is sufficiently guided by considerations of naturality and universality, as to largely depend upon technique rather than inspiration.

The blue-collar lesson-learned (for me at least) from these Tao-style metameric constructions, is that comparably valuable to considerations of “rigor” in the study of quantum systems are Tao-style considerations of “insight”, “discovery”, “vision”, “taste”, “beauty”, “elegance”, and “intuition”, which as illuminated by modern metameric methods, are valuable in respect to all eventualities in the quest for supremacy … that is, the supremacy of our quantum understandings over our quantum confusions.

So which is correct? Terry Rudolph’s quantum optimism or Gil Kalai’s quantum pessimism? Perhaps this isn’t the most interesting question we can ask, given that both researchers are providing us with plenty of inspirational grounds for applying Tao-style metameric methods to gain illuminating quantum dynamical insights.

Comment #21 August 1st, 2016 at 5:05 pm

Doh! Now I dearly wish I had added to the above comment the obvious (and comforting) conclusion:

“even as we make Lefschetz-style errors along the way”. 🙂

Also, thank you Scott, for your years of work and personal commitment in hosting *Shtetl Optimized*, and for your outstanding example of integrity in confessing mistakes and embracing their necessity in research.

Comment #22 August 2nd, 2016 at 4:31 am

Joshua (I honestly can’t find the hash on my Mac keyboard) 2

“Reading this post makes me curious about how it’s possible to rigorously verify all of the important math work that’s out there…”

At the risk of sounding glib, proof assistants? My supervisor told me a couple of years ago that ten percent of the papers submitted to a leading CS conference (POPL or one of those) were formally verified, and I believed him.

I guess my comment is aimed at Scott, too. It’s easy to say that formal verification would have caught the errors, but that’s an over-simplification. Better might be to say that it would have led you down different roads, although I’m struggling for the words here as usual. Anyway, there’s more to it than that, I wouldn’t want to have missed your post this morning, for example.

Comment #23 August 2nd, 2016 at 9:18 am

The problem in science today is not ‘mistakes’ but the ‘publish or perish’ attitude in general.

‘publish or perish’ means:

-fast paced writing not adequately reviewed by the authors

-less time for peer reviewers to review the papers

-incremental papers: instead of writing a full, mature version of the whole research, it is broken down into several less-than-foolproof papers.

Science is gradually having a ‘hedgefund attitude’.

For example back in the days of the Higgs experiments it was pathetic and ridiculous to watch almost everyday CERN officials to speak about their ‘research progress’, presumably to keep ‘funders’ happy. h-index, grants and funds tyranny…

Fast paced, less than 3-4 person research is error-prone.

Collaborative research based on large networks of talented people working on the same goals is the future.

Comment #24 August 2nd, 2016 at 11:42 am

James Smith, your comment reminded me of an umkehren (see #20) from Michael Harris’ weblog Mathematics Without Apologies, in Harris’ essay “My last word (for now) on HoTT” (May 2, 2015; search engines find it). The many well-reasoned comments that Harris’ essay stimulated touch upon some very high-level considerations regarding the essential nature of what Harris amusingly calls “the problematic vocation” of mathematics. 🙂

The literature on proof assistants is vast and ever-expanding; what’s emerging is not so much one dominant proof assistant as a strongly interacting ecosystem of proof assistants, with multiple phyla, classes, orders, families, genera, and species in it. This accommodates Terry Tao’s view from his essay “What is Good Mathematics?” (as cited in #20).

To borrow a metaphor from Marvin Minsky’s Society of Mind(s), the mathematical enterprise can be appreciated as an ever-evolving “ecosystem of mind(s)” more naturally and realistically than as an ever-taller hierarchy of ever-more-subtle proofs of ever-broader theorems.

Henry George Forder’s umkehren maxim that “The virtue of a logical proof is not that it compels belief but that it suggests doubts” inspires us to question the integrity, stability, and humanity of the ongoing evolution of the mathematical ecosystem of minds.

Shinichi Mochizuki, who is the sole author of a much-discussed claimed proof of the ABC Conjecture, expresses these doubts poignantly in his just-released exposition “The Mathematics of Mutually Alien Copies: From Gaussian Integrals to Inter-Universal Teichmüller Theory” (July 2016), specifically in Mochizuki’s concluding Section 4.4, “Atavistic resemblance in the development of mathematics”:

Quite a few mathematicians are concerned about a future in which (for example) Mochizuki’s proof has been distilled into a machine-verified string of symbols, whose origins and motivations no mathematician (not even Mochizuki) can effectively articulate.

Recent stunningly strong go-playing performances by Google’s AlphaGo program have heightened these cognitive eco-concerns, in that AlphaGo plays wonderfully strong go-moves without providing any explanations (even in principle) in regard to why its go-moves are strong.

Indeed, more and more simulation algorithms — dynamical flows on tensor networks in particular — similarly unite transformational simulation capacity with unsettling cognitive opacity.

Now these transformational capacities and concomitant cognitive concerns are diffusing throughout the entire STEAM enterprise, like an opaque ink diffusing into the (supposedly) clear water of our STEAM-understanding. And this observation is not any kind of proper conclusion for an essay, but rather provides a starting point for our contemplation.

Comment #25 August 2nd, 2016 at 12:34 pm

Incorrect proofs make me think…

Why has formalism fallen out of fashion for proofs? Did Gödel ruin that idea? I hope not. Is it possible to avoid proofs relying on ambiguous, non-formalized language, and instead accept the lower-level ambiguity of syntax? In such a situation, it would just be up to the human reader what they want to infer from the symbol manipulation of the proof. Why not take complexity and computability theory to the computer? Could quantum computing proofs be formalized on a classical computer? Maybe then I would be better able to follow some of these proofs and discussions 🙂

Comment #26 August 2nd, 2016 at 4:35 pm

John Sidles #20 (I’ve found the hash on my Mac now)

You write:

“the list grounds mathematics in “insight”, “discovery”, “vision”, “taste”, “beauty”, “elegance”, and “intuition”. These are parcellations of cognition at which human mathematicians like Lefschetz (and Tao) excel, but that escape machine-checkable formalization (by any methods that we presently know).”

Well, for balance, see Thomas Hales’ talk:

https://www.youtube.com/watch?v=Is_lycvOkTA

If you paraphrased parts of Prof. Hales’ talk you might find that many words such as “elegance”, “intuition”, etc. could be attributed to the formalisation process.

In essence, his original proof was technical to say the least and accessible to only a very few. On the other hand, large parts of what he calls the blueprint proof, that is, the comprehensively revised proof amenable to formalisation, were done in what he calls ‘mathematics land’ and not, as I read it, hunched in front of a computer at all. And, for emphasis, I could repeat that many of Tao’s criteria could be attributed to both the process and the outcome.

Comment #27 August 2nd, 2016 at 5:55 pm

Jon K. #25: No, the reasons why automated proof-checkers aren’t more widely used in math have nothing to do with Gödel’s Theorem.

The most important current application of automated proof-checkers, by far, is to software verification. There you have proofs that are both (a) straightforward enough that existing programs can find them, and (b) so boring that no human being would want the job of finding them.

A second, newer application is to verifying the proofs of important theorems—as in Thomas Hales’ breakthrough work on the Kepler conjecture—but so far, only after all the conceptual dust has settled.

Of course, human mathematicians have used software for lots of things, from numerical calculations to symbolic manipulation to huge combinatorial searches. But I can’t think of a single example (can anyone else?) where a human mathematician verified their ideas with an automated proof-checker in the course of doing original, exploratory research. It would just slow you down too much—as if you were an architect who couldn’t sketch anything with paper or CAD programs, and you needed to build out each idea with actual wood and bricks before being confident enough to move on to the next idea.

Of course, most of the “blueprints” that mathematicians create never get turned into actual “houses” (that is, formalized ZFC proofs), because there’s not enough point in doing so: no one can live in these “houses,” and the real purpose of these blueprints is simply to inspire (or be reused in) other blueprints by other architects.

Eventually, automated proof-checkers might become so good that human mathematicians switch to a norm of using them, but we’re not there yet.

Comment #28 August 2nd, 2016 at 7:11 pm

For mathematicians, plausibly no (for now at least). For chemists and condensed matter physicists, plausibly yes. Because the above statement, applied to chemical and condensed matter research, becomes the unexceptional assertion:

Commonly chemists now simulate thousands or even millions of chemical structures, before synthesizing even one molecule and/or material for experimental test. The pragmatic reason is that information from simulations is becoming comparably reliable to information from experiments (in that both exhibit large errors), but the simulations have become thousands or even millions of times less costly than the experiments.

To at least some older chemists and physicists, simulation-centric research programs seem unnatural (even somehow immoral). In contrast, younger chemists and physicists have largely embraced the brave new world of Quantum Simulation Supremacy. At the end of the day, in the face of highly competitive academic environments, young chemists and condensed matter physicists have no viable alternative.

Should mathematicians be made uneasy, by apprehension that a comparable transformation is in the offing for mathematics? Here there is a diversity of opinion, for the simple reason that no one knows for sure.

Comment #29 August 3rd, 2016 at 4:04 am

Scott #27: Do you think we will ever get some AI proof-checker that can read mathematical proofs written in natural language and formalize them on the fly?

One thing that I find daunting is that reading proofs requires a bit of “common sense”: often authors just skip some well-known or obvious argument, so that the proof is, strictly speaking, a non sequitur. Reading it requires either knowing the argument or deriving it yourself.

Comment #30 August 3rd, 2016 at 7:42 am

Mateus #29:

One thing that I find daunting is that reading proofs requires a bit of “common sense”: often authors just skip some well-known or obvious argument, so that the proof is, strictly speaking, a non sequitur. Reading it requires either knowing the argument or deriving it yourself.

For precisely that reason, my guess (let some enterprising person prove me wrong!) is that converting informal proofs into formal ones, with any decent success rate, will turn out to be AI-complete.

So, to take one example, theoretical computer science would grind to a halt if, anytime anyone said anything equivalent to “one can easily write a program to iterate through all possible X’s, check Y, and store Z…,” they actually had to write the program, or say something that was mechanically translatable into such a program.

Comment #31 August 3rd, 2016 at 10:57 am

How hard could it possibly be to get a robot to wave its hands?

😉

Comment #32 August 3rd, 2016 at 11:31 am

Scott #30

You write:

“But I can’t think of a single example (can anyone else?) where a human mathematician verified their ideas with an automated proof-checker in course of doing original, exploratory research.”

Again, this isn’t quite right. Watch Prof. Hales’ talk and you will see.

Comment #33 August 3rd, 2016 at 11:35 am

The idea of formalisation being some kind of ‘translation’ from ‘natural’ mathematics to ‘something a computer will understand’ isn’t really the way to look at it. Think of the process of formalisation as whittling down the proof, weeding out the unnecessary complexities. Formal mathematics is, mmm, to put it one way, simpler mathematics? Clearer mathematics, perhaps.

Comment #34 August 3rd, 2016 at 11:58 am

I think relying on “common sense” when it comes to proofs is problematic. I realize the whole process of creating publishable proofs would become tedious, but the process of formalizing a proof this way might reveal something about the nature of what is provable, and how it is provable. It also might throw a wrinkle into probabilistic proofs, proofs that implicitly rely on the axiom of choice, mathematical induction, and other concepts that relate to randomness or infinity. (Although, maybe certain results could be patched up with pseudo-randomness.) What can I say, I like these formalist ideas.

On the idea of an AGI proof-checker, I would think it would come back with lots of questions on any proof that wasn’t formalized. It also might have a preference for constructive proofs.

Comment #35 August 3rd, 2016 at 4:20 pm

James #32:

Again this isn’t quite right.

Well, we know Hales thought that all the essential ideas were there before he ever bothered with an automated proof-checker, because he submitted his paper to Annals of Mathematics before he ever bothered with an automated proof-checker.

Comment #36 August 3rd, 2016 at 4:26 pm

Jon K. #34:

On the idea of an AGI proof-checker, I would think it would come back with lots of questions on any proof that wasn’t formalized.

My Singularity friends assure me that, almost immediately after the first AGI is created, it will bootstrap itself up to become incomprehensibly smarter (or at least faster) than the smartest humans. In which case, couldn’t the AGI easily fill all the gaps in an informal proof itself, whenever the gaps were fillable at all? In any case, why would it bother asking a human?

More generally, I should say: anyone who feels strongly about the importance of formalizing mathematics ought to lead the way, by doing original research and publishing it in a fully formalized form! If a switch to fully-formalized math ever happens, it won’t happen at the urging of those who do neither formal nor informal math. 🙂

Comment #37 August 3rd, 2016 at 5:25 pm

Jon K. #34: I think you are misunderstanding several things about proofs. First of all the “common sense” is about what needs to be said explicitly and what is assumed to be background knowledge and can be left unsaid. There is nothing mystical about it.

Second, the concepts of infinity, randomness, induction, and the axiom of choice are perfectly rigorous. I don’t see how a formalized proof would be helpful in clearing up any confusion about them. An automated theorem prover would accept them as axioms or definitions just like any other and build on top of them. In fact, I bet that there already exist several formalized proofs that use these concepts.

Comment #38 August 3rd, 2016 at 7:04 pm

Who says I don’t do informal math? 🙂

I believe Prof. N.J. Wildberger is trying to formalize math a little more on his YouTube channel. It’s an ongoing development, so I can’t tell you if any of his conclusions will be convincing, but the framework he is laying out seems to be pointing in a computability direction. It’s worth checking out if you like contrarian ideas.

Comment #39 August 3rd, 2016 at 9:32 pm

It has never been clear (to me at least) that there exists any very strong reason to foresee that “bootstrapping to incomprehensible smartness” is feasible even in principle.

To the extent that the anatomy of the brain is dual to the structure of cognition, then isn’t the demonstrably parcellated anatomy of the human brain (which is the subject of a very great deal of ongoing fundamental and clinical research) isomorphic to an inherently parcellated structure of human cognition in general, and mathematical cognition in particular?

To borrow Terry Tao’s parcellation of mathematical appreciation (per comment #20 and Tao’s arXiv:math/0702396), gains in the cognitive capacity for “insight”, “discovery”, “vision”, “taste”, “beauty”, “elegance”, and “intuition” (etc.) are not obviously mutually correlated by any presently-known anatomic or cognitive parcellation of brain or mind.

Enough is known, however, to be reasonably confident that human mathematical cognition is not accomplished solely by the processes of deductive ratiocination that the Singularitarians contemplate.

Fortunately, even in the face of these unknowns, we can all be entirely confident that by the time these tough questions are settled (if they ever are), plenty of good mathematics, science, engineering, medicine, and even art will have been created. Because here the entire STEAM enterprise comes into play. Which is good! 🙂

— — — —

Please let me say, too, that I am very grateful to James Smith (of #22, 26, 32, and 33) for his links and remarks on Thomas Hales’ work, which are wonderful. And in regard to the formal status of the Axiom of Choice, constructive versus nonconstructive proofs, Grothendieck Universes (etc.), Freek Wiedijk’s list of formal proofs (see #20 for a link) italicizes those proofs that are formally constructive. Obviously plenty of questions remain open; Colin McLarty’s “What does it take to prove Fermat’s last theorem? Grothendieck and the logic of number theory” (2010) is a good introduction to these exceedingly difficult, intrinsically “multiparcellate” mathematical questions.

Comment #40 August 4th, 2016 at 10:27 pm

Although formalized proofs have now been completed for advanced theorems like the odd order theorem and prime number theorem, the more valuable purpose they’ve served has been to guarantee correctness in problems like the Kepler conjecture and four color theorem where the original proofs were computer-assisted and too complicated for humans to completely verify. In theoretical computer science it doesn’t seem like these types of unreadable proofs are common, but below I’ll list three scenarios where formalization might have modest utility.

#1: The computer-assisted proof of the NP-hardness of minimum-weight triangulation, or related hardness proofs with extremely complicated gadgets. Note that for gadget-reduction proofs the polynomial running time is usually obvious, so formalization would only be needed to ensure the correctness of the gadgets, but paradoxically and fortuitously the correctness would likely be the easier part to formalize. Of the 12 problems with uncertain NP-hardness status listed by Garey and Johnson in their famous book, minimum-weight triangulation was the 10th to be resolved, after being open for several decades. Since JACM published it, I’m not questioning the proof’s correctness or implying anything negative about the authors’ clearly impressive work, but it would surprise me if more than a handful of people on Earth have taken the trouble to verify every detail. The proof involved a bunch of complicated funny-looking wire gadgets, long decimals floating everywhere, and many computer-assisted optimizations.

#2: Another place where formalization might be called for is in requiring a proof assistant script to earn the million dollars for solving P versus NP, given the number of past mistakes.

#3: Finally, the proof of Bell’s theorem is simple enough that a formalization probably wouldn’t take very long, assuming it hasn’t already been done. A formalized proof would permanently end hidden variable counterexamples and objections to this apparently controversial theorem from people like Joy Christian, although the implications of the mathematics for physics would surely still be debated.
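The simplicity claimed here is easy to demonstrate numerically. The sketch below (an illustration of the standard CHSH form of Bell's theorem, not a formalization) computes the singlet state's CHSH value at suitably chosen measurement angles, which exceeds the local-hidden-variable bound of 2:

```python
import numpy as np

# Pauli matrices; a spin measurement along angle t in the Z-X plane.
Z = np.array([[1.0, 0.0], [0.0, -1.0]])
X = np.array([[0.0, 1.0], [1.0, 0.0]])

def meas(t):
    return np.cos(t) * Z + np.sin(t) * X

# The singlet state (|01> - |10>) / sqrt(2).
psi = np.array([0.0, 1.0, -1.0, 0.0]) / np.sqrt(2)

def corr(a, b):
    # Correlation <psi| A(a) (x) A(b) |psi>; equals -cos(a - b) for the singlet.
    return psi @ np.kron(meas(a), meas(b)) @ psi

a0, a1 = 0.0, np.pi / 2
b0, b1 = np.pi / 4, 3 * np.pi / 4
S = corr(a0, b0) + corr(a1, b0) + corr(a1, b1) - corr(a0, b1)
print(abs(S))  # 2*sqrt(2) ~ 2.828, above the classical bound of 2
```

Any local hidden-variable model must satisfy |S| ≤ 2, so the computed value is the whole mathematical content of the disagreement.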

Comment #41 August 6th, 2016 at 12:51 am

Dear Scott,

In the main post, you say:

“you don’t earn any epistemic virtue points unless the errors you reveal actually embarrass you.”

Ummm… That’s fine by me, but strangely, you omit to quantify the extent of the embarrassment that should be felt before a comment could be posted. … Quite unlike you, Scott.

—

OK. Since I had spent a little time (a couple of hours or so) on the coffee-related thingie in the past (c. 2011), might as well now note that I now feel (only so slightly) embarrassed that I had not thought of what Brent Werness did.

Namely, that there would be this contribution of convection apart from that of diffusion. Brent’s “shearing” of the “whole regions” would precisely mean convection, and your “long range correlations” probably would mean more or less the same.

—

Why is the embarrassment which I do feel so mild?

Because, I think, I could, probably, have got what Brent did, if only I were to work on it. [… Guys, don’t hit me so hard on the head; instead, please read on….]

—

Could I really have?

Yes, I think, chances are bright—provided I were to actually spend some serious time on it, i.e., if I were to write and run simulations on computer (instead of hurriedly working out a small example) and attempt to write at least a draft version of a paper (instead of a mere comment on someone else’s blog).

Two reasons: (i) NS equations are precisely the kind of stuff that I (or “mechanical” engineers like me) spend (at least) some time on, and the fluxes in the NS equations involve, on the “ma” side, only these two mechanisms—diffusion and convection. If I were to actually begin simulating even an abstract version of a multi-phase fluid phenomenon, I very, very probably couldn’t have escaped wondering also about convection. It would be hard to think of only one form of the flux and not the other. (ii) Within CFD, particle-based approaches like SPH (smoothed particle hydrodynamics) and LBM (lattice Boltzmann method) are right up my ally.

Of course, I was not at all active in CFD back in 2011 (or even in 2014), so the counter-chances also are appreciably strong.
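The two flux mechanisms named here, diffusion and convection, can be seen side by side in a toy 1D finite-difference update (my illustration; all grid sizes and coefficients are invented for the sketch):

```python
import numpy as np

# One flux of each kind: diffusion D*u_xx (central differences) and
# convection v*u_x (upwind for v > 0), on a periodic 1D grid.
nx, dx, dt = 100, 1.0, 0.1   # illustrative parameters (stable: D*dt/dx^2 <= 0.5)
D, v = 1.0, 2.0
u = np.zeros(nx)
u[nx // 2] = 1.0             # a blob of "milk" in the middle

for _ in range(200):
    diff = D * (np.roll(u, -1) - 2 * u + np.roll(u, 1)) / dx**2
    conv = -v * (u - np.roll(u, 1)) / dx
    u = u + dt * (diff + conv)

print(round(u.sum(), 6))  # 1.0: total "milk" is conserved (periodic boundaries)
```

Diffusion alone only smears the blob; the convection term transports it bodily, which is the “whole regions shearing” effect in question.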

—

As to counter-chances, there does remain the issue of diffidence. Even in my case, there exists a small but definite probability that, while talking with the TCS folks, I could/would have relegated my fluids-knowledge to the background. Maybe out of a fear of not sounding intelligent, abstract, mathematical, or even plain relevant enough. The probability in my case is admittedly small, very small, but it is finite. So, even I could have done that—not bring up convection to your notice.

—

But still, on the whole, I feel that, if I were to actually do some research on this topic (or even if I were only to participate in it in a personal setting), then at some time or the other, in some form or the other, at least as an informal/stupid/brain-storming kind of query to you maths/physics/TCS folks, I would have ended up bringing up convection (or groups of coffee/milk pixels moving together as blocks) to your attention. Guess I would have done that. (In fact, any CFD guy would have done that.)

As to converting these physical insights into suitable maths? … LOL! (One feels embarrassment only over something he is sure he could have got.)

All the same, good that you brought up this topic again. (I mean the coffee, not the embarrassments.)

—

Hmmm… So the take-away point is: there is some kind of a correspondence between the differential non-linearity of convection and the information complexity of strings. … Sounds right. Non-linearity is the basic reason behind chaos.

—

[One last point: Here, in writing this comment, I have happily chosen to skip over the “colleagues” and “mathematical” parts of your requirements, Scott.]

[One more “last” point: Something is to be said in favor of inter- and multi-disciplinary research.]

OK. Enough is enough. I now await your completed (revised) paper.

Best,

–Ajit

[E&OE]

Comment #42 August 6th, 2016 at 5:19 am

Here is an embarrassing story from me.

I once thought I had solved a problem in computational geometry. I thought I had managed to prove the problem NP-hard by finding a polynomial reduction from PARTITION to it. Well, the thing is that PARTITION has polynomial-time algorithms when the numbers are written in unary. And guess what: the pieces in my problem all had fixed length, so they were effectively unary numbers. So I was actually doing an exponential reduction between PARTITION and my problem.

For some days I thought I had solved the problem, until someone pointed out the error to me.

After some months, I found a way to reduce 3-PARTITION to the problem instead.

And that is the story.
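The pseudo-polynomial behavior that undermines such a reduction is the textbook subset-sum dynamic program for PARTITION: its running time is polynomial in the sum of the inputs, hence polynomial for unary encodings but exponential in the binary encoding length. A minimal sketch (my illustration):

```python
def partition(nums):
    """Decide PARTITION via the standard subset-sum dynamic program.

    Runs in O(len(nums) * sum(nums)) time: polynomial in the *unary*
    encoding length, but exponential in the binary encoding length.
    """
    total = sum(nums)
    if total % 2:
        return False
    target = total // 2
    reachable = {0}  # subset sums reachable so far
    for x in nums:
        reachable |= {s + x for s in reachable if s + x <= target}
    return target in reachable

print(partition([3, 1, 1, 2, 2, 1]))  # True  ({3, 2} vs {1, 1, 2, 1})
print(partition([1, 2, 5]))           # False
```

This is why hardness reductions go from 3-PARTITION instead: it stays NP-hard even when the numbers are polynomially bounded (i.e., effectively unary).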

Comment #43 August 6th, 2016 at 10:02 am

BPP = NEXP #40: I think you are overly optimistic to think that a formal proof of Bell’s theorem would end the hidden-variable counterexamples. The mathematics of Bell’s theorem is extremely simple; there is no legitimate criticism of it. The denial of Bell’s theorem is a psychological problem, not a scientific one.

Comment #44 August 6th, 2016 at 9:58 pm

Two typos in my reply above (#41). (i) Drop the phrase `on the “ma” side’ in its entirety. (ii) Substitute “alley” in place of “ally.”

–Ajit

[E&OE]

Comment #45 August 6th, 2016 at 10:05 pm

BPP = NEXP #40, and Mateus Araújo #43:

>> “although the implications of the mathematics for physics would surely still be debated.”

That, and also the search for a new physical mechanism of which Bell’s theorem is a limiting case.

–Ajit

[E&OE]

Comment #46 August 9th, 2016 at 6:47 pm

I don’t think I’ve written enough papers to have made any major errors in them so far, but I once put some really wrong information on the OEIS. On the sequence for the number of plane partitions of n, I added a comment stating that this is the same as the number of functions from an n-set to itself up to permutation of the underlying set. This is… not the case already for n=3. Talk about a failure to check one’s numbers…
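The mismatch shows up immediately by brute force: for n = 3 there are 6 plane partitions of n but 7 functions from an n-set to itself up to permutation of the underlying set. A small illustrative check (my code, not the commenter's):

```python
from itertools import product, permutations

def plane_partitions(n):
    # Count plane partitions of n: n x n arrays of nonnegative integers,
    # weakly decreasing along every row and column, with entries summing to n.
    count = 0
    for cells in product(range(n + 1), repeat=n * n):
        if sum(cells) != n:
            continue
        a = [cells[i * n:(i + 1) * n] for i in range(n)]
        rows_ok = all(a[i][j] >= a[i][j + 1] for i in range(n) for j in range(n - 1))
        cols_ok = all(a[i][j] >= a[i + 1][j] for i in range(n - 1) for j in range(n))
        if rows_ok and cols_ok:
            count += 1
    return count

def endofunctions_up_to_relabeling(n):
    # Count functions f: {0..n-1} -> {0..n-1}, identifying f with s.f.s^-1
    # for every permutation s of the underlying set (orbit enumeration).
    seen, count = set(), 0
    for f in product(range(n), repeat=n):
        if f in seen:
            continue
        count += 1
        for s in permutations(range(n)):
            g = [0] * n
            for i in range(n):
                g[s[i]] = s[f[i]]   # (s . f . s^-1)(s(i)) = s(f(i))
            seen.add(tuple(g))
    return count

print(plane_partitions(3), endofunctions_up_to_relabeling(3))  # 6 7
```

The two sequences happen to agree up to n = 2, which is presumably what made the comment look plausible at a glance.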

Comment #47 September 1st, 2016 at 9:05 am

In the crypto paper, Conde is part of the last name of one of the authors (and is, in fact, the usual one-word shortening of it), rather than a middle name. The opposite error happens often as well – e.g., libraries in many Hispanic areas do place Orson Scott Card under S.

Comment #48 October 5th, 2016 at 7:07 am

Thanks for the nice mention of our work, Scott! I should mention that we just updated our paper on the arXiv. The new version includes an alternative algorithm based on Marriott-Watrous gap amplification, which has a cleaner proof of correctness and achieves slightly better bounds. Not coincidentally, the paper also has a new author added: Cedric Lin.