Just when I thought I could relax, as the waters slowly receded from the latest D-Tsunami, my inbox and Facebook feed once again lit up with inquiries—this time, asking me to confirm or deny that “A Paradox at the Heart of Mathematics Makes a Physics Problem Unanswerable.”

Uh-oh!

Luckily for my blood pressure, though, this one turned out to refer to something that more-or-less *deserves* the hype. In particular, it’s about a phenomenal 146-page paper by Cubitt, Perez-Garcia, and Wolf, which just appeared this week in *Nature* (in condensed form, of course). Incidentally, yeah, his name really is Toby Cubitt, pronounced like “qubit.” He’s a good guy.

To those in quantum computing, Cubitt et al.’s breakthrough is old news, having already been on the arXiv for almost a year (we’ve also had a talk at MIT about it). The arXiv has created a funny phenomenon, where you learn something new and cool, assimilate it, move on, and then a year later, everyone is suddenly asking you *have you seen this thing, is it for real*, etc. etc., just because the thing got some rubber stamp like acceptance to *Nature* that caused the press to pick it up. Like, dude, I was into the undecidability of the spectral gap *way* before it went mainstream.

One more amusing anecdote before we dive into the math. In his *Nature News* piece popularizing Cubitt et al.’s result, the writer Davide Castelvecchi quotes Rebecca Goldstein, the brilliant novelist and biographer of Kurt Gödel, as saying: “Turing thought more clearly about the relationship between physics and logic than Gödel did.” Here’s what happened: *Nature News* wrote to Rebecca to ask what Gödel’s own thoughts were about the relation between undecidability and physics. Rebecca passed the request along to me. So I wrote back to her, arguing that they might just as well ask what *Turing* thought, since the Cubitt et al. result is “really” about Turing-undecidability (with Gödel-undecidability just an automatic corollary), and at any rate:

I also think that Turing thought more clearly about the relationship between logic and physics than Gödel did (indeed, Gödel himself said that it was only Turing’s analysis of the notion of computability, in terms of actual physical machines that one could imagine building, that convinced him that computability had been properly defined).

Rebecca passed that back to *Nature News*, agreeing with it, and then at some point the quote became hers. Far from being miffed about this, I consider having my forgettable words attributed to a genius like Rebecca to be one of the great honors of my life. (By pure coincidence, she and I are having lunch next week; hopefully this will butter her up.)

So, OK, let me restate Cubitt et al.’s great theorem in less pop-sciencey terms than *Nature News* used. (You could also just read the paper’s intro, which is exceedingly clear, but what the hell—I’m here to serve.)

Suppose you have a two-dimensional material made of a bunch of stationary particles, each with local Hilbert space dimension d, which are arranged on an L×L square grid (so, there are L^{2} particles in all). And suppose there’s some fixed d^{2}-dimensional Hamiltonian h, with a local copy h_{i,j}=h acting on each neighboring pair of particles (i,j). (I.e., the material is *translationally invariant*, with the same laws of physics acting throughout.) Let H be the total Hamiltonian: that is, the sum of the h_{i,j}’s over all the neighboring (i,j)’s.

Then a huge fraction of all of physics—quantum field theory, condensed-matter physics, you name it—can be summarized as, *you’re trying to figure out the eigenvalues and eigenvectors of H*. The lowest eigenvalue, λ_{0}, tells you your material’s *ground energy*, while the higher eigenvalues, λ_{1},λ_{2},…, tell you the next discrete energy levels that the material can jump up to. The corresponding eigenvectors tell you which quantum states the material is sitting in when it has these energies: the *ground state* v_{0}, and the *excited states* v_{1},v_{2},… Those, in turn, determine basically everything you could want to know about the material: whether it superconducts, etc. etc.
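To make the setup concrete, here is a minimal numerical sketch, assuming Python with NumPy, with d=2 (qubits) and the Heisenberg coupling as an arbitrary stand-in for the local term h (the actual theorem needs a much larger fixed d). It builds H as a sum of identical two-site terms over all nearest-neighbor pairs of an L×L grid and diagonalizes it:

```python
import numpy as np

d = 2   # local dimension, qubits here just for illustration
L = 3   # lattice side; the total Hilbert space has dimension d**(L*L) = 512
n = L * L

# Pauli matrices, used to build a sample local term h (here: Heisenberg coupling)
I2 = np.eye(2)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)

def op_at(P, site):
    """Single-site operator P acting on `site`, identity elsewhere (row-major site order)."""
    M = np.array([[1.0 + 0j]])
    for s in range(n):
        M = np.kron(M, P if s == site else I2)
    return M

def bond(a, b):
    """The same h = (X⊗X + Y⊗Y + Z⊗Z)/4 on every pair: translation invariance."""
    return sum(op_at(P, a) @ op_at(P, b) for P in (X, Y, Z)) / 4

# Nearest-neighbor edges of the L×L grid: horizontal, then vertical
edges = [(i * L + j, i * L + j + 1) for i in range(L) for j in range(L - 1)] \
      + [(i * L + j, (i + 1) * L + j) for i in range(L - 1) for j in range(L)]

H = sum(bond(a, b) for a, b in edges)          # total Hamiltonian
eigenvalues = np.linalg.eigvalsh(H)            # sorted ascending
lam0, lam1 = eigenvalues[0], eigenvalues[1]    # ground energy, first excited energy
gap = lam1 - lam0
```

(For this particular h and odd particle number the ground level happens to be degenerate, so the finite-size "gap" can come out as zero; the point of the sketch is only the construction of H and the meaning of λ_{0} and λ_{1}.)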

Of course, the eigenvalues and eigenvectors will depend on the lattice size L. Equally obviously, for any *fixed* L, you could in principle compute all the eigenvalues and eigenvectors by just diagonalizing some huge-ass matrix. (That matrix being H.) But physicists are usually more interested in the *limiting behavior* as L goes to infinity. One of their most basic distinctions is: the material is *gapped* if λ_{1}-λ_{0}, the difference between the first excited energy and the ground energy, converges to some positive value or even grows with L as L→∞. It’s *gapless* if λ_{1}-λ_{0} converges to 0 as L→∞. (Actually, Cubitt et al. use more technical definitions of both of these concepts, but we’ll ignore that.)

Cubitt et al.’s theorem now says the following: **for some fixed, constant local dimension d, there is no algorithm that takes as input the local Hamiltonian h (say, as a d^{2}×d^{2} matrix of algebraic numbers), and that decides whether the material is gapped or gapless.** Indeed, you can reduce the halting problem to that problem, in such a way that the material will be gapped if your Turing machine halts, or gapless if it runs forever.

As an immediate corollary, there’s some 2D material—characterized by a translationally-invariant local Hamiltonian h on particles of local dimension d—such that whether the material is gapped or gapless is independent of the axioms of ZF set theory, or whatever else your favorite axioms might be. (Proof: build a Turing machine M that halts if and only if it finds an inconsistency in set theory, then run Cubitt et al.’s reduction from the halting problem. By Gödel, if set theory is consistent then it can’t prove whether M halts or not.)

Cubitt et al. never bother to work out the local dimension d that suffices for them, but it *could* be worked out, and it’s probably at least in the tens of thousands. Thus, their result leaves open the possibility that there’s an algorithm to decide gaplessness for 2D lattices of *qubits* (i.e., the special case d=2), or other “reasonably low-dimensional” quantum systems. We simply don’t know right now. Another tantalizing open question is whether there’s an algorithm to decide gaplessness for *one-dimensional* spin chains—again, even in the special case d=2. Right now, the best we have in that direction is a difficult recent result of Bravyi and Gosset, which gives an algorithm to decide gaplessness for one-dimensional, *frustration-free* chains of qubits. (Here “frustration-free,” an amusing term that does *not* well describe this subject as a whole, means that you can minimize the energy H by minimizing the energies of each h_{i,j} individually. Or, if you think of H as a SAT instance, it’s satisfiable.)
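Frustration-freeness is easy to check numerically on small examples: a Hamiltonian is frustration-free exactly when its ground energy equals the sum of the per-bond minimum energies. Here is a small sketch, assuming Python with NumPy and using a 1D chain rather than a 2D grid, with one frustration-free term and one frustrated term (both chosen purely for illustration):

```python
import numpy as np

L = 6  # chain length, so L - 1 = 5 bonds

def chain_H(h_local, L):
    """Open chain with the same two-site term h_local on every nearest-neighbor bond."""
    H = np.zeros((2**L, 2**L), dtype=complex)
    for i in range(L - 1):
        H += np.kron(np.kron(np.eye(2**i), h_local), np.eye(2**(L - i - 2)))
    return H

# Example 1: h projects onto |01> on each bond. The all-zeros state has energy 0,
# which equals the sum of the per-bond minima (each 0): frustration-free.
h_proj = np.zeros((4, 4)); h_proj[1, 1] = 1.0

# Example 2: the antiferromagnetic Heisenberg term. A single bond can reach -3/4
# (the singlet), but neighboring bonds can't all be singlets at once: frustrated.
X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)
h_heis = (np.kron(X, X) + np.kron(Y, Y) + np.kron(Z, Z)) / 4

lam0_proj = np.linalg.eigvalsh(chain_H(h_proj, L))[0]
lam0_heis = np.linalg.eigvalsh(chain_H(h_heis, L))[0]
bound_proj = (L - 1) * np.linalg.eigvalsh(h_proj)[0]   # sum of per-bond minima = 0
bound_heis = (L - 1) * np.linalg.eigvalsh(h_heis)[0]   # sum of per-bond minima = -3.75
```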

But while the exact value of d where uncomputability kicks in is still up for grabs, it’s extremely important that d is *some* fixed, universal constant, independent of the Turing machine. Indeed, as Cubitt et al. point out in their paper, this is the *only* feature that makes their new result not a trivial corollary of the uncomputability of Wang tiling. The latter is a famous result from 1966, which says that there’s no algorithm that takes as input a finite set of tiles, and that tells you whether, using unlimited copies of each tile, you could cover the entire plane (or equivalently, arbitrarily large finite regions of the plane). I.e., this is yet another “natural” math problem that secretly encodes the halting problem.

The fact that d is fixed also means that, in order to encode larger and larger Turing machines into the local Hamiltonian h (as you must, if you want to embed the halting problem), you need to use *more and more bits of precision* (!) in the ~d^{4} real numbers that define h. This then raises a question: how do you actually *extract* a description of a Turing machine from the binary expansions of the real numbers that define your Hamiltonian? To do this, Cubitt et al. use Kitaev’s phase estimation algorithm—which, interestingly, is the *only* part of their construction that uses quantum mechanics in any way. One thing that I’d love to understand better is whether the phase estimation is really essential here, or whether the analogous *classical* question, with the “Hamiltonian” given by a probability distribution over classical constraints, could also be proved to be undecidable for some fixed value of d—thereby showing that Cubitt et al.’s discovery had nothing to do with quantum mechanics.
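To illustrate just the "more and more bits of precision" point, without any pretense of reproducing Cubitt et al.'s phase-estimation machinery, here is a purely classical toy sketch in Python: a bit string (standing in for some binary encoding of a Turing machine) gets packed into the binary expansion of a single "coupling constant," and then read back out. Exact rationals are used so that no bits are lost to floating-point rounding:

```python
from fractions import Fraction

def encode(bits):
    """Pack a bit string b1 b2 ... bn into the exact binary fraction 0.b1 b2 ... bn."""
    value = Fraction(0)
    for k, b in enumerate(bits, start=1):
        value += Fraction(b, 2**k)
    return value

def decode(x, n_bits):
    """Recover the first n_bits of the binary expansion of x by repeated doubling."""
    bits = []
    for _ in range(n_bits):
        x *= 2
        b = 1 if x >= 1 else 0
        bits.append(b)
        x -= b
    return bits

tm_description = [1, 0, 1, 1, 0, 0, 1]   # stand-in for a Turing machine's encoding
coupling = encode(tm_description)        # = 89/128, i.e. 0.1011001 in binary
recovered = decode(coupling, len(tm_description))
```

The longer the machine's description, the more bits of precision the coupling constant needs; that is the price of keeping d fixed.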

(It’s possible that the answer to this is obvious; I didn’t think about it deeply. Note that if the “classical Hamiltonian” is also *deterministic*, then the problem must be decidable for every fixed d, since there are only finitely many possible h’s, and we could cache all the answers in a lookup table.)

Anyway, it’s now my professional duty, as the prickly, curmudgeonly blogger I am, to end the post by shooing you away from two tempting misinterpretations of the Cubitt et al. result.

First, **the result does not say—or even suggest—that there’s any real, finite physical system whose behavior is Gödel- or Turing-undecidable.** Thus, it gives no support to speculations like Roger Penrose’s, about “hypercomputing” that would exceed the capabilities of Turing machines. The reason, again, is that as soon as you fix a lattice size L, everything becomes computable. The Cubitt et al. result applies only to questions about the *limiting* behavior, as the number of particles goes to infinity. But we already knew lots of examples of physical systems for which predicting their behavior *in some infinite limit* is at least as hard as the halting problem: for instance, the Wang tiles discussed earlier, or Post rewrite systems, or even Turing machines themselves. Local Hamiltonians are a profound, nontrivial addition to that list—one that will be particularly striking to physicists, many of whom calculate the spectral gaps of at least 50 Hamiltonians between dinner and dessert. But in some sense, there was no *a-priori* reason why a problem this general, about physical systems of unbounded size, *ought to* have been computable.

Second, **the result does not say that any particular question physicists want an answer to—for example, the million-dollar Yang-Mills mass gap problem—is Gödel-undecidable.** “All it says” is that the possibility that some real-world question of that kind *could* be undecidable isn’t totally closed off. The *Nature News* piece stresses this latter implication a lot—as, admittedly, do Cubitt et al. themselves. But to put things in perspective: four logicians proved around 1970 that there’s no algorithm to decide whether an arbitrary polynomial equation has an integer solution, thereby giving a negative solution to Hilbert’s Tenth Problem. Yet with few exceptions, “working number theorists” barely even noticed this development, nor was (say) Andrew Wiles dissuaded from proving Fermat’s Last Theorem by the absence of a *general* algorithm to do things like what he was trying to do. (Indeed, the absence of a general algorithm was shown even earlier for equations like FLT, which have variables in the exponent.) So I doubt the mathematical physicists who calculate spectral gaps for a living will be any more terrified than the number theorists were, to learn that they’ve been laboring their entire lives on the shores of the halting problem. “Good for us, then!” they could rightly reply. “Maybe our jobs won’t be so easy to automate.”

**Update (Dec. 20):** My colleague Seth Lloyd calls my attention to a PRL paper of his from 1993, which *also* discusses the construction of physical systems that are gapped if a given Turing machine halts and gapless if it runs forever. So this basic idea has been around for a while. As I explained in the post, the main contribution of the Cubitt et al. paper is just to get undecidability into “the sort of system physicists could plausibly care about” (or for which they could’ve plausibly hoped for an analytic solution): in this case, 2D translationally-invariant nearest-neighbor Hamiltonians with bounded local dimension.