Just when I thought I could relax, as the waters slowly receded from the latest D-Tsunami, my inbox and Facebook feed once again lit up with inquiries—this time, asking me to confirm or deny that “A Paradox at the Heart of Mathematics Makes a Physics Problem Unanswerable.”

Uh-oh!

Luckily for my blood pressure, though, this one turned out to refer to something that more-or-less deserves the hype.  In particular, it’s about a phenomenal 146-page paper by Cubitt, Perez-Garcia, and Wolf, which just appeared this week in Nature (in condensed form, of course).  Incidentally, yeah, his name really is Toby Cubitt, pronounced like “qubit.”  He’s a good guy.

To those in quantum computing, Cubitt et al.’s breakthrough is old news, having already been on the arXiv for almost a year (we’ve also had a talk at MIT about it).  The arXiv has created a funny phenomenon, where you learn something new and cool, assimilate it, move on, and then a year later, everyone is suddenly asking you have you seen this thing, is it for real, etc. etc., just because the thing got some rubber stamp like acceptance to Nature that caused the press to pick it up.  Like, dude, I was into the undecidability of the spectral gap way before it went mainstream.

One more amusing anecdote before we dive into the math.  In his Nature News piece popularizing Cubitt et al.’s result, the writer Davide Castelvecchi quotes Rebecca Goldstein, the brilliant novelist and biographer of Kurt Gödel, as saying: “Turing thought more clearly about the relationship between physics and logic than Gödel did.”  Here’s what happened: Nature News wrote to Rebecca to ask what Gödel’s own thoughts were about the relation between undecidability and physics.  Rebecca passed the request along to me.  So I wrote back to her, arguing that they might just as well ask what Turing thought, since the Cubitt et al. result is “really” about Turing-undecidability (with Gödel-undecidability just an automatic corollary), and at any rate:

I also think that Turing thought more clearly about the relationship between logic and physics than Gödel did (indeed, Gödel himself said that it was only Turing's analysis of the notion of computability, in terms of actual physical machines that one could imagine building, that convinced him that computability had been properly defined).

Rebecca passed that back to Nature News, agreeing with it, and then at some point the quote became hers.  Far from being miffed about this, I consider having my forgettable words attributed to a genius like Rebecca to be one of the great honors of my life.  (By pure coincidence, she and I are having lunch next week; hopefully this will butter her up.)

So, OK, let me restate Cubitt et al.’s great theorem in less pop-sciencey terms than Nature News used.  (You could also just read the paper's intro, which is exceedingly clear, but what the hell—I’m here to serve.)

Suppose you have a two-dimensional material made of a bunch of stationary particles, each with local Hilbert space dimension d, which are arranged on an L×L square grid (so, there are L² particles in all).  And suppose there’s some fixed d²-dimensional Hamiltonian h, with a local copy h_{i,j}=h acting on each neighboring pair of particles (i,j).  (I.e., the material is translationally invariant, with the same laws of physics acting throughout.)  Let H be the total Hamiltonian: that is, the sum of the h_{i,j}’s over all the neighboring (i,j)’s.

Then a huge fraction of all of physics—quantum field theory, condensed-matter physics, you name it—can be summarized as, you’re trying to figure out the eigenvalues and eigenvectors of H.  The lowest eigenvalue, λ₀, tells you your material’s ground energy, while the higher eigenvalues, λ₁, λ₂, …, tell you the next discrete energy levels that the material can jump up to.  The corresponding eigenvectors tell you which quantum states the material is sitting in when it has these energies: the ground state v₀, and the excited states v₁, v₂, …  Those, in turn, determine basically everything you could want to know about the material: whether it superconducts, etc. etc.

Of course, the eigenvalues and eigenvectors will depend on the lattice size L.  Equally obviously, for any fixed L, you could in principle compute all the eigenvalues and eigenvectors by just diagonalizing some huge-ass matrix.  (That matrix being H.)  But physicists are usually more interested in the limiting behavior as L goes to infinity.  One of their most basic distinctions is: the material is gapped if λ₁−λ₀, the difference between the first excited energy and the ground energy, converges to some positive value or even grows with L as L→∞.  It’s gapless if λ₁−λ₀ converges to 0 as L→∞.  (Actually, Cubitt et al. use more technical definitions of both of these concepts, but we’ll ignore that.)
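To make "diagonalize a huge-ass matrix and watch the gap" concrete, here's a toy sketch of my own, not from the paper: exact diagonalization of a one-dimensional transverse-field Ising chain, about the simplest translationally-invariant system where you can watch λ₁−λ₀ behave differently in a gapped phase versus at a gapless critical point. The choice of model and couplings is purely illustrative.

```python
import numpy as np

# Exact diagonalization of a 1D transverse-field Ising chain,
#   H = -sum_i Z_i Z_{i+1} - g * sum_i X_i   (open boundary),
# as a 1D toy stand-in for the 2D lattices in the paper: build the total
# Hamiltonian H out of one fixed local term, then read off the two lowest
# eigenvalues and the finite-size gap lambda_1 - lambda_0.

X = np.array([[0., 1.], [1., 0.]])
Z = np.array([[1., 0.], [0., -1.]])
I2 = np.eye(2)

def op_at(op, site, L):
    """Embed a single-site operator `op` at position `site` of an L-site chain."""
    out = np.array([[1.]])
    for k in range(L):
        out = np.kron(out, op if k == site else I2)
    return out

def tfim_hamiltonian(L, g):
    H = np.zeros((2**L, 2**L))
    for i in range(L - 1):                      # nearest-neighbor ZZ couplings
        H -= op_at(Z, i, L) @ op_at(Z, i + 1, L)
    for i in range(L):                          # transverse field
        H -= g * op_at(X, i, L)
    return H

def gap(L, g):
    evals = np.linalg.eigvalsh(tfim_hamiltonian(L, g))
    return evals[1] - evals[0]                  # lambda_1 - lambda_0

for L in (4, 6, 8):
    print(L, round(gap(L, g=1.0), 4), round(gap(L, g=2.0), 4))
```

For this model the printed gap should shrink as L grows at the critical field g=1 (gapless in the limit), while at g=2 it stays of order 1 (gapped): exactly the distinction in the paragraph above, just for lattice sizes a laptop can handle.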

Cubitt et al.’s theorem now says the following: for some fixed, constant local dimension d, there is no algorithm that takes as input the local Hamiltonian h (say, as a d²×d² matrix of algebraic numbers), and that decides whether the material is gapped or gapless.  Indeed, you can reduce the halting problem to that problem, in such a way that the material will be gapped if your Turing machine halts, or gapless if it runs forever.

As an immediate corollary, there’s some 2D material—characterized by a translationally-invariant local Hamiltonian h on particles of local dimension d—such that whether the material is gapped or gapless is independent of the axioms of ZF set theory, or whatever else your favorite axioms might be.  (Proof: build a Turing machine M that halts if and only if it finds an inconsistency in set theory, then run Cubitt et al.’s reduction from the halting problem.  By Gödel, if set theory is consistent then it can’t prove whether M halts or not.)

Cubitt et al. never bother to work out the local dimension d that suffices for them, but it could be worked out, and it’s probably at least in the tens of thousands.  Thus, their result leaves open the possibility that there’s an algorithm to decide gaplessness for 2D lattices of qubits (i.e., the special case d=2), or other “reasonably low-dimensional” quantum systems.  We simply don’t know right now.  Another tantalizing open question is whether there’s an algorithm to decide gaplessness for one-dimensional spin chains—again, even in the special case d=2.  Right now, the best we have in that direction is a difficult recent result of Bravyi and Gosset, which gives an algorithm to decide gaplessness for one-dimensional, frustration-free chains of qubits.  (Here “frustration-free,” an amusing term that does not well describe this subject as a whole, means that you can minimize the energy H by minimizing the energies of each h_{i,j} individually.  Or, if you think of H as a SAT instance, it’s satisfiable.)

But while the exact value of d where uncomputability kicks in is still up for grabs, it’s extremely important that d is some fixed, universal constant, independent of the Turing machine.  Indeed, as Cubitt et al. point out in their paper, this is the only feature that makes their new result not a trivial corollary of the uncomputability of Wang tiling.  The latter is a famous result from 1966, which says that there’s no algorithm that takes as input a finite set of tiles, and that tells you whether, using unlimited copies of each tile, you could cover the entire plane (or equivalently, arbitrarily large finite regions of the plane).  I.e., this is yet another “natural” math problem that secretly encodes the halting problem.
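For the finite-region half of the tiling story, here's a little sketch (my own toy tiles and code, nothing from Berger's actual proof): a backtracking search that decides whether a given edge-matching tile set can cover an n×n region. Every fixed n is decidable this way; the undecidability lives entirely in the quantifier "for all n."

```python
# Brute-force check of whether a set of Wang-style tiles can cover an n x n
# region with matching edges (free boundary). Tiles are (north, east, south,
# west) color tuples; the example tile set below is my own.

def tiles_region(tiles, n):
    grid = [[None] * n for _ in range(n)]

    def fits(t, r, c):
        # north edge must match the south edge of the tile above
        if r > 0 and grid[r - 1][c][2] != t[0]:
            return False
        # west edge must match the east edge of the tile to the left
        if c > 0 and grid[r][c - 1][1] != t[3]:
            return False
        return True

    def place(k):
        if k == n * n:
            return True
        r, c = divmod(k, n)
        for t in tiles:
            if fits(t, r, c):
                grid[r][c] = t
                if place(k + 1):
                    return True
                grid[r][c] = None
        return False

    return place(0)

# Two tiles that cover any region in a checkerboard pattern:
tiles = [(0, 0, 1, 1), (1, 1, 0, 0)]
print(all(tiles_region(tiles, n) for n in range(1, 5)))
```

Run this for n = 1, 2, 3, … and you have a semi-decision procedure for *non*-tileability: if some finite region can't be covered, you'll eventually find out; if every region can be, you wait forever, which is the halting problem peeking through.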

The fact that d is fixed also means that, in order to encode larger and larger Turing machines into the local Hamiltonian h (as you must, if you want to embed the halting problem), you need to use more and more bits of precision (!) in the ~d⁴ real numbers that define h.  This then raises a question: how do you actually extract a description of a Turing machine from the binary expansions of the real numbers that define your Hamiltonian?  To do this, Cubitt et al. use Kitaev’s phase estimation algorithm—which, interestingly, is the only part of their construction that uses quantum mechanics in any way.  One thing that I’d love to understand better is whether the phase estimation is really essential here, or whether the analogous classical question, with the “Hamiltonian” given by a probability distribution over classical constraints, could also be proved to be undecidable for some fixed value of d—thereby showing that Cubitt et al.’s discovery had nothing to do with quantum mechanics.

(It’s possible that the answer to this is obvious; I didn’t think about it deeply.  Note that if the “classical Hamiltonian” is also deterministic, then the problem must be decidable for every fixed d, since there are only finitely many possible h’s, and we could cache all the answers in a lookup table.)
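Here's a purely classical toy of what it means to smuggle a machine description into the binary expansion of a single parameter. This is my own illustration and of course nothing like the real construction, where the recovery is done coherently via phase estimation; the only point is "more machines require more bits of precision."

```python
from fractions import Fraction

# Hide a machine description in the binary expansion of one rational
# parameter phi (standing in for a single matrix entry of h), then
# recover it digit by digit.

def encode(bits):
    """phi = 0.1 b1 b2 ... bk in binary (the leading 1 is a marker bit)."""
    phi = Fraction(1, 2)
    for i, b in enumerate(bits, start=2):
        phi += Fraction(b, 2**i)
    return phi

def decode(phi, k):
    """Read back k bits of phi's binary expansion (after the marker bit)."""
    phi = (phi * 2) % 1          # drop the marker bit
    out = []
    for _ in range(k):
        phi *= 2
        out.append(int(phi >= 1))
        phi %= 1
    return out

machine = [1, 0, 1, 1, 0, 0, 1]   # pretend this string describes a TM
phi = encode(machine)
print(decode(phi, len(machine)) == machine)
```

A longer machine description just means a phi specified to more binary places, which is exactly why a single fixed d can encode arbitrarily large Turing machines.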

Anyway, it’s now my professional duty, as the prickly, curmudgeonly blogger I am, to end the post by shooing you away from two tempting misinterpretations of the Cubitt et al. result.

First, the result does not say—or even suggest—that there’s any real, finite physical system whose behavior is Gödel- or Turing-undecidable.  Thus, it gives no support to speculations like Roger Penrose’s, about “hypercomputing” that would exceed the capabilities of Turing machines.  The reason, again, is that as soon as you fix a lattice size L, everything becomes computable.  The Cubitt et al. result applies only to questions about the limiting behavior, as the number of particles goes to infinity.  But we already knew lots of examples of physical systems for which predicting their behavior in some infinite limit is at least as hard as the halting problem: for instance, the Wang tiles discussed earlier, or Post rewrite systems, or even Turing machines themselves.  Local Hamiltonians are a profound, nontrivial addition to that list—one that will be particularly striking to physicists, many of whom calculate the spectral gaps of at least 50 Hamiltonians between dinner and dessert.  But in some sense, there was no a-priori reason why a problem this general, about physical systems of unbounded size, ought to have been computable.

Second, the result does not say that any particular question physicists want an answer to—for example, the million-dollar Yang-Mills mass gap problem—is Gödel-undecidable.  “All it says” is that the possibility that some real-world question of that kind could be undecidable isn’t totally closed off.  The Nature News piece stresses this latter implication a lot—as, admittedly, do Cubitt et al. themselves.  But to put things in perspective: four logicians proved around 1970 that there’s no algorithm to decide whether an arbitrary polynomial equation has an integer solution, thereby giving a negative solution to Hilbert’s Tenth Problem.  Yet with few exceptions, “working number theorists” barely even noticed this development, nor was (say) Andrew Wiles dissuaded from proving Fermat’s Last Theorem by the absence of a general algorithm to do things like what he was trying to do.  (Indeed, the absence of a general algorithm was shown even earlier for equations like FLT, which have variables in the exponent.)  So I doubt the mathematical physicists who calculate spectral gaps for a living will be any more terrified than the number theorists were, to learn that they’ve been laboring their entire lives on the shores of the halting problem.  “Good for us, then!” they could rightly reply.  “Maybe our jobs won’t be so easy to automate.”

Update (Dec. 20): My colleague Seth Lloyd calls my attention to a PRL paper of his from 1993, which also discusses the construction of physical systems that are gapped if a given Turing machine halts and gapless if it runs forever.  So this basic idea has been around for a while.  As I explained in the post, the main contribution of the Cubitt et al. paper is just to get undecidability into “the sort of system physicists could plausibly care about” (or for which they could’ve plausibly hoped for an analytic solution): in this case, 2D translationally-invariant nearest-neighbor Hamiltonians with bounded local dimension.

### 125 Responses to “Ask an unbounded question, get an uncomputable answer”

1. Dave Doty Says:

The connection to Gödel seemed tacked on. Is there anything about this particular problem that has Gödel-undecidability in some stronger sense than that which is always present in any Turing-undecidability result?

What I mean is, given any undecidable language L, there are infinitely many x such that the statement “x is in L” is independent of ZFC. Is there anything about this problem that involves logical independence in a deeper way than this?

2. Scott Says:

Dave #1: No, it’s exactly the same Gödel-undecidability that’s there in any Turing-undecidability result. Just a way to motivate the result to physicists, that’s all.

3. Ernie Davis Says:

Two naïve questions: (1) Does this result imply that the difficulty of solving this over a LxL lattice is equivalent to determine the halting problem for a Turing machine with a tape of size O(L^2) or any other particular function of L? (2) If I understand your explanation correctly, the claim is that _some_ Hamiltonian has this property; can one be sure that any such Hamiltonian is physically realizable?

4. Tim May Says:

I have been using Wang tilings and variants, like the “domino snake” puzzle (*) for many years to illustrate some real issues with predicting the future, especially the “paths” to future technologies.

(* David Harel’s book “Algorithms” has some very readable explanations, which anyone comfortable with a little bit of math or physics background can easily follow. In at least the first edition of his book–I noticed later editions covered less of this stuff and more “practical” stuff on algorithms, perhaps a telling sign–he gave a fascinating description of the “domino snake” puzzle. The idea is this. Imagine a set of domino shapes, or even little squares with some constraints that certain edge conditions had to be met to make a new “legal” shape. The idea is also behind the “monkey puzzle,” where monkey-face features have to match up. But dominos work well and are (or used to be) familiar. So, imagine starting with some domino (or monkey tile). Then attach another to one of the sides with the constraint that the edges match to the same edges. For instance, if one edge has a “:” pattern, then the one to the right, say, must have a “:” pattern touching it. Now imagine another such domino or monkey tile some distance away, perhaps 10 squares away and 15 squares up. The problem is how to take an unlimited number of each of N such types of tiles and go from the starting tile to the ending tile such that all of the edge-to-edge constraints are met. With some types of tiles, such snakes are easy to construct. One can easily “cheat” the victim of this puzzle by just randomly constructing such a snake in some grid, maybe looping over and down and up and left and up, and up some more, then left, and so on and then declaring the last such tile to be the “finish” tile. Then remove all of the tiles in between the start and finish tile and ask the puzzle solver to find the snake that connects them while being perfectly legal. Think about this for a moment and realize that the snake may reach out from here to New York City, then down to Atlanta, and so on.
It is easy to construct such situations where the ever-expanding attempts to build legal snakes (partial tilings) will take a very, very large number of attempts. However, if someone “guesses” or presents in oracular form such a snake, it is very easy to confirm the legality of the snake. Sounds a lot like some zero-knowledge protocols, right? And Harel presents puzzles involving half-planes which are even _harder_ than this, having no actual solution. I don’t really understand it, but apparently this is so. And it gets into exotic things like Presburger arithmetic.)

Now how can this possibly relate to technology and forecasting?

Simple. The past path of achievements, from say the ENIAC to the microprocessor, are always retrospectively EASY TO EXPLAIN. When presented with the snake from A to B, anybody can identify the obviousness of the path.

But anyone sitting at A, the ENIAC in this example, will see hundreds of possible paths at nearly any stage. And of course evolution in the fitness landscape is playing a huge role. In particular, market forces (e.g., how many TTL transistors did Fairchild and TI and National Semi sell in 1962 to fund the next set of products and even spin-off companies?)

We have strong reason to believe, importantly for our concerns about energy and temperatures, that a stable thermonuclear reactor can be built (we know about stars, for example). We even think it almost certainly can be built in some reasonable size on the surface of the earth. We have an A, where we are now, and this hypothetical B, a self-sustaining, net energy-producing fusion reactor. But for at least the past 60 years the path from A to B has not been found. Sure, some number of years from now high schoolers will be taught “And then in 2027 the zeta pinch ring the Chinese invented was used to solve the …..” The domino snake from A to B will be retrospectively obvious.

A lot like the way we are taught about the obvious physical laws. Not so easy back then.

John Baez often used to quote a great line from the Chicago summer school in math, “Think deeply of simple things.”

Harel’s book influenced me more than just about any other book I’ve read, save perhaps for Gamow’s “One, Two, Three….Infinity,” read back when I was in 8th grade.

–Tim

5. anon Says:

This time I have to agree with Lubos (see his blog): this stuff is not relevant to physics reality.

6. Tim May Says:

I see from my message that I misspelled “Algorithmics,” or maybe my obnoxious as-one-types spell checker changed it and I didn’t catch the correction in time to uncorrect it.

And an interesting change from the pure-math version of the domino snake puzzle as one looks at technological (or cultural, as with “island colonization” approaches) is that the steps have strong economic and cultural components.

And that there is no “fitness landscape” to run a simulated annealing model of any kind on. (One didn’t get from the radio to the television by finding local peaks!)

(The “cost” aspects, how to pay for each attempted snake–with lots of dead ends and failed projects–have a lot of similarities to the “linear logic” associated with type theory, some theories of Martin-Lof, and others, where computations consume resources. A merger of markets and math, much as mathematicians look at hard limits unconstrained by costs. Similarly, socioeconomic, cultural, and related issues affect the choice of possible snake paths. As but two examples, the phlogiston and luminiferous aether predispositions certainly affected which new “snakes” were tried (funded). Occasionally a kind of “knowledgequake” shifts many things and so what was once unfundable becomes the new normal.)

(And this relates to the “deserts” that people often mention in connection with high energy physics. While nothing in modern HEP is “funding the next stage” in the way the discoveries of 1900 funded the 1910 era, and so on, the gap is now even greater. There may be no “oases” in the great desert that is perhaps beyond the LHC. Will a ten times more expensive Next Generation Collider be built in the next 40 years? I have my doubts. More specifically, while we can expect a fertile ground way beyond where we are in fusion, we have to crawl across a huge desert carrying all of our supplies with us with no intermediate oases or farming location to fund the next leg of the journey. The island colonization model.)

(Some say this happens in math as well. Over on some other blogs there’s a discussion about how the Mochizuki work on the “ABC Conjecture” is stalling because the gap between the few who seem to understand his math and the rest of math is too great. And that while his proofs may be formally “correct,” they are so far removed from present-day mathematics that the results are meaningless for all but a small handful who may understand the math.)

I was “in the trenches” at Intel in the 70s and 80s and am often flummoxed by the tales I hear modern journalists telling of how things unfolded. (Tellingly, nearly all developments were on a 2-4 year cycle. The profits made in a 2-4 year product cycle literally kept the companies alive for the R & D for the next cycle. Transistors begat mesa transistors begat diode-transistor-logic (DTL, which we went to the Moon on) begat transistor-transistor-logic (TTL) begat early bipolar logic circuits and memories begat early MOSFET memories begat Intel and its ilk begat early microprocessors….nothing about this was inevitable, but “obvious” retrospectively!)

FWIW, I really got interested in models of technological development and future prediction when a bunch of people I knew in the late 80s were heavily involved in nanotechnology, cryonics, agorics, and other such “Singularitarian” (a term I think I coined, around 1992) ideas. This was the Zeitgeist of a pretty significant bunch of people in the Bay Area. (I was sort of the “mundane” person because my focus was on the crypto stuff that led to Cypherpunks, and many current technologies which followed. I thought their predictions of such things as “In 20 years, this entire Valley will be based on nanotechnology and we will be reshaping our bodies into diamondoid structures that will live forever” were hopelessly naive.)

Not all of them were crazy, and many have gone on to do significant things, but there was definitely a smell of “we can design the future.” Seen from the perspective of the long march of science, the difficulties, the funding, the confusion, indeed, the domino snake model, I’m not at all surprised that most of the predictions never came to pass.

(I always snorted when some journalist trotted out the 1954 “prediction” from some obscure guy in the U.K. that there would one day be a million computing elements in a small box. Gordon Moore’s predictions or observations were far more grounded in physics, economics, and market realities (such as that Intel almost went out of business a few times….maybe in some other universe it did).)

I doubted even von Neumann-style replicator machines because nobody has made much progress on such replicators. (We know they exist–all around us–but we have no idea of how to code this. Chaitin or Kolmogorov theory comes to the fore. We can even list the bits in the strings in DNA, but we know little about the “words” in the strings and the higher semantics. And the “compressive depth” or “logical depth” (a la Bennett, following on Chaitin, Kolmogorov, Solomonoff) of biological structures is unknown territory.)

(I used to enjoy a little ditty I made up: “Kolmogorov, Solomonoff, Martin-Lof, and Chaitin…one of these is different.” The connection with Martin-Lof doesn’t sound like it belongs, but it really does.)

–Tim

7. Douglas Knight Says:

rubber stamp like acceptance to Nature that caused the press to pick it up

No, the press doesn’t care about the prestige of publication or the prestige of the journal Nature. They cover it because they received a press release, either by the journal or the author’s university. Indeed, that this paper is covered by Nature News suggests that Nature is committed to promoting it.

I often see bloggers posting the same article twice, seeming to have forgotten that they posted the preprint. Once I saw someone post a published article and say that it seemed to contradict something he’d read somewhere, which I think was the preprint of the same article.

Physicists don’t care about the Yang-Mills millennium problem. They already know that there is a mass gap. The problem is to get a sufficiently mathematically rigorous model of QCD to reproduce the existing physics proof. The problem is not at all about the mass gap and entirely about the existence of QCD. The mass gap is just a benchmark for testing whether a mathematical object should count as QCD. Maybe when the problem is solved the mathematical model will be of interest to physicists, but not as a contribution to the mass gap.

8. Scott Says:

Tim #5: Sorry, but any further long, rambling comments with no discernible connection to the OP will remain in my moderation queue.

9. Scott Says:

Ernie #3:

(1) Does this result imply that the difficulty of solving this over a LxL lattice is equivalent to determine the halting problem for a Turing machine with a tape of size O(L^2) or any other particular function of L?

Yes, from their proof, one could derive some concrete hardness result for the problem over LxL lattices (I’m not sure what exactly). But in any case, the hardness of approximating the ground-state energy (not exactly the spectral gap, but closely related), for translationally-invariant Hamiltonians on finite lattices, was already studied directly by Gottesman and Irani, in a paper that the new work builds on.

(2) If I understand your explanation correctly, the claim is that _some_ Hamiltonian has this property; can one be sure that any such Hamiltonian is physically realizable?

Well, they give an explicit description of the relevant Hamiltonian H (say, for which the behavior as L goes to infinity is independent of ZF set theory). It’s a 2D local Hamiltonian acting on qudits for some fixed constant value of d. So, you could certainly build a system (for some finite value of L) that had that as its effective Hamiltonian—if nothing else, by using an array of quantum computers. On the other hand, there’s no claim that this Hamiltonian will be physically “natural” (similar to the Standard Model or anything like that); indeed, it certainly won’t be.

10. Amir Says:

Scott #9:

If you can realize the universal d-dimensional Hamiltonian using an array of qubits, doesn’t this mean that the problem is undecidable for d=2?

11. Rahul Says:

So basically this is a fascinatingly elegant solution in search of a problem? 🙂

12. Scott Says:

Amir #10: I don’t know what the “universal d-dimensional Hamiltonian” even means. But in any case, yes, you can simulate a d-dimensional Hamiltonian for any d using an array of qubits. But to do so, you’ll need not just nearest-neighbor couplings, but couplings to qubits at distance ~log(d) away. That’s the issue.

13. Scott Says:

Rahul #11: Didn’t you read the post? The problem is to decide whether a given 2D Hamiltonian with bounded local dimension is gapped or gapless. And the solution is that there’s no solution. 🙂

14. Scott Says:

anon #5: I promised that I wouldn’t respond to anything Lubos said until 2017. But if, hypothetically, someone were to make an argument like he makes, I would reply: fine then, the undecidability of the spectral gap isn’t a “physics” result. It belongs to another subject which is not physics, but which sits somewhere between CS, physics, and math—a subject that he obviously doesn’t like, but which I like, and which tons of physicists other than him also like (as my hypothetical person would have to admit).

And yes, of course this result doesn’t tell you anything directly about the Yang-Mills mass gap, or any other specific physics problem—I explained that myself in this post. The result is about something different and more abstract: namely, could there be a useful general characterization of which 2D Hamiltonians are gapped and which are gapless? Answering that question clearly requires nontrivial work and insights, as we can see from the contrast between the Cubitt et al. result and the Bravyi-Gosset one (neither of them easy), and the enormous space between the two that remains open.

And if this hypothetical person claimed the Cubitt et al. result was trivial and “bureaucratic” for the same reason why the results of Cantor, Turing, and Gödel were all trivial and “bureaucratic” … well then, I suppose I would rest my case right there. 🙂

15. Rahul Says:

Naive question: For a 2d material, is the dimension of the Hilbert space 2? And for a 3d material, 3, etc.?

If so, in a practical sense is the d > 3 case physically impossible? Or am I mis-interpreting what the dimension of a Hamiltonian means?

16. Scott Says:

Rahul #15: No, there are two totally different dimensions in play here. d means the local dimension, which is the number of orthogonal quantum states each individual particle can have (whether they’re qubits, qutrits, etc). Then there’s the spatial dimension, which is fixed to 2 all throughout the Cubitt et al. paper (2 simply being the lowest spatial dimension where they can make their construction work; as I said, the 1-dimensional case is still open).

17. Will Says:

Just rephrasing to make sure I understand: they work with a 2d lattice and the dimension d refers to the number of spin degrees of freedom. So a lattice of spin-1/2 particles has d=2, a lattice of spin-2 particles would have d=5.

Is it obvious that the result would also hold for 3d or 4d lattices? Is it obvious that it should hold for relativistic qfts? One can do qcd on a 4d lattice and it’s local and translationally invariant, but it’s more complicated than the Hamiltonian studied.

18. anon Says:

Scott #14: I admit I commented too early. Reading your post more carefully, you really explain clearly how things stand. Probably I am not as excited as you are for such a result, although I think it deserves publication in a high-impact journal (as has indeed happened).

19. matt Says:

What do you mean by a “deterministic classical Hamiltonian”? Are you saying that if the local Hamiltonian is diagonal in a product basis, which is usually what a classical Hamiltonian means for a spin system, then the problem of a gap is computable for fixed Hilbert space dimension? If so, I don’t see why you say that there are only finitely many h’s: the possible h’s are still infinitely many.

20. Scott Says:

matt #19: Sorry, just a terminological confusion. By a “deterministic classical Hamiltonian,” I meant a translationally-invariant, nearest-neighbor constraint satisfaction problem with an alphabet of size d. (So, I guess, a local Hamiltonian that’s diagonal in a product basis and has only 0’s and 1’s on the diagonal.) Clearly there are at most exp(d²) possibilities for this.
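To make the counting concrete, here's a toy sketch of the d=2 case in one dimension (my own illustration, not anything from the paper): enumerate all 2^(d²) = 16 possible sets of allowed nearest-neighbor pairs, and for each decide whether arbitrarily long chains can satisfy every constraint—which, for a chain, just amounts to the digraph of allowed pairs containing a cycle.

```python
from itertools import product

# A "deterministic classical Hamiltonian" for a 1D chain with alphabet size
# d is just a set of allowed (left, right) symbol pairs, so there are
# 2**(d*d) possible h's, and the gapless/frustration question for each one
# can be cached in a finite lookup table.

d = 2

def satisfiable_forever(allowed):
    """Can an arbitrarily long chain satisfy every nearest-neighbor constraint?
    Yes iff the digraph of allowed pairs has a cycle: repeatedly delete
    symbols with no outgoing allowed pair; a cycle survives iff anything does."""
    alive = set(range(d))
    changed = True
    while changed:
        changed = False
        for a in list(alive):
            if not any((a, b) in allowed and b in alive for b in range(d)):
                alive.discard(a)
                changed = True
    return bool(alive)

all_h = [frozenset(p for p, keep in zip(product(range(d), repeat=2), mask) if keep)
         for mask in product([0, 1], repeat=d * d)]
table = {h: satisfiable_forever(h) for h in all_h}
print(len(table), sum(table.values()))
```

Of the 16 possible h's here, 13 admit arbitrarily long satisfying chains; the point is only that the whole question fits in a finite table once d is fixed and there's no randomness to hide precision in.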

21. Anindya Says:

Hi Scott, thanks for the write-up.
One question though.
I’m finding it a bit difficult to connect:

1) there is no algorithm that takes as input the local Hamiltonian h and that decides whether the material is gapped or gapless.

and

2) As an immediate corollary, there’s some 2D material—characterized by a translationally-invariant local Hamiltonian h on particles of local dimension d—such that whether the material is gapped or gapless is independent of the axioms of ZF set theory

If I understand right, 1) means that whenever you say “Here’s my kickass algorithm which will work out whether the mass gap exists for ANY Hamiltonian”, I can pull out a tailor made Hamiltonian which trips it up.

But this leaves open the possibility that given a *fixed* Hamiltonian, H, I could always find an algorithm which decides whether H has a mass gap though it can fail for something else.
(Just like you can never have an injection from naturals to reals which hits *every* real number, but you can always make up one which hits pi)

But 2) seems to be a *much stronger* statement saying something like “There is some fixed H for which *every* algorithm which you can think of will fail to decide the mass gap question”.

Or even worse, you can create a “model” (of what exactly ?) where H has no mass gap and one where it does.
(At least that’s what I understand by, say, “The Continuum Hypothesis is undecidable in ZFC”)

Am I missing something ?

22. Scott Says:

Anindya #21: 2) is actually an immediate logical consequence of 1), and I explained why in the post. It’s because you can create a Turing machine M that halts iff there’s an inconsistency in ZFC; then if ZFC is sound, it can’t prove that M halts or that it runs forever. A different way to see it is that, if there were no machine whose behavior was unprovable, then you could solve the halting problem, by just enumerating all possible proofs that a TM halts or that it runs forever until you found one. But we know there’s no algorithm for the halting problem, so such a machine must exist.

On the other hand, as soon as you fix the Hamiltonian, there’s clearly SOME ground truth about whether it’s gapped or gapless, since this is ultimately a straightforward arithmetical question, not something about transfinite set theory like the Continuum Hypothesis. It’s just that your favorite axioms might not be able to PROVE the answer, not that there isn’t one. And you always do have an algorithm that outputs the right answer for THAT Hamiltonian: the algorithm is simply, “print ‘gapped'” or “print ‘gapless'”, whichever the truth happens to be! 🙂

23. Anindya Says:

Scott at #22:
I see. I was interpreting “X is independent of ZF” in the set theorist’s sense: both “X” and “not X” are consistent with ZF.

But you meant something a bit different. Thanks for the clarification.

24. Scott Says:

Anindya #23: No, I mean exactly the same thing that the set theorists mean. But a statement can be independent of ZF, and yet still be true in the real world—for example, that a given Turing machine runs forever, or that ZF itself is consistent. Indeed, I don’t even know what it would mean to say that statements of that kind, which are ultimately just statements about positive integers, are neither true nor false. With the Continuum Hypothesis, by contrast, you always have the option (though not the obligation) of saying that it’s true in some models of set theory and false in others, and there’s nothing more to it than that.

25. Vadim Says:

I am intrigued whether it’s possible to extend this result to QFT. It is interesting to imagine the properties of a universe in which it is very difficult to determine whether a mass gap exists (even with a promise that if it exists it is at least of a certain magnitude!). What happens when we try to produce particles that lie below the “maybe mass gap”? As far as I understand, we will have a huge (sometimes insurmountable) potential barrier to producing them.

When we look at a finite volume of space, the gap will depend on the boundary conditions (since the gap is decidable for systems of finite spatial extent; actually in QFT it gets hairier, since even in a finite volume there is an infinite number of field modes / scales, but let’s assume it doesn’t matter). This means it is possible to create light particles, but they will create a surrounding “field” that might have large energy somewhere far away. I guess it will be something like a pair of light particles with opposite “charges” of some kind that are interconnected by a field carrying high energy; but if we push them sufficiently far apart, it *might* happen (depending in an undecidable way on the parameters of the model) that the field will somehow annihilate.

26. Anindya Says:

Scott #24:
“But a statement can be independent of ZF, and *yet still be true in the real world*. Indeed, I don’t even know what it would mean to say that statements of that kind, which are ultimately just statements about positive integers, are neither true nor false.”

This is precisely what was bugging me when I asked my first question.

On one hand, I am completely inclined to agree with you intuitively.

But in the context of a statement about positive integers that is independent of ZF – or some other system that can be used to model arithmetic – what would “true in the real world” even mean?
How could we accept/dismiss such a statement except as an article of faith – something like Platonism, which claims that the integers “just exist” somewhere?

Even if the problematic Hamiltonian describes a physical system, all experimentally testable systems would be finite and hence decidable. So, what prevents the mass gap question from being neither true nor false, “even in the real world”?

27. luca turin Says:

Fantastic post, thank you Scott!

28. Michael Says:

Can you confirm if this work has any direct connection to A. Zeilinger’s work on quantum randomness : http://vcq.quantum.at/fileadmin/Publications/Logical%20independence%20and%20quantum%20randomness_Zeilinger.pdf

29. Joel David Hamkins Says:

Regarding the automatic move from computable undecidability to logical undecidability that Scott is talking about, the issue came up on MathOverflow and I wrote an answer explaining it. See my answer to: How undecidable is the spectral gap?

30. Scott Says:

Anindya #26: This is a case, in my opinion, where you should have the courage of your intuitions! 🙂 Imagine, for example, that God assured us that Goldbach’s Conjecture was independent of ZF. Even then, I would say: Goldbach’s conjecture is true in the real world, and what I mean by that, is that every even integer greater than 2 really is a sum of two primes. (Indeed, I would know that, since if there were a counterexample, then it wouldn’t be independent of ZF.) Yes, there are models of ZF that (wrongly) think Goldbach’s Conjecture is false, but in those models, the counterexamples will always turn out to be “nonstandard integers”—i.e., formal variables that are spit out by Gödel’s Completeness Theorem but that can never be cashed out into actual, explicit positive integers. Indeed, the whole point of the Completeness Theorem, in some sense, is that the existence of nonstandard models isn’t telling you about anything more than the weakness of the axioms.

Note also that the statement, “the Hamiltonian H is gapless,” is ultimately a statement of arithmetic just like Goldbach’s Conjecture is. (Formally, I believe it’s a Π2-sentence, but if we threw in a reasonable convergence bound for the gap, we could make it a Π1-sentence, which simply asserts that some particular computer program runs forever, just like Goldbach’s Conjecture asserts.)
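Scott’s parenthetical can be made concrete: “Goldbach’s Conjecture is true” is equivalent to the assertion that a counterexample-searching program runs forever. A minimal illustrative sketch (naive trial division; the function names are mine, and the optional bound exists only so the search can actually be run):

```python
def is_prime(n):
    """Naive trial-division primality test."""
    if n < 2:
        return False
    i = 2
    while i * i <= n:
        if n % i == 0:
            return False
        i += 1
    return True

def goldbach_witness(n):
    """Return (p, q) with p + q == n and both prime, or None."""
    for p in range(2, n // 2 + 1):
        if is_prime(p) and is_prime(n - p):
            return (p, n - p)
    return None

def search_for_counterexample(limit=None):
    """With limit=None, this halts iff Goldbach's Conjecture is false --
    the Pi_1 shape Scott describes.  With a finite limit it just checks
    finitely many cases, which is always decidable."""
    n = 4
    while limit is None or n <= limit:
        if goldbach_witness(n) is None:
            return n  # a counterexample!
        n += 2
    return None  # no counterexample up to `limit`
```

The unbounded version is exactly the kind of program whose non-halting is the arithmetical content of the conjecture.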

Now for me, the clincher is this: suppose you reject this viewpoint as “Platonism,” and you say: no, for me there’s no fact of the matter about anything until it’s proven or disproven in ZF. In that case, I reply: why should you even say there’s a fact of the matter about whether something is or isn’t provable in ZF, or about whether ZF itself is consistent? You seem caught in an infinite regress, where the only way out is to admit that, while you might or might not have any intuition about what transfinite sets are that’s conceptually prior to axioms, at any rate, you do have such an intuition about the positive integers.

(Another way to see this: suppose ZF were discovered tomorrow to be inconsistent. Would we then say that maybe 2+2=5 and there are only finitely many primes? No, the obvious conclusion would just be that ZF was a bad set of axioms, one that had failed to capture what we want.)

31. Scott Says:

Will #17 and Vadim #25: Your comments raise an interesting question, which hadn’t occurred to me when I wrote the post. Namely, you could always start with a relativistic QFT, pick out a preferred reference frame, and then discretize onto a lattice, in order to put everything into the framework of Cubitt et al. If you do this, though, the local Hamiltonians you get will surely be rather special ones (the combined requirements of Lorentz-invariance, locality, and unitarity are famously constraining). So, is that special class of Hamiltonians also able to encode the halting problem, or could deciding gaplessness be easier for it? Seems like a good research topic for someone!

32. Scott Says:

Michael #28: No, there’s no connection. FWIW, I always found the talk about “axioms” and “independence” in that Zeilinger et al. paper to be strained and unnecessary (since the “formal system” is such a weak one)—whereas, in the present context, the use is just the standard one, referring to systems like PA and ZFC.

33. Rahul Says:

So, just in case someone constructed such a 2D system, i.e. with the right local dimension ‘d’ & a very large lattice size & then tried to measure empirically whether it was gapless or gapped what would he find?

He ought to observe only one option & wouldn’t that be a way of turning the uncomputable, computable?

Or is there a reason why such a system with the right Hamiltonian is impossible to make in practice? i.e. Does Nature put other independent hurdles in the way of probing an undecidable system?

Of course, “infinity” itself I’m not sure how we reach, but in most other such limiting behavior we would start seeing the predicted effect gradually at large enough “L”, right? e.g. say an Avogadro’s-number-sized lattice. Although I’m not even sure what “gradually becoming uncomputable” would mean. Is there a continuum of undecidability stages?

34. Scott Says:

Rahul #33: For any finite-sized lattice that you can actually build (e.g., with an Avogadro’s number of atoms), you can in principle just calculate what the gap will be. If the system is gapless, it means that the gap eventually goes to zero as the lattice size goes to infinity, but there’s not necessarily any effective bound on when the gap falls below some given ε. So, that’s not something we could ever definitively check in finite time in our universe, any more than we could definitively check the claim that a computer program to search for counterexamples to Goldbach’s conjecture never finds one (indeed, the two cases are very closely analogous).

If you want to imagine a hypothetical universe where you could build an infinite lattice, and then somehow measure its gap to infinite precision, in finite time—well, in such a universe, the Cubitt et al. result implies that you could solve the halting problem, and thereby learn whether Goldbach is true, whether ZFC is consistent, etc. But presumably every question you asked would still have a definite arithmetical answer…
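To make “just calculate what the gap will be” concrete for a finite system, here is a minimal numpy sketch. The model is a transverse-field Ising chain, chosen only because it is the standard toy example of gapped/gapless behavior; it is emphatically NOT the Cubitt et al. construction.

```python
import numpy as np

def tfim_hamiltonian(L, g=1.0):
    """Dense Hamiltonian of a transverse-field Ising chain with open
    boundaries: H = -sum Z_i Z_{i+1} - g * sum X_i.  A toy model for
    illustration only."""
    X = np.array([[0, 1], [1, 0]], dtype=float)
    Z = np.array([[1, 0], [0, -1]], dtype=float)
    I = np.eye(2)

    def site_op(op, i):
        # Tensor `op` acting on site i with identities elsewhere.
        out = op if i == 0 else I
        for j in range(1, L):
            out = np.kron(out, op if j == i else I)
        return out

    H = np.zeros((2 ** L, 2 ** L))
    for i in range(L - 1):
        H -= site_op(Z, i) @ site_op(Z, i + 1)  # ZZ coupling
    for i in range(L):
        H -= g * site_op(X, i)                  # transverse field
    return H

def spectral_gap(H):
    """Gap between the two lowest eigenvalues."""
    evals = np.linalg.eigvalsh(H)  # sorted ascending
    return evals[1] - evals[0]
```

At the critical point g = 1 the gap shrinks as L grows, but for any fixed finite lattice it is a perfectly computable number, which is Scott’s point: only the L → ∞ limit is out of algorithmic reach.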

35. Aula Says:

Scott #30: “suppose ZF were discovered tomorrow to be inconsistent. … the obvious conclusion would just be that ZF was a bad set of axioms, one that had failed to capture what we want.”

Indeed. Since, logically speaking, things can only be true or false in a model, when mathematicians say that something is true or false without specifying a model, what they really mean is true or false in a sort of vague empirical model, which can only be defined as “the model in which everything we WANT to be true IS true”. Most mathematicians want the axioms of ZF to be true, because they are very useful for proving almost everything else that mathematicians want to be true, but even if those axioms were found to be inconsistent, that would just mean that some of them must be excluded from the empirical model (and probably replaced by slightly weaker axioms); the rest of the model would hardly be affected at all.

36. Charles Stromeyer Jr. Says:

I am no expert on spectral gap problems, but anyways I thought that there was already an ‘incompleteness’ result for physics which is the “free randomness” result of Colbeck & Renner that was published earlier in the journal “Nature Physics”:

http://arxiv.org/pdf/1105.3195.pdf

(The same results in this above paper also apply to the 2 previous papers by Conway & Kochen on “free will”, but I do prefer the term “free randomness”).

37. Scott Says:

Charles #36: No, the two results are unrelated. As I said before, I think the randomness of quantum measurement outcomes has fundamentally nothing to do with Gödel’s incompleteness theorem. In the former case, at least for a finite-dimensional quantum system, the axioms of set theory DO entail everything you could hope would be mathematically provable (e.g. the probability of this-or-that measurement outcome). Whereas with Cubitt et al., we really are talking about uncomputability and incompleteness in the ordinary senses.

38. Charles Stromeyer Jr. Says:

Thanks, Scott! I agree with you. Earlier, Colbeck & Renner published a paper in “Nature Communications” which argued that no extension of QT can have improved predictive power, and now Klaas Landsman has this preprint which rigorously proves this Colbeck-Renner theorem (but weakens it some):

http://arxiv.org/pdf/1509.08498v2.pdf

Now, I can see mathematically that the correctness of QT is all that is needed for perfectly free random outcomes of measurements (versus the completeness of QT, which is what Cubitt et al. are considering). Thanks.

Another thing I wonder about is whether properties like anomaly cancellation and asymptotic freedom in QFT can be uncomputable. If the answer is positive, this would mean we can’t even determine in general whether a model is non-perturbatively consistent. It would also open the possibility that the string theory landscape is uncomputable (has anyone studied this possibility?)

40. Scott Says:

Charles #38: The line of research started by Colbeck on “certified randomness” is extremely interesting; it’s just addressing a totally different question than the Cubitt et al. work. The question addressed by Colbeck and those who came later is: how can you use quantum mechanics to generate bits that even a skeptic, who doesn’t believe in the correct functioning of your devices (or even in quantum mechanics itself), will have to agree are random? Amazingly, the Bell inequality lets you do this, provided that (a) you assume no faster-than-light communication, and (b) you have a small number of random bits at the beginning, to jump-start the randomness generation process. For more, see my article in American Scientist about this.

41. Anindya Says:

Scott #30: “Suppose ZF were discovered tomorrow to be inconsistent. Would we then say that maybe 2+2=5 and there are only finitely many primes?”

Maybe not. But I’d be willing to say that Goodstein’s Theorem was false 🙂 🙂

42. Scott Says:

Anindya #41: I wouldn’t. For to prove Goodstein’s Theorem requires just a tiny fragment of ZF, just a little ordinal arithmetic. (Incidentally, I’m assuming that by “false,” you meant “no longer proved”—otherwise that REALLY seems like an overreaction.)

43. Anindya Says:

Scott #30, Joel #29:
Okay, last question from me on the topic.

My (probably false) intuition is that something like the Continuum Hypothesis is a little different from “run of the mill” Gödel undecidability, for the following reason.

So, I expect a Gödel-undecidable statement to look like an ordinary statement in number theory – or set theory – where you potentially go on for centuries trying to prove/disprove it, offer prizes and so on, but just get nowhere.

But with the Continuum Hypothesis, you can go further and actually PROVE that it’s undecidable in ZFC.

So, is this a generic property of undecidable statements?

That is, given a statement that is undecidable in a given axiom system, is it always possible to *prove* the undecidability by constructing models going both ways?
(I guess this is a meta-metamathematical statement 🙂)

Relevance to blog topic:
I have no difficulty imagining a Hamiltonian for which you are never able to prove whether a mass gap exists or not despite years of effort.
But being actually able to show that the mass gap exists AND doesn’t exist seems totally wacko (though I like Scott’s idea of saying “No, it’s just your axiom system that’s wacko”).

A negative answer to my question above would give some hope that the former case is indeed less troubling than the latter.

P.S.: The authors seem to have given some thought to what an undecidable mass gap would “look like” physically.
It’s the fourth answer at: http://mathoverflow.net/questions/225903/how-undecidable-is-the-spectral-gap/225946#225946

44. Scott Says:

Anindya #43: Yes, I agree it’s striking that you can actually prove CH independent of ZF. Of course, Gödel (or Rosser) sentences themselves provide other examples of provably-independent statements (in all cases, independent conditional on the axioms being consistent). In general, however, if a statement is independent, there’s no guarantee that its independence will be provable, or that the independence of its independence will be provable, etc. It could be independent turtles all the way down.

As another note—there is a MASSIVE difference between proving a statement S is independent of your axioms (neither provable nor disprovable), and “proving both S and its negation.” The latter means your axioms are inconsistent! But the former, which is what we’re talking about in cases like Cubitt et al., does not. In fact, Gödel tells us that sufficiently powerful axiom systems are consistent only BECAUSE certain statements are independent of them.

45. Anindya Says:

“There is a MASSIVE difference between proving a statement S is independent of your axioms (neither provable nor disprovable), and “proving both S and its negation.”

Of course. Sloppy language on my part. S and its negation cannot hold in the same model, and neither is being proved from the axioms.

Thanks a lot for your patience with all my questions !

46. James Says:

Undecidability in theoretical physics is not surprising. Just open the book by Marian B. Pour-El and J. Ian Richards: Computability in Analysis and Physics

https://projecteuclid.org/euclid.pl/1235422916

47. Juan Says:

This is just a comment on #24.

Scott, you say that the consistency of ZFC is independent of the ZFC axioms, but (ignoring metamathematical complications; I’m not sure I can be so cavalier here, though) this would mean, by Gödel completeness, that there exist models of ZFC where the statement “ZFC is consistent” is false (if ZFC is “truly” consistent in the first place, of course).

My model theory book says this happens because you introduce “nonstandard natural numbers” into the model, so the formalization of the statement “there exists a proof of a contradiction” is not the same as what you would expect with the “true natural numbers” (whatever those are).

48. Juan Says:

Clarifying a bit from #24: there exists a “platonically false” model of ZFC in which there exist proofs of contradiction, but they are of “nonstandard” length.

This can be made rigorous for weaker axiom systems like PA or a given finite fragment of ZFC where everything (true and nonstandard numbers) can be defined in ZFC.

49. Scott Says:

Juan #47: Yup. I like to describe ZF+Not(Con(ZF)) (ZF plus the assertion of its own inconsistency) as a “self-hating theory.” Perfectly consistent—but as with many irrationally depressed people, you’ll never talk it out of it, no matter how long you try.

50. Rahul Says:

Scott #34:

Thanks. So, in any other context has undecidability / uncomputability been conjectured or theoretically proven to arise at a problem scale that does not invoke an infinity?

Like you point out, any result invoking an infinity is inaccessible to any sort of empirical verification because no matter at what large scale we test the actual limiting behavior cannot be tied down.

Does Nature (or an existing proof) prevent undecidability in statements involving finite times / spaces / lattices & precision?

51. Jario Says:

Babai’s paper on Graph Isomorphism is available now:

http://arxiv.org/pdf/1512.03547.pdf

52. matt Says:

Scott #20: if you mean a Hamiltonian that is diagonal in a product basis with diagonal entries being only 0 or 1, then the eigenvalues of the Hamiltonian are integers. So, the spectral gap is an integer. So, it can’t be gapless though it can have a degenerate ground state with a positive integer gap to the rest of the spectrum. I guess some people might define a Hamiltonian with such a degeneracy to be “gapless”.

53. Jan Says:

I am a bit confused about the fixed d. As you write, the input is a d^2 x d^2 matrix of algebraic numbers. Since d is fixed, the variability of the input then lies entirely in the values of these numbers. (By the way, is the problem still undecidable if these are rational numbers?) So this is basically an undecidable number-theoretic problem? (Not saying that this would be unusual, just trying to clarify.)

54. Scott Says:

Rahul #50:

Does Nature (or an existing proof) prevent undecidability in statements involving finite times / spaces / lattices & precision?

Mathematically, if we know that a problem only requires enumerating a finite list of possibilities, then essentially by definition that problem is computable. This is why, any time you see a new problem that’s been proved at least as hard as the halting problem, that problem will always involve some element that goes to infinity (if the title and abstract aren’t forthcoming about this, read the main text and see! 🙂 ).
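A tiny sketch of why “finitely many possibilities” means computable: for a machine with a finite state space, just run it; by pigeonhole it must either halt or revisit a state, and a revisited state certifies that it runs forever. (The function and its interface are made up for illustration.)

```python
def halts_finite_state(step, state0):
    """Decide halting for a machine with FINITELY many states.
    `step` maps a state to the next state, or to None to halt.
    Run until the machine halts or revisits a state; by pigeonhole
    one of the two must happen, so this always terminates.
    A revisited state is a certificate of running forever."""
    seen = set()
    s = state0
    while s is not None:
        if s in seen:
            return False  # entered a loop: runs forever
        seen.add(s)
        s = step(s)
    return True  # halted
```

For an unbounded state space (a Turing machine’s tape) the `seen` set can grow without repetition forever, and the argument collapses; that is exactly the “element that goes to infinity.”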

Physically, it’s conceivable that we could have lived in a universe where an infinite amount of stuff could get done in finite time (e.g., by what I call the “Zeno Computer,” that does one step in 1 second, the next in 1/2 second, the next in 1/4 second, and so on). In such a universe, of course the halting problem could be solvable in finite time.

But the current conjecture in theoretical physics—primarily because of the work of Jakob Bekenstein on black hole thermodynamics, which I blogged about before—is that we don’t live in such a universe. Rather, we seem to live in a universe that can be modeled as a quantum computer where each finite region of space stores at most ~10^69 qubits per square meter of enclosing surface area (with the bound saturated only by black holes), and where those qubits are operated on at most ~10^43 times per second. In such a universe, of course the halting problem would not be solvable.

55. Scott Says:

Jan #53: Yes, as I said in the post, the Turing machine has to get encoded into the numbers in the Hamiltonian, which I guess you could call “number-theoretic.”

I don’t know whether their result could be proved with rational numbers in the Hamiltonian only—that’s an interesting question! For the form of the numbers that do appear in their Hamiltonian, see page 12 of their paper. I was actually mistaken about the numbers being algebraic, since they also include terms like e^{iφ} where φ is algebraic. But in any case, there’s only a denumerable infinity of them.

56. fred Says:

Scott #54
“Mathematically, if we know that a problem only requires enumerating a finite list of possibilities, then essentially by definition that problem is computable. This is why, any time you see a new problem that’s been proved at least as hard as the halting problem,[…]”

Is it true that the n-body problem (gravity between n bodies) is classically intractable as well? (Although there’s probably a difference between something that can’t compute a yes/no answer and something that gives a numerical answer with an error that gets worse and worse.)
Is the n-body problem suddenly tractable when considering QM? When n is finite, is it like the finite lattice case?

57. Vadim Says:

Scott,

If we lived in a universe where a Zeno computer was possible, would the Zeno computer still have its own halting problem that wasn’t computable? It seems like the contradiction in the halting problem proof would still exist if we fed such a computer its own description (similarly to how, I believe, a TM with an oracle for the halting problem would still have its own, uncomputable halting problem). But then, if the computer *can* do an infinite amount of work in a finite time, what’s the barrier, really? Intuitively, it feels like no matter what the proof says, if I can do an infinite amount of computation in finite time, what’s going to stop me from getting the right answer every time (maybe as long as I don’t actually feed the computer its own description as an input)? Sorry for the off-topic question!

58. Scott Says:

fred #56: People who know the subject better than me should feel free to chime in, but I don’t think it’s actually known today whether reasonable formalizations of the Newtonian N-body problem are uncomputable. It is known, I think, that you can get “Zeno-like behavior,” with the velocities of two orbiting bodies diverging to infinity in finite time, and that certainly raises the possibility of uncomputability, but to complete the argument you’d need to show how to encode a Turing machine into celestial mechanics.

In any case, even if uncomputability were present for that reason in Newtonian gravity, it would go away in GR, since there velocities never exceed that of light. But then GR raises its own thorny computability issues, e.g. because of singularities (which again I’m not an expert on).

Meanwhile, in quantum mechanics, the computability situation is going to be related to the dimension of the Hilbert space. In a finite-dimensional Hilbert space, clearly everything is computable. In a Hilbert space of countably infinite dimension, you might be able to encode an uncomputable problem—but even there, exciting more and more modes typically requires more and more energy, so if your total energy is also bounded, then things should remain computable.

I can now handwavingly explain why, by the lights of modern theoretical physics, our universe seems to be computable (so in particular, not able to solve the halting problem). The reason is that, in GR, there’s a fundamental reason why you can’t just keep adding more and more energy to a bounded system: because at some point, you’ll exceed the Schwarzschild bound and create a black hole! So then, when you combine that with quantum mechanics into a (still uncompleted) quantum theory of gravity, one thing you ought to get is that the Hilbert space of a bounded part of the universe should be finite-dimensional, and therefore simulable on a computer to any desired precision. By combining G, c, and ℏ (as Max Planck already did around 1900), you can even predict the maximum number of qubits per square centimeter, and the maximum number of qubit operations per second. Both are numbers that we’re tens of orders of magnitude away from saturating, but they’re finite.

59. Vadim Says:

Wait a sec, when I asked my question in post #57, I think I underestimated how counterintuitive Zeno computers are. Now as I think more about it, they must always halt, even if one intentionally goes into an infinite loop, right? For the same reason that 1 + 1/2 + 1/4 + 1/8… exactly equals 2. In that case my question probably makes no sense, since what does halting mean for a computer that always halts?

60. Scott Says:

Vadim #57: You would have to write down a mathematical model of the “Zeno-computer universe” before I could answer your question for sure. But yes, the usual rule is that no model of computation can solve its own halting problem, and that this applies even to unphysical “hypercomputing” models that can solve “our” halting problem.

In particular, this is true for Turing machines equipped with oracles (i.e., the unsolvability of the halting problem relativizes). Turing already noted, in his 1939 PhD thesis, that this gives rise to an infinite hierarchy of “more and more unsolvable problems”: the halting problem for TMs with HALT oracles, which can’t be solved even by TMs with HALT oracles; then the halting problem for TMs with oracles for *that* halting problem; and so on.

I can’t resist telling you that, when I was 16, I became obsessed by the question: what would be an example of an interesting model of computation that could solve its own halting problem? (Where ‘interesting’ is supposed to rule out silly answers like “polynomial-time bounded Turing machines, all of which halt on all inputs”?)

Of course, such a model would have to throw away one or more of the most basic properties of the Turing-machine model: for example, that you’re able to do things like halt if a certain subroutine accepts or run forever if it rejects. I proved some results about the abstract properties such a model could satisfy, but then never came up with a compelling example of anything that actually satisfied those properties. Someday, I’d love to return to the topic and write a paper about it. Now that I’m tenured, I guess I have no excuse! 🙂

61. Vadim Says:

Thanks, Scott! I’m glad we don’t live in a Zeno-computational universe; I have enough trouble just understanding this one.

62. Rahul Says:

Scott #54:

“any time you see a new problem that’s been proved at least as hard as the halting problem, that problem will always involve some element that goes to infinity”

OK, thanks again.

So, basically, everything is good and computable until you set some parameter to infinity. At that point certain propositions remain decidable and others become undecidable.

But that one parameter set to infinity by itself creates an “artificial” system that we can never ever probe empirically. Not directly for sure but not indirectly either.

I guess what I’m struggling to grasp is why do we find this distinction between computable / non-computable systems of infinite dimension useful. Isn’t it a proposition about an entirely impossible spot in parameter space?

(I always thought the concept of infinity helped us grok the limiting behavior as things become large but in this case we seem to be excluding that “becoming large” stage and exclusively making claims about that special, artificial, unreachable point, “infinity”)

63. Scott Says:

Rahul #62: No, to get uncomputability, it’s enough to have a parameter that’s unbounded, rather than a “completed infinity.” E.g., the halting problem asks whether there’s a positive integer T such that your program halts after T steps. You never need to imagine T infinite, but you do need to imagine it arbitrarily large. Likewise, the Cubitt et al. uncomputability result applies to the question: “in the limit, as the lattice size L gets arbitrarily large, does the spectral gap converge to 0 or to a nonzero value?”
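The bounded/unbounded distinction can be phrased in code: for any fixed T, “does it halt within T steps” is trivially decidable; only the unbounded existential over T is not. A minimal sketch, modeling a program as a zero-argument Python generator where each `yield` counts as one step:

```python
def halts_within(program, T):
    """Run `program` (a zero-argument generator function) for at most
    T steps; returning (StopIteration) counts as halting.  This
    bounded question is always decidable by direct simulation.  Note
    a False answer only means "didn't halt YET": the undecidable
    question is whether SOME finite T would return True."""
    it = program()
    for _ in range(T):
        try:
            next(it)
        except StopIteration:
            return True
    return False
```

No single call ever needs an infinite T; uncomputability enters only because T must be allowed to be arbitrarily large, just as Scott says of the lattice size L.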

64. Toby Cubitt Says:

Thanks for the nice write up, Scott. If you think this result is old news to you and the rest of the quantum info community, imagine how old it seems to us – we essentially completed the proof three years ago. (Writing up and checking 147 pages of maths takes a while…)

Thanks also for your very useful curmudgeonly shooing away of people from misinterpretations — now I can simply point people here when they ask me about such things 🙂 I’d also point people to the final paragraph of the paper, where we tried to be up front about the limitations of our result. (Though no doubt there are more we didn’t think of.)

Although our theorem concerns the thermodynamic limit (i.e. lattice size going to infinity), it does also have a curious implication for finite-size systems. The gapped Hamiltonians we construct in the proof appear gapless up to some large finite lattice size, but when you increase the lattice size by 1 a constant spectral gap suddenly appears. (The gapless ones go on looking gapless all the way to infinity, of course.) The threshold lattice size at which this abrupt opening-up-of-a-gap occurs can be arbitrarily large. (In fact, it’s an uncomputable number.)

Imagine you’re doing numerics to understand this Hamiltonian: you’ve burned some supercomputer cycles crunching the energy spectrum for a 1,000,000 x 1,000,000 lattice, and it looks every bit like a gapless system. Just to double check, you’ve burned even more cycles computing the spectrum for lattice size 1,000,000,000 x 1,000,000,000, and it still looks gapless. So you confidently go ahead and conclude the system really is gapless. Unfortunately, unbeknownst to you, at lattice size 1,000,000,001 x 1,000,000,001 and above the Hamiltonian has a spectral gap of 1.
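Toby’s scenario amounts to a gap function like the following caricature. The threshold here is an arbitrary made-up constant; in the actual construction it is uncomputable, which is the whole point.

```python
def toy_gap(L, threshold=1_000_000_001):
    """Caricature of the finite-size behaviour described above: the
    gap looks like it's closing (~1/L) for every lattice size below
    some enormous threshold, then a constant gap of 1 abruptly opens.
    No amount of numerics below the threshold can distinguish this
    from a genuinely gapless system."""
    return 1.0 / L if L < threshold else 1.0
```

Since the threshold can be arbitrarily large (indeed uncomputable), no finite-size extrapolation scheme is guaranteed to get the thermodynamic-limit answer right.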

I doubt people doing lattice QCD or condensed matter numerics are going to lose too much sleep over this. As Scott says (and as we flag in the paper), the Hamiltonians that come out of our proof are extremely artificial. Even with the impressive advances in artificially engineering Hamiltonians in optical lattices, ion traps, etc. no one will even be able to artificially engineer our Hamiltonians anytime soon (or ever), let alone find them occurring naturally. But this could well be an artifact of our proof techniques, and there might be much simpler Hamiltonians out there with this same behaviour.

I like to think numerics people will sleep slightly less easily one night a year because of our result 🙂

65. Toby Cubitt Says:

Dave #1 and Scott #2: Quite right — it’s the usual Goedel-undecidability that you get from any Turing-undecidability result. Immediately obvious to any computer scientist worth their salt, but not to most physicists, who never study computability during their degrees. Maybe they should! (I speak as a lapsed physicist 🙂)

There’s probably some truth to the suggestion that this just serves to “sell it” to physicists. But we decided to include it because, from talking to physicists, we found they were often confused about what Turing-undecidability meant, and thought that this was just a result about how hard it is to calculate the spectral gap on a computer. The Goedel-undecidability implication seemed to get the idea across more successfully. So we put both in the paper. (Nature is a science journal, not a computer science journal, after all — got to write to your audience.)

66. Toby Cubitt Says:

Scott #55: Your original statement was correct: all the matrix elements are algebraic. The Hamiltonian only contains terms of the form $e^{i\pi\varphi}$ (note the $\pi$).

67. Toby Cubitt Says:

James #46: See reference [PER89] in the long arXiv version of our paper.

Though I would argue Turing machines are already perfectly reasonable idealizations of physical systems, so undecidability was in physics from the very beginning. (And I bet Scott would agree!)

68. Scott Says:

Toby: Thanks so much for the clarifications, and glad you liked the post!

69. Jair Says:

Scott #24: “With the Continuum Hypothesis, by contrast, you always have the option (though not the obligation) of saying that it’s true in some models of set theory and false in others, and there’s nothing more to it than that.”

That seems a bit odd to me – don’t you have such an obligation? What would be the alternative? Saying that there’s some “ultimate truth” about CH would be like saying “Yes, yes, I admit there are highly perverse, pathological non-abelian models of the axioms of group theory, but the ultimate platonic truth of the matter is surely that ab = ba forever and always.” But perhaps there is some difference here I am missing.

70. Scott Says:

Jair #69: To the end of his life, Gödel hoped that new axioms would eventually be discovered that were “just as intuitively obvious” as the axioms of ZF, and that would decide CH one way or the other.

To illustrate, if you believe in the “constructible universe” (basically: the only sets that exist are those that can be constructed starting from the empty set and the ZF axioms), then CH is true. On the other hand, if you read QCSD, you can find an example of a statement, involving throwing darts at the unit square [0,1]², that strikes many people as “intuitively obvious”—and yet if you accept it, then at least one of CH and AC must be false. Meanwhile, a few set theorists, like Hugh Woodin, have continued Gödel’s program of searching for “uncontroversial” new axioms that would settle things once and for all (see here and here for his survey articles about this from 2001).

As far as I can tell, there hasn’t yet been a widely-accepted resolution, but the possibility of one is not completely absurd. After all, if you’re a hardcore Platonist (as Gödel was), then you regard the axioms of ZF as much more like the axioms of PA than like the axioms of group theory. The whole point of the axioms of group theory is to describe lots of different groups; by contrast, the point of PA is to capture just a single structure, the positive integers. Of course, PA doesn’t capture all true statements about that structure, but that’s just PA’s failing. We all knew what the positive integers were before we ever encountered PA, and maybe we can all agree about some additional axioms (e.g., ordinal induction) that let us prove some of the truths about the positive integers that PA failed to prove.

From this perspective, the question at issue becomes: is “the universe of sets” more like a group, or more like the positive integers?

71. Douglas Knight Says:

Scott, it might be helpful to note that there are uncontroversial extra axioms of set theory, even if they do not address CH. Choice, of course. Large cardinal axioms are probably even less controversial, although only conditional on choice.

72. Scott Says:

Douglas #71: Are you kidding? AC was controversial through much of the 20th century, and controversy still flares up occasionally.

I agree that large cardinal axioms are a different story—if you use ZF at all, then presumably you should believe in a large enough cardinal to construct a model for ZF, and a model for ZF+Con(ZF), and so on. But then there are even larger cardinals, which I understand can again become controversial.

73. Douglas Knight Says:

They’re a lot less controversial than axioms which imply CH.

I think most recent controversy about AC is confused and really should be about something much weaker or much stronger.

I think it is rare to talk about large cardinal axioms in the absence of choice. With choice, different axioms are usually comparable: one implies a larger cardinal than the other. Without choice, I think that there are a lot of variants and it’s a mess. I think that this is a big aesthetic motivation for AC among set theorists. And the (conditional) coherence of the large cardinal axioms into a tower is a big aesthetic motivation for accepting them.

74. Scott Says:

Douglas #73: It probably depends what area of math you’re in. If you’re a set theorist, of course you like AC since otherwise even measuring cardinalities is a mess. On the other hand, if you’re doing measure theory or probability theory, and just want monstrous non-measurable sets to go away, you prefer not(AC).

75. James Says:

@Jair #69:

76. Douglas Knight Says:

Scott 74, No, all pure mathematicians use AC. It’s too baked into what everyone else is using to be able to escape. Maybe people could rearrange their abstractions to avoid it and keep the same perspective, but they don’t. It seems to me that analysts are more enthusiastic about it than other mathematicians, probably because they run into it quicker.

Non-measurable sets aren’t a big deal. Practically no one assumes not(AC) to avoid them. The very fact of that independence theorem tells you that there’s no danger of accidentally constructing a non-measurable set to which theorems won’t apply. People could assume not(AC) at that stage, but in functional analysis it shows up a lot more and it would be a lot harder to escape. E.g., Krein-Milman shapes how people view the whole subject. And it looks a lot less like the free lunch of no non-measurable sets and more like shoving around the bump in the carpet.

77. Aula Says:

Scott #58: It’s not exactly obvious what “uncomputability of the n-body problem” should even mean. Let’s say we have n bodies in three dimensions interacting gravitationally, and further assume that the total momentum of these bodies is zero. Then we need 6(n-1) numbers to specify the positions and velocities of the bodies. Of course if we actually know the positions and velocities of the bodies at one time, we can compute them for any other time, but the problem with this is that any uncertainty in the initial values tends to blow up when we go too far into the future (or past). So we would prefer to have 6(n-1)-1 constants that can be computed from the initial values; then we can use these constants and time to compute the positions and velocities with better precision. The problem is that (when n>2) we don’t have any idea how to even define these constants (except for the total energy). While something that can’t be defined obviously can’t be computed either, I don’t think that’s really what “uncomputable” should be used for…

78. Davide Castelvecchi Says:

Douglas Knight #7: As the author of the Nature news story, I would like to clarify a point that is often the source of confusion. There is no denying that Nature’s news team is part of Nature magazine. However, we are editorially independent of the manuscript editors’ side (the ones who get papers peer reviewed and published) and of the press team that sends out press releases. The week before publication of the paper, I received the same press release that other reporters did. It mentioned the paper by Cubitt et al. only in passing, buried well down and only by its title; in other words, in this case Nature’s press team did not even deliberately decide to promote this article. In any case, when I (or another Nature journalist) decide to cover results that come out in a Nature journal, we base the decision on their scientific significance and news value: we do not do it to promote the journal itself.

79. Scott Says:

Aula #77: What I had in mind was, you specify the positions and velocities at t=0 as algebraic numbers, or as elements of some other denumerable set. Then, the problem is to determine the highest-order bits of precision of the positions and velocities at some future time.

80. jonas Says:

Vadim, re #57: The short answer is that such a computer would not be able to solve its own halting problem. But a longer answer is still interesting on what such a computer could look like (there’s more than one possibility depending on how powerful you want it), so I’d recommend David Madore’s recent blog entry on this at http://www.madore.org/~david/weblog/d.2015-11-16.2337.html . That also contains the necessary definitions Scott #60 asked for.

81. Aula Says:

Scott #79: OK, that’s a pretty reasonable mathematical definition of the problem (even though it’s completely impractical physically). By that definition, the three-body problem is computable, because the positions and velocities are expressible as infinite series that are known to always converge, but I don’t know if anything is known about the computability of the problem for four or more bodies.

82. JollyJoker Says:

Scott #58 “It is known, I think, that you can get “Zeno-like behavior,” with the velocities of two orbiting bodies diverging to infinity in finite time, and that certainly raises the possibility of uncomputability, but to complete the argument you’d need to show how to encode a Turing machine into celestial mechanics.”

Are proof strategies like this common? I remembered a comment by Terry Tao on Navier-Stokes, “If one could somehow create enough “logic gates” out of ideal fluid, one could presumably build a sort of “fluid computer”, at which point the task of building a von Neumann machine appears to reduce to a software engineering exercise rather than a PDE problem (providing that the gates are suitably stable with respect to perturbations, but (as with actual computers) this can presumably be done by converting the analog signals of fluid mechanics into a more error-resistant digital form). The key thing missing in this program (in both senses of the word) to establish blowup for Navier-Stokes is to construct the logic gates within the laws of ideal fluids”

https://terrytao.wordpress.com/2014/02/04/finite-time-blowup-for-an-averaged-three-dimensional-navier-stokes-equation/

Perhaps there are more than superficial similarities here?

83. Scott Says:

JollyJoker #82: Well, the way you prove pretty much anything uncomputable is to take some known uncomputable problem—the original halting problem often works, although the Post correspondence problem, matrix mortality, etc. are sometimes more convenient—and then encode that problem into your thing. So in particular, any proof that a physics problem is uncomputable will, at a super-high level, involve showing that the physical system you care about can encode Boolean logic gates, conditionals, memory, and other computer-like paraphernalia. The differences will all lie in the details of the encoding—for which there’s no upper bound on the amount of cleverness that can be needed.
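The shape of such a reduction can be sketched in a toy setting. Everything below — the one-counter machine model and the "observable" — is invented purely for illustration and has nothing to do with the actual encoding in Cubitt et al.; the point is just how a machine gets wired into a time-dependent quantity so that an asymptotic yes/no question about the quantity becomes the halting problem.

```python
# Toy illustration of the generic reduction: embed a machine into a time-
# dependent "observable" so that the asymptotic question "does the observable
# ever reach 1?" is exactly the halting problem for the machine. The machine
# model (a one-counter toy) and the observable are inventions for this sketch,
# not the actual encoding used by Cubitt et al.

def step(program, state):
    """One step of a toy counter machine; state = (program counter, counter)."""
    pc, c = state
    op = program[pc]
    if op[0] == "inc":              # increment counter, advance
        return (pc + 1, c + 1)
    if op[0] == "dec_jnz":          # decrement; jump to op[1] if still nonzero
        c = max(c - 1, 0)
        return (op[1] if c != 0 else pc + 1, c)
    return state                    # "halt": fixed point

def observable(program, t):
    """0 while the machine is still running at time t, 1 once it has halted."""
    state = (0, 0)
    for _ in range(t):
        if program[state[0]][0] == "halt":
            return 1
        state = step(program, state)
    return 1 if program[state[0]][0] == "halt" else 0

halting_prog = [("inc",), ("halt",)]
looping_prog = [("inc",), ("inc",), ("dec_jnz", 0)]  # counter grows forever

print([observable(halting_prog, t) for t in (0, 1, 10)])  # [0, 1, 1]
print(observable(looping_prog, 1000))                     # 0: never halts
```

Deciding "is observable(program, t) = 1 for some t?" would decide halting for this machine model; for a Turing-complete model (e.g. two counters instead of one), no algorithm can do that — and that is the high-level shape of every such undecidability proof.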

With Terry Tao’s exciting ideas about Navier-Stokes, the crucial twist is that there he wants to encode a universal computer into a differential equation, not only to show uncomputability, but even just to show that the differential equation can take smooth initial data and make it singular—which is a perfectly “classical” question in differential equations, not something that itself asks about computability or algorithms. So if his strategy works, it will provide a dramatic example of what, in my post on his paper a couple years ago, I called “the long reach of computation”.

84. Scott Says:

Incidentally, a request for anyone who has any experience with WordPress:

In my last few posts, the paragraph justification has often been messed up, with many lines having far fewer words than would actually fit on them, and huge, unsightly spaces between the words. The only way I found to fix the problem was to rewrite the paragraphs from scratch (!)—which suggests that, as I edited the paragraph, WordPress was inserting “secret linebreaks” that are invisible to me in the editor, and which only a rewrite will get rid of. Googling didn’t turn up anything directly relevant either. Does anyone know a solution to this?

85. JollyJoker Says:

Scott #83: When you put it like that, the question becomes: if you’ve proven that a system supports universal computing, how on earth is the fact that it can encode uncomputable problems more than a footnote? (A rhetorical question, I guess)

86. Joshua Zelinsky Says:

http://www.smbc-comics.com/index.php?id=3958 seems on topic to both this and the recent D-Wave hoopla.

87. fred Says:

Scott #83

“So in particular, any proof that a physics problem is uncomputable will, at a super-high level, involve showing that the physical system you care about can encode Boolean logic gates, conditionals, memory, and other computer-like paraphernalia. ”

What I don’t get, though:
Any physical system is by definition finite, so just like any real-life computer, its number of possible states is finite (although huge). So, in theory, such a practical halting problem is computable, no? Any non-halting program is bound to eventually cycle through the same series of internal states, so it could be detected?
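fred's observation can be made concrete: for a deterministic system with finitely many states, halting really is decidable, e.g. by Floyd's tortoise-and-hare cycle detection. The 8-state update rule below is an arbitrary made-up example, not any particular physical system.

```python
# fred's point, made concrete: for a deterministic system with finitely many
# states, halting IS decidable, because a non-halting run must revisit a state
# and then cycle forever. Floyd's tortoise-and-hare detects the cycle without
# storing the run's history. The 8-state update rule is an arbitrary example.

def halts(step, start, halt_state):
    """Decide whether iterating `step` from `start` ever reaches `halt_state`."""
    slow = fast = start
    while True:
        if fast == halt_state:
            return True
        fast = step(fast)
        if fast == halt_state:
            return True
        fast = step(fast)
        slow = step(slow)
        if slow == fast:  # a state repeated: the run cycles, never halting
            return False

N = 8
rule = lambda s: (3 * s + 1) % N  # orbit of 0: 0 -> 1 -> 4 -> 5 -> 0 -> ...

print(halts(rule, 0, halt_state=5))  # True: 5 is on the orbit of 0
print(halts(rule, 0, halt_state=6))  # False: the orbit cycles without hitting 6
```

The catch, as Scott's reply explains, is that undecidability needs some unbounded element — here the state space is bounded, so the argument goes through.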

88. Scott Says:

fred #87: Well, yes, in addition to the ability to encode Boolean logic gates, etc., to get uncomputability you also need some infinite or unbounded element in your problem. In the case of the spectral gap, the unbounded element is the lattice size. In the case of differential equations (including, conjecturally, Navier-Stokes), it might be that you could speed things up in a Zeno-like manner, and thereby do infinitely many steps in finite time.

Again, there’s no claim that this is physically realistic. The point is just that, if you define the problem in such-and-such a way, then you get a problem that’s uncomputable. The mathematical definition doesn’t care about the fact that ‘in real life,’ other physical constraints (like the Planck scale or the size of our observable universe) would ultimately come into play to prevent the uncomputability from rearing its head.

89. Aula Says:

Scott #84: I don’t really know much about WordPress, but I do know that there are some invisible Unicode characters that can cause all kinds of formatting weirdness. So, if you get any broken paragraphs in the future, don’t try to fix them, but add a comment asking people to take a look at them. I’m pretty sure that, with an example of the problem, I (or someone else) can help you figure out what’s going on.

BTW, I really like that as I type my comment in the edit box, it’s immediately rendered below the box, making it very easy to see any formatting errors. That’s WYSIWYG at its best. I wish every blog did that.

90. Michael Gogins Says:

Scott 88:

I understand what you say about finitude preventing uncomputability in the mathematical sense. But are there not practical limits to what can be computed in a finite universe? In other words, in a finite universe the spectral gap is mathematically computable, but is it always computable in practice, is there enough time, energy, etc.?

91. Scott Says:

Michael #90: Indeed, something can be computable but too hard to compute in practice, which is why there’s the entire field of computational complexity—i.e., the thing I spend most of my life on! 😉

See here for recent hardness results for estimating the spectral gap of a finite quantum system.

92. Michael Gogins Says:

Scott 91:

Thanks for the link. So what does this imply? That computing the spectral gap for a large system is provably intractable, possibly intractable, or what?

93. Scott Says:

Michael #92: Well, it’s QMA-hard, so it’s certainly NP-hard, and therefore intractable assuming P≠NP (in the worst case, of course—it can be way easier for particular systems people care about in practice, which helps explain why condensed-matter physics is possible!). The paper I referred you to is concerned with pinning down exactly how much harder than NP-hard it is.

In the current state of complexity theory, you can’t hope for an unconditional proof that these sorts of problems are hard—if P=PSPACE, for example, then all these problems would be in P. But hardness assuming P≠NP is basically the next best thing.

94. Toby Cubitt Says:

Scott #83:

“Any proof that a physics problem is uncomputable will, at a super-high level, involve showing that the physical system you care about can encode Boolean logic gates, conditionals, memory, and other computer-like paraphernalia.”

This is definitely true for all the undecidable physics-related problems I know of. But I think this doesn’t necessarily have to be the case. There are undecidable problems that are strictly weaker than the Halting problem (problems with Turing degree intermediate between 0 and 0′, see Post’s problem).

If one proved undecidability of some physics-related problem by reduction from one of those, it could potentially be strictly weaker than Halting. Presumably, that would mean the physical problem is not rich enough to encode all computer paraphernalia, but still complex enough to be undecidable. (Health warning: everything in this paragraph is wildly imprecise and speculative!)

95. Scott Says:

Toby #94: Yes, of course I know about the languages between 0 and 0′, and was even thinking about them when I wrote the comment you quoted! My thinking was: even a hypothetical reduction from one of those intermediate languages to a physics problem, would plainly still involve encoding computer-like paraphernalia into your physical system. For note that the languages we’re talking about are basically just the halting problem “crippled” by various priority-based diagonalization procedures—much like the NP-intermediate languages that you get from Ladner’s Theorem are SAT “crippled” so that it’s no longer NP-complete (but still outside P, assuming P≠NP).

In any case, there’s no known “natural” example of a language between 0 and 0′ (though there are promise problems that are reasonably natural, like the “Simulate-But-Always-Halt Problem”). So “in practice,” if you could encode any of these languages into a physical system, you could also encode 0′ (i.e., the halting problem).

96. Toby Cubitt Says:

Scott #95: Thanks a lot for the link to your post containing the Consistent Guessing problem! I wasn’t aware that any remotely natural Turing-intermediate problems were known.

I’d be willing to bet you’re right that encoding any of these in a physical system will also let you encode Halting. Except that I’m always wary of betting against smart computer scientists coming up with some clever argument that constructs a counterexample 🙂

97. Rahul Says:

Scott #88:

“Again, there’s no claim that this is physically realistic. The point is just that, if you define the problem in such-and-such a way, then you get a problem that’s uncomputable. The mathematical definition doesn’t care about the fact that ‘in real life,’ other physical constraints (like the Planck scale or the size of our observable universe) would ultimately come into play to prevent the uncomputability from rearing its head.”

It’s a tad annoying that people even call these sorts of mental gymnastics “Physics” then.

Instead of “A Paradox at the Heart of Mathematics Makes a Physics Problem Unanswerable” I’d be much happier had the title to the Nature article been “Smart Logician proves that one more contrived puzzle is undecidable in our mathematical formalism”

This sort of title is misleading: it leads the casual reader to believe we’ve discovered some strange fact about how natural systems behave. And then we must spend half the time disabusing readers of their wrong notions that this result has anything at all to do with observations about the world they live in.

I find it hard to blame the readers though. They were baited. They might be excused for naively believing that “Physics” has something to do with nature, observations & the universe around them?

This isn’t a one-off incident. Authors from various non-Physics disciplines use this sort of borrowed legitimacy from associations with Physics as cheap click-bait. After all, “One more abstruse mathematical puzzle shown undecidable after 150-page proof” doesn’t have the same ring to it as an unanswerable physics problem, does it?

98. Scott Says:

Rahul #97: No, because the other point is that whether a spectral gap goes to zero as the lattice size goes to infinity is the kind of thing that real condensed-matter physicists and quantum field theorists ask all the time. It’s something they accept every day as a good mathematical idealization for what they care about. So it’s worthwhile to know that their problem secretly contains the halting problem—that for large enough constant d, there can never be a clean algorithmic criterion to tell you which nearest-neighbor qudit Hamiltonians are gapped and which are gapless.

You might say it’s less a result about nature itself than about the mathematical tools that we use to understand nature. But here’s the thing: by necessity, physicists themselves spend like 95% of their time dealing with the tools, and only 5% dealing with nature! Feynman even had a whole riff about that in “Cargo Cult Science”: about how, even if your goal is to study rats, you’ll spend vast amounts of your time just studying how to study rats. And it might even be a major advance in rat studies, if someone figures out that the way everyone in the field had previously been distinguishing between white and brown rats actually encodes the halting problem. 🙂

99. Rahul Says:

“You might say it’s less a result about nature itself than about the mathematical tools that we use to understand nature. “

Thanks Scott. Yes, this is exactly what I wanted to say. 🙂

And, we’ve been well aware of the limitations of our mathematical toolkit at least since Gödel’s masterpiece 80 odd years ago.

I’m sure we can unearth symptoms of this fundamental malaise in all sorts of nooks & crannies of the mathematical models we build.

And phenomenal 146-page proofs can swell to even-more-phenomenal 1500-page proofs of undecidability. I just don’t think it’s Physics.

The situation is akin to possessing a buggy electronic calculator that has an esoteric but already well-documented bug in a very, very rarely used math function (e.g. computing hyperbolic cosecant in hexadecimal mode).
This bug causes it to throw different answers at you each time, and you doggedly keep rearranging abstruse problems into a form that needs a hyperbolic cosecant calculation, then plugging away with hexadecimal inputs to trigger that bug, and then Eureka!: publish that “Mechanics Problem is undecidable,” followed by “Thermodynamics Problem is undecidable,” ad nauseam.

100. Toby Cubitt Says:

Rahul #97: Just to be clear, the title of our Nature article is “Undecidability of the Spectral Gap”. The title “A Paradox at the Heart of Mathematics Makes a Physics Problem Unanswerable” is that of the Nature News article, written by Davide Castelvecchi (though I don’t know if he wrote the title).

For what it’s worth, I think Davide did a good job of explaining a rather technically involved result to a general audience. Given that one cannot possibly explain all the nuances of a theorem in a title, the title doesn’t seem unreasonable to me. The spectral gap problem (in the precise mathematical sense defined in our paper) is a problem that plenty of physicists study for various models they’re interested in.

Of course, the physicists studying the Haldane conjecture, or the 2D AKLT conjecture, or gapped topological spin liquid phases, etc. are theoretical physicists. Maybe you only want to consider experimental physicists as “real physicists”? (Some people might agree with you – e.g. me on a curmudgeonly day 🙂) But Dirac, Heisenberg, Einstein et al. are generally viewed as physicists, not mathematicians. (This is certainly true amongst mathematicians!) How would you succinctly describe a problem worked on by physicists, if not “physics problem”?

There are lots of subtleties one should be careful about when interpreting undecidability results for physics-related problems that get lost in the media hype. Scott flagged a few in his post. You’re discussing a more subtle and important one: mathematics is always an idealised model of the physics, so any theoretical result in physics is really a result about the mathematical model; only experiments can tell whether the predictions of the model have any bearing on reality. I like Asher Peres’ way of putting this:

“Quantum phenomena do not occur in a Hilbert space. They occur in a laboratory.”

Any undecidability result has to involve infinity somewhere. So all undecidability results about physics-related problems are going to involve some idealised limit of an idealised mathematical model of physics. But the undecidability in the infinite limit tends to “show through” to the finite, experimentally accessible case. See my comment #64 for how this happens in our case. (<Blatant plug> if you don’t like the very large local Hilbert space dimension in our undecidability result, check out today’s arXiv listing for similar effects in much smaller dimensions. </Blatant plug>)

I don’t think the Nature News article title is the problem here. Levels of public education on Plato, Kant, Popper… maybe 🙂

101. David Gosset Says:

Rahul #99, Toby #64

I also believe that the work raises some interesting “practical” questions. In particular, Toby #64 points out that their result seems to imply there is no way in general to certify that a system is gapless from finite-size numerics. One interesting question is whether one can in general certify gappedness from finite-size numerics. If this were possible, then I think it would have significant practical applications (a result due to Knabe gives a criterion along these lines for frustration-free Hamiltonians, but not more generally).

102. Toby Cubitt Says:

Rahul #97: Ooops! In case it’s not clear, that last comment (about levels of philosophical education) wasn’t addressed at you! I meant that the subtle distinction between mathematical models of physics and real physical systems isn’t something most people get much practice wrapping their brains around. The average poster on Scott’s blog does this before breakfast, of course 🙂

103. Lou Scheffer Says:

Rahul #99: The divide between the behavior of the model and reality shows up in lots of more accessible places. For example, any practical work using the Navier-Stokes equations assumes (correctly, in my estimation) that they possess solutions in all cases. The Clay Institute questions about whether smooth solutions exist in all cases are *only* about the mathematical model, with no relation to the behavior of real fluids in real life.

104. Rahul Says:

@Toby #100:

Thanks for responding! Yes, to clarify, I didn’t mean your article’s title but just the other title. Sorry.

Theoretical Physics is great. But from what I understand, most of the important work in Theoretical Physics produced novel predictions that were empirically testable? E.g. Eddington’s solar eclipse test of Einstein’s General Relativity, etc.

From what Scott elaborated about your new Uncomputability Result for spectral gaps, no such empirical test exists? To me that was one key difference between this sort of work and the typical “physics” problem. To me, being amenable to empirical falsification is a crucial characteristic of a Physics problem.

Maybe I’m naive. Would love to hear what you think.

PS. Of course, reading your comment it now seems that undecidability in the infinite limit does have some sort of implication that can indeed be tested via a finite, experimentally accessible case? If so, I was misunderstanding your result, sorry.

105. Rahul Says:

Lou Scheffer #103:

“For example, any practical work using the Navier-Stokes equations assumes (correctly in my estimation) that it possesses solutions in all cases. The Clay institute questions about whether smooth solutions exist in all cases are *only* about the mathematical model, with no relation to the behavior of real fluids in real life.”

Right. But that’s why it makes perfect sense for a *Math* Institute to ask such a question. 🙂

You might say these are all artificial bins. Maybe. But I think it’s useful to distinguish the abstract Math model from the real-world Physics it describes.

106. Scott Says:

Rahul: I do think the distinctions between math, physics, and CS have always been porous and are only getting more so, which is part of why, at the very beginning of this thread, I declined to debate whether the undecidability of the spectral gap is “really physics.” Who cares?? Far better to ask: is it correct? is it interesting? is it nontrivial?

Also, testability is wonderful, in those cases where it’s relevant! But in the case at hand, we’re discussing the proof of a theorem, whose assumptions are all derivable from the quantum mechanics of 1925 and the computability theory of 1936. What would it even mean to experimentally test the proof of a theorem? (Actually, there is something mathematicians do that might fit the description, which is to derive other consequences from the proof techniques and check that they don’t get anything absurd. But I assume that’s not what you’re talking about.)

107. Rahul Says:

“I do think the distinctions between math, physics, and CS have always been porous and are only getting more so, which is part of why at the very beginning at this thread, I declined to debate whether the undecidability of the spectral gap is “really physics.” Who cares?? Far better to ask: is it correct? is it interesting? is it nontrivial?”

I think I disagree.

It is like asking “Who cares if D-Wave calls their contraption a Quantum Computer or not?!” Is it interesting? Sure. Non-trivial? Probably. And the optimizations it yields are correct in substantial part too (I hope!).

My point is not to be excessively dogmatic or pedantic about taxonomy but just that sometimes labels help. e.g. The distinction between the physics versus the mathematical language describing the physics.

“Also, testability is wonderful, in those cases where it’s relevant! But in the case at hand, we’re discussing the proof of a theorem, whose assumptions are all derivable from the quantum mechanics of 1925 and the computability theory of 1936. What would it even mean to experimentally test the proof of a theorem?”

Take something like the Bose Einstein Condensate or Shor’s Algorithm. These too are perfectly derivable from established basics (I think) but that doesn’t stop us from still empirically factoring numbers just to see whether it *really* works.

But yes, had you been forthright in claiming you’ve found a new *math* result, e.g. Fermat’s Last Theorem, sure it would be silly of me to ask you for empirical proof. So, yes, another distinction: Testability in Physics works differently than in math.

Physics is about reality. Math is about hard logic & axioms. (And I guess that’s why I don’t count String Theory as Physics. Yet. )

108. Scott Says:

Rahul #107: Actually, D-Wave is a perfect example of what I’m talking about. Every week, journalists ask me whether D-Wave is “really” a quantum computer or not. And every week, I try to explain why that’s the wrong question, why the right questions are things like “is there computationally-relevant quantum tunneling behavior?” (yes, looks like it) “is there a speedup compared to the best-known classical algorithms?” (no) “do we have a good reason to expect such a speedup if this is scaled up?” (no) Then next week, they ask me the same thing again… 🙂

In general, it seems to me that there’s an enormous amount of error in the world, that comes from people imagining that the way we decide to classify something tells us something about the thing over and above its actual properties.

109. Rahul Says:

Scott #63:

“No, to get uncomputability, it’s enough to have a parameter that’s unbounded, rather than a “completed infinity.” …… Likewise, the Cubitt et al. uncomputability result applies to the question: “in the limit, as the lattice size L gets arbitrarily large, does the spectral gap converge to 0 or to a nonzero value?”

I’m trying to understand this distinction. i.e., would the meaning be different if the Cubitt question were posed as:

“As the lattice size L becomes infinity does the spectral gap converge to 0 or non-zero?”

i.e., what’s the difference between “L gets arbitrarily large” vs. “L becomes infinity”?

Isn’t the latter a formal way of stating the former?

110. Scott Says:

Rahul #109: It’s the difference between arbitrarily large finite lattices, and a literally infinite lattice. (Sometimes you can prove that the limiting behavior for larger and larger finite systems IS the behavior for an infinite system, but there are also cases where that’s false, and in any case it requires proof when true.) As a computer scientist, I usually prefer to talk about finite systems—which I see as the only ones that are “ultimately physical” anyway—but mathematicians and quantum field theorists are equally happy to talk about infinite ones.

111. Rahul Says:

Scott #110:

Thanks. Very interesting.

So, any good examples of physics/math problems where there exists a quantitative numerical “gap” between the limits for the two cases, i.e., “larger and larger finite systems” versus a “literal infinity”?

112. Scott Says:

Rahul #111: The easiest example would be: no finite set, however large, can be put in 1-to-1 correspondence with a proper subset of itself. So then, you might conjecture that the “limiting case” (e.g., the set of all positive integers) couldn’t be put into 1-to-1 correspondence with a proper subset of itself either. But of course, you’d be wrong! 🙂

As a slightly more interesting example, I give as an exercise in my undergrad class to construct an infinite set of tiles, copies of which can be used to fill up any finite part of the plane, however large, but can’t be used to fill up the entire infinite plane. (König’s Lemma implies that the same thing can’t be done with any finite set of tiles.)

113. Sniffnoy Says:

Ooh, here’s my favorite example of this!

Let’s consider choosing a random graph on a set of n vertices in the simplest possible way, namely, for each possible edge, include it with probability 1/2. Now say you choose two random graphs on the same set of n vertices; what’s the probability that they’re isomorphic? Well, it’s at most n!/2^(n choose 2), which, as n goes to infinity, goes to 0. So for large but finite values of n, it is very unlikely that you will randomly choose two isomorphic graphs.

But if n=aleph_0, then with probability 1, a random graph chosen this way is isomorphic to the Rado graph; in particular, if you choose two random graphs this way, then with probability 1 they are isomorphic to each other.

So in the large-but-finite case it is very unlikely, whereas in the countably infinite case it happens with probability 1. I don’t know what happens in the uncountable case, but this is enough to illustrate the point.
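
Sniffnoy’s finite-n claim is easy to check empirically. The sketch below (stdlib-only; the helper names are mine, and brute-force isomorphism testing is only feasible for small n) samples pairs of G(n, 1/2) graphs and compares the observed collision rate with the n!/2^(n choose 2) union bound:

```python
import itertools
import math
import random

def random_graph(n):
    """Sample G(n, 1/2): include each possible edge independently w.p. 1/2."""
    return frozenset(frozenset(e) for e in itertools.combinations(range(n), 2)
                     if random.random() < 0.5)

def isomorphic(g1, g2, n):
    """Brute-force isomorphism test: try all n! vertex relabelings."""
    if len(g1) != len(g2):
        return False  # cheap necessary condition: same edge count
    for perm in itertools.permutations(range(n)):
        if frozenset(frozenset(perm[v] for v in e) for e in g1) == g2:
            return True
    return False

n, trials = 6, 300
hits = sum(isomorphic(random_graph(n), random_graph(n), n) for _ in range(trials))
bound = math.factorial(n) / 2 ** math.comb(n, 2)  # n!/2^(n choose 2), ~0.022 for n=6
print(f"empirical rate: {hits / trials:.3f}, union bound: {bound:.3f}")
```

Already at n = 6 the bound is about 0.022, and the empirical rate is typically even smaller, since the union bound over permutations is loose.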

114. Ian Says:

Is this distinction between large-but-finite and infinite what accounts for the bizarre notion that the sum of all positive integers is a very large number for any finite stopping point, but “equals” -1/12 in the infinite case?

Ian, I don’t think the -1/12 thing is true; it was either a misunderstanding or intentional trickery. More info: https://plus.maths.org/content/infinity-or-just-112.

116. Scott Says:

Ian #114: That’s a much more complicated discussion, since obviously, 1+2+3+… doesn’t “equal” -1/12 in any straightforward sense! What’s true, rather, is that there are many situations in complex analysis and physics where you get a divergent sum of the form 1+2+3+…, and where it turns out that the right way to “regulate” such a sum is to replace it with -1/12 (for reasons that go back to Euler in the 1700s).

But yes, the trickery that “licenses/encourages” people to replace 1+2+3+… by -1/12, wouldn’t apply to any finite part of that sum.
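
For a concrete (if toy) sense of what “regulating” buys you: one elementary route to the -1/12 value is to Abel-sum the alternating series 1 − 2 + 3 − 4 + … to 1/4, then apply the standard eta/zeta relation ζ(s) = η(s)/(1 − 2^(1−s)) at s = −1. A rough numerical sketch (the helper name and the cutoff x = 0.999 are arbitrary choices here):

```python
# Abel regularization sketch: 1 - 2 + 3 - 4 + ... diverges, but
# sum_{n>=1} (-1)^(n-1) * n * x^n = x/(1+x)^2 -> 1/4 as x -> 1 from below.
# The relation zeta(s) = eta(s) / (1 - 2^(1-s)) at s = -1 then gives
# zeta(-1) = (1/4) / (1 - 2^2) = -1/12.

def abel_sum_alternating(x, terms=200_000):
    """Partial sum of sum_{n>=1} (-1)^(n-1) * n * x^n for 0 < x < 1."""
    total, sign, power = 0.0, 1.0, x
    for n in range(1, terms + 1):
        total += sign * n * power
        sign, power = -sign, power * x
    return total

eta_minus_1 = abel_sum_alternating(0.999)   # ~ 1/4
zeta_minus_1 = eta_minus_1 / (1 - 2 ** 2)   # ~ -1/12
print(zeta_minus_1)
```

Note that no finite truncation of 1 + 2 + 3 + … is anywhere near -1/12; the value only appears through the regularization step.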

117. Ian Says:

118. Rahul Says:

So is it right to say that in “physicsey” contexts (modeling real-world phenomena) the large-but-finite limit is usually the relevant one?

Or is that generalization wrong? Are there practical modeling situations where the literal infinity is the right one to use?

Or alternatively: is there any guidance on when nature prefers the divergent sum to be set to -1/12, instead of the measured quantity just blowing up as it would with other divergent sums?

119. Rahul Says:

A couple more questions if someone can humor my layman curiosity:

(A) Is it correct to think of Cubitt’s undecidability result as having constructed a particular Gödel sentence?

(B) Is every undecidable sentence in a consistent formal theory a Gödel sentence? And vice versa?

(C) From past experience, have discoveries of particular uncomputability results turned out to be “useful” for deriving other consequences, beyond just discovering other uncomputable assertions?

120. Will Says:

Scott #116:

I’m very far from being an expert on this subject, so I don’t have a lot of confidence in what I’m about to say, but it seems like the situation is more complicated. From the derivation of the Casimir effect I have in front of me, what happens is that you impose a cutoff in your divergent sum, then at the end you take the cutoff to infinity and the famous -1/12 shows up. So far so good. But the infinite cutoff corresponds to the physical situation of perfectly conducting plates. In reality, metals are only good conductors up to a certain frequency, which corresponds to the cutoff you should use in your calculation. And yet the calculation with the cutoff taken to infinity matches (roughly) the experimental results obtained with non-perfect conductors. So it seems like nature is telling us that the -1/12 is showing up in some way even in a truncated sum.
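
Will’s observation can be illustrated numerically with a toy smooth cutoff (a crude stand-in for a real conductor’s frequency response, not the actual Casimir calculation): regulate Σn as Σ n·e^(−εn), which expands as 1/ε² − 1/12 + O(ε²). The divergent 1/ε² piece depends on the regulator, while the −1/12 finite part survives as ε shrinks:

```python
import math

def regulated_sum(eps, terms=None):
    """sum_{n>=1} n * exp(-eps * n), truncated once the tail is negligible."""
    if terms is None:
        terms = int(60 / eps)  # e^(-60) is far below double precision
    return sum(n * math.exp(-eps * n) for n in range(1, terms + 1))

for eps in (0.1, 0.03, 0.01):
    finite_part = regulated_sum(eps) - 1 / eps ** 2  # subtract divergent piece
    print(f"eps={eps}: finite part = {finite_part:.6f}")  # approaches -1/12
```

The closed form is Σ n·e^(−εn) = e^(−ε)/(1 − e^(−ε))², whose Laurent expansion around ε = 0 is exactly 1/ε² − 1/12 + ε²/240 − …, so the −1/12 appears at every finite cutoff, just buried under the cutoff-dependent term.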

121. Scott Says:

Rahul #119:

(A) The main thing is that it shows that a certain problem is Turing-uncomputable. But one particular consequence is that you can take the Gödel sentence of your favorite formal system and translate it into a statement that says that a certain Hamiltonian is gapless.

(B) No, there are other variations on the Gödel sentence like the Rosser sentence, sentences like Goodstein’s Theorem that are known to be independent of PA (but not of ZF), and sentences like the Axiom of Choice and the Continuum Hypothesis that are known to be independent of ZF (but which are not arithmetical). It remains an open problem to find a “natural” arithmetical sentence, fundamentally different from anything Gödelian, which is provably independent of ZF. In the other direction, if something wasn’t independent of an appropriate formal system, I guess we wouldn’t call it a “Gödel sentence” at all!

(C) Depends what you mean by “useful.” But sure, sometimes the way you show uncomputability is closely related to how you show NP-hardness of a truncated version of a problem, or how you get a universal computer into the system you’re talking about.
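
The Goodstein sequences mentioned in (B) are concrete enough to compute, at least for tiny starting values. A minimal sketch (the hereditary-base “bump the base, then subtract one” step; termination of every such sequence is provable in ZF via ordinals below ε₀, but not in PA):

```python
def bump(n, b1, b2):
    """Write n in hereditary base b1, then replace every occurrence of b1 by b2."""
    if n < b1:
        return n
    e = 0
    while b1 ** (e + 1) <= n:
        e += 1
    coeff, rest = divmod(n, b1 ** e)
    return coeff * b2 ** bump(e, b1, b2) + bump(rest, b1, b2)

def goodstein(n, max_steps=10):
    """First few terms of the Goodstein sequence starting at n."""
    seq, base = [n], 2
    while n > 0 and len(seq) <= max_steps:
        n = bump(n, base, base + 1) - 1
        base += 1
        seq.append(n)
    return seq

print(goodstein(3))  # reaches 0 after a handful of steps
print(goodstein(4))  # starts 4, 26, 41, 60, 83, ... (terminates, but not soon)
```

Starting value 3 dies almost immediately, while starting value 4 already outlives any number of steps that PA can prove is sufficient, which is what makes the theorem a natural independence example.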

Scott: thanks for not saying “Mind the Gap” in this post.

123. Year's end choice articles from the arXiv - The Berkeley Science Review Says:

[…] been on arXiv since February) because the final version of the paper got accepted to Nature.  Scott Aaronson has a great writeup of the result.  Actually, Scott Aaronson also has a great 2015-in-review post […]

124. jacinabox Says:

I think many of the problems of the computing industry stem from the fact that the Turing model of computation just isn’t very natural from a theoretical point of view. I would favour Post machines: their rewrite rules have a direct interpretation in terms of inference rules of logic.

I also theorize, re the P vs. NP problem, that you don’t gain anything by building a secondary data structure — I model this idea as the conjecture that NP = DSPACE(O(n)). The difference between the 2SAT and 3SAT problems is that 2SAT has natural rewrite rules for its solution, e.g. from P\/Q and ~Q\/R conclude P\/R, whereas 3SAT does not (as far as anyone knows).
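
The rewrite-rule observation can be made concrete: resolving two 2-clauses always yields a clause of size at most 2, so resolution saturation on a 2SAT instance only ever generates O(n²) distinct clauses, which is why the procedure runs in polynomial time. A toy sketch (assumes the input clauses are not themselves tautologies; UNSAT is detected when the empty clause is derived):

```python
from itertools import product

def resolve(c1, c2):
    """All resolvents of two clauses (literals are nonzero ints; -v negates v)."""
    out = []
    for lit in c1:
        if -lit in c2:
            r = (c1 | c2) - {lit, -lit}
            if not any(-l in r for l in r):  # drop tautologies like {v, -v}
                out.append(frozenset(r))
    return out

def two_sat_satisfiable(clauses):
    """Saturate resolution on 2-clauses; only O(n^2) clauses can ever appear,
    and the instance is UNSAT iff the empty clause is eventually derived."""
    known = {frozenset(c) for c in clauses}
    while True:
        new = set()
        for c1, c2 in product(known, repeat=2):
            for r in resolve(c1, c2):
                if r not in known:
                    new.add(r)
        if frozenset() in new:
            return False
        if not new:
            return True
        known |= new

# (x1 or x2), (-x1 or x2), (x1 or -x2), (-x1 or -x2) forces x2 and -x2 -> UNSAT
print(two_sat_satisfiable([{1, 2}, {-1, 2}, {1, -2}, {-1, -2}]))  # False
print(two_sat_satisfiable([{1, 2}, {-1, 2}]))                     # True
```

For 3SAT the analogous saturation blows up, since resolving 3-clauses can produce 4-clauses and beyond, which is exactly the asymmetry the comment points at.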

125. Ben Standeven Says:

But CNF-SAT also has natural rewrite rules for its solution, and is still NP-complete. Now the size of the clauses will grow with each resolution for CNF-SAT; but the same is true of Horn-SAT or XOR-SAT, and those problems are in P.

So this property looks more like a distinction between P and NL.
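
For comparison, the reason XOR-SAT sits in P is Gaussian elimination over GF(2). A small sketch (equations encoded as variable bitmasks with a parity bit; this only decides consistency, it doesn’t extract a satisfying assignment):

```python
def xorsat_satisfiable(equations):
    """Each equation is (mask, parity): XOR of the variables in the bitmask
    equals parity. Gaussian elimination over GF(2); an inconsistency is a
    row that reduces to the impossible equation 0 = 1."""
    basis = []  # rows in insertion order; each remembered with its pivot bit
    for mask, parity in equations:
        for b_mask, b_parity in basis:
            pivot = b_mask & -b_mask      # lowest set bit of the basis row
            if mask & pivot:
                mask ^= b_mask
                parity ^= b_parity
        if mask:
            basis.append((mask, parity))
        elif parity:
            return False                  # derived 0 = 1: inconsistent
    return True

# x1^x2 = 1, x2^x3 = 1, x1^x3 = 0 (e.g. x1=1, x2=0, x3=1) -> satisfiable
print(xorsat_satisfiable([(0b011, 1), (0b110, 1), (0b101, 0)]))  # True
# x1^x2 = 0 together with x1^x2 = 1 -> unsatisfiable
print(xorsat_satisfiable([(0b011, 0), (0b011, 1)]))              # False
```

Horn-SAT falls to a similarly simple polynomial-time procedure (unit propagation), which is why growing clause sizes alone can’t be what separates the easy cases from the hard ones.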