MIP*=RE

Another Update (Jan. 16): Yet another reason to be excited about this result—one that somehow hadn’t occurred to me—is that, as far as I know, it’s the first-ever fully convincing example of a non-relativizing computability result. See this comment for more.

Update: If you’re interested in the above topic, then you should probably stop reading this post right now, and switch to this better post by Thomas Vidick, one of the authors of the new breakthrough. (Or this by Boaz Barak or this by Lance Fortnow or this by Ken Regan.) (For background, also see Thomas Vidick’s excellent piece for the AMS Notices.)

Still here? Alright, alright…

Here’s the paper, which weighs in at 165 pages. The authors are Zhengfeng Ji, Anand Natarajan, my former postdoc Thomas Vidick, John Wright (who will be joining the CS faculty here at UT Austin this fall), and my wife Dana’s former student Henry Yuen. Rather than pretending that I can provide intelligent commentary on this opus in the space of a day, I’ll basically just open my comment section to discussion and quote the abstract:

We show that the class MIP* of languages that can be decided by a classical verifier interacting with multiple all-powerful quantum provers sharing entanglement is equal to the class RE of recursively enumerable languages. Our proof builds upon the quantum low-degree test of (Natarajan and Vidick, FOCS 2018) by integrating recent developments from (Natarajan and Wright, FOCS 2019) and combining them with the recursive compression framework of (Fitzsimons et al., STOC 2019).
An immediate byproduct of our result is that there is an efficient reduction from the Halting Problem to the problem of deciding whether a two-player nonlocal game has entangled value 1 or at most 1/2. Using a known connection, undecidability of the entangled value implies a negative answer to Tsirelson’s problem: we show, by providing an explicit example, that the closure C_qa of the set of quantum tensor product correlations is strictly included in the set C_qc of quantum commuting correlations. Following work of (Fritz, Rev. Math. Phys. 2012) and (Junge et al., J. Math. Phys. 2011), our results provide a refutation of Connes’ embedding conjecture from the theory of von Neumann algebras.

To say it differently (in response to a commenter’s request), some of the major implications are as follows.

(1) There is a protocol by which two entangled provers can convince a polynomial-time verifier of the answer to any computable problem whatsoever (!!), or indeed that a given Turing machine halts.

(2) There is a two-prover game, analogous to the Bell/CHSH game, for which Alice and Bob can do markedly better with a literally infinite amount of entanglement than they can with any finite amount of entanglement.

(3) There is no algorithm even to approximate the entangled value of a two-prover game (i.e., the probability that Alice and Bob win the game, if they use the best possible strategy and as much entanglement as they like; a formal definition is given below). Instead, this problem is equivalent to the halting problem.

(4) There are types of correlations between Alice and Bob that can be produced using infinite entanglement, but that can’t even be approximated using any finite amount of entanglement.

(5) The Connes embedding conjecture, a central conjecture from the theory of operator algebras dating back to the 1970s, is false.

Note that all of these implications—including the ones for pure math and the foundations of quantum physics—were obtained using tools that originated in theoretical computer science, specifically the study of interactive proof systems.
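
For reference, here is the standard definition behind implication (3): for a two-prover game G with question distribution π and winning predicate V, the entangled (tensor-product) value is

\[ \omega^*(G) \;=\; \sup_{|\psi\rangle,\,\{A^x_a\},\,\{B^y_b\}} \;\sum_{x,y} \pi(x,y) \sum_{a,b} V(a,b\mid x,y)\, \langle\psi| A^x_a \otimes B^y_b |\psi\rangle, \]

where the supremum ranges over finite-dimensional shared states \(|\psi\rangle\) and local measurement operators \(\{A^x_a\}\) for Alice and \(\{B^y_b\}\) for Bob. The new result says that even distinguishing \(\omega^*(G)=1\) from \(\omega^*(G)\le 1/2\) is as hard as the halting problem.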

I can remember when the class MIP* was first defined and studied, back around 2003, and people made the point that we didn’t know any reasonable upper bound on the class’s power—not NEXP, not NEEEEXP, not even the set of all computable languages. Back then, the joke was how far our proof techniques were from what was self-evidently the truth. I don’t remember a single person who seriously contemplated that two entangled provers could convince a polynomial-time verifier that an arbitrary Turing machine halts.

Still, ever since Natarajan and Wright’s NEEXP in MIP* breakthrough last year, all of us in quantum computing theory knew that MIP*=RE was a live possibility—and all through the summer and fall, I heard many hints that such a breakthrough was imminent.

It’s worth pointing out that, with only classical correlations between the provers, MIP gives “merely” the power of NEXP (Nondeterministic Exponential Time), while with arbitrary non-signalling correlations between the provers, the so-called MIP_ns gives the power of EXP (Deterministic Exponential Time). So it’s particularly striking that quantum entanglement, which is “intermediate” between classical correlations and arbitrary non-signalling correlations, yields such wildly greater computational power than either of those two.
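
In symbols, the picture is now:

\[ \mathrm{MIP} = \mathrm{NEXP}, \qquad \mathrm{MIP}_{ns} = \mathrm{EXP}, \qquad \mathrm{MIP}^* = \mathrm{RE}. \]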

The usual proviso applies: when I’ve blogged excitedly about preprints with amazing new results, most have stood, but at least two ended up being retracted. Still, assuming this one stands (as I’m guessing it will), I regard it as easily one of the biggest complexity-theoretic (and indeed computability-theoretic!) surprises so far in this century. Huge congratulations to the authors on what looks to be a historic achievement.

In unrelated news, for anyone for whom the 165-page MIP* paper is too heavy going (really??), please enjoy this CNBC video on quantum computing, which features several clips of yours truly speaking in front of a fake UT tower.

In other unrelated news, I’m also excited about this preprint by Avishay Tal, which sets a new record for the largest known separation between quantum query complexity and classical randomized query complexity, making substantial progress toward proving a conjecture by me and Andris Ambainis from 2015. (Not the “Aaronson-Ambainis Conjecture,” a different conjecture.)

100 Responses to “MIP*=RE”

  1. R. Crist Says:

    A solution to the Connes Embedding Conjecture is as big a deal to operator algebraists as the actual result is to complexity theory. It’s interesting that two of the biggest problems in operator algebras (CEC and the Paving Problem) were solved by using equivalences to problems in other areas.

  2. Joshua B Zelinsky Says:

    With the disclaimer that I’ve only looked at the paper very briefly, it seems like not only have they proven that MIP* = RE, they’ve also proven that the Connes’ Embedding Conjecture is false. That’s surprising, although not incredibly so, since if I understand correctly, it was already known that the Embedding Conjecture implied an upper bound of RE on MIP*, so there already was a connection between the two problems.

  3. fred Says:

    Is there a short explanation of the consequences of this that can be understood by a casual computational complexity enthusiast? Or 165 pages is the Kolmogorov complexity and I’ll just move on?

  4. Bob Hearn Says:

    Interesting. You write “I don’t remember a single person who seriously contemplated that two entangled provers could convince a polynomial-time verifier that an arbitrary Turing machine halts… I regard it as easily one of the biggest complexity-theoretic (and indeed computability-theoretic!) surprises so far in this century.”

    Can you briefly explain why this is so shocking? You might recall my result with Erik Demaine that team games of imperfect information are undecidable, even when the games involve a fixed amount of state (i.e. are played on a finite board). To me this has the same flavor of “but wait, where did the infinite TM tape go?” that MIP*=RE does. It certainly shocked me at the time, anyway. Is it that same basic sense you are talking about, or something else?

  5. fred Says:

    Hi Scott,

    maybe a question that’s too personal, but I’d like your opinion on this…

    Would you encourage experts in the same field to marry each other? The upsides are obvious, but are there downsides?
    Like, the kids are confused that mom and dad talk and/or possibly argue so much about the entangled value of non-local two player games? 😛

  6. Scott Says:

    fred #3: Did you try reading just the abstract and intro? Trust me, it will be a lot easier than the full 165 pages. 🙂

    Some of the major implications are:

    (1) There is a protocol by which two entangled provers can convince a polynomial-time verifier that a given Turing machine halts (or of the answer to any computable problem whatsoever).

    (2) There is a two-prover game, analogous to the Bell/CHSH game, for which Alice and Bob can do markedly better with a literally infinite amount of entanglement than they can with any finite amount of entanglement.

    (3) There is no algorithm even to approximate the entangled value of a two-prover game (i.e., the probability that Alice and Bob win the game, if they use the best possible strategy and as much entanglement as they like). Instead, this problem is equivalent to the halting problem.

    (4) The Connes embedding conjecture, a central conjecture from the theory of operator algebras dating back to the 1970s, is false.

  7. Scott Says:

    Bob Hearn #4: One reason why it’s shocking—at least relative to everyone’s thinking back in the day—is that the original expectation was that, if anything, entanglement would only decrease the power of multi-prover interactive proof systems. Indeed, there are well-known examples of proof systems that are sound with unentangled provers but become unsound when the provers are entangled. So it was a major advance when, in 2012, Ito and Vidick at least showed that the proof of MIP=NEXP could be “immunized” against entangled provers (yielding NEXP⊆MIP*)—with no thought at the time of getting even more power than you did classically! But now it turns out that, not only do you get the same vast power as classically (NEXP), not only do you get even a little bit more power (QMAEXP), not only do you get a lot more power (NEEXP), you go all the way up to the friggin’ halting problem!

    Was any similar progression followed in the example you mentioned, involving games with imperfect information?

  8. Scott Says:

    fred #5:

      Would you encourage experts in the same field to marry each other? The upsides are obvious, but are there downsides?
      Like, the kids are confused that mom and dad talk and/or possibly argue so much about the entangled value of non-local two player games?

    Uhhh … whatever might be the downsides of marrying someone in the same field as you, they don’t include your kids being confused by technical talk. Married couples don’t typically want or expect their kids to understand every last thing that they say to each other.

    Meanwhile, the upsides include never having to explain things like conference deadlines to your spouse. Also, being able to travel to many conferences with your spouse, and being able to tolerate your spouse’s work friends because many of them are also your work friends.

    (For context, Dana and I are not in exactly the same field—I do quantum while she does classical complexity theory, especially PCP and derandomization. But yeah, it’s close enough that we have many of the same friends, which indeed is how we met in the first place.)

  9. James Giammona Says:

    Hi Scott,
    In a previous comment to a different post (https://www.scottaaronson.com/blog/?p=3678#comment-1763790), you mentioned:
    “Connes’ very abstract-looking embedding conjecture for C* algebras is actually equivalent to the natural statement that any correlations allowed by quantum mechanics can be arbitrarily well approximated using a finite number of entangled qubits”.

    Does this conjecture being false imply that some QM correlations cannot be approximated by a finite number of entangled qubits? (Is that surprising or non-intuitive?) How is this different from (or related to) the case that some Hamiltonians cannot be efficiently approximated?

  10. RandomOracle Says:

    To add to why this result is so surprising, if we go in either direction in terms of strengthening or weakening the correlations between the provers we get complexity classes that are strictly weaker than RE.

    Specifically, if the provers share classical correlations the resulting class is MIP which we know is equal to NEXP. If they share general non-signalling correlations we get MIP_ns which is contained in EXP.

    It’s really remarkable that in the case of quantum correlations the resulting class is ridiculously more powerful than either of these two.

  11. Scott Says:

    James Giammona #9: Yes, that’s also an implication. And yes, it’s also extremely surprising, at least relative to what the consensus had been until a year or two ago (when we started to get more and more indications that the consensus was wrong). I’ll add something about that to the main post.

    I’m not sure what result you were referring to with “some Hamiltonians can’t be efficiently approximated,” but whatever it is, it’s almost certainly very different. For one thing, note that with the new result we’re not talking about efficient approximation, but simply about approximation with any finite number of qubits!

  12. Gerard Says:

    Scott #6

    “(1) There is a protocol by which two entangled provers can convince a polynomial-time verifier that a given Turing machine halts (or of the answer to any computable problem whatsoever).”

    For someone who is very uninformed about things like interactive proofs and quantum complexity theory, a naive reading of this would seem to mean something like “2 quantum computers working together with a classical computer can solve the halting problem”. I’m pretty sure that’s not what you mean to say here, so I’m hoping you can clarify/elaborate on this statement a bit.

    More generally, do you think this result will have practical implications or is it purely theoretical?

  13. domotorp Says:

    Does this result have any implications for classical multi-prover protocols? For example, if there are two Provers who can communicate with each other, but only under certain constraints, would the resulting class equal RE?

  14. Scott Says:

    RandomOracle #10: Thanks for the reminder! That’s an extremely relevant point that I’ll also add to the main post.

  15. Mateus Araújo Says:

    I’m shitting bricks here. I never thought I’d see this problem being solved in my lifetime, and I was sure that Tsirelson’s problem had a positive answer: that of course the closure of the set of finite dimensional correlations must be the set of infinite dimensional correlations, because Nature is in some vague sense fundamentally finite. Oh well, it’s still possible that there is a mistake somewhere 😉

  16. Scott Says:

    Gerard #12:

      For someone who is very uninformed about things like interactive proofs and quantum complexity theory, a naive reading of this would seem to mean something like “2 quantum computers working together with a classical computer can solve the halting problem”. I’m pretty sure that’s not what you mean to say here, so I’m hoping you can clarify/elaborate on this statement a bit.

    The fact that these are two provers convincing a verifier is essential here. You can’t simplify that away, as you’re trying to above, without completely destroying the meaning.

    The setting you should imagine is that there are two quantum computers with no bound whatsoever (not even “less than infinity”) on their power, how many qubits they can have, how long they can run for, etc. So then, you might say, it’s no surprise that they have superpowers like solving the halting problem!

    But crucially, the QCs are not trusted. They can say whatever they want, but why should a classical polynomial-time verifier believe them?

    The whole point of the new result is that, as long as the two QCs share an unlimited amount of entanglement, they can convince the verifier, in time polynomial in the length of a program, that they executed that program correctly—no matter how astronomical the program’s running time.
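
    To make the two-prover setup concrete, here’s a toy referee for the CHSH game mentioned in the post, written in Python (this is just the interaction shape, of course, not the new protocol). The provers are callables that each see only their own question and can’t communicate:

      import random

      def chsh_referee(prover_a, prover_b, rounds=10000):
          # The verifier sends independent random bits x, y to the two
          # isolated provers; they win a round iff a XOR b == x AND y.
          # Classical strategies win at most 75% of rounds; sharing
          # entanglement raises this to about 85%; nothing reaches 100%.
          wins = 0
          for _ in range(rounds):
              x, y = random.randint(0, 1), random.randint(0, 1)
              a, b = prover_a(x), prover_b(y)
              if (a ^ b) == (x & y):
                  wins += 1
          return wins / rounds

      # An optimal classical strategy: both provers always answer 0 (~0.75).
      print(chsh_referee(lambda x: 0, lambda y: 0))

    The new result concerns games of exactly this shape, just with the questions, answers, and acceptance test made vastly more elaborate.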

      More generally, do you think this result will have practical implications or is it purely theoretical?

    As mentioned above, executing the protocol in interesting cases would require two astronomically-powerful QCs that share an astronomical amount of entanglement. For that reason alone, direct applications seem unlikely. As for indirect applications, who knows? When MIP=NEXP itself was first proved (by Babai, Fortnow, and Lund around 1990), it may have seemed comically remote from any applications. But it led directly to the PCP Theorem, which became the central tool for making statements about the inapproximability of practical NP-hard problems.

  17. Scott Says:

    domotorp #13: If the provers were limited to communicating with each other in all and only the ways that perfectly simulated quantum entanglement, then yes, I suppose. 😀 But AFAIK the result is very specific to entangled provers.

  18. Mateus Araújo Says:

    Maybe it’s a good idea to pre-emptively clear up a misconception that always crops up when physicists discuss this subject: “How can it be undecidable to determine whether the Tsirelson bound of a bipartite Bell inequality is 1 or at most 1/2? We already have an algorithm for approximating it, the NPA hierarchy!”

    No, we don’t. The NPA hierarchy is not an algorithm, because algorithms need to halt after a finite number of steps, and the hierarchy does not; it goes on forever. Every fixed level of the hierarchy is an algorithm, and sometimes we know that the hierarchy has converged and we don’t need to go to higher levels, and often it gives us very good approximations, but this is not the case in general, and cannot be. In general we only know that the hierarchy converges (to the commuting set) in the limit of infinitely many levels.

    This result implies that there can’t be an algorithm for determining the Tsirelson bound, so in a sense the NPA hierarchy is the best we can hope for.
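
    To illustrate in code what “not an algorithm” means here (a schematic Python sketch; npa_upper_bound is a hypothetical stand-in for solving the level-k semidefinite program, which in practice one would hand to an SDP solver, e.g. via a package like ncpol2sdpa):

      import itertools

      def npa_upper_bound(game, k):
          # Hypothetical stand-in: solve the level-k NPA relaxation and
          # return its value, a valid upper bound on the commuting value.
          raise NotImplementedError

      def npa_hierarchy(game):
          # Each fixed level k is an algorithm, and the bounds converge
          # to the commuting value as k grows...
          for k in itertools.count(1):
              yield npa_upper_bound(game, k)
          # ...but there is no stopping rule telling us how large k must
          # be for a given accuracy, so the hierarchy as a whole never halts.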

  19. Scott Says:

    Mateus #15:

      I was sure that Tsirelson’s problem had a positive answer: that of course the closure of the set of finite dimensional correlations must be the set of infinite dimensional correlations, because Nature is in some vague sense fundamentally finite.

    But you can still believe that “Nature is in some vague sense fundamentally finite”! The upshot is simply that, if Nature had involved infinite-dimensional Hilbert spaces, then that could in principle have been testable in a more direct way than you would’ve thought a priori. But it’s not like we know how to (attempt to) apply the infinite-dimensional unitaries anyway, in order to do the requisite experiment.

  20. Scott Says:

    Mateus #18: Yes, thanks for that!

  21. James Giammona Says:

    Scott #11, thanks for answering my questions! Your answers were quite helpful.
    I was referring to the cost of simulating some quantum systems with a quantum computer. My understanding was that certain quantum systems could not be efficiently simulated on a quantum computer (i.e. would take exponential time or space resources) (perhaps that’s not a proven result?).
    Your explanation that this is about approximation in general and not about efficient approximation is a great clarification.

  22. Tobias Fritz Says:

    Scott #19:

    “But it’s not like we know how to (attempt to) apply the infinite-dimensional unitaries anyway, in order to do the requisite experiment.”

    Not only that, but it is also questionable whether there exists any physical system that could realize the relevant commuting operators with spacelike separation, even in principle.

    In fact, suppose that AQFT is a suitable mathematical framework for modelling physical reality. Then the split property, which is expected to hold for all sensible QFTs, implies that all spacelike commuting operators can always be realized on separate tensor factors while generating the same algebra. See Theorem 4.1 in this paper. This suggests that all physically realizable quantum correlations with spacelike separation are consistent with the tensor product paradigm.

    So while this paper is obviously an enormous achievement and a fantastic breakthrough for complexity theory and operator algebras, I think that there is reason to be skeptical about claims that it is relevant to the foundations of physics.

  23. Scott Says:

    James #21: Sure, there are quantum systems that can’t be efficiently simulated on a QC. But for that very reason, to whatever extent you believe in the “Quantum Extended Church-Turing Thesis” (QECT), those systems should be of limited relevance to Nature. And for all its revolutionary implications, the new breakthrough on MIP* has nothing to say one way or the other about the validity of the QECT.

  24. Mateus Araújo Says:

    Scott #19: That’s no consolation at all, having an actual experimental result to rub in my face how wrong I was. Maybe I deserve that for making fun of Miguel Navascués for believing that such an experiment must exist.

    It is still possible, of course, that the unitaries necessary to do the experiment are unphysical in a strong way, such as needing infinite energy or time to be applied. Until somebody shows that, I have to accept that, as far as we know, the experiment is possible.

  25. Mateus Araújo Says:

    Lol, I checked the paper (section 12.4), and their “explicit example” is clearly using mathematician values of “explicit”. We are still far from knowing whether the experiment is physically possible.

  26. Sebastian Oberhoff Says:

    One further point that I think is worth highlighting is that while it’s now possible to convince the verifier that a halting computation does indeed halt, it’s not the case that one can convince the verifier that a non-halting computation doesn’t halt.

    When I first heard of this result I thought: “Whoa, so by conversing with entangled, omniscient provers I could become convinced of the consistency of ZFC, effectively overcoming the Second Incompleteness Theorem!” Alas, this is not the case.

  27. Scott Says:

    Mateus #24: Does the comment of Tobias Fritz (#22) assuage your worries?

  28. Gerard Says:

    Scott #16

    “The setting you should imagine is that there are two quantum computers with no bound whatsoever (not even “less than infinity”) on their power, how many qubits they can have, how long they can run for, etc. So then, you might say, it’s no surprise that they have superpowers like solving the halting problem!”

    I still find that surprising. As far as I know, there are no bounds in the usual Turing Machine model either, yet no TM can solve the halting problem.

    “The fact that these are two provers convincing a verifier is essential here.”

    What is it about the provers that establishes their separateness? I mean, why couldn’t one simulate the entire configuration using just a single QTM?

  29. Scott Says:

    Gerard #28: No, you’re still not getting it. The provers can even run for a literally infinite amount of time. Thus, if they didn’t also have to convince the verifier, there would be no issue whatsoever.

    Sure, whatever resources the provers end up needing, you can simulate them using similar resources running on a single machine. (With the proviso that the “resources” in question might be literally infinite!)

    But again, the relevant question here is why the verifier is convinced that the provers didn’t cheat. That’s where the assumption of no communication between the provers (only entanglement between them) becomes essential.

  30. Mateus Araújo Says:

    Scott #27: I’ll have to read the papers he linked in order to draw a conclusion (after all AQFT is not a suitable mathematical framework for modelling physical reality), but Fritz’s word is enough to let me sleep tonight.

    I’m wondering about a different problem, though. We know that the tensor product Tsirelson bound is decidable (as it can be approximated from below by just Tarskiing it, the most horrible algorithm I can imagine). Since now we know that the NPA hierarchy does not converge to it from above, but to the commuting Tsirelson bound, shouldn’t there be an algorithm that does converge to it from above? And since it is a decidable number, can’t we hope for the algorithm to be something nicer than the NPA hierarchy? (the fact that the algorithm for approximating it from below is anything but nice maybe indicates that this is a misguided hope, or simply that nobody tried hard enough yet)

  31. Boaz Barak Says:

    Tobias Fritz #22 – I am not a physicist, so maybe I’m completely off base, but isn’t your comment an example of why the result could potentially have applications to physics?

    Not in the sense that it has such applications now, but that it opens some possibilities that would not have existed had the Connes embedding conjecture been true. Specifically, suppose you had two physical theories that disagreed with one another only in that one allowed for these more general types of operators. Before this result, it might have seemed that even in principle, one would not be able to design an experiment that could distinguish between the two theories, but now based on this result we could potentially design such an experiment. This is similar to how Bell’s inequalities can be used to design an experiment to rule out local hidden variables theories.

  32. William Slofstra Says:

    Mateus @30: While we can compute a sequence that converges to the tensor-product value from below, it’s not an algorithm for the same reason the NPA hierarchy isn’t an algorithm: we don’t have any way to know how close we are to the true value. This new result doesn’t just show that MIP*=RE, it shows that two-prover one-round MIP* = RE, which is equivalent to showing that the tensor-product value of a nonlocal game is not computable, in the sense that there’s no way to decide whether the tensor-product value of a game is equal to 1, or less than 1/2. So not only does the NPA hierarchy not converge to the tensor-product value, there is no computable sequence which converges to the tensor-product value from above.

    We don’t know yet whether it’s possible to have a computable sequence which converges to the commuting-operator value from below. If MIP* = RE, it seems likely that MIP^co, the analogous class where the provers can use commuting operator strategies, could be equal to coRE. This is still open (see the open questions section of the paper), so it will be interesting to see if it can be done with the same techniques.

  33. Tobias Fritz Says:

    @Boaz Barak #31: that’s an interesting point. But if we have two physical theories that disagree with each other, and they are both concrete enough to make predictions, then they presumably will already make different predictions in much simpler situations. It follows that we will have easier ways to decide which one is wrong. More precisely, we could use both theories to model our proposed Bell test experiment, and if they give different predictions there, then of course they must already differ in the probabilities predicted for one particular choice of setting. Thus we don’t actually need to do the full Bell test: merely doing the relevant measurements with that particular choice of setting is enough.

    But what about the case where the theories are not concrete enough to make predictions, but are rather metatheoretical frameworks such as (A)QFT? In that case, it seems very unlikely that we’d be able to realize a Bell test of the relevant kind anyway; we need a concrete model of our apparatuses in order to determine how to implement the relevant measurements and state preparation. In other words, unless we get extremely lucky by doing something random, a pure metatheoretical framework is not enough to tell us what to do. So in this case, too, the Bell test doesn’t help.

    For a simpler version of the same argument, consider just a plain Bell test. Even without performing it, we know that either it will disprove local hidden variables, or it will disprove quantum theory. If it did the latter, we presumably would already have noticed a deviation from quantum theory in our modelling of the experiment’s individual components. Hence the only physical insight we can hope to obtain by actually conducting a Bell test is to exclude wacky theories in which physics conspires against us by suddenly changing its behaviour as soon as we conduct a Bell test.

  34. Greg Kuperberg Says:

    Joshua #2 – They have disproved the Connes embedding conjecture, but that’s not a separate result. MIP* with tensorable entanglement is in RE, while MIP* with commuting operator entanglement is in coRE. The Connes conjecture is (or was) that tensorable entanglement approximates all commuting operator entanglement. The Connes conjecture would imply that MIP* is in RE ∩ coRE = R, which is of course inconsistent with RE-completeness by Turing’s theorem.
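
    In one line: Connes \(\Rightarrow C_{qa} = C_{qc} \Rightarrow \mathrm{MIP}^* \subseteq \mathrm{RE} \cap \mathrm{coRE} = \mathrm{R}\), contradicting \(\mathrm{MIP}^* = \mathrm{RE}\).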

  35. William Slofstra Says:

    Some sloppiness in my previous comment: showing MIP*(2,1) = RE is equivalent to showing that computing the tensor-product value is as hard as the halting problem. (Otherwise you might not get all of RE.)

  36. Scott Says:

    Boaz #31: Yes, the prospect of an “experimental test of the finite-dimensionality of the universe’s state space” was also one of the things that excited me about this line of research! But we should be clear that we don’t know, even in principle, how to apply the test in question (which would require specific infinite-dimensional unitary operators)—even assuming the universe’s Hilbert space is infinite-dimensional. And this is a genuine difference between this result and the Bell inequality. (A second difference, of course, is that the Bell inequality is easy to prove. 🙂 )

  37. Joshua B Zelinsky Says:

    Greg #34,

    Yes, that’s a really good point. The connection here to previous work involving the conjecture is substantially more direct than I had initially realized when I started reading it.

  38. Daniel Harlow Says:

    An aspect of this which is naively puzzling to me is as follows: for these astronomical complexity classes, why should quantum mechanics be helpful? After all, BQP is contained in PSPACE, so if we don’t even care how long it takes things to run, then can’t we just simulate all quantum physics classically (or maybe classically plus a random number generator)?

  39. Boaz Barak Says:

    Tobias #33, Scott #36: Let me try to explain better what I mean.
    Bell’s Inequality experiments have 3 components as far as I can tell:

    1) You prove that a certain experiment will always succeed with probability at most p if the parties involved are modeled using classical hidden variables.

    2) You prove that quantum mechanics allows in principle a strategy that will succeed with probability p’, which is strictly larger than p.

    3) You find a concrete implementation that manages to succeed in implementing this strategy.

    I think the formal Bell inequality is just steps 1 & 2, and then to make an actual experiment you need to do step 3, which is to actually show how you can realize the unitaries that are needed. Similarly, the new result can be thought of as somewhat analogous to steps 1 & 2 – it shows that there is an experiment that could be realized with commuting operators but not tensorizing operators (I hope I’m using the right notation).

    It doesn’t give step 3 – a concrete implementation of an experiment which actually achieves a value better than that achievable using tensorizing operators. Step 3 would inherently depend on the particular infinite-dimensional theory (this is a little bit like the question of efficiency of the prover in classical interactive proofs), but at least now there is a “fighting chance” of coming up with such an experiment.

    Perhaps the biggest difference between Bell’s inequality and this result is the conceptual framing. In Bell’s inequality, we actually believe in the more general theory (i.e., quantum mechanics) as opposed to the more restricted one (i.e., hidden variables). In contrast, if you believe in some theories such as AQFT then (if I understood Tobias) you actually believe that the set of realizable distributions is more restricted than the one that is allowed by all commuting operators.

    Still, I do think it’s interesting because it says that if you believe in AQFT then you are actually positing a “law of nature”: it is a nontrivial restriction on the set of realizable distributions. It might be interesting because if you are already positing this law of nature, you are already on a “slippery slope” and may as well take Aaronson’s position and not just rule out the distributions that could arise only via these infinite-dimensional states but also the ones that could only arise using exponentially complex computation.

  40. Scott Says:

    Boaz #39: Yes, I don’t think we disagree about anything here! I would only add that, for me, saying that MIP*=RE is “missing only the concrete implementation aspect,” somewhat understates the magnitude of the difficulty. 🙂

    Let’s say it plainly: to implement the experiment we’re talking about would involve doing a specific quantum computation on a literally infinite number of qubits. So a skeptic might retort: if you actually had an infinite number of qubits available for acting on, then wouldn’t that fact alone already settle the infinite-dimensionality of the universe’s Hilbert space, with no need for further experiments?

    The reply to that is: well, maybe you don’t know how many qubits you have. Instead, you just have some black boxes with unknown (possibly infinite) numbers of qubits, together with finitely many knobs and measuring devices that act on the black boxes in ways that we assume to be consistent with the rules of QM. We can assume further that each knob has only finitely many settings, and each measuring device returns one of only finitely many outcomes.

    In such a case, yes: if the black boxes were of exactly the right sort, then the whole upshot of the MIP*=RE breakthrough is that the boxes’ observed behavior could certify the presence of infinitely many qubits inside them, to high statistical confidence. And this is conceptually amazing.

    From a “practical” standpoint, though, the trouble is that, to whatever extent it’s “physically plausible” to imagine worlds with infinitely many qubits at all, we almost certainly still wouldn’t be able to build the requisite black boxes in those worlds, at least not in any finite amount of time. The reason—physicists on the thread should tell me if I’m wrong!—is that accessing more and more qubits would almost certainly involve accessing higher and higher energy modes, but any doable experiment can marshal only finite energy.

    By contrast, as soon as Bell published his inequality, I think it was clear that, to whatever extent the world is described by quantum mechanics at all, in principle this experiment could be done. And that would still have been true, even if we were unlucky and the Bell experiment had turned out to be impractical for the next billion years, rather than doable in the next 20.

  41. Scott Says:

    Daniel Harlow #38:

      An aspect of this which is naively puzzling to me is as follows: for these astronomical complexity classes, why should quantum mechanics be helpful? After all, BQP is contained in PSPACE, so if we don’t even care how long it takes things to run, then can’t we just simulate all quantum physics classically (or maybe classically plus a random number generator)?

    That’s an excellent question … the asking of which is precisely what led everyone for 17 years to the wrong intuition that MIP* should be relatively weak. 😀

    The key point is that, in the definition of MIP*, there’s no a-priori upper bound on how many entangled qubits the provers are allowed to use. And the more entangled qubits they have, the longer it could take to simulate their strategy.

    Naïvely, this seems like just a technical problem. Surely there’s some theorem showing that, without loss of generality, the provers will never need more than exp(n) entangled qubits (or however many), in order to approximate whatever they can do with arbitrarily many entangled qubits, at least as far as protocols running in poly(n) time are concerned?

    Seemed reasonable … to me, and to nearly everyone else.

    But the upshot of the new paper is that it’s wrong. More entangled qubits let the provers do more, without limit. And so, for example, if you hand the verifier an n-bit program that runs for T steps before halting, then just by running a poly(n)-time protocol, the verifier can check that the provers must have shared at least T entangled qubits. (Or even a literally infinite number of qubits, if we’re willing to go outside the world of tensor-product Hilbert spaces.) Insane.

  42. Sniffnoy Says:

    So, some basic questions here. The paper discusses a number of sets of correlations. There’s C_c, C_q, C_qs, C_qa, and C_qc. These are all distinct (with C_c < C_q < C_qs < C_qa being already known, and C_qa < C_qc being the negative resolution of Tsirelson’s problem that the paper proves). One could also add in another set of correlations to this chain, the set of all non-signalling correlations (let’s call it C_ns).
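
    In symbols, the chain is \( C_c \subsetneq C_q \subsetneq C_{qs} \subsetneq C_{qa} \subsetneq C_{qc} \subsetneq C_{ns} \), with \( C_{qa} \subsetneq C_{qc} \) being the newly-proven strict inclusion.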

    Am I correct in inferring that you can define an MIP-analogue for any of these sets? Like, classical correlations C_c is MIP (which is equal to NEXP). MIP* (now shown to be RE) I gather is C_qs? And if you use C_qa, is that just also MIP* since C_qa is just the closure of C_qs? And then above it’s been discussed how C_qc gets you MIP^co which is speculated to be coRE? And C_ns gets you MIP_ns which is equal to EXP? And I guess if you allowed all correlations then that would include signalling correlations and so you’d just get IP (which is equal to EXP)?

    So, questions:

    1. Do I have all this right? MIP* comes from C_qs, right? So does C_qa in fact get you the same thing? And what does using just C_q get you? Is that also MIP*?

    2. Has there been any study of this for weirder sets of correlations?

    3. So, a check of my understanding here… the paper shows that Tsirelson’s problem resolves in the negative, i.e., shows that C_qa < C_qc. But a bunch of the discussion above (like Scott #40) talks about distinguishing a finite number of qubits from an infinite number. Isn’t that C_q vs C_qs instead? Or what’s going on here? Is Scott’s informal characterization correct, but it’s not C_q vs C_qs, but rather some other thing that can informally be described the same way and indirectly leads to C_qa ≠ C_qc?

  43. Mateus Araújo Says:

    Slofstra #32: I’m confused. What I have in mind is the Tarski algorithm (which is an actual algorithm). We parametrize the state and observables for some dimension d, and we’ll have a formula over the reals (the complex numbers drop out because the expectation values are always real) with a lot of parameters, but that boils down to \(\exists |\psi\rangle,\{A_i\},\{B_i\};v(G)=k\). We can then do quantifier elimination to solve this problem, and then do a binary search over k. This shows that for every fixed d the set is decidable.

    Ok, now I see the problem: we can always increase d, and we can’t know when to stop, so the “algorithm” doesn’t terminate, just like the NPA hierarchy. Damn.

    It is still a valid question, though, whether we can have a tensor-product analogue of the NPA hierarchy, one that converges to the tensor-product value from above. It wouldn’t be an algorithm, but it would still be better than what we have now, which is absolutely nothing.

    (I was writing under the assumption that MIP^* was defined for the commuting strategies, so it wouldn’t be a contradiction for the tensor-product Tsirelson bound to be decidable. It is not, though, as you noticed they used the “correct” definition for MIP^*. I’ll just be a pessimist and conjecture that the commuting Tsirelson bound is also undecidable)

  44. gentzen Says:

    In discussions triggered by the “MIP* contains NEEXP” paper, asdf exclaimed: “Something still seems paradoxical.” I wonder whether seeing this result as on par with other paradoxes, like the Banach-Tarski paradox, would be a valid perspective.

    An oracle for the halting problem could lie (by claiming that some non-halting computation would actually halt), and there is no protocol allowing “a mere Turing machine with infinite patience” to catch it lying. But for two oracles with infinite entanglement, there is a protocol that overcomes this barrier. This is sort of unbelievable.

    Joel David Hamkins once asked: What would it take in a book to convince a rational person that it had been written by or directly inspired by a god? I guess there is no way for the book to do that trick. But now imagine two books with infinite entanglement…

  45. Scott Says:

    gentzen #44: Yes, it’s “sort of unbelievable.” Two quibbles though:

    (1) A prover, which does whatever it needs to do to get you to accept, is a fundamentally different type of thing from an oracle, which just unfailingly answers a certain kind of question. If oracles are gods, maybe it would help to think of provers as trickster angels or devils? 🙂

    (2) Personally, I’d already be convinced of godlike powers if an entity could reliably solve NP-complete problems (eg, finding short proofs of theorems whenever they exist), or PSPACE-complete problems (which it could demonstrate via an interactive protocol). The halting problem would be massive overkill.

  46. Mateus Araújo Says:

    Sniffnoy #42: About your question 3, this is indeed confusing. For practical purposes, all that matters are the sets C_qa and C_qc, as C_qa is the closure of C_q (and also the closure of C_qs), and nobody wants to work with an open set.

    Now, despite C_qa explicitly coming from infinite-dimensional states, people still refer to it as “finite”, because you can approximate C_qa arbitrarily well with correlations produced from finite-dimensional states. C_qc, on the other hand, cannot be approximated with correlations from finite-dimensional states (as we now know), so it is literal, unapproachable infinity.

  47. Mateus Araújo Says:

    Scott #40: I actually find it conceivable (although very implausible) that Nature does give us for free such black boxes with an infinite number of qubits that are infinitely entangled. The vacuum state of QFT is, after all, entangled, even though it is a small amount of entanglement that is very hard to use for a Bell inequality violation.

    What I don’t find conceivable is that we could actually implement the unitaries necessary to manipulate these infinitely complex systems. As you point out, that would probably require infinite energy, and some other infinite resources as well.

  48. Boaz Barak Says:

    Scott #40: thank you – I didn’t realize that the energy would blow up and that’s a very significant point.

    The one point that might still make the result potentially useful is that it’s not merely a separation but a reduction. That is, rather than merely refuting the Connes embedding conjecture by a single counterexample, they actually show a general recipe to give a protocol for every problem in RE. That might give some hope that there is more than one set of “black boxes with knobs” that could be used in their result. In any case, the situation is of course very different from Bell: I’m not a physicist, but my understanding is that there aren’t many natural theories with these black boxes lying around.

    On a different note, can this result be used to give quantum two-prover “proofs of work” or “delegating computation” where the verifier complexity is an absolute constant which is completely independent of the complexity of the computation but only depends on the error probability? That is, can we have an absolute constant-time verifier that can be used to certify that you spent T units of computation for an arbitrarily large T, or where you can certify to it the result of computing F(x) for F in TIME(T)? I guess this is the question of whether, in the case that the Turing machine does halt in time T, the prover strategy is computable in time polynomial in (or even any function of) T?

  49. Tobias Fritz Says:

    Boaz #39: I agree with that very nice summary, but my point still stands: actual Bell experiments are not a good tool to tell us anything new about physical reality, regardless of which theory of physics is correct.

    The reason that Bell tests are not interesting as actual experiments is that we already know with close to certainty what the result will be. And why is that? It’s because, as in your step 3, we already need to have a good theoretical model of our experiment in order to design it [1]. Now suppose that we conduct a Bell test based on such a model, either a vanilla one or a fancy one of the kind that could distinguish between the commuting-operator paradigm and the tensor product paradigm. Then again we will already know the result before performing the experiment, simply because we’ve modelled it. And if our model is not completely correct, then there has to be a discrepancy between model and reality somewhere in the big table of probabilities that describes the Bell experiment. In other words, our model fails to be correct already for one particular choice of setting. So in order to witness this discrepancy, it’s enough to do the experiment with that choice of setting only, so that performing the whole Bell test is moot. Even worse, in all likelihood we would have noticed the failure of our model already much earlier, namely while working with the individual parts that make up our Bell experiment apparatus.

    This is all notwithstanding crazy conspiracy theories in which the laws of nature change as soon as we decide to conduct a Bell-type experiment. But anyone who entertains such theories also needs to explain why they nevertheless find it plausible to assume that the choices of setting are random and not correlated with the entangled state, since without this assumption a Bell test is once again useless. Thus this amounts to believing in a conspiracy and in the absence of a conspiracy at the same time.

    [1] Of course this applies to any experiment in physics. However, one characteristic of an interesting experiment is that it probes the limits of applicability of our model, like a particle accelerator probing higher energies than those at which we have confidence in our model. Perhaps you would argue that an experiment like we’re discussing here would probe the limits of complexity. This may or may not be the case, but I think that my point above applies regardless.

    ———

    I’m a bit worried that this discussion may be too much of a rabbit hole deflecting from the amazing result that has been achieved. So I’ll leave it at this for now, perhaps unless Scott encourages further discussion.

  50. fred Says:

    Scott #16

    “The whole point of the new result is that, as long as the two QCs share an unlimited amount of entanglement, they can convince the verifier”

    If two provers/QCs are sharing an “infinite amount of entanglement”, aren’t they effectively one prover/QC?

    How do you define the boundaries of a QC that’s sharing entanglement with another QC?
    We can trivially connect two classical computers to make a bigger machine (Moore’s law is one result of this). Given two QCs, by contrast, it’s difficult to join them to create a bigger QC; “sharing entanglement” seems to be one way to do it, gradually, no?

  51. mr_squiggle Says:

    re: (Not the “Aaronson-Ambainis Conjecture,” a different conjecture.)

    Have you considered calling this other thing the “Ambainis-Aaronson Conjecture”, or is that something else again?

    (I’m sorry, I love reading your blog, Scott, although I confess most of it goes over my head.)

  52. Scott Says:

    fred #50:

      If two provers/QCs are sharing an “infinite amount of entanglement”, aren’t they effectively one prover/QC?

    No, they’re not—for one thing, because they still can’t exchange messages with each other.

    The correlations that you can produce using quantum entanglement—yes, even an infinite amount of entanglement—are only a tiny subset of all the correlations that are a-priori possible. (And far from being some throwaway remark, the preceding sentence is at the core of a large fraction of the advances in quantum protocols and quantum foundations that we’ve seen in the last 15 years.)

  53. Greg Kuperberg Says:

    William #32 – Do you expect that there are groups that are not hyperlinear?

  54. Mateus Araújo Says:

    Fritz #22: Thanks for posting the link to arXiv:0812.1517, that was a delightful read. The key quote from that paper is “… the only known models in which even the distal split property does not hold are physically pathological models, such as models with noncompact global gauge group and models of free particles such that the number of species of particles grows rapidly with mass.”

    “Distal split property” is the property that algebras on space-like separated regions that are at least a distance d apart from each other have a tensor-product structure.

    Therefore, the problem is worse than I thought in #47 and #24; there I’m just complaining about essentially infinite-dimensional systems. Correlations in C_qc must come from algebras that fail this basic locality condition, and there isn’t even a non-pathological model where it fails! This is like somebody violating the Tsirelson bound because your spacetime has CTCs. Well, duh, but violating the Tsirelson bound is the least of your problems.

    Until someone comes up with a physical QFT that fails the (distal) split property we should assume that the set of physically possible correlations is C_qa, not C_qc.

  55. Mateus Araújo Says:

    Fritz #49: I’m sorry, I’m afraid your argument ignores the actual history of Bell experiments. The thinking at the time the earlier experiments were done (as recounted by Aspect and Clauser in several conferences) was that entanglement might well be just an idealisation, and that in reality it might collapse when we tried to spatially separate two entangled particles.

    How could you test that without measuring spatially separated observables? And how could you tell which basis the entangled state had collapsed to, without making measurements in different bases? It’s not as if there was any good model of this “entanglement collapses with distance” idea. At the very least one would need to measure an entanglement witness to be sure no separable state could have given rise to those results.

    Couple that with the hope that quantum mechanics could be replaced with a more fundamental deterministic theory (which was still taken seriously at the time), and you have a very strong justification for performing a Bell test.

  56. Thomas Vidick Says:

    Boaz #48, regarding “on a different note”: Yes, it is the case that if the TM M halts in T steps, a perfect winning strategy is computable in time poly(T).
    This said, if you do want the verifier to accept for runtimes at least T, and not below, then you need the verifier to be able to compute T. So I think its size will not be constant; it will be related to the size of a TM that computes T (i.e. the Kolmogorov complexity of T).
    Note our result doesn’t say anything about “runtime” of the provers. It would say that they need to have at least ~T EPR pairs, and make some measurement on all of them.

  57. William Slofstra Says:

    Another fun result to throw into the ring when thinking about speculative laws of nature is Coudron and Vidick’s result that if the provers’ algebras are only approximately commuting, rather than exactly commuting (more precisely, the commutator of Alice’s and Bob’s observables is bounded by some constant delta which decreases with the instance size), then the resulting complexity class MIP*_delta has a deterministic time upper bound which is doubly exponential in delta. So if we make delta small enough that it’s beyond what we could plausibly observe, there would still be a separation between MIP*_delta and MIP*.

    Mateus @ 43: The point is that for the RE-hard family of games given in the paper, we are promised that the tensor-product value is either equal to 1 or at most 1/2. So if we had a sequence converging to the tensor-product value from above, where (like the NPA hierarchy) the terms of the sequence are computable, and we compute the terms of this sequence while simultaneously computing the lower bounds given by the brute-force method, then eventually we will find either an upper bound below 1, or a lower bound > 1/2, giving us an algorithm for the halting problem. So no such computable sequence can exist.
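
    In pseudocode (upper_bound is the hypothetical computable sequence converging to the tensor-product value from above, and lower_bound is the computable brute-force lower bound; the whole point of the argument is that no such upper_bound can exist):

      import itertools

      def upper_bound(game, k):
          raise NotImplementedError  # hypothetical: cannot exist, as argued above

      def lower_bound(game, k):
          raise NotImplementedError  # computable in principle, by brute-force
                                     # search over finite-dimensional strategies

      def decide_halting(game):
          # For the RE-hard games in the paper, the tensor-product value is
          # promised to be either 1 or at most 1/2, so this dovetailed search
          # would always terminate on promise instances.
          for k in itertools.count(1):
              if upper_bound(game, k) < 1:    # value can't be 1, so it's <= 1/2
                  return False
              if lower_bound(game, k) > 0.5:  # value can't be <= 1/2, so it's 1
                  return True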

    However, since we don’t know yet whether MIP^co = coRE, your question is still a good one. You just have to turn it around and make it about lower bounds on the commuting-operator value.

  58. Tobias Fritz Says:

    Mateus #55: I can’t resist one further quick reply. You clearly know the history better than I do, but you’re attacking a straw man; I never claimed that we should not measure spatially separated observables. My claim is that doing that within the framework of a Bell experiment is not very interesting. Indeed, we’ll get more comprehensive and relevant results if we do a complete tomography of the putative entangled state, or even better we just measure all observables that are experimentally accessible (regardless of whether they’re themselves separable or entangled) and check whether these are compatible with our model. Of course we can then compute the value of a Bell inequality from that, but that’s just a mere byproduct of having probed the limits of our model. And it’s more informative at that, because the model may still be wrong even if the Bell inequality value conforms with our expectations.

  59. Sniffnoy Says:

    Mateus #46:

    Ah, thanks, that clarifies things! I didn’t realize that C_qa was also the closure of C_q. So we just have C_c (or the empty set?) generating MIP = NEXP; C_q, C_qs, or C_qa all generating MIP* = RE; C_qc generating MIP^co, speculated to equal coRE; C_ns generating MIP_ns = EXP; and all possible correlations generating IP = EXP. Yay!

  60. Mateus Araújo Says:

    Slofstra #57: I don’t see the argument. The lower-bounding algorithm might stay below 1/2 arbitrarily long, and the upper-bounding algorithm might stay at 1 arbitrarily long. What difference does it make having both? We still cannot have any upper bound on the running time of this “algorithm”.

  61. matt Says:

    Scott #41: you write “And so, for example, if you hand the verifier an n-bit program that runs for T steps before halting, then just by running a poly(n)-time protocol, the verifier can check that the provers must have shared at least T entangled qubits. (Or even a literally infinite number of qubits, if we’re willing to go outside the world of tensor-product Hilbert spaces.) Insane.” Suppose that the result in the paper had only been that “for any finite number T (i.e., as large as BB(n)), the verifier can check that the provers shared at least T entangled qubits”, what would have been the consequences? Would that still disprove Connes embedding? Presumably it would still imply uncomputability of the value of the game?

  62. Scott Says:

    matt #61: Interesting question! I know the Connes Embedding Conjecture implied that MIP* has a computable upper bound. So once you can solve the halting problem in MIP*, you’ve refuted Connes Embedding, full stop. On the other hand, there might be some work that needs to be done to get from “verifying that the provers share at least BB(n) entangled qubits” to “making them solve the halting problem for you.”

  63. Vitor Says:

    Gentzen #44 and Scott #45:

    It would take solving EXP-complete problems (at least!) to convince me that a prover is an actual divine being. Otherwise, my priors would massively favor the “mundane” explanation involving a secret proof of P = PSPACE by the NSA or whoever.

    Furthermore, there’s a technicality with this MIP* protocol that makes me think it can’t prove godhood. Let’s assume 2 beings claim they’re omniscient. You ask them to run the protocol on a hard problem, and they successfully do. This shows that they either have arbitrarily much computation power (and entanglement) available, or that they read the content of each other’s minds and can thus cheat on the protocol. Let’s say that you’re able to prevent any mundane ways of cheating, by only using your finite resources.

    In conclusion, they can convince you that they have at least some weird nonlocal mindreading available, but not actual omniscience. Note that I’m taking as a given that the MIP* protocol breaks down completely if its assumptions are violated in any way (I hope that’s reasonable, please correct me if I’m wrong).

    Followup question: is it possible to build a protocol that degrades gracefully when the no-communication assumption between the provers only holds approximately?

  64. Scott Says:

    Vitor #63: Yes, concluding the presence of the divine from empirical observations is a notoriously tricky business. 😀

    If some magic box seemed quickly to solve arbitrary instances of NP-complete problems before my eyes—to say nothing of EXP-complete problems or the halting problem!—then in practice, I’d probably want to start out with most of my probability mass on all sorts of mundane explanations, like ordinary stage magic (e.g., swapping out the instance I chose for a different, easier instance without my noticing), or even a drug-fueled hallucination.

  65. William Slofstra Says:

    Greg @53: I’d also like to know the answer to that question. As far as I can tell, the current paper doesn’t immediately provide an example of a non-hyperlinear group. The existence of such a group is equivalent to the existence of a linear-system nonlocal game with commuting operator value 1 and tensor-product value less than 1. So a natural thing to try would be to adapt the techniques of this paper so that all the games involved are linear system games. While some parts of the construction can be done with linear system games, other parts (for instance, the conditionally linear probability distributions) seem more difficult, at least at the moment. Maybe someone will figure out a workaround for these parts?

    On the other hand, I think it’s pretty likely that the construction in this paper could be done with binary constraint system games. This would give a binary constraint system which is satisfiable in a II₁ factor, but not approximately satisfiable in finite dimensions. This raises the question of which families of binary constraint systems would admit an instance with this type of satisfiability gap. Atserias, Kolaitis, and Severini have a paper about operator satisfiability problems for binary constraint systems, where they observe that LIN, the class of linear constraint systems, reduces to almost all other families of binary constraint systems. Showing that LIN has such an instance should answer this question for all other families. So this gives another reason to look for a non-hyperlinear group, beyond natural curiosity about group algebras.

  66. Halting Is Poly-Time Quantum Provable | Gödel's Lost Letter and P=NP Says:

    […] developments have been covered by Scott Aaronson here, Lance here, Boaz Barak here, and in a personal way by Vidick here. The new paper weighs in at 165 […]

  67. . Says:

    So does it mean there is nothing beyond quantum models?

  68. Scott Says:

    . #67: No. It means that there’s one particular quantum complexity class that captures RE, but most quantum complexity classes fall far short of RE, and in any case RE is not everything.

  69. domotorp Says:

    Scott #17: I meant something more interesting: for example, suppose that the two Provers share an infinitely long sequence of random bits that is hidden from the Verifier. Would this give PSPACE, NEXP, or RE?

  70. domotorp Says:

    OK, I’ve found that this latter version would be the same as NEXP.

  71. William Slofstra Says:

    Mateus @60: It doesn’t matter how long it takes; since the two series converge to the same value, this procedure will eventually halt for every game, implying that the promise problem is decidable. (This is exactly why Connes being true would have implied that MIP* is decidable.)

    matt @61 and Scott @62: Yes, if Connes were true then the procedure I’ve been talking about with Mateus (finding lower bounds using the brute-force method, and upper bounds using the NPA hierarchy) would halt on every game, not just games satisfying the promise that the value is equal to 1 or less than 1/2. Since the set of games is computable, this would imply that the running time of this algorithm is bounded by some computable function. So to falsify Connes, it would be enough to show that all decidable languages are contained in MIP*. (You can also make the same argument with the dimension of the Hilbert spaces required in place of running time.)

    An interesting complexity class is the refinement of MIP* in which the proof systems have to be complete in the tensor-product model, but sound in the commuting-operator model (so for yes instances, the provers must be able to win with probability greater than the completeness threshold c using a tensor-product strategy, while for no instances they must not be able to win with probability greater than the soundness threshold s even using commuting-operator strategies). This class contains only decidable languages. Similarly to the idea that MIP^co could equal coRE, it seems plausible to conjecture that this new class contains all decidable languages. (I think it would be pretty neat if this were true.)
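
    To make the two-sided procedure above concrete, here is a minimal sketch in Python. The two bounding subroutines are passed in as placeholders (standing in for the level-k NPA relaxation and for brute-force optimization over dimension-k tensor-product strategies); they are assumptions of the sketch, not real library calls:

      def approximate_entangled_value(game, eps, upper_bound, lower_bound):
          """Interleave a convergent sequence of upper bounds (e.g. the NPA
          hierarchy) with a convergent sequence of lower bounds (e.g. brute-force
          search over finite-dimensional tensor-product strategies)."""
          k = 1
          while True:
              hi = upper_bound(game, k)  # decreasing in k, -> commuting-operator value
              lo = lower_bound(game, k)  # increasing in k, -> tensor-product value
              # If the two limits coincide, as Connes would have implied for
              # every game, the gap eventually shrinks below eps and we halt.
              if hi - lo < eps:
                  return (hi + lo) / 2
              k += 1

    Since MIP* = RE, we now know the two limits can genuinely differ; on a game witnessing the gap, this loop runs forever once eps is smaller than the gap.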

  72. Mateus Araújo Says:

    Fritz #58: You are still being anachronistic. The first Bell test was performed by Freedman and Clauser in 1972. As far as I know, quantum state tomography was only introduced in 1989, by Vogel and Risken.

    You have to take into account that at the time quantum theory was far from being as well-developed and trusted as it is now; in fact a lot of our trust in quantum theory comes from the dramatic certification of entanglement in the Bell experiments. Of course doing state tomography, as you suggest, would give us much more information about what is wrong with the theory (if that were the case), but to get to this point people first needed to determine whether it was worth developing the theory that far.

    Even if we had already had quantum state tomography back then, I still think performing a Bell test explicitly would have been a good idea, for pedagogical reasons. Without it, experts would have known that local hidden-variable models were dead, but with the experiment the message could percolate to the wider community.

  73. Mateus Araújo Says:

    Slofstra #71: I see. If one assumes that it does not halt, it would mean that for all n, d we would have tensorNPA(n) > Tarski(d) + ε, contradicting the assumption that both sequences converge to the same value; so it must halt.

    Shit, there’s really no hope for the Tsirelson bound then. If converging sequences of computable numbers are not possible, what’s the best we can do about upperbounding it without violating undecidability?

  74. David E Speyer Says:

    Are there any expositions out there of the Connes conjecture suitable for reading by a naive algebraist who doesn’t know what an operator algebra is? From this thread, I gather that it involves some sort of inequalities, analogous to the CHSH inequality, which are satisfied by commuting measurements on a finite dimensional Hilbert space but not on an infinite dimensional one.

    That seems amazing to think of, but maybe it isn’t more surprising than the fact that the CHSH inequality exists in the first place. Is there any hope of writing down an actual inequality of this sort?

  75. Scott Says:

    David Speyer #74: Try Thomas Vidick’s recent survey for the AMS Notices. Come to think of it, I’ll also add a link from the main post.

  76. Henry Yuen Says:

    David Speyer #74: In Section 12.4 the paper does provide a construction of an explicit nonlocal game whose commuting operator value is 1, while the tensor product value (i.e. what is approximable by finite dimensional strategies) is at most 1/2. This nonlocal game can be written as an inequality in the same way the CHSH game corresponds to an inequality.

    However, while “explicit”, the nonlocal game encodes a Turing machine that runs a hierarchy of semidefinite programs, so the description of such an inequality is somewhat cumbersome! There is probably a much simpler nonlocal game that separates the commuting-operator model from the tensor-product model…

  77. David E Speyer Says:

    Thanks! That is very helpful. I remember starting that article in the Notices and stopping halfway through; but with the motivation of the new result, I couldn’t put it down.

    I still find it incredibly counterintuitive. I’ll see if I can think of an actual argument for why I feel that way.

  78. gentzen Says:

    That survey for the AMS Notices is really nice. The explanations of MIP = NEXP and the PCP theorem were very intuitive and helpful for me. I have only read the first 6 pages so far, but I am confident that the remaining pages will be intuitive and helpful as well.

    I wonder a bit where the references to “godlike powers” and “an actual divine being” in response to my comment come from. It is true that I did read Henry Yuen’s story on the visit of Alice and Bob, and that his comparison of Alice and Bob to “computational gods” made sense to me. It probably triggered the association with Joel David Hamkins’ question. But when answering his question, one must admit that even godlike powers are not enough.

    I am still trying to come to terms with MIP* = RE. Here is my next attempt: If the given computation halts, and the two provers have done that computation and therefore know that it halts, then they can convince a polynomial-time classical verifier that they have done that computation, and that it did halt. (And if that computation had a meaningful result in addition to merely halting, then they can also tell the verifier that result and convince him that it is the actual result.)

    It’s still “sort of unbelievable”, but maybe not as unbelievable as the Banach-Tarski paradox.

  79. HL Says:

    This reminds me of Marcus-Spielman-Srivastava solving the Kadison-Singer problem by working on a seemingly unrelated problem. These kinds of moments are a true inspiration to people pursuing new branches far from the “trunk” of mathematics. Without the connection to the Connes embedding problem, the mathematical establishment would probably ignore this result no matter how important it is within the field.

    By the way, Gilles Pisier has been writing a book on the Connes problem, so there might be connections to various other classical questions in functional analysis: https://www.math.tamu.edu/~pisier/TPCOS.pdf

  80. David E Speyer Says:

    So, if Loki and Star Trek’s Q wanted to convince me as to the value of the middle digit of Ackermann(100), and I could lock them in two boxes which prevented them from communicating with each other, and they had infinitely many entangled particles in their boxes, they could succeed. Do I have this right?

  81. Scott Says:

    David #80: Yes, you have it right. And they wouldn’t even need infinitely many entangled qubits for that purpose, but just a huge (comparable to Ackermann(100)) finite number. With the proviso—a big one—that they would also need to be able to apply the requisite unitary transformations to those qubits. And that (of course) your belief about the middle digit would remain conditional on your beliefs in no-signaling and in the validity of quantum mechanics.

  82. David E Speyer Says:

    I’m trying to think through why this seems so crazy. One could imagine studying:

    (Classical) Provers who have planned a classical, perhaps probabilistic, strategy in advance, but have no entangled particles.

    (FT) Entanglement of particles modeled as the tensor product of finite dimensional vector spaces

    (FC) Entanglement of particles modeled as commuting operators on finite dimensional vector spaces

    (IT) Entanglement of particles modeled as the tensor product of infinite dimensional vector spaces

    (IC) Entanglement of particles modeled as commuting operators on infinite dimensional vector spaces

    Bell’s theorem is that Classical < FT. Since I don’t have rocks in my head, it bothers me. It is nonobvious but apparently not deep that FT=FC=IT. And the new result is that IT < IC.

    This bothers me in two ways. One is an algebraic intuition about bimodules suggesting that I should be able to show IT=IC, but that’s probably too technical to be worth hashing out in a comment thread.

    But the other is philosophical — it seems crazy that provers should be able to demonstrate they command literally infinite resources as opposed to a ridiculous amount of resources. Whatever Hilbert space they are working with presumably has a countable basis. How could they ever convince me they didn’t just access enough basis elements to answer the question I wanted to ask?

  83. Scott Says:

    David #82:

      It is nonobvious but apparently not deep that FT=FC=IT.

    Not quite—what’s true is that anything in IT is expressible as a limit of things in FT=FC.

      it seems crazy that provers should be able to demonstrate they command literally infinite resources as opposed to a ridiculous amount of resources. Whatever Hilbert space they are working with presumably has a countable basis. How could they ever convince me they didn’t just access enough basis elements to answer the question I wanted to ask?

    I’m with you that this is crazy. But certainly a key part of the answer is that, while the Hilbert space relevant to the commuting-operator model does have a countable basis, it’s not decomposable into a tensor product of “Alice’s part” and “Bob’s part.” So our intuitions about what it even means for Alice and Bob to act on the Hilbert space in a “spatially separated manner” are liable to mislead us, if those intuitions are ported over from the finite-dimensional world, where commutativity simply means having a tensor product decomposition. Or at least that’s how I’ve bracketed off my own confusion about this. 🙂

  84. RandomOracle Says:

    Does the proof of MIP*=RE (or even the previous one of NEEXP ⊆ MIP*) relativize? I suspect not, but is there some easy way to see this?

  85. Rand Says:

    When Anand visited QuICS a few months ago (to talk about his big NEEXP ⊆ MIP* result), he suggested to me that this might be true and my mind was boggled.

    The bogglement persists.

  86. Joshua B Zelinsky Says:

    This may be a silly question: Given that we’re talking about infinite amounts of entanglement being used in the MIP* = RE result, has anyone thought carefully about how dependent this result is on one’s axioms? The techniques used seem to all live in ZFC, but I’d be particularly interested if the result depends on the axiom of choice, especially in regards to the behavior of the relevant Hilbert space.

  87. Scott Says:

    RandomOracle #84: That’s a superb question—I don’t know why I didn’t think to ask it myself.

    The issue is that we have two heuristics locked in direct conflict. On the one hand, nothing about interactive proofs relativizes, and MIP*=RE is about interactive proofs. On the other hand, everything in computability theory relativizes, and MIP*=RE is (ultimately) about computability theory.

    In this case, I can prove that (as you guessed) the first heuristic has to win. That is, there exists an oracle relative to which not even coNP, let alone RE, is contained in MIP*. To see this, let’s consider the coNP problem of deciding whether f:{0,1}^n→{0,1} is the all-zeroes function. Suppose there’s something that the two entangled provers can do to cause the BQP verifier to accept with high probability. Now change f(x) from 0 to 1 at a single randomly-chosen x∈{0,1}^n, and let the provers run exactly the same strategy as before. By the BBBV Theorem (i.e., the optimality of Grover’s algorithm), the verifier won’t “notice” the change, and will still accept with high probability. Notice that the amount of entanglement between the provers, and even the number of provers, were completely irrelevant here.
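
    Quantitatively, the hybrid argument behind BBBV gives the following (notation added here, not from the comment): if the verifier makes T quantum queries, and q_x denotes the total query magnitude its algorithm places on input x, then

      \sum_{x\in\{0,1\}^n} q_x \;\le\; T, \qquad \mathbb{E}_x \bigl\| |\psi\rangle - |\psi_x\rangle \bigr\| \;\le\; 2\,\mathbb{E}_x\sqrt{T q_x} \;\le\; 2\sqrt{T\,\mathbb{E}_x[q_x]} \;\le\; \frac{2T}{2^{n/2}},

    where |\psi_x\rangle is the verifier’s final state after f is flipped at x. A poly(n)-query verifier therefore shifts its acceptance probability by at most 2^{-\Omega(n)} on average, no matter what the provers share.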

    Of course, this still leaves the challenge of articulating which step(s) in the proof of MIP*=RE fail to relativize. The intuitive difficulty is that, if we’re talking about Turing machines that run for arbitrary lengths of time, and which therefore (it seems) can only be simulated in a step-by-step manner anyway, what difference could it possibly make if those Turing machines also query an oracle?

    The answer, I think, is that despite that intuition, nonrelativizing stuff happens (and needs to happen) in the “compression lemma” that’s at the core of the new result. Indeed, the Time Hierarchy Theorem relativizes, while the compression lemma says informally that in the land of MIP*, the Time Hierarchy Theorem is as nothing … since at no cost to yourself, you can always just force the provers to go up another exponential in how many entangled qubits they use.

    And indeed, the compression lemma is exactly where the authors bring in PCP and low-degree polynomial machinery—well-known sources of non-relativizing goodness.

    If I’m right, then MIP*=RE is the first fully convincing example I’ve ever seen of a non-relativizing computability theorem.

    (Update: In response to my inquiry, Thomas Vidick was kind enough to confirm the above understanding—adding that their proof uses NEXP⊆MIP* as an ingredient, which already fails to relativize—although that’s almost certainly not the only non-relativizing part of the proof.)

  88. Sniffnoy Says:

    Josh #86: I mean, we’re only dealing here with separable Hilbert spaces, right? So such issues shouldn’t be relevant…

  89. Boaz Barak Says:

    Scott #87: By your argument above, also the prior result that MIP* contains NEEXP doesn’t relativize, right?

  90. How ‘spooky’ is quantum physics? The answer could be incalculable (via Qpute.com) – Quantum Computing Says:

    […] shitting bricks here,” commented another physicist, Mateus Araújo of the Austrian Academy of Sciences in Vienna. “I never thought I’d see this […]

  91. Scott Says:

    Boaz #89: Right.

  93. Simon Apers Says:

    What was the common belief regarding Connes’ embedding conjecture? Ken Regan’s blog post states that it was “apparently widely believed”, but I’ve also heard otherwise.

  94. Amazing: Zhengfeng Ji, Anand Natarajan, Thomas Vidick, John Wright, and Henry Yuen proved that MIP* = RE and thus disproved Connes 1976 Embedding Conjecture, and provided a negative answer to Tsirelson’s problem. | Combinatorics and more Says:

    […] Optimized: MIP*=RE; Other posts on blogs: GLL (Ken Regan); WIT (Boaz Barak); CC (Lance Fortnow); WN; Posts on an […]

  95. . Says:

    Does it show two entangled provers can convince a verifier in polynomial time that a mathematical proof of a statement in Sigma^0_1 exists without revealing the proof?

  96. . Says:

    Also does it show MIP* is not closed under complement?

  97. Scott Says:

    .: Yes and yes. Those are both immediate implications of MIP* being equal to RE.

  98. Tobias Fritz Says:

    David #82, Scott #83: Perhaps it’s instructive to have an explicit example of two commuting algebras of operators on an infinite-dimensional Hilbert space which cannot be written as a (direct sum or direct integral of) tensor product of representations of the individual algebras. The following is based on Example B.13 here, which is due to Takesaki. I will use the fact that the spectrum of an entangled operator acting on a tensor product Hilbert space is independent of the particular representations of the two operator algebras under consideration (as long as the representations are faithful).

    As a prelude, let G be a finitely generated non-amenable group. For example, a non-abelian free group of finite rank will do just fine. Then consider the random walk on the Cayley graph of G with respect to a given finite set of generators. A result of Kesten says that the spectrum of this random walk does not contain 1. This is intuitive: the Cayley graph branches out too much to have almost invariant distributions.
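
    (To put a number on it, assuming I am recalling Kesten’s formula correctly: for the free group F_k on k ≥ 2 generators, with S consisting of the k generators together with their inverses, the random-walk operator M = |S|⁻¹ Σ_{g∈S} g satisfies

      \| M \| \;=\; \frac{\sqrt{2k-1}}{k} \;<\; 1, \qquad \text{e.g. } \| M \| = \frac{\sqrt{3}}{2} \approx 0.87 \ \text{for } F_2,

    so the spectrum really is bounded away from one.)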

    Now consider G acting on itself both on the left and on the right. This defines a unitary representation of G × G on ℓ²(G), or equivalently two commuting representations of the reduced group C*-algebra C*_r(G). Now let S be a symmetric generating set in G, and consider the entangled operator given by the Laplace operator

    L = |S|⁻¹ Σ_g g ⊗ g⁻¹,

    where the sum is over generators g ∈ S. This Hermitian operator has the basis vector δ₁ ∈ ℓ²(G) as an eigenvector with eigenvalue one, where 1 ∈ G is the neutral element.

    Now let us consider L in any tensor product representation, and show that there the spectrum of L is bounded away from one. By my statement in the first paragraph above, it is enough to consider the tensor product of the two representations where G acts on itself from the left and where it acts on itself from the right. Using the isomorphism g ↦ g⁻¹ to turn the second representation into a left action of G on itself as well, we essentially get the Laplace operator associated to the action of G on G × G by left multiplication. Since this decomposes into disjoint orbits, each given by G acting on merely one copy of itself, the above statement about random walks on the Cayley graph applies, showing that the spectrum of L is bounded away from one.

    ———

    We could also try to use the above algebras of commuting operators to construct a counterexample to Tsirelson’s problem. However, it’s far from obvious how to go about that; for example, the Kirchberg factorization property implies that this does not work as soon as G is residually finite, a property that is satisfied e.g. by free groups. This illustrates the difficulty of the problem.

  99. asdf Says:

    This is also an interesting bypass of Gödel’s speed-up theorem.

  100. Amazing: Zhengfeng Ji, Anand Natarajan, Thomas Vidick, John Wright, and Henry Yuen proved that MIP* = RE and thus disproved Connes 1976 Embedding Conjecture, and provided a negative answer to Tsirelson’s problem. | Combinatorics and more Says:

    […] Reviews in Mathematical Physics, 24(05):1250012, 2012. (A few enlightening comments by Fritz on SO: I, […]
