## Entanglement without end

Today we take a break from this blog’s usual round of topics—free will, consciousness, the Singularity, social justice, Donald Trump—to talk about something *really* crazy and left-field. Namely, recent research in quantum information.

Earlier this month, William Slofstra, currently a Research Assistant Professor at the IQC in Waterloo, posted a breakthrough paper on the arXiv (yeah, I’m using the b-word again—sue me), which solves one version of a ten-year-old problem in entanglement theory called Tsirelson’s Problem. The problem, in one sentence, asks whether all quantum-mechanical correlations that can be achieved using commuting measurements, can also be achieved using measurements on separate parts of a tensor-product Hilbert space. The answer turns out to be **no**. (We’ve long known that the two kinds of correlations are identical as long as you stick to finite-dimensional Hilbert spaces, but Slofstra shows that they can differ in infinite-dimensional spaces.)

One implication of Slofstra’s result can be stated much more concretely, in terms of *two-prover games*: those things like the famous Bell/CHSH experiment, where Alice and Bob are put in separate rooms, and get inputs *x* and *y* respectively, and then without communicating, have to produce outputs *a* and *b* respectively satisfying some relation *V*(*x*,*y*,*a*,*b*). We’ve long known examples of two-prover games, like the Mermin-Peres magic square game, that can be won 100% of the time if Alice and Bob share quantum entanglement, but that can’t be won 100% of the time in a classical universe. Slofstra gives the first example of something different: namely, **a two-prover game that can be won 100% of the time using commuting measurements in an infinite-dimensional Hilbert space—something “formally within the rules of quantum mechanics”—but that can’t be won 100% of the time using any finite number of qubits of entanglement.**

(Previously, Leung, Toner, and Watrous had given a simpler example of such a game, but theirs required the referee to exchange quantum messages with Alice and Bob.)
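To make the classical limitation concrete, here’s a brute-force check of the standard fact (not anything specific to Slofstra’s paper) that no classical strategy wins the CHSH game, where Alice and Bob must output bits with a XOR b = x AND y, more than 75% of the time. Shared randomness can’t beat the best deterministic strategy, so enumerating deterministic strategies suffices:

```python
from itertools import product

# CHSH win condition: outputs must satisfy a XOR b == x AND y
def wins(x, y, a, b):
    return (a ^ b) == (x & y)

best = 0.0
# A deterministic classical strategy is a pair of functions {0,1} -> {0,1}
for fa in product([0, 1], repeat=2):        # Alice: a = fa[x]
    for fb in product([0, 1], repeat=2):    # Bob:   b = fb[y]
        p = sum(wins(x, y, fa[x], fb[y])
                for x in (0, 1) for y in (0, 1)) / 4
        best = max(best, p)

print(best)   # 0.75
```

With shared entanglement, by contrast, Alice and Bob can win with probability cos²(π/8) ≈ 0.854, which is where the Bell/CHSH violation comes from.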

If that’s not enough, Slofstra’s construction also shows that, given as input a description of a two-prover game, it’s *undecidable* (as in, equivalent to the halting problem) whether Alice and Bob can win the game with certainty using commuting measurements on an infinite-dimensional Hilbert space. Notoriously, quantum computing theorists have been unable to put *any* upper bound (not even “computable”) on the complexity class MIP*, consisting of languages that admit multi-prover interactive systems with entangled provers—precisely because they’ve been unable to bound how much entanglement the provers might need to implement their optimal strategy. Slofstra’s result helps to explain why this problem has been so vexing. I hasten to add, though, that his result doesn’t imply that MIP* contains anything uncomputable, since it remains plausible that anything Alice and Bob can do with infinite entanglement, they can approximate well enough with a finite amount.

That last remark leads to a further fundamental question, one that Slofstra leaves open. Namely, even if Alice and Bob need infinite entanglement to win Slofstra’s game with certainty, can they at least win it with probability *arbitrarily close* to 100%, using larger and larger finite amounts of entanglement? More broadly, could there exist a game that was winnable with certainty using infinite entanglement, but with at most (say) 90% probability using *any* finite amount of entanglement? That problem was shown, by Ozawa (see also Scholz and Werner), to be equivalent to a famous unsolved problem in operator algebras called the Connes embedding problem.

Clarifying the matter further, Slofstra (following earlier authors) points out that there are really *four* classes of two-prover games in play here:

- Games that can be won with certainty using some fixed, finite amount of entanglement.
- Games that can be won with certainty using an infinite amount of entanglement, but still in a tensor-product Hilbert space, H_{A}⊗H_{B}.
- Games that can be won with probability *approaching* 1, using an infinite sequence of strategies from class 1, or equivalently (as it turns out) from class 2.
- Games that can be won with certainty using measurements by Alice and Bob on an infinite-dimensional quantum state |ψ〉, where we require all of Alice’s measurements to commute with all of Bob’s, but don’t require |ψ〉 to have a tensor-product structure.

It can be shown that 1 is a subset of 2 is a subset of 3 is a subset of 4. Previously, we didn’t know *any* of these containments to be strict. Slofstra’s result shows that class 2 differs from class 4—and as a consequence, that class 1 differs from class 4 as well. The Connes embedding problem, which remains open, asks whether 3 differs from 4. It also remains open whether 1 differs from 2 and whether 2 differs from 3.

OK, you ask, but what’s the broader importance of any of this? To me, these problems touch on a question of almost metaphysical significance: namely, *what sorts of experimental evidence could possibly bear on whether the universe was discrete or continuous?*

Because of the Bekenstein bound from quantum gravity, I’m of the opinion that the Hilbert spaces relevant to our universe are ultimately finite-dimensional—or more concretely, that any bounded physical system can store at most ~10^{69} qubits per square meter of surface area. And in quantum computing and information, almost everything we care about only requires finite-dimensional Hilbert spaces—the subject of this blog post being a striking exception!

Yet if you take a traditional quantum mechanics course, virtually every example you see will involve *infinite*-dimensional Hilbert spaces—starting with the harmonic oscillator, the hydrogen atom, and coherent states of light. And indeed, when I’ve banged the drum about finite-dimensional QM being the truly fundamental kind, physicists have often retorted by pointing to one of the very first things they learn: the position/momentum commutation relation, which only makes sense in infinite-dimensional Hilbert space. Of course, if you tried to *probe* position/momentum commutation to greater and greater precision, eventually your experiments would run up against the limits of quantum gravity, so this retort doesn’t imply that infinite dimensions actually exist at the machine-code level of the universe. But still: is there some *conceivable* experiment for which a positive result would show us that Nature wasn’t describable by a finite number of qubits, but only by an infinite number?

A few years ago, Tobias Fritz wrote a lovely paper about precisely that question. He gave an example of an identity—namely,

V^{-1}U^{2}V=U^{3} implies UV^{-1}UV=V^{-1}UVU

—that holds for all finite dimensional unitary matrices U and V, but fails badly for certain infinite-dimensional ones. He suggested that, if this identity were discovered to fail, then Occam’s Razor would favor an infinite-dimensional Hilbert space for our universe.
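As a sanity check on the finite-dimensional half of Fritz’s claim, here’s a toy instance of my own devising (it does not appear in Fritz’s paper): U is the 5×5 permutation matrix of a 5-cycle, and V is the permutation k ↦ −k mod 5, chosen so that the antecedent V⁻¹U²V = U³ holds. The consequent then holds as well, as the identity requires:

```python
import numpy as np

def perm_matrix(sigma):
    """Matrix P with P e_j = e_{sigma[j]}."""
    n = len(sigma)
    P = np.zeros((n, n))
    for j, i in enumerate(sigma):
        P[i, j] = 1
    return P

U = perm_matrix([(k + 1) % 5 for k in range(5)])   # the 5-cycle k -> k+1
V = perm_matrix([(-k) % 5 for k in range(5)])      # k -> -k mod 5
Vi = np.linalg.inv(V)

assert np.allclose(Vi @ U @ U @ V, U @ U @ U)      # antecedent: V^{-1}U^2V = U^3

lhs = U @ Vi @ U @ V                               # consequent, left side
rhs = Vi @ U @ V @ U                               # consequent, right side
print(np.allclose(lhs, rhs))  # True
```

(In this particular toy instance both sides come out to the identity matrix; the point is just that the antecedent can hold nontrivially in finite dimensions, and the consequent then follows.)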

Unfortunately, Fritz’s example is open to the same sort of objection that Slofstra’s game is. Namely, as Fritz points out, if the antecedent (V^{-1}U^{2}V=U^{3}) held to excellent precision but not perfectly, then his identity could “fail to within experimental limits,” even if our universe had a finite-dimensional Hilbert space and therefore satisfied his identity.

OK, but suppose that the Connes embedding problem had a negative answer—or equivalently, that there existed a two-prover game G that could be won with certainty using commuting operators, but that couldn’t be won (say) 90% of the time using any finite amount of entanglement. In that case, the believers in a quantumly finite universe, like myself, would have to put some real money on the table, in much the same way the original Bell inequality forced the believers in Einsteinian local hidden variables to put money down. We finitists would have to say that the game G *couldn’t* be won with certainty in the real world, even though formally, winning G with certainty wouldn’t seem to contradict either quantum mechanics or locality. And if, hypothetically, an experiment showed that G *could* be won with certainty—or indeed, with any probability bounded above 90%—then our position would’ve been falsified, much like the Bell experiments falsified Einsteinian locality.

So how did Slofstra prove his result? I’ll be brief, since STOC’2016 is happening in Cambridge right now, and I’d like to get over there in time for lunch.

If you like, the key idea is to start with equations that have infinite-dimensional solutions but no finite-dimensional ones. The most famous such equation is the position/momentum commutation relation mentioned earlier, which for our purposes is just the following matrix equation:

AB – BA = I.

This equation can’t be satisfied by any finite-dimensional matrices, since AB and BA have the same trace, so Tr(AB-BA)=0, but Tr(I) is nonzero. But, OK, let A be the infinite-dimensional linear operator that takes as input the coefficients of a polynomial c_{0}+c_{1}x+c_{2}x^{2}+… and that differentiates the polynomial, and let B be the linear operator that multiplies the polynomial by x. Then I invite you to check that the equation holds.
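A small NumPy sketch (my own illustration) shows how the trace argument bites when you truncate these operators to polynomials of degree at most n: the commutator comes out as the identity everywhere except the bottom-right corner, where a stray −n entry restores the zero trace.

```python
import numpy as np

n = 6                      # work with polynomials of degree <= n
dim = n + 1

# D: differentiation, maps x^k -> k x^(k-1)
D = np.zeros((dim, dim))
for k in range(1, dim):
    D[k - 1, k] = k

# X: multiplication by x, maps x^k -> x^(k+1) (x^n gets truncated away)
X = np.zeros((dim, dim))
for k in range(dim - 1):
    X[k + 1, k] = 1

C = D @ X - X @ D          # the commutator [D, X]

print(np.diag(C))          # 1 everywhere except the last entry, which is -n
print(np.trace(C))         # 0, exactly as the trace identity demands
```

Only in infinite dimensions is there no “top corner” for the defect to hide in, so AB − BA = I can hold exactly.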

It’s not known at present how to turn the above equation into a two-prover game—I regard it as a fascinating question whether that’s possible. Rather than an algebraic equation (involving both addition and multiplication), Slofstra instead needs to start with *group* equations (involving only multiplication)—ones with the strange property that they’re satisfied only by the identity matrix or by infinite matrices. Equivalently, he needs a group, defined by a finite list of generators and relations, that admits no nontrivial finite-dimensional matrix representations. Fortunately for him, such groups exist—the first known example being Higman’s group, discovered in 1951. Higman’s group is generated by four elements, a,b,c,d, which satisfy the equations

a^{-1}ba = b^{2}, b^{-1}cb = c^{2}, c^{-1}dc = d^{2}, d^{-1}ad = a^{2}.

I don’t have a good intuition for Higman’s group, but if I did, it would come from rereading this post by Terry Tao. Certainly it has no known “physics interpretation” analogous to that for the position/momentum commutation relation.

Anyway, given such a group, the hard part—the new part—is to give a general way to convert it into the kind of group that can be realized as a two-prover game. So that’s what Slofstra does, using 50 pages dense with commutative diagrams, quotient maps, and other Serious Math Stuff—hey, I told you this part of the post would be brief! For more, see his paper.

Now, once you have this general transformation of groups, you can also use it to show that there’s no algorithm to decide whether a two-prover game has a perfect commuting strategy, by taking the word problem for groups, which is known to be undecidable, and reducing it to that problem.

Anyway, infinite congrats (or the limit of arbitrarily large finite congrats?) to Slofstra for this achievement! Now it’s off to STOC, which I guess you could also ask me about in the comments if you wanted.

**Unrelated Announcement (June 21):** Ran Raz asks me to announce a workshop for Avi Wigderson’s 60th birthday, to be held at the Institute for Advanced Study in Princeton October 6-8. I’ll be speaking there, and I hope to see many of you there as well!

Comment #1 June 20th, 2016 at 12:56 pm

Hi Scott,

Here is what I took as the real nugget:

“… since it remains plausible that anything Alice and Bob can do with infinite entanglement, they can approximate well enough with a finite amount”

If this problem is solved and it turns out that Alice and Bob can’t *approach* 100% certainty, this would be seen as real evidence that the universe is continuous and not discrete, right? Moreover, if it were demonstrated that such a game could be played in our universe and someone demonstrated 100% success, this would leave us with *proof* that the universe is indeed continuous and not discrete?

Conversely, if it can be proved that Alice and Bob *can* approach such games with finite entanglement, this wouldn’t prove the universe was discrete, but the onus would be on the continuum folks to come up with some other evidence where infinite entanglement is needed to explain some observable. I take it you would label this result a ‘breakthrough’ because it clarifies this picture and may get us that much closer to future work that could resolve the question above?

If I have this right…. neato indeed!

Comment #2 June 20th, 2016 at 1:26 pm

>if experiments showed that G could be won (…) our position would be falsified, just as the Bell experiments

Suppose this is exactly what happens. Shouldn’t we conclude that the 90% limit may be for n larger than we can test?

Comment #3 June 20th, 2016 at 1:49 pm

adamt #1: Not quite. I don’t think *either* resolution of the mathematical question would bear directly on whether our universe is continuous or discrete. Instead, it would bear on *what sorts of evidence* could in principle suffice to convince us that the universe was continuous.

Thus, suppose the Connes embedding problem had a negative answer, and a two-prover game with a finite/infinite gap existed. Even then, as a believer in finite-dimensional Hilbert spaces, I could just respond: that’s great, so tell me when you can *actually play* this game and win with probability 1! How exactly are you planning to implement these measurement operators on an infinite-dimensional Hilbert space? I.e., I could regard the game’s existence as a curious mathematical fact with no bearing on our real, finite-dimensional world—precisely *because* playing the game perfectly would require unphysical infinite-dimensional measurements.

But then, if—completely hypothetically—someone *did* do an experiment that won the game with probability 1, that would be a dramatic indication that infinite-dimensional Hilbert spaces really *were* needed for a fundamental description of the physical world. And even assuming the experiment could never be done, the mere fact that we could *imagine* an experiment convincing us of such a thing would, to me, be an amazing conceptual fact about entanglement.

Meanwhile, if the Connes embedding problem has a positive answer, then the issue never really gets off the ground—since in that case, even if we won the game with probability 1 (to within the limits of experimental accuracy), we could always account for that with a suitably large but finite-dimensional Hilbert space. (On the other hand, in principle you could imagine a two-prover game giving you a *lower bound* on the Hilbert space dimension that was larger than the Bekenstein bound applied to your lab equipment… 🙂 )

Comment #4 June 20th, 2016 at 1:54 pm

Jay #2:

Suppose this is exactly what happens. Shouldn’t we conclude that the 90% limit may be for n larger than we can test?

I don’t understand the question. Given a two-prover game, you can keep playing it over and over independently. So if, after (say) a billion runs, you had won 998,000,000 of them, the hypothesis that the game was winnable with probability at most 0.9 could be ruled out with extremely high confidence.
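To put rough numbers on that (a Hoeffding-bound sketch of my own, not part of the original comment): under the null hypothesis that each independent run wins with probability at most 0.9, the chance of seeing 998,000,000 wins in a billion runs is astronomically small.

```python
import math

n = 10**9            # independent runs of the game
wins = 998_000_000   # observed wins
p0 = 0.9             # null hypothesis: winnable with probability <= 0.9

eps = wins / n - p0  # observed excess over the null, here 0.098
# Hoeffding bound: P[observed fraction >= p0 + eps] <= exp(-2 n eps^2).
# Report the base-10 logarithm, since the bound itself underflows.
log10_bound = -2 * n * eps**2 / math.log(10)
print(log10_bound)   # roughly -8.3 million: probability below 10^(-8,000,000)
```

So even a modest-looking gap between 90% and 99.8%, amplified over a billion runs, rules out the null with overwhelming confidence.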

Comment #5 June 20th, 2016 at 1:58 pm

Thanks Scott, that is more or less what I was thinking even if I wasn’t as cautious with my words. The only *directly* conceivable evidence for or against, vis-à-vis continuous or discrete, would be an actual demonstration of near-100% success for such a game that has been proven to require infinite entanglement to approach 100% success.

The other resolutions of the question, while not dispositive, would still lead us to adjusting our priors re: continuous vs. discrete if they turn up—except for the positive result for the Connes embedding problem. I take it that you believe a positive result most likely.

Comment #6 June 20th, 2016 at 2:13 pm

adamt #5: A positive resolution of the Connes embedding problem would certainly lead to a *simpler* picture of what happens when you let entanglement go to infinity. But I don’t have enough experience with the problem to want to venture any opinion about the truth. In any case, absent some hypothetical experiment like the one discussed in the post actually being done (and giving the requisite result), if you wanted to make me take seriously the possibility of infinite-dimensional Hilbert spaces being the “ground truth” for bounded physical systems, you’d probably need to explain to me what was wrong with Bekenstein’s arguments for finiteness. Certainly a negative resolution of Connes’ problem, *by itself*, wouldn’t appreciably change my views about the physics question.

Comment #7 June 20th, 2016 at 2:48 pm

Scott #4,

>I don’t understand the question.

Suppose the conclusion depended on n-chess instead of a two-prover game. Even if I kept winning my games, I wouldn’t be able to draw conclusions about n-chess, because in a way all my games would have been “small-n chess,” not n-chess proper.

So my question amounts to: the two-prover game you discuss, is it analogous to n-chess, or could you “set the size of the chessboard”?

Comment #8 June 20th, 2016 at 3:02 pm

Jay #7: The game is just a single, fixed object (Slofstra says that it involves about 400 variables and 300 relations). It doesn’t grow with an input size n.

Comment #9 June 20th, 2016 at 3:37 pm

Thanks!

Comment #10 June 20th, 2016 at 4:06 pm

Scott #6,

Bekenstein’s limit is derived from a thought experiment of what would happen were you to take an isolated system with more entropy/information than the bound allows in proximity to a black hole, right? This would violate the second law thus motivating the bound IIUIC.

Using this to argue for a discrete universe seems to me like unfairly baking in the discreteness, with a tautology forming as a consequence of the underlying assumption that any real system can be so isolated. What of the information necessary to describe the system *in isolation from* its surroundings? Under Bekenstein’s thought experiment that information isn’t stored inside the sphere of the system in question, and yet it seems necessary information to describe it if the system is causally connected to its surroundings.

My best attempt at steel-manning (probably pretty pitiful steel-manning, but hey, at least I’m trying) is: let’s say the system is isolated by a patch of the universe causally disconnected from the rest by means of the expansion of the universe. But in that case how could you ever see a violation of the second law, since the system is causally disconnected from any *other* black hole to lower it into?

Is there another incongruity that a causally disconnected system with entropy or information above the bound might cause?

Comment #11 June 20th, 2016 at 4:25 pm

I don’t want to diminish Slofstra’s accomplishment, but I find it unlikely that any physicist would agree that he solved Tsirelson’s problem. What physicists regard as Tsirelson’s problem is whether the closure of 1 equals 4, that is, whether any correlation that can be produced via commuting observables can be approximated arbitrarily well by observables that are in a tensor product structure.

This is the only experimentally meaningful question, as the difference between 1 and its closure vanishes when taking into account the finite precision of the measurements.

And the reason physicists care about Tsirelson’s problem (or at least why I do) is precisely because of the possibility of proving experimentally that Nature is actually infinite-dimensional, if 3 and 4 are different.

Don’t get me wrong, this is amazing work, and the first time anyone made meaningful progress on Tsirelson’s problem since it was first posed. But one shouldn’t overstate one’s results.

Comment #12 June 20th, 2016 at 4:28 pm

adamt #10: I don’t really follow you. Systems don’t need to be isolated for the Bekenstein bound to apply to them. Note that in this context, “describing” a system doesn’t entail describing anything that the system might interact with or be correlated with or influence in the future—any more than a biography of Lincoln needs to include anything that anyone else ever said about Lincoln, or spent a $5 bill on … or a 1GB hard drive could be marketed as 10^{122} bits, since that’s how many it could potentially interact with in the rest of the observable universe. 🙂 It just means describing the system’s local density matrix ρ (say, conditional on some fixed spacetime background)—enough to recover the field configurations, etc. within the region of interest.

Comment #13 June 20th, 2016 at 4:35 pm

Mateus #11: I don’t know the detailed history—others can fill me in—but Scholz and Werner appear to use “Tsirelson’s Problem” to mean 2 vs. 4. And in any case, we already have a different term—*the Connes embedding problem*—for 3 vs. 4.

Comment #14 June 20th, 2016 at 5:02 pm

Scott #13: I’m afraid you misread the paper. They state “Hence Tsirelson’s problem is the same as the question whether all quantum correlations can be approximated by correlations between finite dimensional systems.”

About the Connes embedding conjecture: it is indeed the same thing as Tsirelson’s problem. This was proven by Ozawa in 2012. Before then we didn’t have a different name for 3 vs 4.

Comment #15 June 20th, 2016 at 5:25 pm

Mateus #14: OK, thanks, I changed the post to “one version of Tsirelson’s Problem.” Maybe Tsirelson himself could tell us what’s the inner product between the problem that was solved and the problem he meant to pose. (In this open problem list, which Slofstra cites for Tsirelson’s formulation of the problem, Tsirelson asks several different variants of it, but I don’t have time right now to attempt a translation between Tsirelson’s language and Slofstra’s.)

Comment #16 June 20th, 2016 at 5:58 pm

Scott #15: Thanks for doing that. We could ask Tsirelson, actually. He is quite nice and has always answered my questions, no matter how stupid.

But on the statement of the problem he has on his webpage (http://www.tau.ac.il/~tsirel/download/bellopalg.pdf) he says “The question is, whether Q’ = Q”, or not. If Q’ ≠ Q” then another (even more important) question arises naturally: is Q’ dense in Q”, or not?”

So I guess we know his opinion already. Not that I think it matters much, actually; I would value more the consensus among the people working on Tsirelson’s problem than Tsirelson’s opinion.

Comment #17 June 20th, 2016 at 6:02 pm

Mateus #16: What are Q’ and Q”, in the language of this post (classes 1, 2, 3, and 4)? Also, is the question “is Q’ dense in Q” ” equivalent to the Connes embedding problem?

Comment #18 June 20th, 2016 at 6:18 pm

Scott #17: Q’ is 2, Q” is 4. And yes, the question “is Q’ dense in Q” ” is equivalent to the Connes embedding problem (because 2 is obviously dense in 3, so the nontrivial question is whether 3 is equal to 4).

Comment #19 June 20th, 2016 at 7:07 pm

Mateus #18: Ok then, so when Tsirelson writes “The question is” (with a definite article), what follows that is 2 vs 4, the problem Slofstra solves. Tsirelson then remarks that 3 vs 4 is “even more important” (and you and I agree with him about that, and I said as much in the post). Based on that, my ruling is that Slofstra’s claim in his paper to have “solved Tsirelson’s problem” is legitimate. 🙂

Comment #20 June 20th, 2016 at 7:31 pm

And how about Ozawa’s claim to have proven that Tsirelson’s problem is equivalent to Connes’ embedding conjecture? Do you rule that to be just a lie?

Comment #21 June 20th, 2016 at 7:50 pm

Mateus #20: Obviously not; I don’t know why you’d even bring that into the discussion. Ozawa was simply talking about a different problem of Tsirelson, the “even more important” problem that comes after “the” problem. The confusion over terminology in this area seems unfortunate to me, but would be fixed if we all agreed to use “Tsirelson’s problem” for 2 vs 4 and “Connes embedding problem” (which certainly seems like a prestigious enough name 🙂 ) for 3 vs 4.

Comment #22 June 20th, 2016 at 8:10 pm

Scott #21: The problem is that this would clash with accepted terminology. To give you an example, when a friend told me that someone had posted a paper on the arXiv claiming to solve Tsirelson’s problem I shat bricks: “Holy shit! After Ozawa showed that it was equivalent to Connes’ embedding conjecture I had lost hope that it would ever be solved! This guy is either a crackpot or the second coming of Grothendieck!”

If you want everyone to change what they mean by Tsirelson’s problem, that will be an uphill struggle.

Comment #23 June 20th, 2016 at 8:33 pm

Mateus #22: In that case, it sounds like the problem is really one of managing expectations. You thought that Slofstra had taught a dog the theory of Lebesgue integration, and then were disappointed when you found out that he merely taught the dog two-digit addition. And if others had that experience as well, then maybe it indeed would’ve been better for Slofstra to write his abstract in a way that warded off this misidentification explicitly. But in any case, I think my post was clear and emphatic about the distinction between what was proved and what still *isn’t* proved! So, taking myself as an example of someone who could totally have gotten confused about this were Slofstra’s paper written less well, it didn’t confuse me.

Comment #24 June 20th, 2016 at 10:43 pm

I know this isn’t actually a contribution, but Mateus #22, “the second coming of Grothendieck” is one of the best exclamations I have seen on the internet in a long time. 🙂

Comment #25 June 21st, 2016 at 3:00 am

The term “Tsirelson’s problem” gets used in a number of different ways in the literature, and it definitely causes some confusion. I like Vern Paulsen’s resolution of referring to 1 vs 4 as the strong Tsirelson problem, and 3 vs 4 as the weak Tsirelson problem. As I say in my paper, I think the “middle” Tsirelson problem of 2 vs 4 is the closest to the original statement, for the reasons that Scott identifies in #19.

That said, there’s no reason I can’t change my abstract slightly to make this clearer, so expect to see that.

(And while I’m here, let me thank Scott for a very nice post.)

Comment #26 June 21st, 2016 at 4:41 am

anon #24: =)

Scott #23: I’m not saying your post was misleading! On the contrary, You were very clear about what was proved. But I think the reason you were not confused by Slofstra’s claim is that you are not really working in the field of nonlocality, so you didn’t have expectations about Tsirelson’s problem.

Let me make an analogy with something from your field then. Suppose someone had claimed to prove that quantum computers are better than classical computers. All your life you thought that meant proving that BPP \neq BQP. But when you open the guy’s paper, you see that what he has actually proven is that SampleP \neq SampleBQP assuming that the polynomial hierarchy is infinite. A great result, no doubt, but you wouldn’t claim to have proven that quantum computers are better than classical ones, would you?

Comment #27 June 21st, 2016 at 6:57 am

Slofstra #25: It would be great if you did that, it would reduce the physicists’ confusion.

Since you’re here, let me congratulate you for your result, and I hope you don’t mind me asking a question about your paper: how explicit can you make the game? Is it anything that could be stated concisely, like the CHSH game? And what is the minimal size? In the paper you state that you can get it down to 400 variables and 300 relations. What does this mean in terms of number of inputs and outputs? And would you like to make a wild guess about the minimal size of the game for which there is a difference between 2 and 4? I would love to know the answer to this problem!

Comment #28 June 21st, 2016 at 7:28 am

@Scott: I admit I don’t understand the details here at all. In that hypothetical scenario you suggest, when it turned out that infinite dimensional entanglement has more power than finite dimensional entanglement, and it is physically testable that you can use infinite dimensional entanglement, would it still be true that the BQP class corresponds to the decision problems you can realistically compute with a physical computer?

Comment #29 June 21st, 2016 at 7:33 am

Since I was involved in the formulation of Tsirelson’s problem, I believe that I have something to say here.

In 2006, my collaborators and I came up with a hierarchy of semidefinite programming relaxations (now called the NPA hierarchy) which completely characterizes the non-local distributions which one can achieve in the commuting paradigm. We were aware of Tsirelson’s claim that this set was equivalent to the tensor set, so we were very excited: we had solved the problem of characterizing quantum nonlocality!

However, when we wrote Tsirelson to ask him for a proof, he realized that his argument had a flaw. He wrote a short description of the problem and submitted it to Werner’s list of open problems in quantum information theory. The rest is history.

So the only reason why Tsirelson’s problem surfaced at all was that we wanted to know whether the NPA hierarchy converges to the closure of the tensor set (it was clear from the beginning that it converged to some closed set). As Mateus has pointed out, taking the closure of the tensor set did not trouble us because a set of correlations and its closure are physically indistinguishable.

In summary: Tsirelson’s problem, as understood by the quantum nonlocality community since 2006, is whether the closure of the tensor set is equivalent to the (closed) commuting set. This definition has been used consistently by Werner, Fritz, Junge etc., long before the equivalence with Connes embedding problem was established by Ozawa. Slofstra’s paper, while interesting and groundbreaking, solves another problem.

Comment #30 June 21st, 2016 at 7:56 am

Miguel #29: OK, thanks for the backstory. It would gratify me if this post played a role in mediating a friendly resolution—where Slofstra reworded his abstract and introduction to be more in line with how the relevant community has used the phrase “Tsirelson’s problem” (rather than what someone reading Tsirelson’s 2006 note without that social context might have construed to be his problem), the community recognized Slofstra’s groundbreaking contribution, and everyone went home happy.

Comment #31 June 21st, 2016 at 12:23 pm

To someone who’s not in this field the terminology sounds confusing indeed. I find the names “strong” and “weak” Tsirelson problem also a bit confusing, because usually one would think that a solution of the strong version of a problem would imply a solution of the weak one, but if I understood correctly, that wouldn’t necessarily be the case here.

Also, since no-one has asked: what were/are the highlights of STOC’2016?

Comment #32 June 21st, 2016 at 12:50 pm

@anon31 the resolution of the strong question in the other direction (1=4, now ruled out by Slofstra) would imply the resolution of the weak question (as 3=4). There is always this danger when naming things “strong” and “weak” — if you guess wrong which way the answer will be, your terminology will confuse everyone.

Comment #33 June 21st, 2016 at 7:03 pm

anon #31: The best paper talks were Babai on graph isomorphism, the awesome new two-source extractor by Eshan Chattopadhyay and David Zuckerman (who I’ll join soon at UT Austin), and the Reed-Muller code achieving capacity. Another result that I and many others loved was “A polylog shaved is a lower bound made,” showing that if you could compute Edit Distance even in n^{2}/log^{k}(n) time for sufficiently large k—a tiny improvement over what’s known—that would imply a subexponential-time algorithm for FormulaSAT, which in turn would imply a new circuit lower bound (NEXP not in NC^{1}). I also really liked a result that “almost” derandomizes the fast parallel algorithms for finding a maximum matching in a bipartite graph—they do it in log^{2}(n) time with n^{log(n)} parallel processors. What else? We now know that to store all the pairwise distances in an n-vertex graph, to constant additive error, requires Ω(n^{4/3}) bits, which is tight (that was one of the Lewin Best Student Paper talks). And there’s a new way to prove hardness of PAC-learning, based on the assumed hardness of random SAT (that was both a STOC talk and a tutorial). And we now have super-polynomial lower bounds on Frege proofs for depth up to √(log n) (the previous record was loglog(n)). For completeness, Dana spoke on her and Subhash Khot’s candidate hard unique game, Troy Lee spoke on the new separations in query complexity, and my student Shalev Ben-David spoke on the followup work that separates randomized and quantum query complexities for a total Boolean function by the 2.5th power, more than the quadratic separation of Grover’s algorithm (the latter two things were discussed on my blog). Anyone else who was at STOC should feel free to share more highlights.

Comment #34 June 21st, 2016 at 8:42 pm

>> whether the universe was discrete or continuous

Perhaps this is a stupid question, but assume that we finally find the mathematical description of fundamental physics, and it suggests that the world is indeed continuous. The Löwenheim-Skolem theorem would then tell us that there is a countable model of that mathematical system. Would this not suggest that the world is somehow discrete after all?

Comment #35 June 21st, 2016 at 10:07 pm

wolfgang #34: The Lowenheim-Skolem theorem is about models of first-order sentences, and probably not everyone would accept that our world is such a model. But yes, your basic point stands: even if the world were fundamentally continuous, there would still be some discrete approximation to it that accounted for everything we could see or hear (to within the limits of visual and auditory perception), everything we could write a paper about, etc. So at some point, as Tobias Fritz points out in the nice paper I linked to in the OP, the believer in continuity needs to appeal to Occam’s Razor, and say that the continuous model is simpler or more elegant, that the discretization is just an arbitrary imposition.

My point was that, *if* the Connes embedding problem had a negative answer, and *if* the requisite game were actually winnable with probability 1, then this appeal to Occam’s Razor would be on very strong ground. For in that case, accounting for the winning of the game using only finite-dimensional Hilbert spaces would mean giving up on locality, or some other equally-basic principle. Of course, none of this has actually happened.

Comment #36 June 22nd, 2016 at 10:14 am

I think that it is important to distinguish these two versions of Tsirelson’s conjectures, i.e., weak versus strong, when you want to communicate these ideas to mathematicians. It just helps to clarify everyone’s thinking, especially to those of us new to the area.

It also helps us outsiders to understand that whether or not all games achieve their quantum values is another, separate question.

The result that Tsirelson proved in 1987 implies that 1=4 for n-input, 2-output games, provided that you require the marginal probabilities to all be 1/2. So it is natural for those of us new to the area to assume that Tsirelson’s conjecture is about 1=4 more generally, and has nothing to do with closures or limits.

Another interesting problem to me is whether or not 2=3.

An important point that hasn’t been discussed: So far we are really only focusing on Tsirelson’s bivariate conjectures. What we really have is that 3=4 in the bivariate case is equivalent to Connes’ embedding problem having a positive answer. But currently, we don’t know what 3=4 in multivariate cases might mean for operator algebras.

What would it mean for physics if 3=4 in the bivariate case but not in the multivariate case?

In any case, Slofstra has made a major advance in our understanding.

Kudos, William!

Comment #37 June 22nd, 2016 at 12:25 pm

Vern #36: Thanks!!

Thinking about it more, there’s another basic problem here that I haven’t seen discussed: namely, can *Slofstra’s specific game* (the one based on Higman’s group) be won with probability approaching 1, using larger and larger finite amounts of entanglement? If the answer is no, then 3≠4 and the Connes embedding problem has a negative answer.

If the answer is yes, then that seems like significant evidence in favor of 3=4 and Connes having a positive answer.

But unlike the Connes embedding problem itself (in its usual formulations), this is an *extremely* concrete question, one that in principle could even be investigated using computer search for the optimal strategies (though I’m guessing that such searches would probably crap out before they told us anything interesting).

Comment #38 June 22nd, 2016 at 12:43 pm

Scott #37: I would also really like to know the answer to this question. Since a negative answer would solve a major open problem, I bet that the answer is either yes or unknown =)
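[As a concrete illustration of the kind of computer search Scott mentions in #37: Slofstra’s game, with input sets in the hundreds, is far beyond brute force, but the flavor of such a search can be shown on the small Mermin-Peres magic square game from the OP. The sketch below, with names and the uniform referee distribution chosen purely for illustration, exhaustively finds the best deterministic classical strategy; it is a toy stand-in for the much harder search over quantum strategies.]

```python
from itertools import product

# The Mermin-Peres magic square as a binary linear system over GF(2):
# rows must sum to 0 and columns to 1 (jointly unsatisfiable).
EQUATIONS = [
    ([0, 1, 2], 0), ([3, 4, 5], 0), ([6, 7, 8], 0),   # rows
    ([0, 3, 6], 1), ([1, 4, 7], 1), ([2, 5, 8], 1),   # columns
]

# Brute-force the best deterministic classical strategy: Bob commits to an
# assignment of all 9 variables; for each equation, Alice picks a satisfying
# answer agreeing with Bob on as many of its variables as possible.  The
# referee draws an equation uniformly, then a variable in it uniformly.
best = 0.0
for bob in product([0, 1], repeat=9):
    total = 0.0
    for variables, rhs in EQUATIONS:
        eq_best = 0.0
        for alice in product([0, 1], repeat=3):
            if sum(alice) % 2 != rhs:
                continue  # the referee rejects an unsatisfied equation outright
            agree = sum(alice[k] == bob[v] for k, v in enumerate(variables))
            eq_best = max(eq_best, agree / 3)
        total += eq_best / 6
    best = max(best, total)

print(best)  # strictly below 1: no perfect classical strategy exists
```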

Comment #39 June 22nd, 2016 at 9:38 pm

Let me try to give some extra insight into Scott’s last question. Roughly, a group is hyperlinear if it has maps into the unitary matrices that are nearly multiplicative, growing more nearly multiplicative as the size of the matrices tends to infinity.

If one had such near representations of Higman’s group, then that would give the strategies with probabilities approaching one.

Does Higman’s group have such near representations?

Well, Connes’ embedding conjecture is equivalent to every countable discrete group being hyperlinear. So the answer is yes if Connes’ conjecture is true. Moreover, Higman’s group is considered a good test case for the hyperlinear conjecture. William was aware of this, and it is undoubtedly one reason that he chose Higman’s group in his construction.

…plus ça change, plus c’est la même chose…

So if one follows Scott’s approach and finds optimal strategies in each dimension, one could then try to check, maybe using the 2nd order NPA hierarchy, their multiplicative behaviours on products of two generators. This might give some computational evidence for deciding if Higman’s group is hyperlinear.
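[A minimal numerical sketch of what “checking multiplicative behaviour” could mean here — the defect measure and all names are illustrative assumptions, not a prescription from this thread. Given candidate unitaries for group elements, one measures how far φ(g)φ(h) is from φ(gh) in the normalized Hilbert-Schmidt norm; roughly, a hyperlinear group admits families of such maps whose defect tends to 0 as the dimension grows.]

```python
import numpy as np

def hs_dist(a, b):
    """Normalized Hilbert-Schmidt distance ||a - b||_HS / sqrt(d)."""
    return np.linalg.norm(a - b) / np.sqrt(a.shape[0])

def multiplicativity_defect(phi, triples):
    """Largest failure of phi to be a homomorphism over the given
    relations: max over (g, h, gh) of hs_dist(phi[g] @ phi[h], phi[gh])."""
    return max(hs_dist(phi[g] @ phi[h], phi[gh]) for g, h, gh in triples)

# Sanity check: an exact unitary representation has defect 0.  Here Z/2,
# represented by the identity and the Pauli X matrix.
phi = {"e": np.eye(2), "a": np.array([[0.0, 1.0], [1.0, 0.0]])}
triples = [("a", "a", "e"), ("a", "e", "a"), ("e", "a", "a")]
print(multiplicativity_defect(phi, triples))  # 0.0
```

For a candidate near-representation of Higman’s group in dimension d, one would evaluate the same defect on products of two generators and watch whether it can be driven to 0 as d grows.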

A nice reference for these connections between hyperlinear groups and Connes’ embedding conjecture is Capraro and Lupini’s:

arXiv:1309.2034v6

Comment #40 June 23rd, 2016 at 2:12 am

Mateus #27: Yes, the game is completely explicit, just large. In a linear system game, Alice receives an equation, and must output a consistent assignment for all the variables in the equation. Bob receives a variable, and outputs a value for the variable. Thus the input sets for the game based on Higman’s group are roughly of size 300 and 400 after fine-tuning, while the output sets have size 8 (all equations in this case contain exactly 3 variables) and size 2 respectively. I expect that further fine-tuning could be done, but that any game constructed via this method would still be relatively large. The smallest game showing 2 != 4 could be I3322; I don’t see any reason to count it out.
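[The referee’s test in a linear system game, as described above, can be sketched in a few lines. The version below uses the small magic square system as a stand-in for the much larger system from Higman’s group; the function and variable names are illustrative, not from the paper.]

```python
# The Mermin-Peres magic square as a binary linear system over GF(2):
# rows must sum to 0 and columns to 1, which no single assignment satisfies.
EQUATIONS = [
    ([0, 1, 2], 0), ([3, 4, 5], 0), ([6, 7, 8], 0),   # rows
    ([0, 3, 6], 1), ([1, 4, 7], 1), ([2, 5, 8], 1),   # columns
]

def referee_accepts(eq_index, var_index, alice_answer, bob_answer):
    """Alice gets equation eq_index and answers with bits for all of its
    variables (here 3 variables, so 8 possible answers); Bob gets variable
    var_index and answers with a single bit.  The referee accepts iff
    Alice's bits satisfy her equation and, when Bob's variable occurs in
    it, the two answers agree on that variable."""
    variables, rhs = EQUATIONS[eq_index]
    if sum(alice_answer) % 2 != rhs:
        return False  # Alice's assignment must satisfy the equation
    if var_index in variables:
        return alice_answer[variables.index(var_index)] == bob_answer
    return True

# A consistent pair of answers for row 0 and variable 1:
print(referee_accepts(0, 1, (1, 1, 0), 1))  # True
```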

#37 and #38: we don’t know the answer to this question. If the answer is yes, then it would imply that 2 != 3, so resolving the question in either direction would be interesting.

Comment #41 June 23rd, 2016 at 7:02 am

Slofstra #40: Thanks for the answer. So there is really no hope of attacking your specific game with numerical methods.

But I’m skeptical that 2 can be different from 4 for I3322: the best upper bound we know is 0.25087538, which is equal to the best lower bound we know to within numerical precision (see http://arxiv.org/abs/1006.3032). That paper also shows some inequalities in the 4422 scenario for which there is still a gap between the upper and lower bounds, so that is not ruled out.

Comment #42 June 25th, 2016 at 12:41 am

Mateus #41: I know there are finite-dimensional strategies which seem to converge to the best-known upper bound. So very likely 3=4 for I3322. But is there any evidence to suggest that 2=4? For instance, is there an explicit infinite-dimensional tensor-product strategy which is thought to be optimal?

Comment #43 June 27th, 2016 at 7:14 am

Slofstra #42: Oh. Indeed. I had assumed that Tamás’ strategies converged to a sensible infinite-dimensional strategy, but this is not the case. So his paper is actually evidence that 1 is not equal to its closure, 3, and that 3=4.

Of course it is still possible that 2=3 for this scenario, but there is no evidence for that as far as I know.

Comment #44 July 4th, 2016 at 2:37 pm

Because of this thread I got curious about the Tsirelson bound of the I3322 inequality, and posted a question on MathOverflow about getting an analytic expression for it:

http://mathoverflow.net/questions/243624/analytic-expression-for-the-tsirelson-bound-of-the-i3322-inequality

Comment #45 July 14th, 2016 at 9:55 am

As it turns out, William’s result has other consequences as well: for example, quantum logic is undecidable. I’ve written a short draft explaining this observation. I’d appreciate any sort of comments or questions!

Comment #46 July 14th, 2016 at 5:55 pm

Tobias #45: That’s an awesome observation! I’ve wondered for a while about what can be said at the interface between complexity/computability and quantum logic. Is the classical analogue of your undecidable problem NP-complete? (It seems like basically just SAT, no?)

Comment #47 July 15th, 2016 at 4:29 am

Scott #46: the classical analogue is a restricted form of SAT, which we’ve shown to be NP-complete in Proposition 8.1.2 here.

For finite Hilbert space dimension, there are also a number of works by Christian Herrmann and coauthors on the complexity of quantum logic, such as this one.