## John Wright joins UT Austin

I’m delighted to announce that quantum computing theorist John Wright will be joining the computer science faculty at UT Austin in Fall 2020, after he finishes a one-year postdoc at Caltech.

John made an appearance on this blog a few months ago, when I wrote about the new breakthrough by him and Anand Natarajan: namely, that MIP* (multi-prover interactive proofs with entangled provers) contains NEEXP (nondeterministic double-exponential time). Previously, MIP* had only been known to contain NEXP (nondeterministic *single* exponential time). So, this is an exponential expansion in the power of entangled provers over what was previously known and believed, and the first proof that entanglement actually *increases* the power of multi-prover protocols, rather than decreasing it (as it could’ve done a priori). Even more strikingly, there seems to be no natural stopping point: MIP* might soon swallow up arbitrary towers of exponentials or even the halting problem (!). For more, see for example this *Quanta* article, or this post by Thomas Vidick, or this short story by Henry Yuen.

John grew up in Texas, so he’s no stranger to BBQ brisket or scorching weather. He did his undergrad in computer science at UT Austin—my colleagues remember him as a star—and then completed his PhD with Ryan O’Donnell at Carnegie Mellon, followed by a postdoc at MIT. Besides the work on MIP*, John is also well-known for his 2015 work with O’Donnell pinning down the sample complexity of quantum state tomography. Their important result, a version of which was independently obtained by Haah et al., says that if you want to learn an unknown d-dimensional quantum mixed state ρ to a reasonable precision, then ~d^2 copies of ρ are both necessary and sufficient. This solved a problem that had personally interested me, and already plays a role in, e.g., my work on shadow tomography and gentle measurements.

Our little quantum information center at UT Austin is growing rapidly. Shyam Shankar, a superconducting qubits guy who previously worked in Michel Devoret’s group at Yale, will also be joining UT’s Electrical and Computer Engineering department this fall. I’ll have two new postdocs—Andrea Rocchetto and Yosi Atia—as well as new PhD students. We’ll continue recruiting this coming year, with potential opportunities for students, postdocs, faculty, and research scientists across the CS, physics, and ECE departments as well as the Texas Advanced Computing Center (TACC). I hope you’ll consider applying to join us.

With no evaluative judgment attached, I can honestly say that this is an unprecedented time for quantum computing as a field. Where once faculty applicants struggled to make a case for quantum computing (physics departments: “but isn’t this really CS?” / CS departments: “isn’t it really physics?” / everyone: “couldn’t this whole QC thing, like, all blow over in a year?”), today departments are vying with each other and with industry players and startups to recruit talented people. In such an environment, we’re fortunate to be doing as well as we are. We hope to continue to expand.

Meanwhile, this was an unprecedented year for CS hiring at UT Austin more generally. John Wright is one of at least four new faculty (probably more) who will be joining us. It’s a good time to be in CS.

A huge welcome to John, and hook ’em Hadamards!

(And for US readers: have a great 4th! Though how could any fireworks match the proof of the Sensitivity Conjecture?)

Comment #1 July 4th, 2019 at 3:46 am

Wait isn’t BBQ a complexity class?

Comment #2 July 4th, 2019 at 5:19 am

> Even more strikingly, there seems to be no natural stopping point: MIP* might soon swallow up arbitrary towers of exponentials or even the halting problem (!)

Wait, what? I thought entanglement and QM could be simulated classically in exponential time. So if classical MIP is NEXP, how can entanglement do better than add another layer or two of exponentials? I read Thomas Vidick’s and Kevin Hartnett’s articles (though not yet Henry Yuen’s), but don’t see much help.

The whole thing seems paradoxical. I guess it’s a non-physical system because of the unbounded computation available to the provers. But still, before believing a MIP* proof, maybe we have to worry that the provers know something we don’t about QM. Do the provers “merely” have unbounded Turing computation (i.e. pi-0-1 oracles), or can they go further, like the truth set of arithmetic with arbitrary quantifier nesting? If the latter, they can generate fake “random” strings that fool pi-0-n unbounded verifiers for any finite n…

Comment #3 July 4th, 2019 at 6:11 am

Aha, regarding the previous: Thomas Vidick’s article does help in understanding this, and I’ll try to read it carefully and to finish my thought from earlier. The Henry Yuen story wasn’t of much help though: it says what the MIP* provers can do, but not how they do it.

Comment #4 July 4th, 2019 at 6:57 am

L #1: I’ve seen BQP misspelled BPQ (including at the STOC business meeting this year), at which point you’re only edit distance 1 from Texas’ state food.

Comment #5 July 4th, 2019 at 7:05 am

asdf: The key point is that, with MIP*, no a-priori upper bound is placed on how many entangled qubits the provers could have. So, imagine the verifier asks them to implement some crazy strategy involving a stack-of-exponentials number of entangled qubits. In that case, simulating the provers by brute force—by which I mean, enumerating over all their possible actions to see whether they can make the verifier accept or not—would also take a stack-of-exponentials amount of time. Nothing similar happens in the unentangled case, where the provers’ strategies are always fully specified by just writing down exponentially-large truth tables.
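To make the last point concrete, here’s a toy sketch (my own illustration, not from the discussion) of what “enumerating over all possible actions” looks like in the unentangled case: each deterministic prover strategy is just a truth table from questions to answers, so the brute-force search is finite. The game here is CHSH, where the provers receive bits x, y and win iff a XOR b = x AND y; the enumeration recovers the classical value 3/4.

```python
from itertools import product

# Brute-force the best *unentangled, deterministic* two-prover strategy
# for the CHSH game: each prover's strategy is a truth table {0,1} -> {0,1}.
def chsh_value_classical():
    best = 0.0
    for alice in product([0, 1], repeat=2):      # Alice's truth table
        for bob in product([0, 1], repeat=2):    # Bob's truth table
            # Average win probability over uniform question pairs (x, y).
            wins = sum((alice[x] ^ bob[y]) == (x & y)
                       for x in (0, 1) for y in (0, 1))
            best = max(best, wins / 4)
    return best

print(chsh_value_classical())  # 0.75: no classical strategy beats 3/4
```

With shared entanglement the provers can win CHSH with probability cos²(π/8) ≈ 0.854, which is exactly the kind of gap these protocols exploit; and once the number of entangled qubits is unbounded, there’s no finite truth table left to enumerate.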

If it seems shocking to you that such a weird hypothetical scenario could actually rear its head … well, yes! Now you know why this subject has become so interesting.

Comment #6 July 4th, 2019 at 10:38 am

I apologize for bringing politics into this, but neither can we proceed as though nothing is happening: I must say that a country with concentration camps doesn’t deserve fireworks.

Comment #7 July 4th, 2019 at 12:19 pm

Thanks for the very kind blog post, Scott! I’m excited to join UT 🙂

Comment #8 July 4th, 2019 at 1:12 pm

Marshall #6: Speaking only for myself, and not necessarily for anyone else mentioned in this post: you might be right. I’m as desirous as anyone of ending the monstrous cruelties on the border, and all the other assaults on the country’s ideals being perpetrated by the criminal in the White House. We’ll have an opportunity to do exactly that a year from now. It’s crucial that we don’t let that opportunity slip away, especially given the Democrats’ unique genius for losing unlosable elections. That will mean: staying united, not tearing each other apart by factional infighting and moral preening, and (especially) refusing to cede that the other side is “the real Americans” while we’re just cosmopolitan elitists.

That itself, I think, would be a sufficient reason to go see the fireworks tonight, though to be honest it’s not why we’ll go. We’ll go because they’re happening regardless, the prime viewing area is like a 10-minute walk from us, we don’t have other plans for tonight, and the kids will like them.

Comment #9 July 4th, 2019 at 3:36 pm

Scott #5: thanks, that’s interesting. A stack-of-exponentials number of entangled qubits I can deal with, but how do you do anything useful with them in polynomial time? Can you get that many into polynomially bounded physical space, so that the speed-of-light communication time to get at them doesn’t grow exponentially or worse?

Otherwise, if the number of qubits and unbounded prover computation weren’t already tip-offs, ignoring the communication delays is another sign this model of computation is non-physical. And I still don’t see how to get from there to the halting problem. I mean, you can solve the halting problem on a classical TM given an initial tape with enough consecutive “1”s written on it. You just have to know that the number of 1’s is finite but larger than BB(n), where n is the size of the query 😉

Meanwhile, somebody claims to have observed wavefunction collapse and been able to reverse it. You’ve probably seen that already, but if not:

https://www.quantamagazine.org/how-quantum-trajectory-theory-lets-physicists-understand-whats-going-on-during-wave-function-collapse-20190703/

There is no Wikipedia article about “quantum trajectory theory”, which is probably a bad sign ;). Some web search indicates that it might have something to do with pilot-wave theory.

Comment #10 July 4th, 2019 at 4:02 pm

asdf #9: In interactive proof models, unless specified otherwise, only the *verifier* is restricted to polynomial time. The provers can use all the resources they want. To get up to the halting problem, we’d have to let the number of entangled qubits shared by the provers (and the running time of the provers) be literally infinite.

No, of course this isn’t “physically realistic.” When did I ever say anything about “physically realistic” in this context? 😀

Realistic or not, there’s still a fundamental question about the nature of entanglement here—can our piddling, polynomial-time verifier do a test to find out, with high statistical confidence, whether the provers have a literally infinite amount of entanglement, rather than “merely” Ackermann(n) of it? We don’t yet know the answer to that question, but there’s now a very good chance that we’ll learn it soon. Exciting, right?

Certainly more exciting than that article you inflicted on me in a drive-by linking, in blatant violation of this blog’s policies! 🙂 I started reading it but couldn’t finish. The total unwillingness to admit that “what happens in the middle of a measurement” is perfectly well captured by the standard Schrödinger equation, if we include the coupling to the environment, and that this is the entire point of decoherence theory … no, not on July 4th. Not while I’m on vacation with the family.

Comment #11 July 5th, 2019 at 11:35 am

Do you mind sharing who the three other new faculty members you’ve alluded to are?

Comment #12 July 5th, 2019 at 12:04 pm

Sebastian #11: I’m not absolutely certain whether the info is public yet (but if not, it will be soon).

Comment #13 July 5th, 2019 at 7:15 pm

Scott, sorry! I thought it was ok to post a link as long as I didn’t ask you to comment on it.

Is Mike & Ike still your recommended textbook for understanding enough QC to make sense of this MIP* paper? Good point about only the verifier having to be polynomially bounded, and about the infinite entanglement allowed to the provers. It sounds like there’s still an exponential amplification step that has to be iterated infinitely many times, giving a new(?) model of computation, since if we believe the verifier can produce arithmetically random bits, the prover system can’t be classically simulated even with what you once called super duper pooper machines.

Something still seems paradoxical. I’ve heard that essentially all the math found in physics, including QM, can be encoded in Peano arithmetic. But if MIP* can convince a verifier that a TM does or doesn’t halt, it can (interactively) prove the consistency or inconsistency of PA, or your favorite stronger effective theory, to arbitrarily high certainty. I thought there were some theorems preventing that sort of thing.

Comment #14 July 5th, 2019 at 9:29 pm

asdf #13: You’re sort of mixing together a bunch of different subjects. MIP* is not so much a “new model of computation” as just an unusual complexity class—unusual because of the lack of any obvious ceiling on how much entanglement the provers might need. It’s been shown, however, that MIP* can’t give *more* power than either the halting problem or its complement (depending on exactly how MIP* with infinite entanglement is defined). So, MIP* might (or might not) go beyond the computable functions, but it doesn’t go up into the arithmetical hierarchy.

If MIP* contained the complement of the halting problem, then yes, the verifier could ask the provers to prove the consistency of Peano arithmetic, ZFC, etc., and efficiently verify their claim. But keep in mind that, not only would this involve provers who we already agreed were physically unrealistic—but even if such provers existed, the verifier still wouldn’t end up with a finite-sized proof of Con(PA) or Con(ZFC) with which to convince anyone else. PA would still be unable to prove *its own* consistency, and likewise for ZFC. Neither Gödel nor any other known theorem rules this possibility out.

Mike & Ike is still a standard reference for the field. But if you want to learn about MIP* specifically, you’d do much better to read the literature on that subject (some of it quite recent) that precedes Natarajan-Wright, like Cleve-Hoyer-Toner-Watrous, Ito-Vidick, Reichardt-Unger-Vazirani, and Slofstra.

Comment #15 July 5th, 2019 at 10:32 pm

Asdf #13

Vidick & Watrous also have a (long) survey entitled “Quantum Proofs” that is excellent.

Comment #16 July 6th, 2019 at 7:27 pm

asdf, Scott: Thanks for that discussion. This possibility that MIP* could swallow the halting problem surprises and confuses me too.

Here is my first confusion: Based on my understanding of MIP, I thought that MIP* provers only need to convince a verifier that a TM halts (in case it actually does). If it doesn’t halt, then the probability that the verifier gets convinced that it halts must be smaller than 1/3. Even with this restriction, I still have trouble accepting that MIP* might have this power (= my second confusion). But your discussion indicates that even this restriction might just be a matter of the exact definition:

Can you explain how it happens that there can be an ambiguity between the halting problem or its complement, depending on the exact details of a seemingly unrelated point?

Back to my second confusion: Even with this restriction, I still have trouble accepting that MIP* might have this power.

Even if a TM halts, no amount of information (+computation time) bounded by a computable function of the length of the description of the TM is in general sufficient to verify this. Can one work around that barrier by using randomness and “the verifier still cannot convince anybody else” alone, or is entanglement crucial in (possibly) overcoming that barrier?

Comment #17 July 6th, 2019 at 8:35 pm

gentzen #16: The entanglement between the provers—and moreover, *unbounded* entanglement between them—is crucial to everything we’re talking about here. But there are two different things one can mean by “unbounded entanglement.”

First, we could say that the provers share a finite number of entangled qubits, but there’s no upper bound on how many. In that case, the halting problem is always an upper bound on what the provers can prove to the verifier, because a verifier with a HALT oracle could simply enumerate over all possible prover strategies (each of which is finite) looking for one that accepts.

Alternatively, we can say that the provers share a literally infinite amount of entanglement, and can implement any strategy whatsoever where their operators commute with each other (the usual criterion for spatial separation in quantum field theory). In this case, the *complement* of the halting problem is always an upper bound on what the provers can prove to the verifier. The reason is harder to explain, but it follows from work by Navascues, Pironio, and Acin on semidefinite programming hierarchies.

It’s currently unknown whether these two notions of “unbounded entanglement” are equivalent to each other. It’s known that the answer is yes if and only if the Connes Embedding Conjecture holds—that’s a famous unsolved problem in math from the 1970s.

If MIP* turns out to be able to do the halting problem, that will imply that the two notions of entanglement are *not* equivalent, and therefore that the Connes Embedding Conjecture is false.

I am not making any of this up.

Comment #18 July 7th, 2019 at 4:51 am

Scott #17:

Wouldn’t that kick the provers out of MIP* back into plain old MIP? I mean, the whole point of MIP* is that the provers must make non-commutative measurements, isn’t it?

Comment #19 July 7th, 2019 at 5:10 am

Aula #18: Nope. Faraway operators always commute with each other, even if they’re applied to two halves of an entangled state. If they didn’t commute, then you wouldn’t even need entanglement, as you’d simply have a direct channel of communication. Of course the outcomes of the measurement operators won’t have a local hidden variable explanation, but that’s different.
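A quick numerical check of that claim (my own toy illustration, not part of the comment): take Pauli X acting only on Alice’s qubit and Pauli Z acting only on Bob’s. As operators on the joint two-qubit space they commute, even though X and Z themselves don’t.

```python
# Pure-Python sketch: operators on different tensor factors always commute.
def matmul(A, B):
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def kron(A, B):
    # Kronecker product of two 2x2 matrices -> one 4x4 matrix.
    return [[A[i // 2][j // 2] * B[i % 2][j % 2] for j in range(4)]
            for i in range(4)]

X = [[0, 1], [1, 0]]    # Pauli X
Z = [[1, 0], [0, -1]]   # Pauli Z
I = [[1, 0], [0, 1]]

A = kron(X, I)  # Alice's operator: acts only on qubit 1
B = kron(I, Z)  # Bob's operator: acts only on qubit 2

print(matmul(A, B) == matmul(B, A))  # True: [A, B] = 0 on the joint space
print(matmul(X, Z) == matmul(Z, X))  # False: X and Z themselves anticommute
```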

Comment #20 July 8th, 2019 at 5:58 am

Scott #17: Thanks for your explanations.

Sorry if my question implied anything like that. It was just that I had similar expectations to asdf’s, and your answers indicated that those expectations for MIP* were inappropriate. So I tried to be careful and explain where my expectations came from.

In addition, it often turns out that my intuitive expectations about things related to probabilities are wrong. So I queried whether that was also the reason here why my expectations for MIP* were inappropriate.

(In order to get a better feeling for the power of randomness, I have now worked through proofs of Adleman’s theorem and Toda’s theorem. Turns out I had worked through a proof of Adleman’s theorem before, and that PP and Toda’s theorem are actually much less about probability than I had expected. Guess I will have to work through proofs related to IP and MIP to get a better feeling for the power of randomness when it comes to proofs.)

Comment #21 July 8th, 2019 at 8:16 pm

Scott, it really does seem to be a new model of computation since these infinite-entanglement machines can do something that classical oracle machines can’t, namely convince us puny humans of the truth or falsehood of otherwise undecidable propositions.

I still don’t see how it can work. Let’s say I believe ZFC is consistent, so there is no integer n that is the Gödel number of a proof of 1=0. But if all you and I can agree on about the integers is PA, then there *is* such an n in some (maybe nonstandard) model of PA. So we ask the MIP* system to settle the question, and it says ZFC is inconsistent. How does it convince us there is an actual inconsistency, rather than that the provers are working in a nonstandard model of arithmetic?

The theorems I was thinking of earlier were the “randomness doesn’t help computability” ones (e.g. by Sacks) mentioned in [this MO thread](https://mathoverflow.net/questions/58060) but I haven’t studied them.

Comment #22 July 8th, 2019 at 10:24 pm

asdf #21: If ZFC was inconsistent, then the provers could prove that to you—and moreover, prove that they were talking about “the standard model of the integers” (i.e., what outside of logic we simply call “the integers”)—by just *showing you the damn inconsistency!*

So in some sense, the more interesting question is whether MIP* provers could prove to you that ZFC is *consistent* (or more generally, that a given TM runs forever). As I said, the answer is not yet known. Having said that, there’s also a question about whether they could prove to you that a given TM halts, in time that can be upper-bounded by some computable function of the size of the TM, with no dependence on the TM’s running time. That’s also an open problem right now. Both of these problems are intimately connected to the Connes Embedding Conjecture, and to the equivalence or inequivalence of the two different notions of infinite entanglement.

But if the answers to the questions are yes, then there’s just a protocol that witnesses it, and (presumably) a proof of the protocol’s completeness and soundness that can be formalized in ZFC. In which case, an MIP* proof that a given TM halts is a proof that it halts, and a proof that it doesn’t is a proof that it doesn’t … in the real world. You don’t need to breathe a word about the formal figments of logic known as “nonstandard models of arithmetic,” any more than you would with ordinary zero-knowledge or interactive protocols, or for that matter with any other ordinary mathematical reasoning. It’s just a total irrelevancy.

Comment #23 July 9th, 2019 at 1:36 am

Scott, with an unbounded verifier, the prover could possibly say “I’ll give you the contradiction one bit at a time, just send me another query when you’re ready for the next bit and I promise there are only finitely many of them”. Then it gives bit 1, bit 2, etc. but somehow never seems to finish: “I’m almost done! Just keep querying!”. Remember that the actual contradiction may be enormous, like the 10 zillionth iterated busy beaver function of whatever. It *might* stop someday but who knows.

With a polynomially bounded verifier, even that doesn’t work. Am I missing something? “Show the contradiction” seems like “show the forced checkmate” in support of the claim that Black has a forced win in chess. The amazing thing about IP is that it shows the existence of the forced win without actually outputting the whole game tree. So I thought the MIP* equivalent would only show existence, since the demonstration would be unbounded in size.

Comment #24 July 9th, 2019 at 3:47 am

asdf #23: Yes, I said the same thing in my comment. If MIP* contained the halting problem, it would mean that the provers could prove that ZFC was inconsistent (if indeed it was) in time that was polynomially bounded in the length of the ZFC axioms, totally independent of the size of the contradiction.

Conceptually, MIP* is extremely similar to IP, so if you understand the latter, you shouldn’t have a problem with MIP*. The entire point of an interactive protocol is that the verifier can become convinced of statements, without ever seeing an explicit line-by-line derivation (in this case, the actual ZFC contradiction) or being able to convince anyone else. All that matters is that the completeness and soundness conditions are satisfied. To review:

**Completeness:** If a given statement is true, then the prover(s) can act in a way that causes the verifier to accept with overwhelming probability.

**Soundness:** If a given statement is false, then no matter how the prover(s) act, the verifier will reject with overwhelming probability.

That’s it. In CS theory since the 1980s, any system whatsoever that satisfies the two conditions above, no matter how weird it looks otherwise, gets to be dignified with the name “proof system.”
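For readers who’d like to see those two conditions in action, here is a minimal non-interactive sketch in the same spirit (my own illustration, not from the post): Freivalds’ check, where a “prover” hands the verifier a claimed matrix product C = AB, and the verifier spot-checks the claim with random vectors instead of recomputing the product. A true claim always survives (completeness); a false one is caught in each round with probability at least 1/2, so the rejection probability is amplified exponentially in the number of rounds (soundness).

```python
import random

def freivalds_accepts(A, B, C, rounds=30):
    """Probabilistically check the claim A @ B == C, in O(n^2) per round."""
    n = len(A)
    for _ in range(rounds):
        r = [random.randint(0, 1) for _ in range(n)]   # random 0/1 vector
        Br  = [sum(B[i][j] * r[j] for j in range(n)) for i in range(n)]
        ABr = [sum(A[i][j] * Br[j] for j in range(n)) for i in range(n)]
        Cr  = [sum(C[i][j] * r[j] for j in range(n)) for i in range(n)]
        if ABr != Cr:
            return False   # soundness: a false claim gets caught
    return True            # completeness: a true claim always passes

A = [[1, 2], [3, 4]]
B = [[5, 6], [7, 8]]
true_C = [[19, 22], [43, 50]]   # the genuine product A @ B
fake_C = [[19, 22], [43, 51]]   # off by one in a single entry

print(freivalds_accepts(A, B, true_C))  # True
```

The verifier never computes A @ B itself, and the accepting transcript wouldn’t convince a third party who distrusts the verifier’s coins: the same flavor of “convinced without a portable proof” that the interactive setting takes much further.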

Comment #25 July 9th, 2019 at 8:51 am

Just a note that Shyam is from Michel Devoret’s group at Yale (not Rob’s), although of course the two groups collaborate heavily. Excited to see him bring superconducting qubits to UT!

Comment #26 July 9th, 2019 at 9:27 am

Alec #25: Thanks!! Fixed.

Comment #27 July 9th, 2019 at 4:49 pm

The current number of people in the field is proportional to the current qubit record.

Comment #28 July 9th, 2019 at 11:15 pm

If the Connes embedding conjecture is true, does that imply any limitations on the power of entangled provers? (the conjecture does have some relation to quantum games)

Comment #29 July 10th, 2019 at 6:33 am

matt #28: Yes, I believe the Connes embedding conjecture implies that MIP* is computable, since the version of MIP* that’s contained in RE (recursively enumerable) and the version that’s contained in coRE would then coincide, and RE ∩ coRE = computable.
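As a footnote on that last step, “RE ∩ coRE = computable” has a very concrete proof that can be sketched in a few lines (my own toy illustration, not from the thread): if you can enumerate a set and also enumerate its complement, you can decide membership by dovetailing the two enumerations until the input shows up in one of them.

```python
from itertools import count

def decide(x, enum_S, enum_coS):
    """Decide x in S, given enumerators for S and for its complement."""
    gen_in, gen_out = enum_S(), enum_coS()
    while True:  # dovetail: one step of each enumeration, as long as needed
        if next(gen_in) == x:
            return True
        if next(gen_out) == x:
            return False

# Toy instance: S = even naturals. A real RE set has no such tidy order,
# but the dovetailing argument is identical.
def evens():
    return (2 * n for n in count())

def odds():
    return (2 * n + 1 for n in count())

print(decide(10, evens, odds))  # True
print(decide(7, evens, odds))   # False
```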