Sabineblogging

February 4th, 2019

I’ve of course been following the recent public debate about whether to build a circular collider to succeed the LHC—notably including Sabine Hossenfelder’s New York Times column arguing that we shouldn’t.  (See also the responses by Jeremy Bernstein and Lisa Randall, and the discussion on Peter Woit’s blog, and Daniel Harlow’s Facebook thread, and this Vox piece by Kelsey Piper.)  Let me blog about this as a way of cracking my knuckles or tuning my violin, just getting back into blog-shape after a long hiatus for travel and family and the beginning of the semester.

Regardless of whether this opinion is widely shared among my colleagues, I like Sabine.  I’ve often found her blogging funny and insightful, and I wish more non-Lubos physicists would articulate their thoughts for the public the way she does, rather than just standing on the sidelines and criticizing the ones who do. I find it unfortunate that some of the replies to Sabine’s arguments dwelled on her competence and “standing” in physics (even if we set aside—as we should—Lubos’s misogynistic rants, whose predictability could be used to calibrate atomic clocks). It’s like this: if high-energy physics had reached a pathological state of building bigger and bigger colliders for no good reason, then we’d expect that it would take a semi-outsider to say so in public, so then it wouldn’t be a further surprise to find precisely such a person doing it.

Not for the first time, though, I find myself coming down on the opposite side as Sabine. Basically, if civilization could get its act together and find the money, I think it would be pretty awesome to build a new collider to push forward the energy frontier in our understanding of the universe.

Note that I’m not making the much stronger claim that this is the best possible use of $20 billion for science. Plausibly a thousand $20-million projects could be found that would advance our understanding of reality by more than a new collider would. But it’s also important to realize that that’s not the question at stake here. When, for example, the US Congress cancelled the Superconducting Supercollider midway through construction—partly, it’s believed, on the basis of opposition from eminent physicists in other subfields, who argued that they could do equally important science for much cheaper—none of the SSC budget, as in 0% of it, ever did end up redirected to those other subfields. In practice, then, the question of “whether a new collider is worth it” is probably best considered in absolute terms, rather than relative to other science projects.

What I found most puzzling, in Sabine’s writings on this subject, was the leap in logic from

  1. many theorists expected that superpartners, or other new particles besides the Higgs boson, had a good chance of being discovered at the LHC, based on statistical arguments about “natural” parameter values, and
  2. the basic soundness of naturalness arguments was always open to doubt, and indeed the LHC results to date offer zero support for them, and
  3. many of the same theorists now want an even bigger collider, and continue to expect new particles to be found, and haven’t sufficiently reckoned with their previous failed predictions, to …
  4. therefore we shouldn’t build the bigger collider.

How do we get from 1-3 to 4: is the idea that we should punish the errant theorists, by withholding an experiment that they want, in order to deter future wrong predictions? After step 3, it seems to me that Sabine could equally well have gone to: and therefore it’s all the more important that we do build a new collider, in order to establish all the more conclusively that there’s just an energy desert up there—and that I, Sabine, was right to emphasize that possibility, and those other theorists were wrong to downplay it!

Like, I gather that there are independently motivated scenarios where there would be only the Higgs at the LHC scale, and then new stuff at the next energy scale beyond it. And as an unqualified outsider who enjoys talking to friends in particle physics and binge-reading about it, I’d find it hard to assign the totality of those scenarios less than ~20% credence or more than ~80%—certainly if the actual experts don’t either.

And crucially, it’s not as if raising the collision energy is just one arbitrary direction in which to look for new fundamental physics, among a hundred a-priori equally promising directions. Basically, there’s raising the collision energy and then there’s everything else. By raising the energy, you’re not testing one specific idea for physics beyond the Standard Model, but a hundred or a thousand ideas in one swoop.

The situation reminds me a little of the quantum computing skeptics who say: scalable QC can never work, in practice and probably even in principle; the mainstream physics community only thinks it can work because of groupthink and hype; therefore, we shouldn’t waste more funds trying to make it work. With the sole, very interesting exception of Gil Kalai, none of the skeptics ever seem to draw what strikes me as an equally logical conclusion: whoa, let’s go full speed ahead with trying to build a scalable QC, because there’s an epochal revolution in physics to be had here—once the experimenters finally see that I was right and the mainstream was wrong, and they start to unravel the reasons why!

Of course, $20 billion is a significant chunk of change, by the standards of science even if not by the standards of random government wastages (like our recent $11 billion shutdown). And ultimately, decisions do need to be made about which experiments are most interesting to pursue with limited resources. And if a future circular collider were built, and if it indeed just found a desert, I think the balance would tilt pretty strongly toward Sabine’s position—that is, toward declining to build an even bigger and more expensive collider after that. If the Patriots drearily won every Superbowl 13-3, year after year after year, eventually no one would watch anymore and the Superbowl would get cancelled (well, maybe that will happen for other reasons…).

But it’s worth remembering that—correct me if I’m wrong—so far there have been no cases in the history of particle physics of massively expanding the energy frontier and finding absolutely nothing new there (i.e., nothing that at least conveyed multiple bits of information, as the Higgs mass did). And while my opinion should count for less than a neutrino mass, just thinking it over a-priori, I keep coming back to the question: before we close the energy frontier for good, shouldn’t there have been at least one unmitigated null result, rather than zero?

The Winding Road to Quantum Supremacy

January 15th, 2019

Greetings from QIP’2019 in Boulder, Colorado! Obvious highlights of the conference include Urmila Mahadev’s opening plenary talk on her verification protocol for quantum computation (which I blogged about here), and Avishay Tal’s upcoming plenary on his and Ran Raz’s oracle separation between BQP and PH (which I blogged about here). If you care, here are the slides for the talk I just gave, on the paper “Online Learning of Quantum States” by me, Xinyi Chen, Elad Hazan, Satyen Kale, and Ashwin Nayak. Feel free to ask in the comments about what else is going on.

I returned a few days ago from my whirlwind Australia tour, which included Melbourne and Sydney; a Persian wedding that happened to be held next to a pirate ship (the Steve Irwin, used to harass whalers and adorned with a huge Jolly Roger); meetings and lectures graciously arranged by friends at UTS; a quantum computing lab tour personally conducted by 2018 “Australian of the Year” Michelle Simmons; three meetups with readers of this blog (or more often, readers of the other Scott A’s blog who graciously settled for the discount Scott A); and an excursion to Grampians National Park to see wild kangaroos, wallabies, koalas, and emus.

But the thing that happened in Australia that provided the actual occasion for this post is this: I was interviewed by Adam Ford in Carlton Gardens in Melbourne, about quantum supremacy, AI risk, Integrated Information Theory, whether the universe is discrete or continuous, and to be honest I don’t remember what else. You can watch the first segment, the one about the prospects for quantum supremacy, here on YouTube. My only complaint is that Adam’s video camera somehow made me look like an out-of-shape slob who needs to hit the gym or something.

Update (Jan. 16): Adam has now posted a second video on YouTube, wherein I talk about my “Ghost in the Quantum Turing Machine” paper, my critique of Integrated Information Theory, and more.

And now Adam has posted yet a third segment, in which I talk about small, lighthearted things like existential threats to civilization and the prospects for superintelligent AI.

And a fourth, in which I talk about whether reality is discrete or continuous.

Related to the “free will / consciousness” segment of the interview: the biologist Jerry Coyne, whose blog “Why Evolution Is True” I’ve intermittently enjoyed over the years, yesterday announced my existence to his readers, with a post that mostly criticizes my views about free will and predictability, as I expressed them years ago in a clip that’s on YouTube (at the time, Coyne hadn’t seen GIQTM or my other writings on the subject). Coyne also took the opportunity to poke fun at this weird character he just came across whose “life is devoted to computing” and who even mistakes tips for change at airport smoothie stands. Some friends here at QIP had a good laugh over the fact that, for the world beyond theoretical computer science and quantum information, this is what 23 years of research, teaching, and writing apparently boil down to: an 8.5-minute video clip where I spouted about free will, and also my having been arrested once in a comic mix-up at Philadelphia airport. Anyway, since then I had a very pleasant email exchange with Coyne—someone with whom I find myself in agreement much more often than not, and who I’d love to have an extended conversation with sometime despite the odd way our interaction started.

Incompleteness ex machina

December 30th, 2018

I have a treat with which to impress your friends at New Year’s Eve parties tomorrow night: a rollicking essay graciously contributed by a reader named Sebastian Oberhoff, about a unified and simplified way to prove all of Gödel’s Incompleteness Theorems, as well as Rosser’s Theorem, directly in terms of computer programs. In particular, this improves over my treatments in Quantum Computing Since Democritus and my Rosser’s Theorem via Turing machines post. While there won’t be anything new here for the experts, I loved the style—indeed, it brings back wistful memories of how I used to write, before I accumulated too many imaginary (and non-imaginary) readers tut-tutting at crass jokes over my shoulder. May 2019 bring us all the time and the courage to express ourselves authentically, even in ways that might be sneered at as incomplete, inconsistent, or unsound.

Announcements

December 27th, 2018

I’m planning to be in Australia soon—in Melbourne January 4-10 for a friend’s wedding, then in Sydney January 10-11 to meet colleagues and give a talk. It will be my first trip down under for 12 years (and Dana’s first ever). If there’s interest, I might be able to do a Shtetl-Optimized meetup in Melbourne the evening of Friday the 4th (or the morning of Saturday the 5th), and/or another one in Sydney the evening of Thursday the 10th. Email me if you’d go, and then we’ll figure out details.

The National Quantum Initiative Act is now law. Seeing the photos of Trump signing it, I felt … well, whatever emotions you might imagine I felt.

Frank Verstraete asked me to announce that the University of Vienna is seeking a full professor in quantum algorithms; see here for details.

Why are amplitudes complex?

December 17th, 2018

[By prior agreement, this post will be cross-posted on Microsoft’s Q# blog, even though it has nothing to do with the Q# programming language.  It does, however, contain many examples that might be fun to implement in Q#!]

Why should Nature have been quantum-mechanical?  It’s totally unclear what would count as an answer to such a question, and also totally clear that people will never stop asking it.

Short of an ultimate answer, we can at least try to explain why, if you want this or that piece of quantum mechanics, then the rest of the structure is inevitable: why quantum mechanics is an “island in theoryspace,” as I put it in 2003.

In this post, I’d like to focus on a question that any “explanation” for QM at some point needs to address, in a non-question-begging way: why should amplitudes have been complex numbers?  When I was a grad student, it was his relentless focus on that question, and on others in its vicinity, that made me a lifelong fan of Chris Fuchs (see for example his samizdat), despite my philosophical differences with him.

It’s not that complex numbers are a bad choice for the foundation of the deepest known description of the physical universe—far from it!  (They’re a field, they’re algebraically closed, they’ve got a norm, how much more could you want?)  It’s just that they seem like a specific choice, and not the only possible one.  There are also the real numbers, for starters, and in the other direction, the quaternions.

Quantum mechanics over the reals or the quaternions still has constructive and destructive interference among amplitudes, and unitary transformations, and probabilities that are absolute squares of amplitudes.  Moreover, these variants turn out to lead to precisely the same power for quantum computers—namely, the class BQP—as “standard” quantum mechanics, the one over the complex numbers.  So none of those are relevant differences.

Indeed, having just finished teaching an undergrad Intro to Quantum Information course, I can attest that the complex nature of amplitudes is needed only rarely—shockingly rarely, one might say—in quantum computing and information.  Real amplitudes typically suffice.  Teleportation, superdense coding, the Bell inequality, quantum money, quantum key distribution, the Deutsch-Jozsa and Bernstein-Vazirani and Simon and Grover algorithms, quantum error-correction: all of those and more can be fully explained without using a single i that’s not a summation index.  (Shor’s factoring algorithm is an exception; it’s much more natural with complex amplitudes.  But as the previous paragraph implied, their use is removable even there.)

It’s true that, if you look at even the simplest “real” examples of quantum systems—or as a software engineer might put it, at the application layers built on top of the quantum OS—then complex numbers are everywhere, in a way that seems impossible to remove.  The Schrödinger equation, energy eigenstates, the position/momentum commutation relation, the state space of a spin-1/2 particle in 3-dimensional space: none of these make much sense without complex numbers (though it can be fun to try).

But from a sufficiently Olympian remove, it feels circular to use any of this as a “reason” for why quantum mechanics should’ve involved complex amplitudes in the first place.  It’s like, once your OS provides a certain core functionality (in this case, complex numbers), it’d be surprising if the application layer didn’t exploit that functionality to the hilt—especially if we’re talking about fundamental physics, where we’d like to imagine that nothing is wasted or superfluous (hence Rabi’s famous question about the muon: “who ordered that?”).

But why should the quantum OS have provided complex-number functionality at all?  Is it possible to answer that question purely in terms of the OS’s internal logic (i.e., abstract quantum information), making minimal reference to how the OS will eventually get used?  Maybe not—but if so, then that itself would seem worthwhile to know.

If we stick to abstract quantum information language, then the most “obvious, elementary” argument for why amplitudes should be complex numbers is one that I spelled out in Quantum Computing Since Democritus, as well as my Is quantum mechanics an island in theoryspace? paper.  Namely, it seems desirable to be able to implement a “fraction” of any unitary operation U: for example, some V such that $V^2=U$, or $V^3=U$.  With complex numbers, this is trivial: we can simply diagonalize U, or use the Hamiltonian picture (i.e., take $e^{-iH/2}$ where $U=e^{-iH}$), both of which ultimately depend on the complex numbers being algebraically closed.  Over the reals, by contrast, a 2×2 orthogonal matrix like $$ U = \left(\begin{array}[c]{cc}1 & 0\\0 & -1\end{array}\right)$$

has no 2×2 orthogonal square root, as follows immediately from its determinant being -1.  If we want a square root of U (or rather, of something that acts like U on a subspace) while sticking to real numbers only, then we need to add another dimension, like so: $$ \left(\begin{array}[c]{ccc}1 & 0 & 0\\0 & -1 & 0\\0 & 0&-1\end{array}\right)=\left(\begin{array}[c]{ccc}1 & 0 & 0\\0 & 0 & 1\\0 & -1 & 0\end{array}\right) ^{2} $$
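For the numerically inclined, here's a quick numpy sanity check of both claims; this is just an illustrative sketch, not anything from the papers mentioned above.

```python
import numpy as np

U = np.diag([1.0, -1.0])

# Over the complex numbers, a square root is immediate: halve the phases of the eigenvalues.
V = np.diag([1.0, 1j])
print(np.allclose(V @ V, U))     # True

# Over the reals there is no 2x2 orthogonal square root: det(V)^2 = det(U) = -1 is impossible.
# But padding with one extra dimension works, exactly as in the displayed equation:
U3 = np.diag([1.0, -1.0, -1.0])
V3 = np.array([[1.0,  0.0, 0.0],
               [0.0,  0.0, 1.0],
               [0.0, -1.0, 0.0]])
print(np.allclose(V3 @ V3, U3))  # True
```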

This is directly related to the fact that there’s no way for a Flatlander to “reflect herself” (i.e., switch her left and right sides while leaving everything else unchanged) by any continuous motion, unless she can lift off the plane and rotate herself through the third dimension.  Similarly, for us to reflect ourselves would require rotating through a fourth dimension.

One could reasonably ask: is that it?  Aren’t there any “deeper” reasons in quantum information for why amplitudes should be complex numbers?

Indeed, there are certain phenomena in quantum information that, slightly mysteriously, work out more elegantly if amplitudes are complex than if they’re real.  (By “mysteriously,” I mean not that these phenomena can’t be 100% verified by explicit calculations, but simply that I don’t know of any deep principle by which the results of those calculations could’ve been predicted in advance.)

One famous example of such a phenomenon is due to Bill Wootters: if you take a uniformly random pure state in d dimensions, and then you measure it in an orthonormal basis, what will the probability distribution $(p_1,\ldots,p_d)$ over the d possible measurement outcomes look like?  The answer, amazingly, is that you’ll get a uniformly random probability distribution: that is, a uniformly random point on the simplex defined by $p_i \geq 0$ and $p_1+\cdots+p_d=1$.  This fact, which I’ve used in several papers, is closely related to Archimedes’ Hat-Box Theorem, beloved by friend-of-the-blog Greg Kuperberg.  But here’s the kicker: it only works if amplitudes are complex numbers.  If amplitudes are real, then the resulting distribution over distributions will be too bunched up near the corners of the probability simplex; if they’re quaternions, it will be too bunched up near the middle.
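Here's a minimal Monte Carlo sketch of the complex/real contrast (my own illustration, not from Wootters' paper), using the fact that Haar-random pure states can be sampled by normalizing Gaussian vectors:

```python
import numpy as np

rng = np.random.default_rng(0)
d, trials = 3, 100_000

# Haar-random pure states in C^d: normalize complex Gaussian vectors.
z = rng.normal(size=(trials, d)) + 1j * rng.normal(size=(trials, d))
psi = z / np.linalg.norm(z, axis=1, keepdims=True)
p_complex = np.abs(psi) ** 2            # outcome distributions from complex amplitudes

# Same construction with real amplitudes.
x = rng.normal(size=(trials, d))
phi = x / np.linalg.norm(x, axis=1, keepdims=True)
p_real = phi ** 2

# Uniformly random points on the simplex (flat Dirichlet), for comparison.
p_uniform = rng.dirichlet(np.ones(d), size=trials)

# One way to see the difference: how often does the largest outcome probability exceed 0.9?
# The complex case matches the uniform distribution; the real case is bunched near the corners.
for name, p in [("complex", p_complex), ("real", p_real), ("uniform", p_uniform)]:
    print(name, np.mean(p.max(axis=1) > 0.9))
```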

There’s an even more famous example of such a Goldilocks coincidence—one that’s been elevated, over the past two decades, to exalted titles like “the Axiom of Local Tomography.”  Namely: suppose we have an unknown finite-dimensional mixed state ρ, shared by two players Alice and Bob.  For example, ρ might be an EPR pair, or a correlated classical bit, or simply two qubits both in the state |0⟩.  We imagine that Alice and Bob share many identical copies of ρ, so that they can learn more and more about it by measuring this copy in this basis, that copy in that basis, and so on.

We then ask: can ρ be fully determined from the joint statistics of product measurements—that is, measurements that Alice and Bob can apply separately and locally to their respective subsystems, with no communication between them needed?  A good example here would be the set of measurements that arise in a Bell experiment—measurements that, despite being local, certify that Alice and Bob must share an entangled state.

If we asked the analogous question for classical probability distributions, the answer is clearly “yes.”  That is, once you’ve specified the individual marginals, and you’ve also specified all the possible correlations among the players, you’ve fixed your distribution; there’s nothing further to specify.

For quantum mixed states, the answer again turns out to be yes, but only because amplitudes are complex numbers!  In quantum mechanics over the reals, you could have a 2-qubit state like $$ \rho=\frac{1}{4}\left(\begin{array}[c]{cccc}1 & 0 & 0 & -1\\0 & 1 & 1 & 0\\0 & 1 & 1 & 0\\-1& 0 & 0 & 1\end{array}\right) ,$$

which clearly isn’t the maximally mixed state, yet which is indistinguishable from the maximally mixed state by any local measurement that can be specified using real numbers only.  (Proof: exercise!)
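If you'd rather not do the exercise by hand, here's a small numpy check (a sketch of mine) that ρ and the maximally mixed state give identical expectation values on every product of real symmetric local observables, even though the two states are different:

```python
import numpy as np

rho = 0.25 * np.array([[ 1, 0, 0, -1],
                       [ 0, 1, 1,  0],
                       [ 0, 1, 1,  0],
                       [-1, 0, 0,  1]], dtype=float)
max_mixed = np.eye(4) / 4

rng = np.random.default_rng(1)
worst = 0.0
for _ in range(1000):
    # Random local observables allowed in real QM: symmetric real matrices on each qubit.
    A = rng.normal(size=(2, 2)); A = A + A.T
    B = rng.normal(size=(2, 2)); B = B + B.T
    M = np.kron(A, B)
    worst = max(worst, abs(np.trace(rho @ M) - np.trace(max_mixed @ M)))

print("largest deviation over random real product observables:", worst)  # ~1e-16
print("but rho equals the maximally mixed state:", np.allclose(rho, max_mixed))  # False
```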

In quantum mechanics over the quaternions, something even “worse” happens: namely, the tensor product of two Hermitian matrices need not be Hermitian.  Alice’s measurement results might be described by the 2×2 quaternionic density matrix $$ \rho_{A}=\frac{1}{2}\left(\begin{array}[c]{cc}1 & -i\\i & 1\end{array}\right), $$

and Bob’s results might be described by the 2×2 quaternionic density matrix $$ \rho_{B}=\frac{1}{2}\left(\begin{array}[c]{cc}1 & -j\\j & 1\end{array}\right), $$

and yet there might not be (and in this case, isn’t) any 4×4 quaternionic density matrix corresponding to $\rho_A \otimes \rho_B$, which would explain both results separately.

What’s going on here?  Why do the local measurement statistics underdetermine the global quantum state with real amplitudes, and overdetermine it with quaternionic amplitudes, being in one-to-one correspondence with it only when amplitudes are complex?

We can get some insight by looking at the number of independent real parameters needed to specify a d-dimensional Hermitian matrix.  Over the complex numbers, the number is exactly $d^2$: we need 1 parameter for each of the d diagonal entries, and 2 (a real part and an imaginary part) for each of the d(d-1)/2 upper off-diagonal entries (the lower off-diagonal entries being determined by the upper ones).  Over the real numbers, by contrast, “Hermitian matrices” are just real symmetric matrices, so the number of independent real parameters is only d(d+1)/2.  And over the quaternions, the number is $d + 4 \cdot \frac{d(d-1)}{2} = 2d^2 - d$.

Now, it turns out that the Goldilocks phenomenon that we saw above—with local measurement statistics determining a unique global quantum state when and only when amplitudes are complex numbers—ultimately boils down to the simple fact that $$ (d_A d_B)^2 = d_A^2 d_B^2, $$

but $$\frac{d_A d_B (d_A d_B + 1)}{2} > \frac{d_A (d_A + 1)}{2} \cdot \frac{d_B (d_B + 1)}{2},$$

and conversely $$ d_A d_B (2 d_A d_B - 1) < d_A (2 d_A - 1) \cdot d_B (2 d_B - 1).$$

In other words, only with complex numbers does the number of real parameters needed to specify a “global” Hermitian operator, exactly match the product of the number of parameters needed to specify an operator on Alice’s subsystem, and the number of parameters needed to specify an operator on Bob’s.  With real numbers it overcounts, and with quaternions it undercounts.
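For concreteness, here's the counting worked out for two qubits, i.e. $d_A = d_B = 2$, as a tiny Python sketch using the parameter counts above:

```python
# Real parameters needed to specify a d-dimensional "Hermitian" matrix in each theory,
# compared for a composite system with d_A = d_B = 2.
def complex_params(d):     return d * d                 # d^2
def real_params(d):        return d * (d + 1) // 2      # d(d+1)/2
def quaternion_params(d):  return d * (2 * d - 1)       # 2d^2 - d

dA = dB = 2
for name, f in [("complex", complex_params),
                ("real", real_params),
                ("quaternion", quaternion_params)]:
    print(f"{name:10s} joint: {f(dA * dB):3d}   product of locals: {f(dA) * f(dB):3d}")
# complex    joint:  16   product of locals:  16   (exact match)
# real       joint:  10   product of locals:   9   (local statistics underdetermine the state)
# quaternion joint:  28   product of locals:  36   (local statistics overdetermine it)
```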

A major research goal in quantum foundations, since at least the early 2000s, has been to “derive” the formalism of QM purely from “intuitive-sounding, information-theoretic” postulates—analogous to how, in 1905, some guy whose name I forget derived the otherwise strange-looking Lorentz transformations purely from the assumption that the laws of physics (including a fixed, finite value for the speed of light) take the same form in every inertial frame.  There have been some nontrivial successes of this program: most notably, the “axiomatic derivations” of QM due to Lucien Hardy and (more recently) Chiribella et al.  Starting from axioms that sound suitably general and nontechnical (if sometimes unmotivated and weird), these derivations perform the impressive magic trick of deriving the full mathematical structure of QM: complex amplitudes, unitary transformations, tensor products, the Born rule, everything.

However, in every such derivation that I know of, some axiom needs to get introduced to capture “local tomography”: i.e., the “principle” that composite systems must be uniquely determined by the statistics of local measurements.  And while this principle might sound vague and unobjectionable, to those in the business, it’s obvious what it’s going to be used for the second it’s introduced.  Namely, it’s going to be used to rule out quantum mechanics over the real numbers, which would otherwise be a model for the axioms, and thus to “explain” why amplitudes have to be complex.

I confess that I was always dissatisfied with this.  For I kept asking myself: would I have ever formulated the “Principle of Local Tomography” in the first place—or if someone else had proposed it, would I have ever accepted it as intuitive or natural—if I didn’t already know that QM over the complex numbers just happens to satisfy it?  And I could never honestly answer “yes.”  It always felt to me like a textbook example of drawing the target around where the arrow landed—i.e., of handpicking your axioms so that they yield a predetermined conclusion, which is then no more “explained” than it was at the beginning.

Two months ago, something changed for me: namely, I smacked into the “Principle of Local Tomography,” and its reliance on complex numbers, in my own research, when I hadn’t in any sense set out to look for it.  This still doesn’t convince me that the principle is any sort of a-priori necessity.  But it at least convinces me that it’s, you know, the sort of thing you can smack into when you’re not looking for it.

The aforementioned smacking occurred while I was writing up a small part of a huge paper with Guy Rothblum, about a new connection between so-called “gentle measurements” of quantum states (that is, measurements that don’t damage the states much), and the subfield of classical CS called differential privacy.  That connection is a story in itself; let me know if you’d like me to blog about it separately.  Our paper should be on the arXiv any day now; in the meantime, here are some PowerPoint slides.

Anyway, for the paper with Guy, it was of interest to know the following: suppose we have a two-outcome measurement E (let’s say, on n qubits), and suppose it accepts every product state with the same probability p.  Must E then accept every entangled state with probability p as well?  Or, a closely-related question: suppose we know E’s acceptance probabilities on every product state.  Is that enough to determine its acceptance probabilities on all n-qubit states?

I’m embarrassed to admit that I dithered around with these questions, finding complicated proofs for special cases, before I finally stumbled on the one-paragraph, obvious-in-retrospect “Proof from the Book” that slays them in complete generality.

Here it is: if E accepts every product state with probability p, then clearly it accepts every separable mixed state (i.e., every convex combination of product states) with the same probability p.  Now, a well-known result of Braunstein et al., from 1998, states that (surprisingly enough) the separable mixed states have nonzero density within the set of all mixed states, in any given finite dimension.  Also, the probability that E accepts ρ can be written as f(ρ)=Tr(Eρ), which is linear in the entries of ρ.  OK, but a linear function that’s determined on a subset of nonzero density is determined everywhere.  And in particular, if f is constant on that subset then it’s constant everywhere, QED.

But what does any of this have to do with why amplitudes are complex numbers?  Well, it turns out that the 1998 Braunstein et al. result, which was the linchpin of the above argument, only works in complex QM, not in real QM.  We can see its failure in real QM by simply counting parameters, similarly to what we did before.  An n-qubit density matrix requires $4^n$ real parameters to specify (OK, $4^n-1$, if we demand that the trace is 1).  Even if we restrict to n-qubit density matrices with real entries only, we still need $2^n(2^n+1)/2$ parameters.  By contrast, it’s not hard to show that an n-qubit real separable density matrix can be specified using only $3^n$ real parameters—and indeed, that any such density matrix lies in a $3^n$-dimensional subspace of the full $2^n(2^n+1)/2$-dimensional space of $2^n \times 2^n$ symmetric matrices.  (This is simply the subspace spanned by all possible tensor products of n Pauli I, X, and Z matrices—excluding the Y matrix, which is the one that involves imaginary numbers.)
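Here's a quick numerical check of that dimension count for n = 2 (again just an illustrative sketch):

```python
import numpy as np
from itertools import product

I2 = np.eye(2)
X = np.array([[0., 1.], [1., 0.]])
Z = np.array([[1., 0.], [0., -1.]])

n = 2
paulis = [I2, X, Z]          # omit Y, whose entries are imaginary
vectors = []
for combo in product(paulis, repeat=n):
    M = np.array([[1.0]])
    for P in combo:
        M = np.kron(M, P)    # build the n-fold tensor product
    vectors.append(M.flatten())

print("dimension spanned by {I,X,Z} tensor products:", np.linalg.matrix_rank(np.array(vectors)))  # 3**n = 9
print("dimension of real symmetric 2^n x 2^n matrices:", 2**n * (2**n + 1) // 2)                  # 10
```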

But it’s not only the Braunstein et al. result that fails in real QM: the fact that I wanted for my paper with Guy fails as well.  As a counterexample, consider the 2-qubit measurement that accepts the state ρ with probability Tr(Eρ), where $$ E=\frac{1}{2}\left(\begin{array}[c]{cccc}1 & 0 & 0 & -1\\0 & 1 & 1 & 0\\0 & 1 & 1 & 0\\-1 & 0 & 0 & 1\end{array}\right).$$

I invite you to check that this measurement, which we specified using a real matrix, accepts every product state (a|0⟩+b|1⟩)(c|0⟩+d|1⟩), where a,b,c,d are real, with the same probability, namely 1/2—just like the “measurement” that simply returns a coin flip without even looking at the state at all.  And yet the measurement can clearly be nontrivial on entangled states: for example, it always rejects $$\frac{\left|00\right\rangle+\left|11\right\rangle}{\sqrt{2}},$$ and it always accepts $$ \frac{\left|00\right\rangle-\left|11\right\rangle}{\sqrt{2}}.$$
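If you'd like to verify this numerically rather than by hand, here's a short numpy sketch (my illustration, with randomly sampled real product states standing in for "every product state"):

```python
import numpy as np

E = 0.5 * np.array([[ 1, 0, 0, -1],
                    [ 0, 1, 1,  0],
                    [ 0, 1, 1,  0],
                    [-1, 0, 0,  1]], dtype=float)

rng = np.random.default_rng(2)
# Acceptance probability on random REAL product states is always 1/2...
for _ in range(5):
    u = rng.normal(size=2); u /= np.linalg.norm(u)
    v = rng.normal(size=2); v /= np.linalg.norm(v)
    psi = np.kron(u, v)
    print(psi @ E @ psi)                 # 0.5 every time

# ...yet E acts nontrivially on entangled states:
bell_plus  = np.array([1, 0, 0,  1]) / np.sqrt(2)   # (|00> + |11>)/sqrt(2)
bell_minus = np.array([1, 0, 0, -1]) / np.sqrt(2)   # (|00> - |11>)/sqrt(2)
print(bell_plus  @ E @ bell_plus)        # 0.0  (always rejected)
print(bell_minus @ E @ bell_minus)       # 1.0  (always accepted)
```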

Is it a coincidence that we used exactly the same 4×4 matrix (up to scaling) to produce a counterexample to the real-QM version of Local Tomography, and also to the real-QM version of the property I wanted for the paper with Guy?  Is anything ever a coincidence in this sort of discussion?

I claim that, looked at the right way, Local Tomography and the property I wanted are the same property, their truth in complex QM is the same truth, and their falsehood in real QM is the same falsehood.  Why?  Simply because Tr(Eρ), the probability that the measurement E accepts the mixed state ρ, is a function of two Hermitian matrices E and ρ (both of which can be either “product” or “entangled”), and—crucially—is symmetric under the interchange of E and ρ.

Now it’s time for another confession.  We’ve identified an elegant property of quantum mechanics that’s true but only because amplitudes are complex numbers: namely, if you know the probability that your quantum circuit accepts every product state, then you also know the probability that it accepts an arbitrary state.  Yet, despite its elegance, this property turns out to be nearly useless for “real-world applications” in quantum information and computing.  The reason for the uselessness is that, for the property to kick in, you really do need to know the probabilities on product states almost exactly—meaning (say) to 1/exp(n) accuracy for an n-qubit state.

Once again a simple example illustrates the point.  Suppose n is even, and suppose our measurement simply projects the n-qubit state onto a tensor product of n/2 Bell pairs.  Clearly, this measurement accepts every n-qubit product state with exponentially small probability, even as it accepts the entangled state 
$$\left(\frac{\left|00\right\rangle+\left|11\right\rangle}{\sqrt{2}}\right)^{\otimes n/2}$$

with probability 1.  But this implies that noticing the nontriviality on entangled states would require knowing the acceptance probabilities on product states to exponential accuracy.
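Here's a small numerical illustration for n = 4, i.e. two Bell pairs, where the acceptance probability on product states is at most $2^{-n/2} = 1/4$; it samples random product states rather than optimizing over them, so take it as a sketch:

```python
import numpy as np

rng = np.random.default_rng(3)
bell = np.array([1, 0, 0, 1]) / np.sqrt(2)
n = 4                                    # two Bell pairs
target = np.kron(bell, bell)             # the state the measurement projects onto

def random_product_state(n, rng):
    psi = np.array([1.0])
    for _ in range(n):
        u = rng.normal(size=2) + 1j * rng.normal(size=2)
        psi = np.kron(psi, u / np.linalg.norm(u))
    return psi

accept = [abs(np.vdot(target, random_product_state(n, rng)))**2 for _ in range(10_000)]
print("max acceptance prob over sampled product states:", max(accept))  # <= 2**(-n/2) = 0.25
print("acceptance prob on the Bell-pair state itself:",
      abs(np.vdot(target, target))**2)                                  # 1.0
```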

In a sense, then, I come back full circle to my original puzzlement: why should Local Tomography, or (alternatively) the-determination-of-a-circuit’s-behavior-on-arbitrary-states-from-its-behavior-on-product-states, have been important principles for Nature’s laws to satisfy?  Especially given that, in practice, the exponential accuracy required makes it difficult or impossible to exploit these principles anyway?  How could we have known a-priori that these principles would be important—if indeed they are important, and are not just mathematical spandrels?

But, while I remain less than 100% satisfied about “why the complex numbers? why not just the reals?,” there’s one conclusion that my recent circling-back to these questions has made me fully confident about.  Namely: quantum mechanics over the quaternions is a flaming garbage fire, which would’ve been rejected at an extremely early stage of God and the angels’ deliberations about how to construct our universe.

In the literature, when the question of “why not quaternionic amplitudes?” is discussed at all, you’ll typically read things about how the parameter-counting doesn’t quite work out (just like it doesn’t for real QM), or how the tensor product of quaternionic Hermitian matrices need not be Hermitian.  In this paper by McKague, you’ll read that the CHSH game is winnable with probability 1 in quaternionic QM, while in this paper by Fernandez and Schneeberger, you’ll read that the non-commutativity of the quaternions introduces an order-dependence even for spacelike-separated operations.

But none of that does justice to the enormity of the problem.  To put it bluntly: unless something clever is done to fix it, quaternionic QM allows superluminal signaling.  This is easy to demonstrate: suppose Alice holds a qubit in the state |1⟩, while Bob holds a qubit in the state |+⟩ (yes, this will work even for unentangled states!).  Also, let $$U=\left(\begin{array}[c]{cc}1 & 0\\0 & j\end{array}\right) ,~~~V=\left(\begin{array}[c]{cc}1 & 0\\0& i\end{array}\right).$$

We can calculate that, if Alice applies U to her qubit and then Bob applies V to his qubit, Bob will be left with the state $$ \frac{j \left|0\right\rangle + k \left|1\right\rangle}{\sqrt{2}}.$$

By contrast, if Alice decided to apply U only after Bob applied V, Bob would be left with the state $$ \frac{j \left|0\right\rangle - k \left|1\right\rangle}{\sqrt{2}}.$$

But Bob can distinguish these two states with certainty, for example by applying the unitary $$ \frac{1}{\sqrt{2}}\left(\begin{array}[c]{cc}j & k\\k & j\end{array}\right). $$

Therefore Alice communicated a bit to Bob.
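For anyone who wants the arithmetic spelled out, here's a minimal Python sketch of the calculation. It hard-codes quaternion multiplication, and it assumes the "naive" convention that a diagonal gate acts by left-multiplying each affected amplitude by its quaternionic entry, which is exactly where the order-dependence comes from:

```python
import numpy as np

# Minimal quaternion arithmetic: (w, x, y, z) <-> w + x*i + y*j + z*k (Hamilton product).
def qmul(a, b):
    w1, x1, y1, z1 = a
    w2, x2, y2, z2 = b
    return np.array([
        w1*w2 - x1*x2 - y1*y2 - z1*z2,
        w1*x2 + x1*w2 + y1*z2 - z1*y2,
        w1*y2 - x1*z2 + y1*w2 + z1*x2,
        w1*z2 + x1*y2 - y1*x2 + z1*w2,
    ])

ONE = np.array([1.0, 0, 0, 0]); I = np.array([0.0, 1, 0, 0])
J = np.array([0.0, 0, 1, 0]);   K = np.array([0.0, 0, 0, 1])

# Two-qubit quaternionic "state vector": amplitudes for |00>, |01>, |10>, |11>.
# Assumed naive convention: a gate left-multiplies each affected amplitude.
def apply_alice_U(psi):   # U = diag(1, j) on the first qubit
    return [psi[0], psi[1], qmul(J, psi[2]), qmul(J, psi[3])]

def apply_bob_V(psi):     # V = diag(1, i) on the second qubit
    return [psi[0], qmul(I, psi[1]), psi[2], qmul(I, psi[3])]

s = 1 / np.sqrt(2)
psi0 = [0*ONE, 0*ONE, s*ONE, s*ONE]       # |1> (x) (|0>+|1>)/sqrt(2)

uv = apply_bob_V(apply_alice_U(psi0))     # Alice acts first, then Bob
vu = apply_alice_U(apply_bob_V(psi0))     # Bob acts first, then Alice

print("U then V, Bob's amplitudes:", uv[2], uv[3])  # ~ (j, k)/sqrt(2)
print("V then U, Bob's amplitudes:", vu[2], vu[3])  # ~ (j, -k)/sqrt(2)
# Bob's qubit differs in the two cases, and the unitary displayed above distinguishes
# the two states with certainty, so the ordering of spacelike-separated gates signals.
```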

I’m aware that there’s a whole literature on quaternionic QM, including for example a book by Adler.  Would anyone who knows that literature be kind enough to enlighten us on how it proposes to escape the signaling problem?  Regardless of the answer, though, it seems worth knowing that the “naïve” version of quaternionic QM—i.e., the version that gets invoked in quantum information discussions like the ones I mentioned above—is just immediately blasted to smithereens by the signaling problem, without the need for any subtle considerations like the ones that differentiate real from complex QM.

Update (Dec. 20): In response to this post, Stephen Adler was kind enough to email me with further details about his quaternionic QM proposal, and to allow me to share them here. Briefly, Adler completely agrees that quaternionic QM inevitably leads to superluminal signaling—but in his proposal, the surprising and nontrivial part is that quaternionic QM would reduce to standard, complex QM at large distances. In particular, the strength of a superluminal signal would fall off exponentially with distance, quickly becoming negligible beyond the Planck or grand unification scales. Despite this, Adler says that he eventually abandoned his proposal for quaternionic QM, since he was unable to make specific particle physics ideas work out (but the quaternionic QM proposal then influenced his later work).

Unrelated Update (Dec. 18): Probably many of you have already seen it, and/or already know what it covers, but the NYT profile of Donald Knuth (entitled “The Yoda of Silicon Valley”) is enjoyable and nicely written.

The NP genie

December 11th, 2018

Hi from the Q2B conference!

Every nerd has surely considered the scenario where an all-knowing genie—or an enlightened guru, or a superintelligent AI, or God—appears and offers to answer any question of your choice.  (Possibly subject to restrictions on the length or complexity of the question, to prevent glomming together every imaginable question.)  What do you ask?

(Standard joke: “What question should I ask, oh wise master, and what is its answer?”  “The question you should ask me is the one you just asked, and its answer is the one I am giving.”)

The other day, it occurred to me that theoretical computer science offers a systematic way to generate interesting variations on the genie scenario, which have been contemplated less—variations where the genie is no longer omniscient, but “merely” more scient than any entity that humankind has ever seen.  One simple example, which I gather is often discussed in the AI-risk and rationality communities, is an oracle for the halting problem: what computer program can you write, such that knowing whether it halts would provide the most useful information to civilization?  Can you solve global warming with such an oracle?  Cure cancer?

But there are many other examples.  Here’s one: suppose what pops out of your lamp is a genie for NP questions.  Here I don’t mean NP in the technical sense (that would just be a pared-down version of the halting genie discussed above), but in the human sense.  The genie can only answer questions by pointing you to ordinary evidence that, once you know where to find it, makes the answer to the question clear to every competent person who examines the evidence, with no further need to trust the genie.  Or, of course, the genie could fail to provide such evidence, which itself provides the valuable information that there’s no such evidence out there.

More-or-less equivalently (because of binary search), the genie could do what my parents used to do when my brother and I searched the house for Hanukkah presents, and give us “hotter” or “colder” hints as we searched for the evidence ourselves.

To make things concrete, let’s assume that the NP genie will only provide answers of 1000 characters or fewer, in plain English text with no fancy encodings.  Here are the candidates for NP questions that I came up with after about 20 seconds of contemplation:

  • Which pieces of physics beyond the Standard Model and general relativity can be experimentally confirmed with the technology of 2018? What are the experiments we need to do?
  • What’s the current location of the Ark of the Covenant, or its remains, if any still exist?  (Similar: where can we dig to find physical records, if any exist, pertaining to the Exodus from Egypt, or to Jesus of Nazareth?)
  • What’s a sketch of a resolution of P vs. NP, from which experts would stand a good chance of filling in the details?  (Similar for any other famous unsolved math problem.)
  • Where, if anywhere, can we point radio telescopes to get irrefutable evidence for the existence of extraterrestrial life?
  • What happened to Malaysia Flight 370, and where are the remains by which it could be verified?  (Similar for Amelia Earhart.)
  • Where, if anywhere, can we find intact DNA of non-avian dinosaurs?

Which NP questions would you ask the genie?  And what other complexity-theoretic genies would be interesting to consider?  (I thought briefly about a ⊕P genie, but I’m guessing that the yearning to know whether the number of sand grains in the Sahara is even or odd is limited.)


Update: I just read Lenny Susskind’s Y Combinator interview, and found it delightful—pure Lenny, and covering tons of ground that should interest anyone who reads this blog.

Airport idiocy

November 28th, 2018

On Sunday, I returned to Austin with Dana and the kids from Thanksgiving in Pennsylvania.  The good news is that I didn’t get arrested this time, didn’t mistake any tips for change, and didn’t even miss the flight!  But I did experience two airports that changed decisively for the worse.

In Newark Terminal C—i.e., one of the most important terminals of one of the most important airports in the world—there’s now a gigantic wing without a single restaurant or concession stand that, quickly and for a sane price, serves the sort of food that a child (say) might plausibly want to eat.  No fast food, not even an Asian place with rice and teriyaki to go.  Just one upscale eatery after the next, with complicated artisanal foods at brain-exploding prices, and—crucially—“servers” who won’t even acknowledge or make eye contact with the customers, because you have to do everything through a digital ordering system that gives you no idea how long the food might take to be ready, and whether your flight is going to board first.  The experience was like waking up in some sci-fi dystopia, where all the people have been removed from a familiar environment and replaced with glassy-eyed cyborgs.  And had we not thought to pack a few snacks with us, our kids would’ve starved.

Based on this and other recent experiences, I propose the following principle: if a customer’s digitally-mediated order to your company is eventually going to need to get processed by a human being anyhow—a fallible human who could screw things up—and if you’re less competent at designing user interfaces than Amazon (which means: anyone other than Amazon), then you must make it easy for the customer to talk to one of the humans behind the curtain.  Besides making the customer happy, such a policy is good business, since when you do screw things up due to miscommunications caused by poor user interfaces—and you will—it will be on you to fix things anyway, which will eat into your profit margin.  To take another example, besides Newark Terminal C, all these comments apply with 3000% force to the delivery service DoorDash.

Returning to airports, though: whichever geniuses ruined Terminal C at Newark are amateurs compared to those in my adopted home city of Austin.  Austin-Bergstrom International Airport (ABIA) chose Thanksgiving break—i.e., the busiest travel time of the year—to roll out a universally despised redesign where you now need to journey for an extra 5-10 minutes (or 15 with screaming kids in tow), up and down elevators and across three parking lots, to reach the place where taxis and Ubers are.  The previous system was that you simply walked out of the terminal, crossed one street, and the line of taxis was there.

Supposedly this is to “reduce congestion” … except that, compared to other airports, ABIA never had any significant congestion caused by taxis.  I’d typically be the only person walking to them at a given time, or I’d join a line of just 3 or 4 people.  Nor does this do anything for the environment, since the city of Austin has no magical alternative, no subway or monorail to whisk you from the airport to downtown.  Just as many people will need a taxi or Uber as before; the only difference is that they’ll need to go ten times further out of their way than they would at a ten times busier airport.  For new visitors, this means their first experience of Austin will be one of confusion and anger; for Austin residents who fly a few times per month, it means that days or weeks have been erased from their lives.  From the conversations I’ve had so far, it appears that every single passenger of ABIA, and every single taxi and Uber driver, is livid about the change.  With one boneheaded decision, ABIA singlehandedly made Austin a less attractive place to live and work.

Postscript I.  But if you’re a prospective grad student, postdoc, or faculty member, you should still come to UT!  The death of reason, and the triumph of the blank-faced bureaucrats, is a worldwide problem, not something in any way unique to Austin.

Postscript II.  No, I don’t harbor any illusions that posts like this, or anything else I can realistically say or do, will change anything for the better, at my local airport let alone in the wider world.  Indeed, I sometimes wonder whether, for the bureaucrats, the point of ruining facilities and services that thousands rely on is precisely to grind down people’s sense of autonomy, to make them realize the futility of argument and protest.  Even so, if someone responsible for the doofus decisions in question happened to come across this post, and if they felt even the tiniest twinge of fear or guilt, felt like their victory over common sense wouldn’t be quite as easy or painless as they’d hoped—well, that would be reason enough for the post.

Teaching quantum in junior high: special Thanksgiving guest post by Terry Rudolph

November 22nd, 2018

Happy Thanksgiving!

People have sometimes asked me: “how do you do it?  how do you do your research, write papers, teach classes, mentor grad students, build up the quantum center at UT, travel and give talks every week or two, serve on program committees, raise two rambunctious young kids, and also blog and also participate in the comments and also get depressed about people saying mean things on social media?”  The answer is that increasingly I don’t.  Something has to give, and this semester, alas, that something has often been blogging.

And that’s why, today, I’m delighted to have a special guest post by my good friend Terry Rudolph.  Terry, who happens to be Erwin Schrödinger’s grandson, has done lots of fascinating work over the years in quantum computing and the foundations of quantum mechanics, and previously came up on this blog in the context of the PBR (Pusey-Barrett-Rudolph) Theorem.  Today, he’s a cofounder and chief architect at PsiQuantum, a startup in Palo Alto that’s trying to build silicon-photonic quantum computers.

Terry’s guest post is about the prospects for teaching quantum theory at the junior high school level—something he thought about a lot in the context of writing his interesting recent book Q is for Quantum.  I should stress that the opinions in this post are Terry’s, and don’t necessarily reflect the official editorial positions of Shtetl-Optimized.  Personally, I have taught the basics of quantum information to sharp junior high and high school students, so I certainly know that it’s possible.  (By undergrad, it’s not only possible, but maybe should become standard for both physics and CS majors.)  But I would also say that, given the current state of junior high and high school education in the US, it would be a huge step up if most students graduated fully understanding what’s a probability, what’s a classical bit, what’s a complex number, and any of dozens of other topics that feed into quantum information—so why not start by teaching the simpler stuff well?  And also, if students don’t learn the rules of classical probability first, then how will they be properly shocked when they come to quantum? 🙂

But without further ado, here’s Terry—who’s also graciously agreed to stick around and answer some comments.


Can we/should we teach Quantum Theory in Junior High?

by Terry Rudolph

Should we?

Reasons which suggest the answer is “yes” include:

Economic: We are apparently facing a labor-market shortage of quantum engineers.  We should not, however, need the recent hype around quantum computing to make the economic case – the frontier of many disparate regions of the modern science and technology landscape is quantum.  Surely if students do decide to drop out of school at 16 they should at least be equipped to get an entry-level job as a quantum physicist?

Educational: If young peoples’ first exposures to science are counterintuitive and “cutting edge,” it could help excite them into STEM.  The strong modern quantum information theoretic connections between quantum physics, computer science and math can help all three subjects constructively generate common interest.

Pseudo-Philosophical: Perhaps our issues with understanding/accepting quantum theory are because we come to it late and have lost the mental plasticity for a “quantum reset” of our brain when we eventually require it late in an undergraduate degree.  It may be easier to achieve fluency in the “language of quantum” with early exposure.

Can we?

There are two distinct aspects to this question: Firstly, is it possible at the level of “fitting it in” – training teachers, adjusting curricula and so on?  Secondly, can a nontrivial, worthwhile fraction of quantum theory even be taught at all to pre-calculus students?

With regards to the first question, as the child of two schoolteachers I am very aware that an academic advocating for such disruption will not be viewed kindly by all.  As I don’t have relevant experience to say anything useful about this aspect, I have to leave it for others to consider.

Let me focus for the remainder of this post on the second aspect, namely whether it is even possible to appropriately simplify the content of the theory.  This month it is exactly 20 years since I lectured the first of many varied quantum courses I have taught at multiple universities. For most of that period I would have said it simply wasn’t possible to teach any but the most precocious of high school students nontrivial technical content of quantum theory – despite some brave attempts like Feynman’s use of arrows in QED: The Strange Theory of Light and Matter (a technique that cannot easily get at the mysteries of two-particle quantum theory, which is where the fun really starts).  I now believe, however, that it is actually possible.

A pedagogical method covering nontrivial quantum theory using only basic arithmetic

My experience talking about quantum theory to 12-15 year olds has only been in the idealized setting of spending a few hours with them at science fairs, camps and similar.  In fact it was on the way to a math camp for very young students, desperately trying to plan something non-trivial to engage them with, that I came up with a pedagogical method which I (and a few colleagues) have found does work.

I eventually wrote the method into a short book Q is for Quantum, but if you don’t want to purchase the book then here is a pdf of Part I, which takes a student knowing only the rules of basic arithmetic through to learning enough quantum computing that they can understand the Deutsch–Jozsa algorithm.  In fact not only can they do a calculation to see how it works in detail, they can appreciate conceptual nuances often under-appreciated in popular expositions, such as why gate speed doesn’t matter – it’s all about the number of steps, why classical computing also can have exponential growth in “possible states” so interference is critical, why quantum computers do not compute the uncomputable, and so on.

Before pointing out a few features of the approach, here are some rules I set myself while writing the book:

  • No analogies, no jargon – if it can’t be explained quantitatively then leave it out.
  • No math more than basic arithmetic and distribution across brackets.
  • Keep clear the distinction between mathematical objects and the observed physical events they are describing.
  • Be interpretationally neutral.
  • No soap opera: Motivate by intriguing with science, not by regurgitating quasi-mythological stories about the founders of the theory.
  • No using the word “quantum” in the main text! This was partly to amuse myself, but I also thought if I was succeeding in the other points then I should be able to avoid a word almost synonymous with “hard and mysterious.”

One of the main issues to confront is how to represent and explain superposition.  It is typical in popular expositions to draw analogies between a superposition of, say, a cat which is dead and a cat which is alive by saying it is dead “and” alive.  But if superposition was equivalent to logical “and”, or, for that matter, logical “or”, then quantum computing wouldn’t be interesting, and in this and other ways the analogy is ultimately misleading.  The approach I use is closer to the latter – an unordered list of possible states for a system (which is most like an “or”) can be used to represent a superposition. Using a list has some advantages – it is natural to apply a transformation to all elements of a list, for instance doubling the list of ingredients in a recipe.  More critically, given two independent lists of possibilities the new joint list of combined possibilities is a natural concept.  This makes teaching the equivalent of the Kronecker (tensor) product for multiple systems easy, something often a bit tricky even for undergrads to become comfortable with.

Conceptually the weirdest part of the whole construction, particularly for someone biased by the standard formalism, is that I use a standard mathematical object (a negative or minus sign) applied to a diagram of a physical object (a black or white ball).  Moreover, positive and negative balls in a diagram can cancel out (interfere).  This greatly simplifies the exposition, by removing a whole level of abstraction in the standard theory (we do not need to use a vector containing entries whose specific ordering must be remembered in order to equate them to the physical objects).  While it initially seemed odd to me personally to do this, I have yet to have any young person think of it as any more weird than using the negative sign on a number.  And if it is always kept clear that drawing and manipulating the whole diagram is an abstract thing we do, which may or may not have any correspondence to what is “really going on” in the physical setups we are describing, then there really is no difference.
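To make that concrete, here's a tiny Python sketch; it's my own illustration rather than anything from the book, and the "W"/"B" labels for white and black balls are just placeholders. It shows how combining two signed lists of possibilities reproduces the Kronecker product of the corresponding sign vectors:

```python
import numpy as np
from itertools import product

# A "list of possibilities with signs" for each system: e.g. a white ball minus a black ball.
sys1 = {"W": +1, "B": -1}
sys2 = {"W": +1, "B": +1}

# Joint list: every pairing of possibilities, with the signs multiplied.
joint = {(a, b): s * t for (a, s), (b, t) in product(sys1.items(), sys2.items())}
print(joint)             # {('W','W'): 1, ('W','B'): 1, ('B','W'): -1, ('B','B'): -1}

# This is exactly the Kronecker (tensor) product of the corresponding sign vectors.
v1 = np.array([sys1["W"], sys1["B"]])
v2 = np.array([sys2["W"], sys2["B"]])
print(np.kron(v1, v2))   # [ 1  1 -1 -1]
```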

There are some subtleties about the whole approach – while the formalism is universal for quantum computing, it can only make use of unitary evolution that is proportional to a matrix with integer entries.  Thus the Hadamard gate (PETE box) is ok, the Controlled-NOT and Toffoli likewise, but a seemingly innocuous gate like the controlled-Hadamard is not capable of being incorporated (without adding a whole bunch of unintuitive and unjustified rules).  The fact that the approach covers a universal gate set means some amazing things can be explained in this simple diagrammatic language.  For example, the recent paper Quantum theory cannot consistently describe the use of itself, which led to considerable discussion on this blog, can be fully reproduced.  That is, a high school student can in principle understand the technical details of a contemporary argument between professional physicists.  I find this amazing.

Based on communication with readers I have come to realize the people at most risk of being confused by the book are actually those already with a little knowledge – someone who has done a year or two’s worth of undergraduate quantum courses, or someone who has taken things they read in pop-sci books too literally.  Initially, as I was developing the method, I thought it would be easy to keep “touching base” with the standard vector space formalism.  But in fact it becomes very messy to do so (and irrelevant for someone learning quantum theory for the first time).  In the end I dropped that goal, but now realize I need to develop some supplementary notes to help someone in that situation.

Q is for Quantum is certainly not designed to be used as a classroom text – if nothing else my particular style and choice of topics will not be to others’ tastes, and I haven’t included all the many, many simple examples and exercises I have students doing along with me in class when I actually teach this stuff.  It should be thought of as more a “proof of principle,” that the expository challenge can be met.  Several colleagues have used parts of these ideas already for teaching, and they have given me some great feedback.  As such I am planning on doing a revised and slightly expanded version at some point, so if you read it and have thoughts for improvement please send me them.

Ten updates

November 7th, 2018

If you like quantum, complexity, etc., then please read to the end! I’ve gotten a bunch of emails lately of the form “why haven’t you ever blogged about such-and-such?,” when it turned out that I damn well did blog about it; it was just somewhere down in a multi-item post.

1. Like many of you, I watched the US midterm election results with (mostly…) disappointment and dismay.  I think that history will judge us harshly for not totally and unequivocally rebuking everything Trump stands for and every politician associated with him.  But that’s not what I wanted to blog about today.

2. There was a breakthrough in communication complexity by Arkadev Chattopadhyay, Nikhil Mande, and Suhail Sherif: the first exponential separation between randomized communication complexity and log approximate rank for a total Boolean function f.  This falsifies the longstanding conjecture that these measures are polynomially related (though it doesn’t resolve the original log rank conjecture).  For those of you keeping score at home, the quantum communication complexity of f is sandwiched in between randomized CC and log approximate rank.  So, at least one of the following must now be true: either randomized CC is exponentially separated from quantum CC, or else quantum CC is exponentially separated from log approximate rank.  My money’s on the latter.

3. Ewin Tang, who achieved fame with a quantum-inspired classical algorithm for recommendation systems (which I blogged about in July), is now back with quantum-inspired classical algorithms for principal component analysis and supervised clustering.  Well, with the announcements of such algorithms; details of the analysis are to come later.

4. A bunch of people asked me about the paper by Sergey Bravyi, David Gosset, and Robert Koenig, Quantum advantage with shallow circuits.  tl;dr: it’s great!  And it was deservedly a highlight of the QIP conference back in January!  That’s why it confused me when everyone started asking about it a couple of weeks ago.  The resolution is that the paper was just recently published in Science magazine, which led to popular coverage like this, which in turn led to people asking me whether this result unconditionally proves P≠BQP (that is, that quantum computers can solve more problems in polynomial time than classical computers), and if not, why not.  The answer is no: the paper proves an unconditional separation, but one that’s a long, long way from P≠BQP, or from anything else that would entail solving the central open problems of complexity theory, like P vs. PSPACE.  Basically, it shows that there are problems solvable in constant time with a quantum computer that aren’t solvable in constant time classically, for suitable meanings of “problem” (namely, a relation problem) and “in constant time” (namely, NC0 circuits, in which each output bit depends on only a constant number of input bits).  I understand that a stronger separation has since been achieved, between quantum NC0 and classical AC0, in work that’s not yet on the arXiv.  The problems in question, however, are all easy to solve in P, or even in classical logarithmic time given a polynomial number of parallel processors.
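If it helps, here’s a toy illustration of the NC0 restriction itself (this is just unpacking the definition, not the actual construction from the paper): each output bit is allowed to look at only a constant-size window of the input, no matter how long the input is.

    # Toy NC0-style map (illustrating the definition only, not the paper's construction):
    # every output bit depends on at most 3 input bits, independent of the input length n.
    def nc0_example(x):
        n = len(x)
        return [x[i] ^ (x[(i + 1) % n] & x[(i + 2) % n]) for i in range(n)]

    print(nc0_example([1, 0, 1, 1, 0, 0]))   # each output bit computed from a 3-bit window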

5. A bunch of people also asked me about the paper by Xun Gao and Luming Duan, Efficient classical simulation of noisy quantum computation.  This paper tries to formalize something that many of us have suspected/feared for years: namely, that random quantum circuits (the whole thing is specific to random circuits) can tolerate only a tiny amount of noise and decoherence before they become efficiently simulable classically.  If true, this has obvious implications for the sampling-based quantum supremacy experiments that Google and others are planning for the next few years: namely, that all the engineering effort they’ve already been investing anyway to push down the noise rate will actually be necessary!  However, correspondence with the authors revealed that there’s a significant gap in the analysis as it currently stands: namely, the current proof applies only to closed quantum systems, which would (for example) rule out all the techniques that people eventually hope to use to achieve quantum fault-tolerance—all of which are based on constantly measuring subsets of the qubits, doing essentially error-free classical computation on the measurement outcomes, throwing away noisy qubits, and pumping in fresh qubits.  Gao and Duan say that they’re currently working on an extension to open systems; in my personal view, such an extension seems essential for this interesting result to have the informal interpretation that the authors want.

6. My friend Bram Cohen asked me to announce that his company, Chia, has launched a competition for best implementation of its Verifiable Delay Functions (VDFs), with real money rewards.  You can find the details at this Github page.

7. The second Q2B (“Quantum 2 Business”) conference, organized by QC Ware Corp., will be held this coming December 10-12, at the Computer History Museum in Mountain View.  There will be two keynote addresses, one by John Preskill and the other by me.  I hope I’ll get a chance to meet some of you there!

8. Longtime colleague and friend-of-the-blog Ashwin Nayak asked me to announce that the 2019 Conference on Computational Complexity, to be held July 18-20 in exciting New Brunswick, NJ, is now accepting submissions.  I hope to be there!

9. OK, what the hell: the 21st annual, and nearly impossible to capitalize correctly, SQuInT (Southwest Quantum Information and Technology) workshop will be held in February 2019 in Albuquerque, NM.  UT Austin is now a node of the SQuInT network, and I’ll hopefully be attending along with a posse of students and postdocs.  The deadline for abstract submission is coming up soon: Monday, November 12!

10. I went to morning Shabbat services in Austin this past weekend, exactly one week after the tragedy in Pittsburgh.  There was massively increased security, with armed guards interrogating us, Israeli-style, about why we had no membership sticker on our car, whether we knew the name of the rabbi, etc.  Attendance was maybe a factor of three higher than usual.  About thirty priests, ministers, and Christian seminary students, and one Muslim, came up to the bima to say a prayer of solidarity with Jews.  The mayor of Austin, Steve Adler, was also on hand to give a speech.  Then the rabbi read a letter to the synagogue by Sen. Ted Cruz denouncing antisemitism (well, parts of it; he said the letter was really long).  There were murmurs of disapproval from the crowd when Cruz’s name was mentioned, but then everyone quieted down and listened.  The thing is, the US and large parts of the world are now so far outside the norms of liberal democracy, in territory so terrifyingly uncharted since the end of World War II, that it’s actually better than not for powerful people to affirm explicitly that shooting up synagogues is bad.  Anyway, while I’m neither a believer nor much of a synagogue-goer, I found the show of unity moving.

Boof

October 2nd, 2018

(Just a few politics-related comments to get off my chest.  Feel free to skip if American politics isn’t your 5-liter bottle of Coke.)

FiveThirtyEight currently gives Beto O’Rourke a ~29% chance of winning Ted Cruz’s Senate seat.  I wish it were higher, but I think this will be such a spectacular upset if it happens, and so transformative for Texas, that it’s well worth our support.  I’ve also been impressed by the enthusiasm of Beto’s campaign—including a rally in Austin this weekend where the 85-year-old Willie Nelson, headlining the first political event of his 60-year music career, performed a new song (“Vote ‘Em Out”).  I’ll tell you what: if anyone donates to Beto’s campaign within the next two days as a result of reading this post, and emails or leaves a comment to tell me about it, I’ll match their donation, up to my personal Tsirelson bound of $853.

Speaking of which, if you’re a US citizen and are not currently registered to vote, please do so!  And then show up and vote in the midterms!  My personal preference is to treat voting as simply a categorical imperative.  But if you’d like a mathematical discussion of the expected utility of voting, then check out this, by my former MIT undergraduate advisee Shaunak Kishore.

But what about the highest questions currently facing the American republic: namely, the exact meanings of “boofing,” “Devil’s triangle,” and “Renate alumnius”?  I’ve been reading the same articles and analyses as everybody else, and have no privileged insight.  For what it’s worth, though, I think it’s likely that Blasey Ford is telling the truth.  And I think it’s likely that Kavanaugh is lying—if not about the assault itself (which he might genuinely have no memory of—blackout is a real phenomenon), then certainly about his teenage drinking and other matters.  And while, absent some breakthrough in the FBI investigation, none of this rises to the beyond-a-reasonable-doubt standard, I think it likely should be seen as disqualifying for the Supreme Court.  (Admittedly, I’m not a good arbiter of that question, since there are about 200 unrelated reasons why I don’t want Kavanaugh near the Court.)  I also think it’s perfectly reasonable of Senate Democrats to fight this one to the bitter end, particularly after what the Republicans did to Merrick Garland, and what Kavanaugh himself did to Bill Clinton.  If you’re worried about the scorched-earth, all-defect equilibrium that seems to prevail in Congress—well, the Democrats are not the ones who started it.

All of that would be one thing, coming from some hardened social-justice type who might have happily convicted Kavanaugh of aggravated white male douchiness even before his umbilical cord was cut.  But I daresay that it means a bit more, coming from an individual who hundreds of online activists once denounced just as fervently as they now denounce Kavanaugh—someone who understands perfectly well that not even the allegation of wrongdoing is needed any longer for a person to be marked for flattening by the steamroller of Progress.  What can I say?  The enemy of my enemy is sometimes still my enemy.  My friend is anybody, of whatever party or creed, who puts their humanity above their ideology.  Justice is no respecter of persons.  Sometimes those who earn the mob’s ire are nevertheless guilty.

I was actually in the DC area the week of the Kavanaugh hearings, to speak at a quantum information panel on Capitol Hill convened by the House Science Committee, to participate in a quantum machine learning workshop at UMD, and to deliver the Nathan Krasnopoler Memorial Lecture at Johns Hopkins, which included the incredibly moving experience of meeting Nathan’s parents.

The panel went fine, I think.  Twenty or thirty Congressional staffers attended, including many of those involved in the National Quantum Initiative bill.  They asked us about the US’s standing relative to China in QIS; the relations among academia, industry, and national labs; and how to train a ‘quantum workforce.’  We panelists came prepared with a slide about what qubits and interference are, but ended up never needing it: the focus was emphatically on policy, not science.

Kamala Harris (D-CA) is the leader in the Senate for what’s now called the Quantum Computing Research Act.  One of Sen. Harris’s staffers conveyed to me that, given her great enthusiasm for quantum computing, the Senator would have been delighted to meet with me, but was unfortunately too busy with Kavanaugh-related matters.  This was better than what I’d feared, namely: “following the lead of various keyboard warriors on Twitter and Reddit, Sen. Harris denounces you, Dr. Aaronson, as a privileged white male techbro and STEMlord, and an enemy of the people.”  So once again I was face-to-face with the question: is it conceivable that social-media discourse is a bit … unrepresentative of the wider world?