Archive for the ‘Quantum’ Category

Why are amplitudes complex?

Monday, December 17th, 2018

[By prior agreement, this post will be cross-posted on Microsoft’s Q# blog, even though it has nothing to do with the Q# programming language.  It does, however, contain many examples that might be fun to implement in Q#!]

Why should Nature have been quantum-mechanical?  It’s totally unclear what would count as an answer to such a question, and also totally clear that people will never stop asking it.

Short of an ultimate answer, we can at least try to explain why, if you want this or that piece of quantum mechanics, then the rest of the structure is inevitable: why quantum mechanics is an “island in theoryspace,” as I put it in 2003.

In this post, I’d like to focus on a question that any “explanation” for QM at some point needs to address, in a non-question-begging way: why should amplitudes have been complex numbers?  When I was a grad student, it was his relentless focus on that question, and on others in its vicinity, that made me a lifelong fan of Chris Fuchs (see for example his samizdat), despite my philosophical differences with him.

It’s not that complex numbers are a bad choice for the foundation of the deepest known description of the physical universe—far from it!  (They’re a field, they’re algebraically closed, they’ve got a norm, how much more could you want?)  It’s just that they seem like a specific choice, and not the only possible one.  There are also the real numbers, for starters, and in the other direction, the quaternions.

Quantum mechanics over the reals or the quaternions still has constructive and destructive interference among amplitudes, and unitary transformations, and probabilities that are absolute squares of amplitudes.  Moreover, these variants turn out to lead to precisely the same power for quantum computers—namely, the class BQP—as “standard” quantum mechanics, the one over the complex numbers.  So none of those are relevant differences.

Indeed, having just finished teaching an undergrad Intro to Quantum Information course, I can attest that the complex nature of amplitudes is needed only rarely—shockingly rarely, one might say—in quantum computing and information.  Real amplitudes typically suffice.  Teleportation, superdense coding, the Bell inequality, quantum money, quantum key distribution, the Deutsch-Jozsa and Bernstein-Vazirani and Simon and Grover algorithms, quantum error-correction: all of those and more can be fully explained without using a single i that’s not a summation index.  (Shor’s factoring algorithm is an exception; it’s much more natural with complex amplitudes.  But as the previous paragraph implied, their use is removable even there.)

It’s true that, if you look at even the simplest “real” examples of quantum systems—or as a software engineer might put it, at the application layers built on top of the quantum OS—then complex numbers are everywhere, in a way that seems impossible to remove.  The Schrödinger equation, energy eigenstates, the position/momentum commutation relation, the state space of a spin-1/2 particle in 3-dimensional space: none of these make much sense without complex numbers (though it can be fun to try).

But from a sufficiently Olympian remove, it feels circular to use any of this as a “reason” for why quantum mechanics should’ve involved complex amplitudes in the first place.  It’s like, once your OS provides a certain core functionality (in this case, complex numbers), it’d be surprising if the application layer didn’t exploit that functionality to the hilt—especially if we’re talking about fundamental physics, where we’d like to imagine that nothing is wasted or superfluous (hence Rabi’s famous question about the muon: “who ordered that?”).

But why should the quantum OS have provided complex-number functionality at all?  Is it possible to answer that question purely in terms of the OS’s internal logic (i.e., abstract quantum information), making minimal reference to how the OS will eventually get used?  Maybe not—but if so, then that itself would seem worthwhile to know.

If we stick to abstract quantum information language, then the most “obvious, elementary” argument for why amplitudes should be complex numbers is one that I spelled out in Quantum Computing Since Democritus, as well as my Is quantum mechanics an island in theoryspace? paper.  Namely, it seems desirable to be able to implement a “fraction” of any unitary operation U: for example, some V such that V^2=U, or V^3=U.  With complex numbers, this is trivial: we can simply diagonalize U, or use the Hamiltonian picture (i.e., take e^{-iH/2} where U=e^{-iH}), both of which ultimately depend on the complex numbers being algebraically closed.  Over the reals, by contrast, a 2×2 orthogonal matrix like $$ U = \left(\begin{array}[c]{cc}1 & 0\\0 & -1\end{array}\right)$$

has no 2×2 orthogonal square root, as follows immediately from its determinant being -1.  If we want a square root of U (or rather, of something that acts like U on a subspace) while sticking to real numbers only, then we need to add another dimension, like so: $$ \left(\begin{array}[c]{ccc}1 & 0 & 0\\0 & -1 & 0\\0 & 0&-1\end{array}\right)=\left(\begin{array}[c]{ccc}1 & 0 & 0\\0 & 0 & 1\\0 & -1 & 0\end{array}\right) ^{2} $$
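
If you’d like to see this concretely, here’s a minimal sketch in Python with NumPy (rather than Q#, since we’re just playing with small matrices): it builds a complex square root of U by diagonalization, and checks that the 3×3 real rotation above squares to the desired block.

```python
import numpy as np

# Complex case: diagonalize U and take square roots of its eigenvalues.
# (For this U, eig happens to return orthonormal eigenvectors; for a
# general unitary you'd want a unitary diagonalization, e.g. via Schur.)
U = np.array([[1, 0], [0, -1]], dtype=complex)
evals, W = np.linalg.eig(U)
V = W @ np.diag(np.sqrt(evals)) @ W.conj().T
print(np.allclose(V @ V, U))                       # True: V = diag(1, i)

# Real case: no 2x2 orthogonal square root exists (its determinant would
# have to be a real square root of -1), but one extra dimension suffices:
V3 = np.array([[1, 0, 0],
               [0, 0, 1],
               [0, -1, 0]])
print(np.allclose(V3 @ V3, np.diag([1, -1, -1])))  # True
```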

This is directly related to the fact that there’s no way for a Flatlander to “reflect herself” (i.e., switch her left and right sides while leaving everything else unchanged) by any continuous motion, unless she can lift off the plane and rotate herself through the third dimension.  Similarly, for us to reflect ourselves would require rotating through a fourth dimension.

One could reasonably ask: is that it?  Aren’t there any “deeper” reasons in quantum information for why amplitudes should be complex numbers?

Indeed, there are certain phenomena in quantum information that, slightly mysteriously, work out more elegantly if amplitudes are complex than if they’re real.  (By “mysteriously,” I mean not that these phenomena can’t be 100% verified by explicit calculations, but simply that I don’t know of any deep principle by which the results of those calculations could’ve been predicted in advance.)

One famous example of such a phenomenon is due to Bill Wootters: if you take a uniformly random pure state in d dimensions, and then you measure it in an orthonormal basis, what will the probability distribution (p1,…,pd) over the d possible measurement outcomes look like?  The answer, amazingly, is that you’ll get a uniformly random probability distribution: that is, a uniformly random point on the simplex defined by pi≥0 and p1+…+pd=1.  This fact, which I’ve used in several papers, is closely related to Archimedes’ Hat-Box Theorem, beloved by friend-of-the-blog Greg Kuperberg.  But here’s the kicker: it only works if amplitudes are complex numbers.  If amplitudes are real, then the resulting distribution over distributions will be too bunched up near the corners of the probability simplex; if they’re quaternions, it will be too bunched up near the middle.
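
Here’s a small Monte Carlo sketch of Wootters’ phenomenon, using the standard trick that a uniformly random pure state can be sampled by normalizing a vector of Gaussians, with 1, 2, or 4 real Gaussian components per amplitude in the real, complex, and quaternionic cases respectively.  For d=3, the uniform distribution on the simplex has E[p_i^2]=1/6; the real and quaternionic cases miss that target in opposite directions.

```python
import numpy as np

rng = np.random.default_rng(0)
d, trials = 3, 200_000

def measurement_probs(components):
    """|amplitude|^2 of random pure states whose amplitudes have the
    given number of real components (1=real, 2=complex, 4=quaternionic)."""
    g = rng.standard_normal((trials, d, components))
    sq = (g ** 2).sum(axis=2)                  # squared norm of each amplitude
    return sq / sq.sum(axis=1, keepdims=True)  # normalize to a distribution

# Uniform on the simplex (d=3) has second moment E[p_i^2] = 1/6 = 0.1667.
for c, label in [(1, "real"), (2, "complex"), (4, "quaternionic")]:
    p = measurement_probs(c)
    print(f"{label:>12}: E[p_i^2] = {(p ** 2).mean():.4f}")
# real ~ 0.2000 (bunched at the corners), complex ~ 0.1667 (uniform),
# quaternionic ~ 0.1429 (bunched near the middle)
```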

There’s an even more famous example of such a Goldilocks coincidence—one that’s been elevated, over the past two decades, to exalted titles like “the Axiom of Local Tomography.”  Namely: suppose we have an unknown finite-dimensional mixed state ρ, shared by two players Alice and Bob.  For example, ρ might be an EPR pair, or a correlated classical bit, or simply two qubits both in the state |0⟩.  We imagine that Alice and Bob share many identical copies of ρ, so that they can learn more and more about it by measuring this copy in this basis, that copy in that basis, and so on.

We then ask: can ρ be fully determined from the joint statistics of product measurements—that is, measurements that Alice and Bob can apply separately and locally to their respective subsystems, with no communication between them needed?  A good example here would be the set of measurements that arise in a Bell experiment—measurements that, despite being local, certify that Alice and Bob must share an entangled state.

If we ask the analogous question for classical probability distributions, the answer is clearly “yes.”  That is, once you’ve specified the individual marginals, and you’ve also specified all the possible correlations among the players, you’ve fixed your distribution; there’s nothing further to specify.

For quantum mixed states, the answer again turns out to be yes, but only because amplitudes are complex numbers!  In quantum mechanics over the reals, you could have a 2-qubit state like $$ \rho=\frac{1}{4}\left(\begin{array}[c]{cccc}1 & 0 & 0 & -1\\0 & 1 & 1 & 0\\0 & 1 & 1 & 0\\-1& 0 & 0 & 1\end{array}\right) ,$$

which clearly isn’t the maximally mixed state, yet which is indistinguishable from the maximally mixed state by any local measurement that can be specified using real numbers only.  (Proof: exercise!)
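
If you’d like numerical evidence before attempting the exercise, here’s a short NumPy check: every real product observable A⊗B has the same expectation on ρ as on the maximally mixed state.

```python
import numpy as np

rng = np.random.default_rng(1)
rho = np.array([[ 1, 0, 0, -1],
                [ 0, 1, 1,  0],
                [ 0, 1, 1,  0],
                [-1, 0, 0,  1]]) / 4.0
maximally_mixed = np.eye(4) / 4.0

def random_real_observable():
    a = rng.standard_normal((2, 2))
    return a + a.T          # real symmetric = "Hermitian over the reals"

# Expectations of real product observables agree on rho and on I/4:
for _ in range(5):
    AB = np.kron(random_real_observable(), random_real_observable())
    print(np.trace(AB @ rho) - np.trace(AB @ maximally_mixed))  # ~0 every time
```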

In quantum mechanics over the quaternions, something even “worse” happens: namely, the tensor product of two Hermitian matrices need not be Hermitian.  Alice’s measurement results might be described by the 2×2 quaternionic density matrix $$ \rho_{A}=\frac{1}{2}\left(\begin{array}[c]{cc}1 & -i\\i & 1\end{array}\right), $$

and Bob’s results might be described by the 2×2 quaternionic density matrix $$ \rho_{B}=\frac{1}{2}\left(\begin{array}[c]{cc}1 & -j\\j & 1\end{array}\right), $$

and yet there might not be (and in this case, isn’t) any 4×4 quaternionic density matrix corresponding to ρA⊗ρB, which would explain both results separately.
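
Here’s a sketch that makes the failure concrete, with quaternions stored as (w,x,y,z) 4-vectors and the Hamilton product written out by hand: the entrywise tensor product of ρA and ρB has k/4 in both the (1,4) and (4,1) positions, whereas Hermiticity would require those two entries to be quaternionic conjugates of each other.

```python
import numpy as np

def qmul(p, q):
    """Hamilton product of quaternions stored as (w, x, y, z)."""
    w1, x1, y1, z1 = p; w2, x2, y2, z2 = q
    return np.array([w1*w2 - x1*x2 - y1*y2 - z1*z2,
                     w1*x2 + x1*w2 + y1*z2 - z1*y2,
                     w1*y2 - x1*z2 + y1*w2 + z1*x2,
                     w1*z2 + x1*y2 - y1*x2 + z1*w2])

def qconj(q):
    return q * np.array([1, -1, -1, -1])

one = np.array([1., 0, 0, 0])
i, j = np.array([0., 1, 0, 0]), np.array([0., 0, 1, 0])

rhoA = [[one / 2, -i / 2], [i / 2, one / 2]]
rhoB = [[one / 2, -j / 2], [j / 2, one / 2]]

# Entrywise Kronecker product, multiplying entries as quaternions:
T = [[qmul(rhoA[a][b], rhoB[c][d]) for b in (0, 1) for d in (0, 1)]
     for a in (0, 1) for c in (0, 1)]

hermitian = all(np.allclose(T[r][c], qconj(T[c][r]))
                for r in range(4) for c in range(4))
print(hermitian)         # False
print(T[0][3], T[3][0])  # both equal k/4 = [0, 0, 0, 0.25]: not conjugates
```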

What’s going on here?  Why do the local measurement statistics underdetermine the global quantum state with real amplitudes, and overdetermine it with quaternionic amplitudes, being in one-to-one correspondence with it only when amplitudes are complex?

We can get some insight by looking at the number of independent real parameters needed to specify a d-dimensional Hermitian matrix.  Over the complex numbers, the number is exactly d^2: we need 1 parameter for each of the d diagonal entries, and 2 (a real part and an imaginary part) for each of the d(d-1)/2 upper off-diagonal entries (the lower off-diagonal entries being determined by the upper ones).  Over the real numbers, by contrast, “Hermitian matrices” are just real symmetric matrices, so the number of independent real parameters is only d(d+1)/2.  And over the quaternions, the number is d + 4[d(d-1)/2] = d(2d-1).

Now, it turns out that the Goldilocks phenomenon that we saw above—with local measurement statistics determining a unique global quantum state when and only when amplitudes are complex numbers—ultimately boils down to the simple fact that $$ (d_A d_B)^2 = d_A^2 d_B^2, $$

but $$\frac{d_A d_B (d_A d_B + 1)}{2} > \frac{d_A (d_A + 1)}{2} \cdot \frac{d_B (d_B + 1)}{2},$$

and conversely $$ d_A d_B (2 d_A d_B - 1) < d_A (2 d_A - 1) \cdot d_B (2 d_B - 1).$$

In other words, only with complex numbers does the number of real parameters needed to specify a “global” Hermitian operator exactly match the product of the number of parameters needed to specify an operator on Alice’s subsystem, and the number of parameters needed to specify an operator on Bob’s.  With real numbers it overcounts, and with quaternions it undercounts.  For example, when d_A=d_B=2, the counts are 16=4×4 over the complex numbers, but 10>3×3=9 over the reals, and 28<6×6=36 over the quaternions.

A major research goal in quantum foundations, since at least the early 2000s, has been to “derive” the formalism of QM purely from “intuitive-sounding, information-theoretic” postulates—analogous to how, in 1905, some guy whose name I forget derived the otherwise strange-looking Lorentz transformations purely from the assumption that the laws of physics (including a fixed, finite value for the speed of light) take the same form in every inertial frame.  There have been some nontrivial successes of this program: most notably, the “axiomatic derivations” of QM due to Lucien Hardy and (more recently) Chiribella et al.  Starting from axioms that sound suitably general and nontechnical (if sometimes unmotivated and weird), these derivations perform the impressive magic trick of deriving the full mathematical structure of QM: complex amplitudes, unitary transformations, tensor products, the Born rule, everything.

However, in every such derivation that I know of, some axiom needs to get introduced to capture “local tomography”: i.e., the “principle” that composite systems must be uniquely determined by the statistics of local measurements.  And while this principle might sound vague and unobjectionable, to those in the business, it’s obvious what it’s going to be used for the second it’s introduced.  Namely, it’s going to be used to rule out quantum mechanics over the real numbers, which would otherwise be a model for the axioms, and thus to “explain” why amplitudes have to be complex.

I confess that I was always dissatisfied with this.  For I kept asking myself: would I have ever formulated the “Principle of Local Tomography” in the first place—or if someone else had proposed it, would I have ever accepted it as intuitive or natural—if I didn’t already know that QM over the complex numbers just happens to satisfy it?  And I could never honestly answer “yes.”  It always felt to me like a textbook example of drawing the target around where the arrow landed—i.e., of handpicking your axioms so that they yield a predetermined conclusion, which is then no more “explained” than it was at the beginning.

Two months ago, something changed for me: namely, I smacked into the “Principle of Local Tomography,” and its reliance on complex numbers, in my own research, when I hadn’t in any sense set out to look for it.  This still doesn’t convince me that the principle is any sort of a-priori necessity.  But it at least convinces me that it’s, you know, the sort of thing you can smack into when you’re not looking for it.

The aforementioned smacking occurred while I was writing up a small part of a huge paper with Guy Rothblum, about a new connection between so-called “gentle measurements” of quantum states (that is, measurements that don’t damage the states much), and the subfield of classical CS called differential privacy.  That connection is a story in itself; let me know if you’d like me to blog about it separately.  Our paper should be on the arXiv any day now; in the meantime, here are some PowerPoint slides.

Anyway, for the paper with Guy, it was of interest to know the following: suppose we have a two-outcome measurement E (let’s say, on n qubits), and suppose it accepts every product state with the same probability p.  Must E then accept every entangled state with probability p as well?  Or, a closely-related question: suppose we know E’s acceptance probabilities on every product state.  Is that enough to determine its acceptance probabilities on all n-qubit states?

I’m embarrassed to admit that I dithered around with these questions, finding complicated proofs for special cases, before I finally stumbled on the one-paragraph, obvious-in-retrospect “Proof from the Book” that slays them in complete generality.

Here it is: if E accepts every product state with probability p, then clearly it accepts every separable mixed state (i.e., every convex combination of product states) with the same probability p.  Now, a well-known result of Braunstein et al., from 1998, states that (surprisingly enough) the separable mixed states have nonzero density within the set of all mixed states, in any given finite dimension.  Also, the probability that E accepts ρ can be written as f(ρ)=Tr(Eρ), which is linear in the entries of ρ.  OK, but a linear function that’s determined on a subset of nonzero density is determined everywhere.  And in particular, if f is constant on that subset then it’s constant everywhere, QED.

But what does any of this have to do with why amplitudes are complex numbers?  Well, it turns out that the 1998 Braunstein et al. result, which was the linchpin of the above argument, only works in complex QM, not in real QM.  We can see its failure in real QM by simply counting parameters, similarly to what we did before.  An n-qubit density matrix requires 4^n real parameters to specify (OK, 4^n-1, if we demand that the trace is 1).  Even if we restrict to n-qubit density matrices with real entries only, we still need 2^n(2^n+1)/2 parameters.  By contrast, it’s not hard to show that an n-qubit real separable density matrix can be specified using only 3^n real parameters—and indeed, that any such density matrix lies in a 3^n-dimensional subspace of the full 2^n(2^n+1)/2-dimensional space of 2^n×2^n symmetric matrices.  (This is simply the subspace spanned by all possible tensor products of n Pauli I, X, and Z matrices—excluding the Y matrix, which is the one that involves imaginary numbers.)
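
The counting is easy to verify directly for small n.  This sketch, for n=2, confirms that the tensor products of I, X, and Z span only a 9-dimensional (that is, 3^n-dimensional) space, inside the 10-dimensional space of 4×4 real symmetric matrices:

```python
import numpy as np
from itertools import product

I = np.eye(2)
X = np.array([[0., 1], [1, 0]])
Z = np.array([[1., 0], [0, -1]])

n = 2
paulis_without_Y = [I, X, Z]
# All n-fold tensor products of I, X, Z, flattened into row vectors:
span = [np.kron(a, b).flatten() for a, b in product(paulis_without_Y, repeat=n)]

print(np.linalg.matrix_rank(np.array(span)))   # 9 = 3^n
print(2**n * (2**n + 1) // 2)                  # 10: dim of 4x4 real symmetric
```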

But it’s not only the Braunstein et al. result that fails in real QM: the fact that I wanted for my paper with Guy fails as well.  As a counterexample, consider the 2-qubit measurement that accepts the state ρ with probability Tr(Eρ), where $$ E=\frac{1}{2}\left(\begin{array}[c]{cccc}1 & 0 & 0 & -1\\0 & 1 & 1 & 0\\0 & 1 & 1 & 0\\-1 & 0 & 0 & 1\end{array}\right).$$

I invite you to check that this measurement, which we specified using a real matrix, accepts every product state (a|0⟩+b|1⟩)(c|0⟩+d|1⟩), where a,b,c,d are real, with the same probability, namely 1/2—just like the “measurement” that simply returns a coin flip without even looking at the state at all.  And yet the measurement can clearly be nontrivial on entangled states: for example, it always rejects $$\frac{\left|00\right\rangle+\left|11\right\rangle}{\sqrt{2}},$$ and it always accepts $$ \frac{\left|00\right\rangle-\left|11\right\rangle}{\sqrt{2}}.$$
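
Here’s the invited check done numerically: random real product states are all accepted with probability 1/2, while the two Bell states above are accepted with certainty and rejected with certainty respectively.

```python
import numpy as np

rng = np.random.default_rng(2)
E = np.array([[ 1, 0, 0, -1],
              [ 0, 1, 1,  0],
              [ 0, 1, 1,  0],
              [-1, 0, 0,  1]]) / 2.0

def random_real_qubit():
    v = rng.standard_normal(2)
    return v / np.linalg.norm(v)

# Acceptance probability <psi|E|psi> on random real product states:
for _ in range(5):
    psi = np.kron(random_real_qubit(), random_real_qubit())
    print(psi @ E @ psi)              # 0.5 every time (up to rounding)

phi_plus  = np.array([1, 0, 0,  1]) / np.sqrt(2)
phi_minus = np.array([1, 0, 0, -1]) / np.sqrt(2)
print(phi_plus @ E @ phi_plus)    # 0.0: (|00>+|11>)/sqrt(2) always rejected
print(phi_minus @ E @ phi_minus)  # 1.0: (|00>-|11>)/sqrt(2) always accepted
```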

Is it a coincidence that we used exactly the same 4×4 matrix (up to scaling) to produce a counterexample to the real-QM version of Local Tomography, and also to the real-QM version of the property I wanted for the paper with Guy?  Is anything ever a coincidence in this sort of discussion?

I claim that, looked at the right way, Local Tomography and the property I wanted are the same property, their truth in complex QM is the same truth, and their falsehood in real QM is the same falsehood.  Why?  Simply because Tr(Eρ), the probability that the measurement E accepts the mixed state ρ, is a function of two Hermitian matrices E and ρ (both of which can be either “product” or “entangled”), and—crucially—is symmetric under the interchange of E and ρ.

Now it’s time for another confession.  We’ve identified an elegant property of quantum mechanics that’s true but only because amplitudes are complex numbers: namely, if you know the probability that your quantum circuit accepts every product state, then you also know the probability that it accepts an arbitrary state.  Yet, despite its elegance, this property turns out to be nearly useless for “real-world applications” in quantum information and computing.  The reason for the uselessness is that, for the property to kick in, you really do need to know the probabilities on product states almost exactly—meaning (say) to 1/exp(n) accuracy for an n-qubit state.

Once again a simple example illustrates the point.  Suppose n is even, and suppose our measurement simply projects the n-qubit state onto a tensor product of n/2 Bell pairs.  Clearly, this measurement accepts every n-qubit product state with exponentially small probability, even as it accepts the entangled state 
$$\left(\frac{\left|00\right\rangle+\left|11\right\rangle}{\sqrt{2}}\right)^{\otimes n/2}$$

with probability 1.  But this implies that noticing the nontriviality on entangled states would require knowing the acceptance probabilities on product states to exponential accuracy.
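
For small n one can watch this happen numerically.  In the sketch below (n=4, so the ceiling is 2^{-n/2}=1/4), thousands of random product states never get accepted with probability above 1/4, even though the Bell-pair state itself is accepted with probability 1:

```python
import numpy as np

rng = np.random.default_rng(3)
n = 4                                        # number of qubits (even)
bell = np.array([1, 0, 0, 1]) / np.sqrt(2)
target = bell
for _ in range(n // 2 - 1):
    target = np.kron(target, bell)           # tensor product of n/2 Bell pairs

def random_qubit():
    v = rng.standard_normal(2) + 1j * rng.standard_normal(2)
    return v / np.linalg.norm(v)

best = 0.0
for _ in range(20_000):
    psi = random_qubit()
    for _ in range(n - 1):
        psi = np.kron(psi, random_qubit())
    best = max(best, abs(target.conj() @ psi) ** 2)

print(best)                                  # stays below the ceiling...
print(2.0 ** (-n / 2))                       # ...of 2^{-n/2} = 0.25
print(abs(target.conj() @ target) ** 2)      # 1.0 on the Bell-pair state
```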

In a sense, then, I come back full circle to my original puzzlement: why should Local Tomography, or (alternatively) the-determination-of-a-circuit’s-behavior-on-arbitrary-states-from-its-behavior-on-product-states, have been important principles for Nature’s laws to satisfy?  Especially given that, in practice, the exponential accuracy required makes it difficult or impossible to exploit these principles anyway?  How could we have known a-priori that these principles would be important—if indeed they are important, and are not just mathematical spandrels?

But, while I remain less than 100% satisfied about “why the complex numbers? why not just the reals?,” there’s one conclusion that my recent circling-back to these questions has made me fully confident about.  Namely: quantum mechanics over the quaternions is a flaming garbage fire, which would’ve been rejected at an extremely early stage of God and the angels’ deliberations about how to construct our universe.

In the literature, when the question of “why not quaternionic amplitudes?” is discussed at all, you’ll typically read things about how the parameter-counting doesn’t quite work out (just like it doesn’t for real QM), or how the tensor product of quaternionic Hermitian matrices need not be Hermitian.  In this paper by McKague, you’ll read that the CHSH game is winnable with probability 1 in quaternionic QM, while in this paper by Fernandez and Schneeberger, you’ll read that the non-commutativity of the quaternions introduces an order-dependence even for spacelike-separated operations.

But none of that does justice to the enormity of the problem.  To put it bluntly: unless something clever is done to fix it, quaternionic QM allows superluminal signaling.  This is easy to demonstrate: suppose Alice holds a qubit in the state |1⟩, while Bob holds a qubit in the state |+⟩ (yes, this will work even for unentangled states!).  Also, let $$U=\left(\begin{array}[c]{cc}1 & 0\\0 & j\end{array}\right) ,~~~V=\left(\begin{array}[c]{cc}1 & 0\\0& i\end{array}\right).$$

We can calculate that, if Alice applies U to her qubit and then Bob applies V to his qubit, Bob will be left with the state $$ \frac{j \left|0\right\rangle + k \left|1\right\rangle}{\sqrt{2}}.$$

By contrast, if Alice decided to apply U only after Bob applied V, Bob would be left with the state $$ \frac{j \left|0\right\rangle - k \left|1\right\rangle}{\sqrt{2}}.$$

But Bob can distinguish these two states with certainty, for example by applying the unitary $$ \frac{1}{\sqrt{2}}\left(\begin{array}[c]{cc}j & k\\k & j\end{array}\right). $$

Therefore Alice communicated a bit to Bob.
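
For the skeptical, here’s the whole signaling calculation as a sketch, following the naive convention in which a local gate simply left-multiplies the corresponding quaternionic amplitudes (that convention, rather than anything in Adler’s formulation, is what’s being tested).  Depending on who acts first, Bob ends up with one of two perfectly distinguishable states, so his measurement outcome reveals Alice’s timing:

```python
import numpy as np

def qmul(p, q):
    """Hamilton product of quaternions stored as (w, x, y, z)."""
    w1, x1, y1, z1 = p; w2, x2, y2, z2 = q
    return np.array([w1*w2 - x1*x2 - y1*y2 - z1*z2,
                     w1*x2 + x1*w2 + y1*z2 - z1*y2,
                     w1*y2 - x1*z2 + y1*w2 + z1*x2,
                     w1*z2 + x1*y2 - y1*x2 + z1*w2])

one, zero = np.array([1., 0, 0, 0]), np.zeros(4)
i, j, k = np.eye(4)[1], np.eye(4)[2], np.eye(4)[3]
s = 1 / np.sqrt(2)

# Joint state |1>_A (x) |+>_B, as four quaternionic amplitudes
psi = [zero, zero, s * one, s * one]

U_x_I = [one, one, j, j]    # diagonal of U (x) I, with U = diag(1, j)
I_x_V = [one, i, one, i]    # diagonal of I (x) V, with V = diag(1, i)

def apply_diag(gate, v):
    return [qmul(gate[m], v[m]) for m in range(4)]

alice_first = apply_diag(I_x_V, apply_diag(U_x_I, psi))  # Bob: (j|0> + k|1>)/sqrt(2)
bob_first   = apply_diag(U_x_I, apply_diag(I_x_V, psi))  # Bob: (j|0> - k|1>)/sqrt(2)

# Bob's distinguishing unitary W = (1/sqrt(2)) [[j, k], [k, j]]
W = [[s * j, s * k], [s * k, s * j]]

for label, state in [("Alice acted first:", alice_first),
                     ("Bob acted first:  ", bob_first)]:
    bob = [state[2], state[3]]        # Alice's qubit is |1>; factor it out
    out = [qmul(W[0][0], bob[0]) + qmul(W[0][1], bob[1]),
           qmul(W[1][0], bob[0]) + qmul(W[1][1], bob[1])]
    print(label, [round(float((q ** 2).sum()), 6) for q in out])
# Alice acted first: [1.0, 0.0]  /  Bob acted first: [0.0, 1.0]
```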

I’m aware that there’s a whole literature on quaternionic QM, including for example a book by Adler.  Would anyone who knows that literature be kind enough to enlighten us on how it proposes to escape the signaling problem?  Regardless of the answer, though, it seems worth knowing that the “naïve” version of quaternionic QM—i.e., the version that gets invoked in quantum information discussions like the ones I mentioned above—is just immediately blasted to smithereens by the signaling problem, without the need for any subtle considerations like the ones that differentiate real from complex QM.

Unrelated Update (Dec. 18): Probably many of you have already seen it, and/or already know what it covers, but the NYT profile of Donald Knuth (entitled “The Yoda of Silicon Valley”) is enjoyable and nicely written.

Teaching quantum in junior high: special Thanksgiving guest post by Terry Rudolph

Thursday, November 22nd, 2018

Happy Thanksgiving!

People have sometimes asked me: “how do you do it?  how do you do your research, write papers, teach classes, mentor grad students, build up the quantum center at UT, travel and give talks every week or two, serve on program committees, raise two rambunctious young kids, and also blog and also participate in the comments and also get depressed about people saying mean things on social media?”  The answer is that increasingly I don’t.  Something has to give, and this semester, alas, that something has often been blogging.

And that’s why, today, I’m delighted to have a special guest post by my good friend Terry Rudolph.  Terry, who happens to be Erwin Schrödinger’s grandson, has done lots of fascinating work over the years in quantum computing and the foundations of quantum mechanics, and previously came up on this blog in the context of the PBR (Pusey-Barrett-Rudolph) Theorem.  Today, he’s a cofounder and chief architect at PsiQuantum, a startup in Palo Alto that’s trying to build silicon-photonic quantum computers.

Terry’s guest post is about the prospects for teaching quantum theory at the junior high school level—something he thought about a lot in the context of writing his interesting recent book Q is for Quantum.  I should stress that the opinions in this post are Terry’s, and don’t necessarily reflect the official editorial positions of Shtetl-Optimized.  Personally, I have taught the basics of quantum information to sharp junior high and high school students, so I certainly know that it’s possible.  (By undergrad, it’s not only possible, but maybe should become standard for both physics and CS majors.)  But I would also say that, given the current state of junior high and high school education in the US, it would be a huge step up if most students graduated fully understanding what’s a probability, what’s a classical bit, what’s a complex number, and any of dozens of other topics that feed into quantum information—so why not start by teaching the simpler stuff well?  And also, if students don’t learn the rules of classical probability first, then how will they be properly shocked when they come to quantum? 🙂

But without further ado, here’s Terry—who’s also graciously agreed to stick around and answer some comments.


Can we/should we teach Quantum Theory in Junior High?

by Terry Rudolph

Should we?

Reasons which suggest the answer is “yes” include:

Economic: We are apparently heading into a labor-market shortage of quantum engineers.  We should not, however, need the recent hype around quantum computing to make the economic case – the frontier of many disparate regions of the modern science and technology landscape is quantum.  Surely if students do decide to drop out of school at 16 they should at least be equipped to get an entry-level job as a quantum physicist?

Educational: If young peoples’ first exposures to science are counterintuitive and “cutting edge,” it could help excite them into STEM.  The strong modern quantum information theoretic connections between quantum physics, computer science and math can help all three subjects constructively generate common interest.

Pseudo-Philosophical: Perhaps our issues with understanding/accepting quantum theory are because we come to it late and have lost the mental plasticity for a “quantum reset” of our brain when we eventually require it late in an undergraduate degree.  It may be easier to achieve fluency in the “language of quantum” with early exposure.

Can we?

There are two distinct aspects to this question: Firstly, is it possible at the level of “fitting it in” – training teachers, adjusting curricula and so on?  Secondly, can a nontrivial, worthwhile fraction of quantum theory even be taught at all to pre-calculus students?

With regards to the first question, as the child of two schoolteachers I am very aware that an academic advocating for such disruption will not be viewed kindly by all.  As I don’t have relevant experience to say anything useful about this aspect, I have to leave it for others to consider.

Let me focus for the remainder of this post on the second aspect, namely whether it is even possible to appropriately simplify the content of the theory.  This month it is exactly 20 years since I lectured the first of many varied quantum courses I have taught at multiple universities. For most of that period I would have said it simply wasn’t possible to teach any but the most precocious of high school students nontrivial technical content of quantum theory – despite some brave attempts like Feynman’s use of arrows in QED: The Strange Theory of Light and Matter (a technique that cannot easily get at the mysteries of two-particle quantum theory, which is where the fun really starts).  I now believe, however, that it is actually possible.

A pedagogical method covering nontrivial quantum theory using only basic arithmetic

My experience talking about quantum theory to 12-15 year olds has only been in the idealized setting of spending a few hours with them at science fairs, camps and similar.  In fact it was on the way to a math camp for very young students, desperately trying to plan something non-trivial to engage them with, that I came up with a pedagogical method which I (and a few colleagues) have found does work.

I eventually wrote the method into a short book Q is for Quantum, but if you don’t want to purchase the book then here is a pdf of Part I, which takes a student knowing only the rules of basic arithmetic through to learning enough quantum computing that they can understand the Deutsch–Jozsa algorithm.  In fact not only can they do a calculation to see how it works in detail, they can appreciate conceptual nuances often under-appreciated in popular expositions, such as why gate speed doesn’t matter – it’s all about the number of steps, why classical computing also can have exponential growth in “possible states” so interference is critical, why quantum computers do not compute the uncomputable, and so on.

Before pointing out a few features of the approach, here are some rules I set myself while writing the book:

  • No analogies, no jargon – if it can’t be explained quantitatively then leave it out.
  • No math more than basic arithmetic and distribution across brackets.
  • Keep clear the distinction between mathematical objects and the observed physical events they are describing.
  • Be interpretationally neutral.
  • No soap opera: Motivate by intriguing with science, not by regurgitating quasi-mythological stories about the founders of the theory.
  • No using the word “quantum” in the main text! This was partly to amuse myself, but I also thought if I was succeeding in the other points then I should be able to avoid a word almost synonymous with “hard and mysterious.”

One of the main issues to confront is how to represent and explain superposition.  It is typical in popular expositions to draw analogies between a superposition of, say, a cat which is dead and a cat which is alive by saying it is dead “and” alive.  But if superposition was equivalent to logical “and”, or, for that matter, logical “or”, then quantum computing wouldn’t be interesting, and in this and other ways the analogy is ultimately misleading.  The approach I use is closer to the latter – an unordered list of possible states for a system (which is most like an “or”) can be used to represent a superposition. Using a list has some advantages – it is natural to apply a transformation to all elements of a list, for instance doubling the list of ingredients in a recipe.  More critically, given two independent lists of possibilities the new joint list of combined possibilities is a natural concept.  This makes teaching the equivalent of the Kronecker (tensor) product for multiple systems easy, something often a bit tricky even for undergrads to become comfortable with.
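
For the programmers reading along, here’s a toy illustration of that last point (the labels and amplitudes below are mine, not from the book): combining two independent lists of possibilities is just itertools.product, and once each possibility carries a signed amplitude, the same combination rule is exactly the Kronecker product.

```python
import numpy as np
from itertools import product

# Two independent "lists of possibilities", one per ball:
first  = ["W", "B"]                   # white or black
second = ["W", "B"]

joint = [a + b for a, b in product(first, second)]
print(joint)                          # ['WW', 'WB', 'BW', 'BB']

# Attach (possibly negative, unnormalized) amplitudes to each possibility,
# and the same combination rule becomes the Kronecker product:
v1 = np.array([1, 1])                 # W + B
v2 = np.array([1, -1])                # W - B
print(np.kron(v1, v2))                # [ 1 -1  1 -1]: one entry per joint item
```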

Conceptually the weirdest part of the whole construction, particularly for someone biased by the standard formalism, is that I use a standard mathematical object (a negative or minus sign) applied to a diagram of a physical object (a black or white ball).  Moreover, positive and negative balls in a diagram can cancel out (interfere).  This greatly simplifies the exposition, by removing a whole level of abstraction in the standard theory (we do not need to use a vector containing entries whose specific ordering must be remembered in order to equate them to the physical objects).  While it initially seemed odd to me personally to do this, I have yet to have any young person think of it as any more weird than using the negative sign on a number.  And if it is always kept clear that drawing and manipulating the whole diagram is an abstract thing we do, which may or may not have any correspondence to what is “really going on” in the physical setups we are describing, then there really is no difference.

There are some subtleties about the whole approach – while the formalism is universal for quantum computing, it can only make use of unitary evolution which is proportional to a matrix with integer entries.  Thus the Hadamard gate (PETE box) is ok, the Controlled-NOT and Toffoli likewise, but a seemingly innocuous gate like the controlled-Hadamard is not capable of being incorporated (without adding a whole bunch of unintuitive and unjustified rules).  The fact that the approach covers a universal gate set means some amazing things can be explained in this simple diagrammatic language.  For example, the recent paper Quantum theory cannot consistently describe the use of itself, which led to considerable discussion on this blog, can be fully reproduced.  That is, a high school student can in principle understand the technical details of a contemporary argument between professional physicists.  I find this amazing.

Based on communication with readers I have come to realize that the people most at risk of being confused by the book are actually those who already have a little knowledge – someone who has done a year or two’s worth of undergraduate quantum courses, or someone who has taken things they read in pop-sci books too literally.  Initially, as I was developing the method, I thought it would be easy to keep “touching base” with the standard vector space formalism.  But in fact it becomes very messy to do so (and irrelevant for someone learning quantum theory for the first time).  In the end I dropped that goal, but now realize I need to develop some supplementary notes to help someone in that situation.

Q is for Quantum is certainly not designed to be used as a classroom text – if nothing else my particular style and choice of topics will not be to others’ tastes, and I haven’t included all the many, many simple examples and exercises I have students doing along with me in class when I actually teach this stuff.  It should be thought of as more a “proof of principle,” that the expository challenge can be met.  Several colleagues have used parts of these ideas already for teaching, and they have given me some great feedback.  As such I am planning on doing a revised and slightly expanded version at some point, so if you read it and have thoughts for improvement please send me them.

Ten updates

Wednesday, November 7th, 2018

If you like quantum, complexity, etc., then please read to the end! I’ve gotten a bunch of emails lately of the form “why haven’t you ever blogged about such-and-such?,” when it turned out that I damn well did blog about it; it was just somewhere down in a multi-item post.

1. Like many of you, I watched the US midterm election results with (mostly…) disappointment and dismay.  I think that history will judge us harshly for not totally and unequivocally rebuking everything Trump stands for and every politician associated with him.  But that’s not what I wanted to blog about today.

2. There was a breakthrough in communication complexity by Arkadev Chattopadhyay, Nikhil Mande, and Suhail Sherif: the first exponential separation between randomized communication complexity and log approximate rank for a total Boolean function f.  This falsifies the longstanding conjecture that these measures are polynomially related (though it doesn’t resolve the original log rank conjecture).  For those of you keeping score at home, the quantum communication complexity of f is sandwiched in between randomized CC and log approximate rank.  So, at least one of the following must now be true: either randomized CC is exponentially separated from quantum CC, or else quantum CC is exponentially separated from log approximate rank.  My money’s on the latter.

3. Ewin Tang, who achieved fame with a quantum-inspired classical algorithm for recommendation systems (which I blogged about in July), is now back with quantum-inspired classical algorithms for principal component analysis and supervised clustering.  Well, with the announcements of such algorithms; details of the analysis are to come later.

4. A bunch of people asked me about the paper by Sergey Bravyi, David Gosset, and Robert Koenig, Quantum advantage with shallow circuits.  tl;dr: it’s great!  And it was deservedly a highlight of the QIP conference back in January!  That’s why it confused me when everyone started asking about it a couple weeks ago.  The resolution is that the paper was just recently published in Science magazine, which led to popular coverage like this, which in turn led to people asking me whether this result unconditionally proves P≠BQP (that is, quantum computers can solve more problems in polynomial time than classical computers), and if not why not.  The answer is no: the paper proves an unconditional separation, but one that’s a long long way from P≠BQP, or anything else that would entail solving the central open problems of complexity theory like P vs. PSPACE.  Basically, it shows there are problems solvable in constant time with a quantum computer that aren’t solvable in constant time classically, for suitable meanings of “problem” (namely, a relation problem) and “in constant time” (namely, NC0 circuits, in which each output bit depends on only a constant number of input bits).  I understand that a stronger separation has since been achieved, between quantum NC0 and classical AC0, in work that’s not yet on the arXiv.  The problems in question, however, are all easy to solve in P, or even in classical logarithmic time, given a polynomial number of parallel processors.

5. A bunch of people also asked me about the paper by Xun Gao and Luming Duan, Efficient classical simulation of noisy quantum computation.  This paper tries to formalize something that many of us have suspected/feared for years, namely that random quantum circuits (the whole thing is specific to random circuits) can tolerate only a tiny amount of noise and decoherence before they become efficiently simulable classically.  If true, this has obvious implications for the sampling-based quantum supremacy experiments that Google and others are planning for the next few years: namely, that all the engineering effort they’ve already been investing anyway to push down the noise rate will actually be necessary!  However, correspondence with the authors revealed that there’s a significant gap in the analysis as it currently stands: namely, the current proof applies only to closed quantum systems, which would (for example) rule out all the techniques that people eventually hope to use to achieve quantum fault-tolerance—all of which are based on constantly measuring subsets of the qubits, doing essentially error-free classical computation on the measurement outcomes, throwing away noisy qubits, and pumping in fresh qubits.  Gao and Duan say that they’re currently working on an extension to open systems; in my personal view, such an extension seems essential for this interesting result to have the informal interpretation that the authors want.

6. My friend Bram Cohen asked me to announce that his company, Chia, has launched a competition for best implementation of its Verifiable Delay Functions (VDFs), with real money rewards.  You can find the details at this Github page.

7. The second Q2B (“Quantum 2 Business”) conference, organized by QC Ware Corp., will be held this coming December 10-12, at the Computer History Museum in Mountain View.  There will be two keynote addresses, one by John Preskill and the other by me.  I hope I’ll get a chance to meet some of you there!

8. Longtime colleague and friend-of-the-blog Ashwin Nayak asked me to announce that the 2019 Conference on Computational Complexity, to be held July 18-20 in exciting New Brunswick, NJ, is now accepting submissions.  I hope to be there!

9. OK, what the hell: the 21st annual, and nearly impossible to capitalize correctly, SQuInT (Southwest Quantum Information and Technology) workshop will be held February 2019 in Albuquerque, NM.  UT Austin is now a node of the SQuInT network, and I’ll hopefully be attending along with a posse of students and postdocs.  The deadline for abstract submission is coming up soon: Monday November 12!

10. I went to morning Shabbat services in Austin this past weekend, exactly one week after the tragedy in Pittsburgh.  There was massively increased security, with armed guards interrogating us, Israeli-style, about why we had no membership sticker on our car, whether we knew the name of the rabbi, etc.  Attendance was maybe a factor of three higher than usual.  About thirty priests, ministers, and Christian seminary students, and one Muslim, came up to the bima to say a prayer of solidarity with Jews.  The mayor of Austin, Steve Adler, was also on hand to give a speech.  Then the rabbi read a letter to the synagogue by Sen. Ted Cruz denouncing antisemitism (well, parts of it; he said the letter was really long).  There were murmurs of disapproval from the crowd when Cruz’s name was mentioned, but then everyone quieted down and listened.  The thing is, the US and large parts of the world are now so far outside the norms of liberal democracy, in territory so terrifyingly uncharted since the end of World War II, that “shooting up synagogues is bad” is actually something that it’s better than not for powerful people to affirm explicitly.  Anyway, while I’m neither a believer nor much of a synagogue-goer, I found the show of unity moving.

It’s hard to think when someone Hadamards your brain

Tuesday, September 25th, 2018

“Unperformed measurements have no results.” —Asher Peres


With two looming paper deadlines, two rambunctious kids, an undergrad class, program committee work, faculty recruiting, and an imminent trip to Capitol Hill to answer congressional staffers’ questions about quantum computing (and for good measure, to give talks at UMD and Johns Hopkins), the only sensible thing to do is to spend my time writing a blog post.

So: a bunch of people asked for my reaction to the new Nature Communications paper by Daniela Frauchiger and Renato Renner, provocatively titled “Quantum theory cannot consistently describe the use of itself.”  Here’s the abstract:

Quantum theory provides an extremely accurate description of fundamental processes in physics.  It thus seems likely that the theory is applicable beyond the, mostly microscopic, domain in which it has been tested experimentally.  Here, we propose a Gedankenexperiment to investigate the question whether quantum theory can, in principle, have universal validity.  The idea is that, if the answer was yes, it must be possible to employ quantum theory to model complex systems that include agents who are themselves using quantum theory.  Analysing the experiment under this presumption, we find that one agent, upon observing a particular measurement outcome, must conclude that another agent has predicted the opposite outcome with certainty.  The agents’ conclusions, although all derived within quantum theory, are thus inconsistent.  This indicates that quantum theory cannot be extrapolated to complex systems, at least not in a straightforward manner.

I first encountered Frauchiger and Renner’s argument back in July, when Renner (who I’ve known for years, and who has many beautiful results in quantum information) presented it at a summer school in Boulder, CO where I was also lecturing.  I was sufficiently interested (or annoyed?) that I pulled an all-nighter working through the argument, then discussed it at lunch with Renner as well as John Preskill.  I enjoyed figuring out exactly where I get off Frauchiger and Renner’s train—since I do get off their train.  While I found their paper thought-provoking, I reject the contention that there’s any new problem with QM’s logical consistency: for reasons I’ll explain, I think there’s only the same quantum weirdness that (to put it mildly) we’ve known about for quite some time.

In more detail, the paper makes a big deal about how the new argument rests on just three assumptions (briefly, QM works, measurements have definite outcomes, and the “transitivity of knowledge”); and how if you reject the argument, then you must reject at least one of the three assumptions; and how different interpretations (Copenhagen, Many-Worlds, Bohmian mechanics, etc.) make different choices about what to reject.

But I reject an assumption that Frauchiger and Renner never formalize.  That assumption is, basically: “it makes sense to chain together statements that involve superposed agents measuring each other’s brains in different incompatible bases, as if the statements still referred to a world where these measurements weren’t being done.”  I say: in QM, even statements that look “certain” in isolation might really mean something like “if measurement X is performed, then Y will certainly be a property of the outcome.”  The trouble arises when we have multiple such statements, involving different measurements X1, X2, …, and (let’s say) performing X1 destroys the original situation in which we were talking about performing X2.

But I’m getting ahead of myself.  The first thing to understand about Frauchiger and Renner’s argument is that, as they acknowledge, it’s not entirely new.  As Preskill helped me realize, the argument can be understood as simply the “Wigner’s-friendification” of Hardy’s Paradox.  In other words, the new paradox is exactly what you get if you take Hardy’s paradox from 1992, and promote its entangled qubits to the status of conscious observers who are in superpositions over thinking different thoughts.  Having talked to Renner about it, I don’t think he fully endorses the preceding statement.  But since I fully endorse it, let me explain the two ingredients that I think are getting combined here—starting with Hardy’s paradox, which I confess I didn’t know (despite knowing Lucien Hardy himself!) before the Frauchiger-Renner paper forced me to learn it.

Hardy’s paradox involves the two-qubit entangled state

$$\left|\psi\right\rangle = \frac{\left|00\right\rangle + \left|01\right\rangle + \left|10\right\rangle}{\sqrt{3}}.$$

And it involves two agents, Alice and Bob, who measure the left and right qubits respectively, both in the {|+〉,|-〉} basis.  Using the Born rule, we can straightforwardly calculate the probability that Alice and Bob both see the outcome |-〉 as 1/12.
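
That Born-rule calculation is a three-liner to verify numerically:

```python
import numpy as np

# Hardy state (|00> + |01> + |10>)/sqrt(3), qubit order: Alice, Bob
psi = np.array([1, 1, 1, 0]) / np.sqrt(3)

minus = np.array([1, -1]) / np.sqrt(2)
both_minus = np.kron(minus, minus)     # Alice and Bob both project onto |->

print(abs(both_minus @ psi) ** 2)      # 0.08333... = 1/12
```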

So what’s the paradox?  Well, let me now “prove” to you that Alice and Bob can never both get |-〉.  Looking at |ψ〉, we see that conditioned on Alice’s qubit being in the state |0〉, Bob’s qubit is in the state |+〉, so Bob can never see |-〉.  And conversely, conditioned on Bob’s qubit being in the state |0〉, Alice’s qubit is in the state |+〉, so Alice can never see |-〉.  OK, but since |ψ〉 has no |11〉 component, at least one of the two qubits must be in the state |0〉, so therefore at least one of Alice and Bob must see |+〉!

When it’s spelled out so plainly, the error is apparent.  Namely, what do we even mean by a phrase like “conditioned on Bob’s qubit being in the state |0〉,” unless Bob actually measured his qubit in the {|0〉,|1〉} basis?  But if Bob measured his qubit in the {|0〉,|1〉} basis, then we’d be talking about a different, counterfactual experiment.  In the actual experiment, Bob measures his qubit only in the {|+〉,|-〉} basis, and Alice does likewise.  As Asher Peres put it, “unperformed measurements have no results.”

Anyway, as I said, if you strip away the words and look only at the actual setup, it seems to me that Frauchiger and Renner’s contribution is basically to combine Hardy’s paradox with the earlier Wigner’s friend paradox.  They thereby create something that doesn’t involve counterfactuals quite as obviously as Hardy’s paradox does, and so requires a new discussion.

But to back up: what is Wigner’s friend?  Well, it’s basically just Schrödinger’s cat, except that now it’s no longer a cat being maintained in coherent superposition but a person, and we’re emphatic in demanding that this person be treated as a quantum-mechanical observer.  Thus, suppose Wigner entangles his friend with a qubit, like so:

$$ \left|\psi\right\rangle = \frac{\left|0\right\rangle \left|FriendSeeing0\right\rangle + \left|1\right\rangle \left|FriendSeeing1\right\rangle}{\sqrt{2}}. $$

From the friend’s perspective, the qubit has been measured and has collapsed to either |0〉 or |1〉.  From Wigner’s perspective, no such thing has happened—there’s only been unitary evolution—and in principle, Wigner could even confirm that by measuring |ψ〉 in a basis that included |ψ〉 as one of the basis vectors.  But how can they both be right?

Many-Worlders will yawn at this question, since for them, of course “the collapse of the wavefunction” is just an illusion created by the branching worlds, and with sufficiently advanced technology, one observer might experience the illusion even while a nearby observer doesn’t.  Ironically, the neo-Copenhagenists / Quantum Bayesians / whatever they now call themselves, though they consider themselves diametrically opposed to the Many-Worlders (and vice versa), will also yawn at the question, since their whole philosophy is about how physics is observer-relative and it’s sinful even to think about an objective, God-given “quantum state of the universe.”  If, on the other hand, you believed both that

  1. collapse is an objective physical event, and
  2. human mental states can be superposed just like anything else in the physical universe,

then Wigner’s thought experiment probably should rock your world.

OK, but how do we Wigner’s-friendify Hardy’s paradox?  Simple: in the state

$$\left|\psi\right\rangle = \frac{\left|00\right\rangle + \left|01\right\rangle + \left|10\right\rangle}{\sqrt{3}},$$

we “promote” Alice’s and Bob’s entangled qubits to two conscious observers, call them Charlie and Diane respectively, who can think two different thoughts that we represent by the states |0〉 and |1〉.  Using far-future technology, Charlie and Diane have been not merely placed into coherent superpositions over mental states but also entangled with each other.

Then, as before, Alice will measure Charlie’s brain in the {|+〉,|-〉} basis, and Bob will measure Diane’s brain in the {|+〉,|-〉} basis.  Since the whole setup is mathematically identical to that of Hardy’s paradox, the probability that Alice and Bob both get the outcome |-〉 is again 1/12.

Ah, but now we can reason as follows:

  1. Whenever Alice gets the outcome |-〉, she knows that Diane must be in the |1〉 state (since, if Diane were in the |0〉 state, then Alice would’ve certainly seen |+〉).
  2. Whenever Diane is in the |1〉 state, she knows that Charlie must be in the |0〉 state (since there’s no |11〉 component).
  3. Whenever Charlie is in the |0〉 state, he knows that Diane is in the |+〉 state, and hence Bob can’t possibly see the outcome |-〉 when he measures Diane’s brain in the {|+〉,|-〉} basis.

So to summarize, Alice knows that Diane knows that Charlie knows that Bob can’t possibly see the outcome |-〉.  By the “transitivity of knowledge,” this implies that Alice herself knows that Bob can’t possibly see |-〉.  And yet, as we pointed out before, quantum mechanics predicts that Bob can see |-〉, even when Alice has also seen |-〉.  And Alice and Bob could even do the experiment, and compare notes, and see that their “certain knowledge” was false.  Ergo, “quantum theory can’t consistently describe its own use”!

You might wonder: compared to Hardy’s original paradox, what have we gained by waving a magic wand over our two entangled qubits, and calling them “conscious observers”?  Frauchiger and Renner’s central claim is that, by this gambit, they’ve gotten rid of the illegal counterfactual reasoning that we needed to reach a contradiction in our analysis of Hardy’s paradox.  After all, they say, none of the steps in their argument involve any measurements that aren’t actually performed!  But clearly, even if no one literally measures Charlie in the {|0〉,|1〉} basis, he’s still there, thinking either the thought corresponding to |0〉 or the thought corresponding to |1〉.  And likewise Diane.  Just as much as Alice and Bob, Charlie and Diane both exist even if no one measures them, and they can reason about what they know and what they know that others know.  So then we’re free to chain together the “certainties” of Alice, Bob, Charlie, and Diane in order to produce our contradiction.

As I already indicated, I reject this line of reasoning.  Specifically, I get off the train at what I called step 3 above.  Why?  Because the inference from Charlie being in the |0〉 state to Bob seeing the outcome |+〉 holds for the original state |ψ〉, but in my view it ceases to hold once we know that Alice is going to measure Charlie in the {|+〉,|-〉} basis, which would involve a drastic unitary transformation (specifically, a “Hadamard”) on the quantum state of Charlie’s brain.  I.e., I don’t accept that we can take knowledge inferences that would hold in a hypothetical world where |ψ〉 remained unmeasured, with a particular “branching structure” (as a Many-Worlder might put it), and extend them to the situation where Alice performs a rather violent measurement on |ψ〉 that changes the branching structure by scrambling Charlie’s brain.

In quantum mechanics, measure or measure not: there is no if you hadn’t measured.


Unrelated Announcement: My awesome former PhD student Michael Forbes, who’s now on the faculty at the University of Illinois Urbana-Champaign, asked me to advertise that the UIUC CS department is hiring this year in all areas, emphatically including quantum computing. And, well, I guess my desire to do Michael a solid outweighed my fear of being tried for treason by my own department’s recruiting committee…


Another Unrelated Announcement: As of Sept. 25, 2018, it is the official editorial stance of Shtetl-Optimized that the Riemann Hypothesis and the abc conjecture both remain open problems.

CS and quantum information at UT Austin: come join us!

Wednesday, September 19th, 2018

Merry Yom Kippur!

This is my annual post where I tell you about opportunities available at UT Austin, which has long been a safe space for CS research, and which we hope will rapidly become (or return to its historical role as…) a safe space for quantum computing and information.

If you’re interested in faculty positions in computer science at UT, I have some great news: we plan to do a lot of hiring this year!  Because of the sheer volume of interviews we’ll be doing, we’d like to start our recruiting season already in the fall.  So we’re extending an unusual invitation: if you already have your materials ready, we encourage you to apply for faculty positions right now.  If you’re chosen for an interview, we could schedule it for the next few months.

We’ll be looking for great candidates across all parts of CS, but one particular interest is hiring another quantum computing theorist in CS (i.e., besides me), most likely a junior person.  While not everyone who reads this blog is a plausible candidate, and not every plausible candidate reads this blog, the intersection is surely non-negligible!  So again: we encourage you to apply right now, so we can start scheduling interviews already.

I’m also on the lookout for postdocs, mainly in theoretical quantum computing and information.  (I, and others in the theory group, are also collectively interested in postdocs in classical computational complexity.)  If you’re interested in doing a postdoc with me starting in Fall 2019, the procedure, like in previous years, is this:

  • Email me introducing yourself (if I don’t already know you), and include your CV and up to three representative papers.  Do this even if you already emailed me before.
  • Arrange for two recommendation letters to be emailed to me.

We’ll set a deadline for this of December 15.

Finally, if you’re interested in pursuing a PhD in CS at UT, please apply here!  The deadline, again, is December 15.  Just like every year, I’m on the lookout for superb, complexity-loving, quantum- or quantum-curious, lower-bound-hungry students of every background, and if you specify that you want to work with me, I’ll be sure to see your application.  Emailing me won’t help: everything is done through the application process.

As we like to say down here in Texas, hook ’em Hadamards!  (Well OK, no, we don’t especially like to say that.  It’s just a slogan that I found amusing a few years ago.)

My Tomassoni-Chisesi Prize talk

Saturday, September 15th, 2018

Update (Sep. 21) Video of Philip Kim’s and my talks is now available! (But not streaming, just a giant mp4 that you can download.)


On Thursday, I had the incredible honor of accepting the 2018 Tomassoni-Chisesi Prize in Physics at Università “La Sapienza” in Rome—“incredible” mostly because I’m of course not a physicist.  (I kept worrying they’d revoke the award when they realized I could barely solve the wave equation.)  This is not the first time quantum information was recognized; the prize has previously gone to Serge Haroche and Alain Aspect.  This year, for the first time, there was both an under-40 and an over-40 award; the latter went to Philip Kim, a quantum materials researcher at Harvard who I had the privilege to meet on this trip (he’s the taller one below).

I’m unbelievably grateful, not only to the committee, and its chair Giorgio Parisi (whose seminal work on phase transitions and satisfiability I’d long known, but who I met for the first time on this trip), but to Fabio Sciarrino, Paolo Mataloni, Fernanda Lupinacci, and everyone else who graciously hosted me and helped make my hastily-planned visit to Europe a success.

The department I visited has a storied history: here are the notes that Enrico Fermi left, documenting what he covered each day in his physics class in 1938.  The reason the last squares are blank is that, when Fermi and his Jewish wife left for Stockholm on the occasion of Fermi’s Nobel Prize, they continued directly to the US rather than return to an Italy that had just passed the racial laws.

On my way to Rome, I also gave two talks at a “quantum computing hackathon” in Zurich, called QuID (Quantum Information for Developers).  Thanks so much to Lidia del Rio for arranging that visit, which was fantastic as well.

To accept the Tomassoni-Chisesi prize, I had to give a 40-minute talk summarizing all my research from 2000 to the present—the hardest part being that I had to do it while wearing a suit, and sweating at least half my body weight.  (I also had a cold and a hacking cough.)  I think there will eventually be video of my and Prof. Kim’s talks, but it’s not yet available.  In the meantime, for those who are interested, here are my PowerPoint slides, and here’s the title and abstract:

Three Questions About Quantum Computing
Scott Aaronson (University of Texas at Austin)

I’ll discuss some of my work in quantum computing over the past 18 years, organizing it in terms of three questions.  First, how can we demonstrate, using near-future hardware, that quantum computers can get any genuine speedups at all over classical computers (ideally useful speedups)?  Second, what sorts of problems would be hard even for quantum computers, and can we turn the intractability of those problems to our advantage?  Third, are there physically reasonable models of computation even more powerful than quantum computing, or does quantum computing represent an ultimate limit?

If you’re a regular reader here, most of the content will be stuff you’ve seen before, with the exception of a story or two like the following:

Last night I was talking to my mom about my grandfather, who as it happens came through Rome 73 years ago, as an engineer with the US Army.  Disabling landmines was, ironically, one of the safer ways to be a Jew in Europe at that time.  If you’d told him then that, three-quarters of a century later, his grandson would be back here in Rome to accept an award for research in quantum computational complexity … well, I’m sure he’d have any number of questions about it.  But one thing I clearly remember is that my grandfather was always full of effusive praise for the warmth of the people he met in Italy—how, for example, Italian farmers would share food with the hungry and inadequately-provisioned Allied soldiers, despite supposedly being on the opposing side.  Today, every time I’m in Italy for a conference or a talk, I get to experience that warmth myself, and certainly the food part.

(Awww!  But I meant it.  Italians are super-warm.)

There’s a view that scientists should just pursue the truth and be serenely unaffected by prizes, recognition, and other baubles.  I think that view has a great deal to be said for it.  But thinking it over recently, I struck the following mental bargain: if I’m going to get depressed on a semi-regular basis by people attacking me online—and experience shows that I will—well then, I also get to enjoy whatever’s the opposite of that with a clear conscience.  It’s not arrogance or self-importance; it’s just trying to balance things out a bit!

So again, thanks so much—to the physics department of La Sapienza, but also to my family, friends, mentors, readers, colleagues at UT Austin and around the world, and everyone else who helps make possible whatever it is that I do.

Lecture notes! Intro to Quantum Information Science

Sunday, August 26th, 2018

Someone recently wrote that my blog is “too high on nerd whining content and too low on actual compsci content to be worth checking too regularly.”  While that’s surely one of the mildest criticisms I’ve ever received, I hope that today’s post will help to even things out.

In Spring 2017, I taught a new undergraduate course at UT Austin, entitled Introduction to Quantum Information Science.  There were about 60 students, mostly CS but also with strong representation from physics, math, and electrical engineering.  One student, Ewin Tang, made a previous appearance on this blog.  But today belongs to another student, Paulo Alves, who took it upon himself to make detailed notes of all of my lectures.  Using Paulo’s notes as a starting point, and after a full year of procrastination and delays, I’m now happy to release the full lecture notes for the course.  Among other things, I’ll be using these notes when I teach the course a second time, starting … holy smokes … this Wednesday.

I don’t pretend that these notes break any new ground.  Even if we restrict to undergrad courses (which rules out, e.g., Preskill’s legendary notes), there are already other great quantum information lecture notes available on the web, such as these from Berkeley (based on a course taught by, among others, my former adviser Umesh Vazirani and committee member Birgitta Whaley), and these from John Watrous in Waterloo.  There are also dozens of books—including Mermin’s, which we used in this course.  The only thing distinctive about these notes is that … well, they cover exactly the topics I’d cover, in exactly the order I’d cover them, and with exactly the stupid jokes and stories I’d tell in a given situation.  So if you like my lecturing style, you’ll probably like these, and if not, not (but given that you’re here, there’s hopefully some bias toward the former).

The only prerequisite for these notes is some minimal previous exposure to linear algebra and algorithms.  If you read them all, you might not be ready yet to do research in quantum information—that’s what a grad course is for—but I feel good that you’ll have an honest understanding of what quantum information is all about and where it currently stands.  (In fact, where it already stood by the late 1990s and early 2000s, but with many comments about the theoretical and experimental progress that’s been made since then.)

Also, if you’re one of the people who read Quantum Computing Since Democritus and who was disappointed by the lack of basic quantum algorithms in that book—a function of the book’s origins, as notes of lectures given to graduate students who already knew basic quantum algorithms—then consider these new notes my restitution.  If nothing else, no one can complain about a dearth of basic quantum algorithms here.

I welcome comments, bugfixes, etc.  Thanks so much, not only to Paulo for transcribing the lectures (and making the figures!), but also to Patrick Rall and Corey Ostrove for TA’ing the course, to Tom Wong and Supartha Podder for giving guest lectures, and of course, to all the students for making the course what it was.

  • Lecture 1: Course Intro, Church-Turing Thesis (3 pages)
  • Lecture 2: Probability Theory and QM (5 pages)
  • Lecture 3: Basic Rules of QM (4 pages)
  • Lecture 4: Quantum Gates and Circuits, Zeno Effect, Elitzur-Vaidman Bomb (5 pages)
  • Lecture 5: Coin Problem, Inner Products, Multi-Qubit States, Entanglement (5 pages)
  • Lecture 6: Mixed States (6 pages)
  • Lecture 7: Bloch Sphere, No-Cloning, Wiesner’s Quantum Money (6 pages)
  • Lecture 8: More on Quantum Money, BB84 Quantum Key Distribution (5 pages)
  • Lecture 9: Superdense Coding (2 pages)
  • Lecture 10: Teleportation, Entanglement Swapping, GHZ State, Monogamy (5 pages)
  • Lecture 11: Quantifying Entanglement, Mixed State Entanglement (4 pages)
  • Lecture 12: Interpretation of QM (Copenhagen, Dynamical Collapse, MWI, Decoherence) (10 pages)
  • Lecture 13: Hidden Variables, Bell’s Inequality (5 pages)
  • Lecture 14: Nonlocal Games (7 pages)
  • Lecture 15: Einstein-Certified Randomness (4 pages)
  • Lecture 16: Quantum Computing, Universal Gate Sets (8 pages)
  • Lecture 17: Quantum Query Complexity, Deutsch-Jozsa (8 pages)
  • Lecture 18: Bernstein-Vazirani, Simon (7 pages)
  • Lecture 19: RSA and Shor’s Algorithm (6 pages)
  • Lecture 20: Shor, Quantum Fourier Transform (8 pages)
  • Lecture 21: Continued Fractions, Shor Wrap-Up (4 pages)
  • Lecture 22: Grover (9 pages)
  • Lecture 23: BBBV, Applications of Grover (7 pages)
  • Lecture 24: Collision and Other Applications of Grover (6 pages)
  • Lecture 25: Hamiltonians (10 pages)
  • Lecture 26: Adiabatic Algorithm (10 pages)
  • Lecture 27: Quantum Error Correction (8 pages)
  • Lecture 28: Stabilizer Formalism (9 pages)
  • Lecture 29: Experimental Realizations of QC (9 pages)

And by popular request, here are the 2017 problem sets!

I might post solutions at a later date.

Note: If you’re taking the course in 2018 or a later year, these sets should be considered outdated and for study purposes only.


Notes and Updates (Aug. 27)

Here’s a 184-page combined file. Thanks so much to Robert Rand, Oscar Cunningham, Petter S, and Noon van der Silk for their help with this.

In case it wasn’t already explicit: these notes are copyright Scott Aaronson 2018, free for personal or academic use, but not for modification or sale.

I’ve freely moved material between lectures, so that topics wouldn’t be arbitrarily cut across lecture boundaries. This is one of the reasons why some lectures are much longer than others.

I apologize that some of the displayed equations are ugly. This is because we never found an elegant way to edit equations in Google Docs.

If you finish these notes and are still hankering for more, try my Quantum Complexity Theory or Great Ideas in Theoretical Computer Science lecture notes, or my Barbados lecture notes.  I now have links to all of them on the sidebar on the right.

Summer recapitulates life

Tuesday, July 24th, 2018

Last week, I was back at the IAS in Princeton, to speak at a wonderful PITP summer school entitled “From Qubits to Spacetime,” co-organized by Juan Maldacena and Edward Witten. This week, I’ll be back in Waterloo, to visit old and new friends at the Perimeter Institute and Institute for Quantum Computing and give a couple talks.  Then, over the weekend, I’ll be back in Boston to see old friends, colleagues, and students.  After some other miscellaneous travel, I’ll then return to Austin in late August when the semester begins.  The particular sequence IAS → Waterloo → Boston → Austin is of course one that I’ve followed before, over a longer timescale.

Two quick announcements:

First, at the suggestion of reader Sanketh Menda, I’m thinking of holding a Shtetl-Optimized meetup in Waterloo this week.  Please send me an email if you’re interested, and we’ll figure out a time and place that work for everyone.

Second, many of the videos from the IAS summer school are now available, including mine: Part I and Part II.  I cover some basics of complexity theory, the complexity of quantum states and unitary transformations, the Harlow-Hayden argument about the complexity of turning a black hole event horizon into a firewall (with my refinement), and my and Lenny Susskind’s work on circuit complexity, wormholes, and AdS/CFT.  As a special bonus, check out the super-embarrassing goof at the beginning of my first lecture—claiming a mistaken symmetry of conditional entropy and even attributing it to Edward Witten’s lecture!  (But Witten, who I met for the first time on this visit, was kind enough to call my talk “lots of fun” anyway, and give me other positive comments, which I should put on my CV or something.)

Addendum: Many of the PITP videos are well worth watching!  As one example, I found Witten’s talks to be shockingly accessible.  I’d been to a previous talk of his involving Khovanov homology, but beyond the first few minutes, it went so far over my head that I couldn’t tell you how it was for its intended audience.  I’d also been to a popular talk of Witten’s on string theory, but that’s something he could do with only 3 awake brain cells.  In these talks, by contrast, Witten proves some basic inequalities of classical and quantum information theory, then proves the Reeh-Schlieder Theorem of quantum field theory and the Hawking and Penrose singularity theorems of GR, and finally uses quantum information theory to prove positive energy conditions from quantum field theory that are often needed to make statements about GR.

Customers who liked this quantum recommendation engine might also like its dequantization

Thursday, July 12th, 2018

I’m in Boulder, CO right now for the wonderful Boulder summer school on quantum information, where I’ll be lecturing today and tomorrow on introductory quantum algorithms.  But I now face the happy obligation of taking a break from all the lecture-preparing and schmoozing, to blog about a striking new result by a student of mine—a result that will probably make an appearance in my lectures as well.

Yesterday, Ewin Tang—an 18-year-old who just finished a bachelor’s at UT Austin, and who will be starting a PhD in CS at the University of Washington in the fall—posted a preprint entitled A quantum-inspired classical algorithm for recommendation systems. Ewin’s new algorithm solves the following problem, very loosely stated: given m users and n products, and incomplete data about which users like which products, organized into a convenient binary tree data structure; and given also the assumption that the full m×n preference matrix is low-rank (i.e., that there are not too many ways the users vary in their preferences), sample some products that a given user is likely to want to buy.  This is an abstraction of the problem that’s famously faced by Amazon and Netflix, every time they tell you which books or movies you “might enjoy.”  What’s striking about Ewin’s algorithm is that it uses only polylogarithmic time: that is, time polynomial in log(m), log(n), the matrix rank, and the inverses of the relevant error parameters.  Admittedly, the polynomial involves exponents of 33 and 24: so, not exactly “practical”!  But it seems likely to me that the algorithm will run much, much faster in practice than it can be guaranteed to run in theory.  Indeed, if any readers would like to implement the thing and test it out, please let us know in the comments section!
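
For anyone who wants to start experimenting, here is a minimal Python/numpy sketch of the task itself, not of Ewin’s polylog-time algorithm: it builds a toy low-rank preference matrix, crudely “completes” it by running a truncated SVD on the full zero-filled matrix (so, poly(m,n) time, exactly the dependence Ewin’s algorithm avoids), and then samples a product with probability proportional to the squared entries of a user’s reconstructed row.  Loosely speaking, that’s the distribution both the quantum algorithm and Ewin’s classical one aim to sample from; all the parameters below are made up for illustration.

```python
# A toy illustration of the recommendation task (NOT Ewin's algorithm):
# full-matrix linear algebra, so poly(m,n) time rather than polylog.
import numpy as np

rng = np.random.default_rng(0)
m, n, k = 200, 100, 5               # users, products, rank: all made up

# A random rank-k "true" preference matrix.
A = rng.normal(size=(m, k)) @ rng.normal(size=(k, n))

# Hide ~90% of the entries, as in the incomplete-data setting.
mask = rng.random((m, n)) < 0.1
observed = np.where(mask, A, 0.0)

# Crude low-rank reconstruction: truncated SVD of the zero-filled matrix.
U, s, Vt = np.linalg.svd(observed, full_matrices=False)
A_hat = (U[:, :k] * s[:k]) @ Vt[:k, :]

# "Recommend" for user 0: sample a product with probability proportional
# to the squared entries of that user's reconstructed row.
weights = A_hat[0] ** 2
print("recommend product", rng.choice(n, p=weights / weights.sum()))
```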

As the title suggests, Ewin’s algorithm was directly inspired by a quantum algorithm for the same problem, which Kerenidis and Prakash (henceforth KP) gave in 2016, and whose claim to fame was that it, too, ran in polylog(m,n) time.  Prior to Ewin’s result, the KP algorithm was arguably the strongest candidate there was for an exponential quantum speedup for a real-world machine learning problem.  The new result thus, I think, significantly changes the landscape of quantum machine learning, by killing off one of its flagship applications.  (Note that whether KP gives a real exponential speedup was one of the main open problems mentioned in John Preskill’s survey on the applications of near-term quantum computers.)  At the same time, Ewin’s result yields a new algorithm that can be run on today’s computers, that could conceivably be useful to those who need to recommend products to customers, and that was only discovered by exploiting intuition that came from quantum computing. So I’d consider this both a defeat and a victory for quantum algorithms research.

This result was the outcome of Ewin’s undergraduate thesis project (!), which I supervised. A year and a half ago, Ewin took my intro quantum information class, whereupon it quickly became clear that I should offer this person an independent project.  So I gave Ewin the problem of proving a poly(m,n) lower bound on the number of queries that any classical randomized algorithm would need to make to the user preference data, in order to generate product recommendations for a given user, in exactly the same setting that KP had studied.  Such a lower bound seemed obvious to me: in their algorithm, KP made essential use of quantum phase estimation, the same primitive used in Shor’s factoring algorithm.  Without phase estimation, you seemed to be stuck doing linear algebra on the full m×n matrix, which of course would take poly(m,n) time.  But KP had left the problem open, I didn’t know how to solve it either, and nailing it down seemed like an obvious challenge, if we wanted to establish the reality of quantum speedups for at least one practical machine learning problem.  (For the difficulties in finding such speedups, see my essay for Nature Physics, much of which is still relevant even though it was written prior to KP.)

Anyway, for a year, Ewin tried and failed to rule out a superfast classical algorithm for the KP problem—eventually, of course, discovering the unexpected reason for the failure!  Throughout this journey, I served as Ewin’s occasional sounding board, but can take no further credit for the result.  Indeed, I admit that I was initially skeptical when Ewin told me that phase estimation did not look essential after all for generating superfast recommendations—that a classical algorithm could get a similar effect by randomly sampling a tiny submatrix of the user preference matrix, and then carefully exploiting a variant of a 2004 result by Frieze, Kannan, and Vempala.  So when I was in Berkeley a few weeks ago for the Simons quantum computing program, I had the idea of flying Ewin over to explain the new result to the experts, including Kerenidis and Prakash themselves.  After four hours of lectures and Q&A, a consensus emerged that the thing looked solid.  Only after that gauntlet did I advise Ewin to put the preprint online.
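
For intuition about the Frieze-Kannan-Vempala ingredient, here’s a sketch of the textbook version of the trick (Ewin needs a more careful variant): if you sample a small number of rows of a big matrix A, each with probability proportional to its squared norm, and rescale them appropriately, the spectrum of the resulting tiny matrix approximates that of A.  The dimensions below are, again, made up.

```python
# Sketch of FKV-style "length-squared" row sampling: estimate the top
# singular values of a huge matrix from a small rescaled row sample.
import numpy as np

rng = np.random.default_rng(1)
m, n, k = 5000, 300, 5
A = rng.normal(size=(m, k)) @ rng.normal(size=(k, n))   # big rank-k matrix

row_norms2 = np.einsum('ij,ij->i', A, A)                # squared row norms
probs = row_norms2 / row_norms2.sum()

s = 50                                  # number of sampled rows, s << m
idx = rng.choice(m, size=s, p=probs)    # length-squared sampling
# Rescale so that S^T S is an unbiased estimator of A^T A.
S = A[idx] / np.sqrt(s * probs[idx])[:, None]

print(np.round(np.linalg.svd(S, compute_uv=False)[:k], 1))  # ~ top k of A
print(np.round(np.linalg.svd(A, compute_uv=False)[:k], 1))
```

In the actual recommendation setting you never read all of A, of course; the binary tree data structure mentioned above is what serves up these length-squared samples in logarithmic time, for the quantum algorithm and Ewin’s classical one alike.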

So what’s next?  Well, one obvious challenge is to bring down the running time of Ewin’s algorithm, and (as I mentioned before) to investigate whether or not it could give a practical benefit today.  A different challenge is to find some other example of a quantum algorithm that solves a real-world machine learning problem with only a polylogarithmic number of queries … one for which the exponential quantum speedup will hopefully be Ewin-proof, ideally even provably so!  The field is now wide open.  It’s possible that my Forrelation problem, which Raz and Tal recently used for their breakthrough oracle separation between BQP and PH, could be an ingredient in such a separation.
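
(For readers who haven’t met it: Forrelation asks, roughly, how correlated one Boolean function f is with the Fourier transform of a second function g.  Here’s a brute-force sketch of the quantity, using the normalization I remember; treat the details as illustrative rather than definitive.)

```python
# Brute-force evaluation of the "forrelation" of f, g : {0,1}^n -> {+1,-1}:
#   Phi(f,g) = 2^(-3n/2) * sum over x,y of f(x) * (-1)^(x.y) * g(y),
# i.e., the correlation between f and the Hadamard transform of g.
import itertools

def forrelation(f, g, n):
    total = 0.0
    for x in itertools.product((0, 1), repeat=n):
        for y in itertools.product((0, 1), repeat=n):
            dot = sum(a * b for a, b in zip(x, y))
            total += f(x) * (-1) ** dot * g(y)
    return total / 2 ** (1.5 * n)

# The inner-product function is its own Hadamard transform for n=2,
# so pairing it with itself gives the maximum forrelation of 1.
f = lambda x: (-1) ** (x[0] * x[1])
print(forrelation(f, f, 2))   # prints 1.0
```

The decision problem is to distinguish pairs where Phi is large from pairs where it’s near zero; a quantum algorithm can estimate Phi with just a couple of queries to f and g, whereas classical algorithms provably need exponentially many, and that gap is what powers the oracle separations.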

Anyway, there’s much more to say about Ewin’s achievement, but I now need to run to lecture about quantum algorithms like Simon’s and Shor’s, which do achieve provable exponential speedups in query complexity!  Please join me in offering hearty congratulations, see Ewin’s nicely-written paper for details, and if you have any questions for me or (better yet) Ewin, feel free to ask in the comments.


Update: On the Hacker News thread, some commenters are lamenting that such a brilliant mind as Ewin’s would spend its time figuring out how to entice consumers to buy even more products that they don’t need. I confess that that’s an angle that hadn’t even occurred to me: I simply thought that it was a beautiful question whether you actually need a quantum computer to sample the rows of a partially-specified low-rank matrix in polylogarithmic time, and if the application to recommendation systems helped to motivate that question, then so much the better. Now, though, I feel compelled to point out that, in addition to the potentially lucrative application to Amazon and Netflix, research on low-rank matrix sampling algorithms might someday find many other, more economically worthless applications as well.

Another Update: For those who are interested, streaming video of my quantum algorithms lectures in Boulder is now available:

You can also see all the other lectures here.

My Y Combinator podcast

Friday, June 29th, 2018

Here it is, recorded last week at Y Combinator’s office in San Francisco.  For regular readers of this blog, there will be a few things that are new—research projects I’ve been working on this year—and many things that are old.  Hope you enjoy it!  Thanks so much to Craig Cannon of Y Combinator for inviting me.

Associated with the podcast, Hacker News will be doing an AMA with me later today.  I’ll post a link to that when it’s available.  Update: here it is.

I’m at STOC’2018 TheoryFest in Los Angeles right now, where theoretical computer scientists celebrated the 50th anniversary of the conference that in some sense was the birthplace of the P vs. NP problem.  (Two participants in the very first STOC in 1969, Richard Karp and Allan Borodin, were on a panel to share their memories, along with Ronitt Rubinfeld and Avrim Blum, who joined the action in the 1980s.)  There’s been a great program this year—if you’d like to ask me about it, maybe do so in the comments of this post rather than in the AMA.