Archive for the ‘Quantum’ Category

Raise a martini glass for Google and Martinis!

Saturday, September 6th, 2014

We’ve already been discussing this in the comments section of my previous post, but a few people emailed me to ask when I’d devote a separate blog post to the news.

OK, so for those who haven’t yet heard: this week Google’s Quantum AI Lab announced that it’s teaming up with John Martinis, of the University of California, Santa Barbara, to accelerate the Martinis group’s already-amazing efforts in superconducting quantum computing.  (See here for the MIT Tech’s article, here for Wired’s, and here for the WSJ’s.)  Besides building some of the best (if not the best) superconducting qubits in the world, Martinis, along with Matthias Troyer, was also one of the coauthors of two important papers that found no evidence for any speedup in the D-Wave machines.  (However, in addition to working with the Martinis group, Google says it will also continue its partnership with D-Wave, in an apparent effort to keep reality more soap-operatically interesting than any hypothetical scenario one could make up on a blog.)

I have the great honor of knowing John Martinis, even once sharing the stage with him at a “Physics Cafe” in Aspen.  Like everyone else in our field, I profoundly admire the accomplishments of his group: they’ve achieved coherence times in the tens of microseconds, demonstrated some of the building blocks of quantum error-correction, and gotten tantalizingly close to the fault-tolerance threshold for the surface code.  (When, in D-Wave threads, people have challenged me: “OK Scott, so then which experimental quantum computing groups should be supported more?,” my answer has always been some variant of: “groups like John Martinis’s.”)

So I’m excited about this partnership, and I wish it the very best.

But I know people will ask: apart from the support and well-wishes, do I have any predictions?  Alright, here’s one.  I predict that, regardless of what happens, commenters here will somehow make it out that I was wrong.  So for example, if the Martinis group, supported by Google, ultimately succeeds in building a useful, scalable quantum computer—by emphasizing error-correction, long coherence times (measured in the conventional way), “gate-model” quantum algorithms, universality, and all the other things that D-Wave founder Geordie Rose has pooh-poohed from the beginning—commenters will claim that still most of the credit belongs to D-Wave, for whetting Google’s appetite, and for getting it involved in superconducting QC in the first place.  (The unstated implication being that, even if there were little or no evidence that D-Wave’s approach would ever lead to a genuine speedup, we skeptics still would’ve been wrong to state that truth in public.)  Conversely, if this venture doesn’t live up to the initial hopes, commenters will claim that that just proves Google’s mistake: rather than “selling out to appease the ivory-tower skeptics,” they should’ve doubled down on D-Wave.  Even if something completely different happens—let’s say, Google, UCSB, and D-Wave jointly abandon their quantum computing ambitions, and instead partner with ISIS to establish the world’s first “Qualiphate,” ruling with a niobium fist over California and parts of Oregon—I would’ve been wrong for having failed to foresee that.  (Even if I did sort of foresee it in the last sentence…)

Yet, while I’ll never live to see the blog-commentariat acknowledge the fundamental reasonableness of my views, I might live to see scalable quantum computers become a reality, and that would surely be some consolation.  For that reason, even if for no others, I once again wish the Martinis group and Google’s Quantum AI Lab the best in their new partnership.


Unrelated Announcement: Check out a lovely (very basic) introductory video on quantum computing and information, narrated by John Preskill and Spiros Michalakis, and illustrated by Jorge Cham of PhD Comics.

“Could a Quantum Computer Have Subjective Experience?”

Monday, August 25th, 2014

Author’s Note: Below is the prepared version of a talk that I gave two weeks ago at the workshop Quantum Foundations of a Classical Universe, which was held at IBM’s TJ Watson Research Center in Yorktown Heights, NY.  My talk is for entertainment purposes only; it should not be taken seriously by anyone.  If you reply in a way that makes clear you did take it seriously (“I’m shocked and outraged that someone who dares to call himself a scientist would … [blah blah]”), I will log your IP address, hunt you down at night, and force you to put forward an account of consciousness and decoherence that deals with all the paradoxes discussed below—and then reply at length to all criticisms of your account.

If you’d like to see titles, abstracts, and slides for all the talks from the workshop—including by Charles Bennett, Sean Carroll, James Hartle, Adrian Kent, Stefan Leichenauer, Ken Olum, Don Page, Jason Pollack, Jess Riedel, Mark Srednicki, Wojciech Zurek, and Michael Zwolak—click here.  You’re also welcome to discuss these other nice talks in the comments section, though I might or might not be able to answer questions about them.  Videos of all the talks were expected before long; Jess Riedel has since announced that they’re now available.

(Note that, as is probably true for other talks as well, the video of my talk differs substantially from the prepared version—it mostly just consists of interruptions and my responses to them!  On the other hand, I did try to work some of the more salient points from the discussion into the text below.)

Thanks so much to Charles Bennett and Jess Riedel for organizing the workshop, and to all the participants for great discussions.


I didn’t prepare slides for this talk—given the topic, what slides would I use exactly?  “Spoiler alert”: I don’t have any rigorous results about the possibility of sentient quantum computers, to state and prove on slides.  I thought of giving a technical talk on quantum computing theory, but then I realized that I don’t really have technical results that bear directly on the subject of the workshop, which is how the classical world we experience emerges from the quantum laws of physics.  So, given the choice between a technical talk that doesn’t really address the questions we’re supposed to be discussing, or a handwavy philosophical talk that at least tries to address them, I opted for the latter, so help me God.

Let me start with a story that John Preskill told me years ago.  In the far future, humans have solved not only the problem of building scalable quantum computers, but also the problem of human-level AI.  They’ve built a Turing-Test-passing quantum computer.  The first thing they do, to make sure this is actually a quantum computer, is ask it to use Shor’s algorithm to factor a 10,000-digit number.  So the quantum computer factors the number.  Then they ask it, “while you were factoring that number, what did it feel like?  did you feel yourself branching into lots of parallel copies, which then recohered?  or did you remain a single consciousness—a ‘unitary’ consciousness, as it were?  can you tell us from introspection which interpretation of quantum mechanics is the true one?”  The quantum computer ponders this for a while and then finally says, “you know, I might’ve known before, but now I just … can’t remember.”

I like to tell this story when people ask me whether the interpretation of quantum mechanics has any empirical consequences.

Look, I understand the impulse to say “let’s discuss the measure problem, or the measurement problem, or derivations of the Born rule, or Boltzmann brains, or observer-counting, or whatever, but let’s take consciousness off the table.”  (Compare: “let’s debate this state law in Nebraska that says that, before getting an abortion, a woman has to be shown pictures of cute babies.  But let’s take the question of whether or not fetuses have human consciousness—i.e., the actual thing that’s driving our disagreement about that and every other subsidiary question—off the table, since that one is too hard.”)  The problem, of course, is that even after you’ve taken the elephant off the table (to mix metaphors), it keeps climbing back onto the table, often in disguises.  So, for better or worse, my impulse tends to be the opposite: to confront the elephant directly.

Having said that, I still need to defend the claim that (a) the questions we’re discussing, centered around quantum mechanics, Many Worlds, and decoherence, and (b) the question of which physical systems should be considered “conscious,” have anything to do with each other.  Many people would say that the connection doesn’t go any deeper than: “quantum mechanics is mysterious, consciousness is also mysterious, ergo maybe they’re related somehow.”  But I’m not sure that’s entirely true.  One thing that crystallized my thinking about this was a remark made in a lecture by Peter Byrne, who wrote a biography of Hugh Everett.  Byrne was discussing the question, why did it take so many decades for Everett’s Many-Worlds Interpretation to become popular?  Of course, there are people who deny quantum mechanics itself, or who have basic misunderstandings about it, but let’s leave those people aside.  Why did people like Bohr and Heisenberg dismiss Everett?  More broadly: why wasn’t it just obvious to physicists from the beginning that “branching worlds” is a picture that the math militates toward, probably the simplest, easiest story one can tell around the Schrödinger equation?  Even if early quantum physicists rejected the Many-Worlds picture, why didn’t they at least discuss and debate it?

Here was Byrne’s answer: he said, before you can really be on board with Everett, you first need to be on board with Daniel Dennett (the philosopher).  He meant: you first need to accept that a “mind” is just some particular computational process.  At the bottom of everything is the physical state of the universe, evolving via the equations of physics, and if you want to know where consciousness is, you need to go into that state, and look for where computations are taking place that are sufficiently complicated, or globally-integrated, or self-referential, or … something, and that’s where the consciousness resides.  And crucially, if following the equations tells you that after a decoherence event, one computation splits up into two computations, in different branches of the wavefunction, that thereafter don’t interact—congratulations!  You’ve now got two consciousnesses.

And if everything above strikes you as so obvious as not to be worth stating … well, that’s a sign of how much things changed in the latter half of the 20th century.  Before then, many thinkers would’ve been more likely to say, with Descartes: no, my starting point is not the physical world.  I don’t even know a priori that there is a physical world.  My starting point is my own consciousness, which is the one thing besides math that I can be certain about.  And the point of a scientific theory is to explain features of my experience—ultimately, if you like, to predict the probability that I’m going to see X or Y if I do A or B.  (If I don’t have prescientific knowledge of myself, as a single, unified entity that persists in time, makes choices, and later observes their consequences, then I can’t even get started doing science.)  I’m happy to postulate a world external to myself, filled with unseen entities like electrons behaving in arbitrarily unfamiliar ways, if it will help me understand my experience—but postulating other versions of me is, at best, irrelevant metaphysics.  This is a viewpoint that could lead you to Copenhagenism, or to its newer variants like quantum Bayesianism.

I’m guessing that many people in this room side with Dennett, and (not coincidentally, I’d say) also with Everett.  I certainly have sympathies in that direction too.  In fact, I spent seven or eight years of my life as a Dennett/Everett hardcore believer.  But, while I don’t want to talk anyone out of the Dennett/Everett view, I’d like to take you on a tour of what I see as some of the extremely interesting questions that that view leaves unanswered.  I’m not talking about “deep questions of meaning,” but about something much more straightforward: what exactly does a computational process have to do to qualify as “conscious”?

Of course, there are already tremendous difficulties here, even if we ignore quantum mechanics entirely.  Ken Olum went over much of this ground in his talk yesterday (see here for a relevant paper by Davenport and Olum).  You’ve all heard the ones about, would you agree to be painlessly euthanized, provided that a complete description of your brain would be sent to Mars as an email attachment, and a “perfect copy” of you would be reconstituted there?  Would you demand that the copy on Mars be up and running before the original was euthanized?  But what do we mean by “before”—in whose frame of reference?

Some people say: sure, none of this is a problem!  If I’d been brought up since childhood taking family vacations where we all emailed ourselves to Mars and had our original bodies euthanized, I wouldn’t think anything of it.  But the philosophers of mind are barely getting started.

There’s this old chestnut, what if each person on earth simulated one neuron of your brain, by passing pieces of paper around.  It took them several years just to simulate a single second of your thought processes.  Would that bring your subjectivity into being?  Would you accept it as a replacement for your current body?  If so, then what if your brain were simulated, not neuron-by-neuron, but by a gigantic lookup table?  That is, what if there were a huge database, much larger than the observable universe (but let’s not worry about that), that hardwired what your brain’s response was to every sequence of stimuli that your sense-organs could possibly receive.  Would that bring about your consciousness?  Let’s keep pushing: if it would, would it make a difference if anyone actually consulted the lookup table?  Why can’t it bring about your consciousness just by sitting there doing nothing?

To these standard thought experiments, we can add more.  Let’s suppose that, purely for error-correction purposes, the computer that’s simulating your brain runs the code three times, and takes the majority vote of the outcomes.  Would that bring three “copies” of your consciousness into being?  Does it make a difference if the three copies are widely separated in space or time—say, on different planets, or in different centuries?  Is it possible that the massive redundancy taking place in your brain right now is bringing multiple copies of you into being?

Maybe my favorite thought experiment along these lines was invented by my former student Andy Drucker.  In the past five years, there’s been a revolution in theoretical cryptography, around something called Fully Homomorphic Encryption (FHE), which was first discovered by Craig Gentry.  What FHE lets you do is to perform arbitrary computations on encrypted data, without ever decrypting the data at any point.  So, to someone with the decryption key, you could be proving theorems, simulating planetary motions, etc.  But to someone without the key, it looks for all the world like you’re just shuffling random strings and producing other random strings as output.
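
Fully homomorphic encryption itself is far too heavy a piece of machinery to sketch here, but a toy example conveys the flavor of “computing on data you can’t read.”  Textbook (unpadded, completely insecure) RSA is multiplicatively homomorphic: multiplying two ciphertexts yields a ciphertext of the product of the plaintexts.  Gentry’s breakthrough was to get additions and multiplications—and hence arbitrary circuits—simultaneously.  The tiny primes and messages below are illustrative stand-ins only, nothing like what a real FHE scheme uses:

```python
# Toy illustration of the homomorphic idea (NOT FHE, and not secure):
# textbook RSA lets a "server" holding only ciphertexts multiply the
# underlying plaintexts without ever decrypting them.
p, q = 61, 53
n = p * q                           # modulus 3233
e = 17                              # public exponent
d = pow(e, -1, (p - 1) * (q - 1))   # private exponent (Python 3.8+)

def enc(m):
    return pow(m, e, n)

def dec(c):
    return pow(c, d, n)

m1, m2 = 7, 9
c1, c2 = enc(m1), enc(m2)

# The server multiplies the ciphertexts without the key...
c_prod = (c1 * c2) % n

# ...and the key-holder decrypts the product of the plaintexts.
assert dec(c_prod) == (m1 * m2) % n
print(dec(c_prod))   # prints 63
```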

You can probably see where this is going.  What if we homomorphically encrypted a simulation of your brain?  And what if we hid the only copy of the decryption key, let’s say in another galaxy?  Would this computation—which looks to anyone in our galaxy like a reshuffling of gobbledygook—be silently producing your consciousness?

When we consider the possibility of a conscious quantum computer, in some sense we inherit all the previous puzzles about conscious classical computers, but then also add a few new ones.  So, let’s say I run a quantum subroutine that simulates your brain, by applying some unitary transformation U.  But then, of course, I want to “uncompute” to get rid of garbage (and thereby enable interference between different branches), so I apply U⁻¹.  Question: when I apply U⁻¹, does your simulated brain experience the same thoughts and feelings a second time?  Is the second experience “the same as” the first, or does it differ somehow, by virtue of being reversed in time?  Or, since U⁻¹U is just a convoluted implementation of the identity function, are there no experiences at all here?

Here’s a better one: many of you have heard of the Vaidman bomb.  This is a famous thought experiment in quantum mechanics where there’s a package, and we’d like to “query” it to find out whether it contains a bomb—but if we query it and there is a bomb, it will explode, killing everyone in the room.  What’s the solution?  Well, suppose we could go into a superposition of querying the bomb and not querying it, with only ε amplitude on querying the bomb, and √(1-ε²) amplitude on not querying it.  And suppose we repeat this over and over—each time, moving ε amplitude onto the “query the bomb” state if there’s no bomb there, but moving ε² probability onto the “query the bomb” state if there is a bomb (since the explosion decoheres the superposition).  Then after 1/ε repetitions, we’ll have order 1 probability of being in the “query the bomb” state if there’s no bomb.  By contrast, if there is a bomb, then the total probability we’ve ever entered that state is (1/ε)×ε² = ε.  So, either way, we learn whether there’s a bomb, and the probability that we set the bomb off can be made arbitrarily small.  (Incidentally, this is extremely closely related to how Grover’s algorithm works.)
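
For anyone who wants the bookkeeping spelled out, here’s a minimal numerical sketch of the amplitude arithmetic just described—a two-level system with amplitudes on “don’t query” and “query the bomb,” where a bomb acts as a measurement of the query branch each round.  The two-level model and the round count N = 1/ε are simplifications for illustration, not a description of any particular physical setup:

```python
import numpy as np

def vaidman(eps, bomb):
    """Run N = round(1/eps) rounds of the bomb-testing protocol.

    Returns (total probability the bomb exploded,
             final probability of being in the |query> state).
    """
    theta = np.arcsin(eps)                 # move eps amplitude per round
    R = np.array([[np.cos(theta), -np.sin(theta)],
                  [np.sin(theta),  np.cos(theta)]])
    a = np.array([1.0, 0.0])               # (amp on |don't query>, amp on |query>)
    p_explode, survive = 0.0, 1.0
    for _ in range(round(1 / eps)):
        a = R @ a
        if bomb:
            p_explode += survive * a[1] ** 2   # this round's explosion probability
            survive *= 1 - a[1] ** 2
            a = np.array([1.0, 0.0])           # surviving branch collapses back
    return p_explode, a[1] ** 2

for eps in [0.1, 0.01, 0.001]:
    boom, _ = vaidman(eps, bomb=True)
    _, p_query = vaidman(eps, bomb=False)
    print(f"eps={eps}: P(explode | bomb) ~ {boom:.4f}, "
          f"P(in query state | no bomb) ~ {p_query:.4f}")
```

For ε = 0.01, this prints an explosion probability of about 0.01 when there’s a bomb, versus about a 0.71 chance of having safely rotated into the query state when there isn’t—matching the (1/ε)×ε² = ε and “order 1” claims above.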

OK, now how about the Vaidman brain?  We’ve got a quantum subroutine simulating your brain, and we want to ask it a yes-or-no question.  We do so by querying that subroutine with ε amplitude 1/ε times, in such a way that if your answer is “yes,” then we’ve only ever activated the subroutine with total probability ε.  Yet you still manage to communicate your “yes” answer to the outside world.  So, should we say that you were conscious only in the ε fraction of the wavefunction where the simulation happened, or that the entire system was conscious?  (The answer could matter a lot for anthropic purposes.)

You might say, sure, maybe these questions are puzzling, but what’s the alternative?  Either we have to say that consciousness is a byproduct of any computation of the right complexity, or integration, or recursiveness (or something) happening anywhere in the wavefunction of the universe, or else we’re back to saying that beings like us are conscious, and all these other things aren’t, because God gave the souls to us, so na-na-na.  Or I suppose we could say, like the philosopher John Searle, that we’re conscious, and the lookup table and homomorphically-encrypted brain and Vaidman brain and all these other apparitions aren’t, because we alone have “biological causal powers.”  And what do those causal powers consist of?  Hey, you’re not supposed to ask that!  Just accept that we have them.  Or we could say, like Roger Penrose, that we’re conscious and the other things aren’t because we alone have microtubules that are sensitive to uncomputable effects from quantum gravity.  But neither of those two options ever struck me as much of an improvement.

Yet I submit to you that, between these extremes, there’s another position we can stake out—one that I certainly don’t know to be correct, but that would solve so many different puzzles if it were correct that, for that reason alone, it seems to me to merit more attention than it usually receives.  (In an effort to give the view that attention, a couple years ago I wrote an 85-page essay called The Ghost in the Quantum Turing Machine, which one or two people told me they actually read all the way through.)  If, after a lifetime of worrying (on weekends) about stuff like whether a giant lookup table would be conscious, I now seem to be arguing for this particular view, it’s less out of conviction in its truth than out of a sense of intellectual obligation: to whatever extent people care about these slippery questions at all, to whatever extent they think various alternative views deserve a hearing, I believe this one does as well.

The intermediate position that I’d like to explore says the following.  Yes, consciousness is a property of any suitably-organized chunk of matter.  But, in addition to performing complex computations, or passing the Turing Test, or other information-theoretic conditions that I don’t know (and don’t claim to know), there’s at least one crucial further thing that a chunk of matter has to do before we should consider it conscious.  Namely, it has to participate fully in the Arrow of Time.  More specifically, it has to produce irreversible decoherence as an intrinsic part of its operation.  It has to be continually taking microscopic fluctuations, and irreversibly amplifying them into stable, copyable, macroscopic classical records.

Before I go further, let me be extremely clear about what this view is not saying.  Firstly, it’s not saying that the brain is a quantum computer, in any interesting sense—let alone a quantum-gravitational computer, like Roger Penrose wants!  Indeed, I see no evidence, from neuroscience or any other field, that the cognitive information processing done by the brain is anything but classical.  The view I’m discussing doesn’t challenge conventional neuroscience on that account.

Secondly, this view doesn’t say that consciousness is in any sense necessary for decoherence, or for the emergence of a classical world.  I’ve never understood how one could hold such a belief, while still being a scientific realist.  After all, there are trillions of decoherence events happening every second in stars and asteroids and uninhabited planets.  Do those events not “count as real” until a human registers them?  (Or at least a frog, or an AI?)  The view I’m discussing only asserts the converse: that decoherence is necessary for consciousness.  (By analogy, presumably everyone agrees that some amount of computation is necessary for an interesting consciousness, but that doesn’t mean consciousness is necessary for computation.)

Thirdly, the view I’m discussing doesn’t say that “quantum magic” is the explanation for consciousness.  It’s silent on the explanation for consciousness (to whatever extent that question makes sense); it seeks only to draw a defensible line between the systems we want to regard as conscious and the systems we don’t—to address what I recently called the Pretty-Hard Problem.  And the (partial) answer it suggests doesn’t seem any more “magical” to me than any other proposed answer to the same question.  For example, if one said that consciousness arises from any computation that’s sufficiently “integrated” (or something), I could reply: what’s the “magical force” that imbues those particular computations with consciousness, and not other computations I can specify?  Or if one said (like Searle) that consciousness arises from the biology of the brain, I could reply: so what’s the “magic” of carbon-based biology, that could never be replicated in silicon?  Or even if one threw up one’s hands and said everything was conscious, I could reply: what’s the magical power that imbues my stapler with a mind?  Each of these views, along with the view that stresses the importance of decoherence and the arrow of time, is worth considering.  In my opinion, each should be judged according to how well it holds up under the most grueling battery of paradigm-cases, thought experiments, and reductios ad absurdum we can devise.

So, why might one conjecture that decoherence, and participation in the arrow of time, were necessary conditions for consciousness?  I suppose I could offer some argument about our subjective experience of the passage of time being a crucial component of our consciousness, and the passage of time being bound up with the Second Law.  Truthfully, though, I don’t have any a-priori argument that I find convincing.  All I can do is show you how many apparent paradoxes get resolved if you make this one speculative leap.

For starters, if you think about exactly how our chunk of matter is going to amplify microscopic fluctuations, it could depend on details like the precise spin orientations of various subatomic particles in the chunk.  But that has an interesting consequence: if you’re an outside observer who doesn’t know the chunk’s quantum state, it might be difficult or impossible for you to predict what the chunk is going to do next—even just to give decent statistical predictions, like you can for a hydrogen atom.  And of course, you can’t in general perform a measurement that will tell you the chunk’s quantum state, without violating the No-Cloning Theorem.  For the same reason, there’s in general no physical procedure that you can apply to the chunk to duplicate it exactly: that is, to produce a second chunk that you can be confident will behave identically (or almost identically) to the first, even just in a statistical sense.  (Again, this isn’t assuming any long-range quantum coherence in the chunk: only microscopic coherence that then gets amplified.)

It might be objected that there are all sorts of physical systems that “amplify microscopic fluctuations,” but that aren’t anything like what I described, at least not in any interesting sense: for example, a Geiger counter, or a photodetector, or any sort of quantum-mechanical random-number generator.  You can make, if not an exact copy of a Geiger counter, surely one that’s close enough for practical purposes.  And, even though the two counters will record different sequences of clicks when pointed at identical sources, the statistical distribution of clicks will be the same (and precisely calculable), and surely that’s all that matters.  So, what separates these examples from the sorts of examples I want to discuss?

What separates them is the undisputed existence of what I’ll call a clean digital abstraction layer.  By that, I mean a macroscopic approximation to a physical system that an external observer can produce, in principle, without destroying the system; that can be used to predict what the system will do to excellent accuracy (given knowledge of the environment); and that “sees” quantum-mechanical uncertainty—to whatever extent it does—as just a well-characterized source of random noise.  If a system has such an abstraction layer, then we can regard any quantum noise as simply part of the “environment” that the system observes, rather than part of the system itself.  I’ll take it as clear that such clean abstraction layers exist for a Geiger counter, a photodetector, or a computer with a quantum random number generator.  By contrast, for (say) an animal brain, I regard it as currently an open question whether such an abstraction layer exists or not.  If, someday, it becomes routine for nanobots to swarm through people’s brains and make exact copies of them—after which the “original” brains can be superbly predicted in all circumstances, except for some niggling differences that are traceable back to different quantum-mechanical dice rolls—at that point, perhaps educated opinion will have shifted to the point where we all agree the brain does have a clean digital abstraction layer.  But from where we stand today, it seems entirely possible to agree that the brain is a physical system obeying the laws of physics, while doubting that the nanobots would work as advertised.  It seems possible that—as speculated by Bohr, Compton, Eddington, and even Alan Turing—if you want to get it right you’ll need more than just the neural wiring graph, the synaptic strengths, and the approximate neurotransmitter levels.  Maybe you also need (e.g.) the internal states of the neurons, the configurations of sodium-ion channels, or other data that you simply can’t get without irreparably damaging the original brain—not only as a contingent matter of technology but as a fundamental matter of physics.

(As a side note, I should stress that obviously, even without invasive nanobots, our brains are constantly changing, but we normally don’t say as a result that we become completely different people at each instant!  To my way of thinking, though, this transtemporal identity is fundamentally different from a hypothetical identity between different “copies” of you, in the sense we’re talking about.  For one thing, all your transtemporal doppelgängers are connected by a single, linear chain of causation.  For another, outside movies like Bill and Ted’s Excellent Adventure, you can’t meet your transtemporal doppelgängers and have a conversation with them, nor can scientists do experiments on some of them, then apply what they learned to others that remained unaffected by their experiments.)

So, on this view, a conscious chunk of matter would be one that not only acts irreversibly, but that might well be unclonable for fundamental physical reasons.  If so, that would neatly resolve many of the puzzles that I discussed before.  So for example, there’s now a straightforward reason why you shouldn’t consent to being killed, while your copy gets recreated on Mars from an email attachment.  Namely, that copy will have a microstate with no direct causal link to your “original” microstate—so while it might behave similarly to you in many ways, you shouldn’t expect that your consciousness will “transfer” to it.  If you wanted to get your exact microstate to Mars, you could do that in principle using quantum teleportation—but as we all know, quantum teleportation inherently destroys the original copy, so there’s no longer any philosophical problem!  (Or, of course, you could just get on a spaceship bound for Mars: from a philosophical standpoint, it amounts to the same thing.)

Similarly, in the case where the simulation of your brain was run three times for error-correcting purposes: that could bring about three consciousnesses if, and only if, the three simulations were tied to different sets of decoherence events.  The giant lookup table and the Earth-sized brain simulation wouldn’t bring about any consciousness, unless they were implemented in such a way that they no longer had a clean digital abstraction layer.  What about the homomorphically-encrypted brain simulation?  That might no longer work, simply because we can’t assume that the microscopic fluctuations that get amplified are homomorphically encrypted.  Those are “in the clear,” which inevitably leaks information.  As for the quantum computer that simulates your thought processes and then perfectly reverses the simulation, or that queries you like a Vaidman bomb—in order to implement such things, we’d of course need to use quantum fault-tolerance, so that the simulation of you stayed in an encoded subspace and didn’t decohere.  But under our assumption, that would mean the simulation wasn’t conscious.

Now, it might seem to some of you like I’m suggesting something deeply immoral.  After all, the view I’m considering implies that, even if a system passed the Turing Test, and behaved identically to a human, even if it eloquently pleaded for its life, if it wasn’t irreversibly decohering microscopic events then it wouldn’t be conscious, so it would be fine to kill it, torture it, whatever you want.

But wait a minute: if a system isn’t doing anything irreversible, then what exactly does it mean to “kill” it?  If it’s a classical computation, then at least in principle, you could always just restore from backup.  You could even rewind and not only erase the memories of, but “uncompute” (“untorture”?) whatever tortures you had performed.  If it’s a quantum computation, you could always invert the unitary transformation U that corresponded to killing the thing (then reapply U and invert it again for good measure, if you wanted).  Only for irreversible systems are there moral acts with irreversible consequences.

This is related to something that’s bothered me for years in quantum foundations.  When people discuss Schrödinger’s cat, they always—always—insert some joke about, “obviously, this experiment wouldn’t pass the Ethical Review Board.  Nowadays, we try to avoid animal cruelty in our quantum gedankenexperiments.”  But actually, I claim that there’s no animal cruelty at all in the Schrödinger’s cat experiment.  And here’s why: in order to prove that the cat was ever in a coherent superposition of |Alive〉 and |Dead〉, you need to be able to measure it in a basis like {|Alive〉+|Dead〉,|Alive〉-|Dead〉}.  But if you can do that, you must have such precise control over all the cat’s degrees of freedom that you can also rotate unitarily between the |Alive〉 and |Dead〉 states.  (To see this, let U be the unitary that you applied to the |Alive〉 branch, and V the unitary that you applied to the |Dead〉 branch, to bring them into coherence with each other; then consider applying U⁻¹V.)  But if you can do that, then in what sense should we say that the cat in the |Dead〉 state was ever “dead” at all?  Normally, when we speak of “killing,” we mean doing something irreversible—not rotating to some point in a Hilbert space that we could just as easily rotate away from.
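
In symbols, here’s one way to unpack that parenthetical, reading “bring them into coherence” as “map the two branches to the same state, up to a phase”:

```latex
% If the interference experiment requires implementable unitaries U, V with
U\,|\mathrm{Alive}\rangle \;=\; e^{i\phi}\,V\,|\mathrm{Dead}\rangle
% for some phase \phi, then U^{-1}V is itself an implementable operation, and
\bigl(U^{-1}V\bigr)\,|\mathrm{Dead}\rangle \;=\; e^{-i\phi}\,|\mathrm{Alive}\rangle ,
% i.e., the same control needed to certify the superposition also suffices
% to rotate the "dead" branch back into the "alive" one.
```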

(There followed discussion among some audience members about the question of whether, if you destroyed all records of some terrible atrocity, like the Holocaust, everywhere in the physical world, you would thereby cause the atrocity “never to have happened.”  Many people seemed surprised by my willingness to accept that implication of what I was saying.  By way of explaining, I tried to stress just how far our everyday, intuitive notion of “destroying all records of something” falls short of what would actually be involved here: when we think of “destroying records,” we think about burning books, destroying the artifacts in museums, silencing witnesses, etc.  But even if all those things were done and many others, still the exact configurations of the air, the soil, and photons heading away from the earth at the speed of light would retain their silent testimony to the Holocaust’s reality.  “Erasing all records” in the physics sense would be something almost unimaginably more extreme: it would mean inverting the entire physical evolution in the vicinity of the earth, stopping time’s arrow and running history itself backwards.  Such ‘unhappening’ of what’s happened is something that we lack any experience of, at least outside of certain quantum interference experiments—though in the case of the Holocaust, one could be forgiven for wishing it were possible.)

OK, so much for philosophy of mind and morality; what about the interpretation of quantum mechanics?  If we think about consciousness in the way I’ve suggested, then who’s right: the Copenhagenists or the Many-Worlders?  You could make a case for either.  The Many-Worlders would be right that we could always, if we chose, think of decoherence events as “splitting” our universe into multiple branches, each with different versions of ourselves, that thereafter don’t interact.  On the other hand, the Copenhagenists would be right that, even in principle, we could never do any experiment where this “splitting” of our minds would have any empirical consequence.  On this view, if you can control a system well enough that you can actually observe interference between the different branches, then it follows that you shouldn’t regard the system as conscious, because it’s not doing anything irreversible.

In my essay, the implication that concerned me the most was the one for “free will.”  If being conscious entails amplifying microscopic events in an irreversible and unclonable way, then someone looking at a conscious system from the outside might not, in general, be able to predict what it’s going to do next, not even probabilistically.  In other words, its decisions might be subject to at least some “Knightian uncertainty”: uncertainty that we can’t even quantify in a mutually-agreed way using probabilities, in the same sense that we can quantify our uncertainty about (say) the time of a radioactive decay.  And personally, this is actually the sort of “freedom” that interests me the most.  I don’t really care if my choices are predictable by God, or by a hypothetical Laplace demon: that is, if they would be predictable (at least probabilistically), given complete knowledge of the microstate of the universe.  By definition, there’s essentially no way for my choices not to be predictable in that weak and unempirical sense!  On the other hand, I’d prefer that my choices not be completely predictable by other people.  If someone could put some sheets of paper into a sealed envelope, then I spoke extemporaneously for an hour, and then the person opened the envelope to reveal an exact transcript of everything I said, that’s the sort of thing that really would cause me to doubt in what sense “I” existed as a locus of thought.  But you’d have to actually do the experiment (or convince me that it could be done): it doesn’t count just to talk about it, or to extrapolate from fMRI experiments that predict which of two buttons a subject is going to press with 60% accuracy a few seconds in advance.

But since we’ve got some cosmologists in the house, let me now turn to discussing the implications of this view for Boltzmann brains.

(For those tuning in from home: a Boltzmann brain is a hypothetical chance fluctuation in the late universe, which would include a conscious observer with all the perceptions that a human being—say, you—is having right now, right down to false memories and false beliefs of having arisen via Darwinian evolution.  On statistical grounds, the overwhelming majority of Boltzmann brains last just long enough to have a single thought—like, say, the one you’re having right now—before they encounter the vacuum and freeze to death.  If you measured some part of the vacuum state toward which our universe seems to be heading, asking “is there a Boltzmann brain here?,” quantum mechanics predicts that the probability would be ridiculously astronomically small, but nonzero.  But, so the argument goes, if the vacuum lasts for infinite time, then as long as the probability is nonzero, it doesn’t matter how tiny it is: you’ll still get infinitely many Boltzmann brains indistinguishable from any given observer; and for that reason, any observer should consider herself infinitely likelier to be a Boltzmann brain than to be the “real,” original version.  For the record, even among the strange people at the IBM workshop, no one actually worried about being a Boltzmann brain.  The question, rather, is whether, if a cosmological model predicts Boltzmann brains, then that’s reason enough to reject the model, or whether we can live with such a prediction, since we have independent grounds for knowing that we can’t be Boltzmann brains.)

At this point, you can probably guess where this is going.  If decoherence, entropy production, full participation in the arrow of time are necessary conditions for consciousness, then it would follow, in particular, that a Boltzmann brain is not conscious.  So we certainly wouldn’t be Boltzmann brains, even under a cosmological model that predicts infinitely more of them than of us.  We can wipe our hands; the problem is solved!

I find it extremely interesting that, in their recent work, Kim Boddy, Sean Carroll, and Jason Pollack reached a similar conclusion, but from a completely different starting point.  They said: look, under reasonable assumptions, the late universe is just going to stay forever in an energy eigenstate—just sitting there doing nothing.  It’s true that, if someone came along and measured the energy eigenstate, asking “is there a Boltzmann brain here?,” then with a tiny but nonzero probability the answer would be yes.  But since no one is there measuring, what licenses us to interpret the nonzero overlap in amplitude with the Boltzmann brain state, as a nonzero probability of there being a Boltzmann brain?  I think they, too, are implicitly suggesting: if there’s no decoherence, no arrow of time, then we’re not authorized to say that anything is happening that “counts” for anthropic purposes.

Let me now mention an obvious objection.  (In fact, when I gave the talk, this objection was raised much earlier.)  You might say, “look, if you really think irreversible decoherence is a necessary condition for consciousness, then you might find yourself forced to say that there’s no consciousness, because there might not be any such thing as irreversible decoherence!  Imagine that our entire solar system were enclosed in an anti de Sitter (AdS) boundary, like in Greg Egan’s science-fiction novel Quarantine.  Inside the box, there would just be unitary evolution in some Hilbert space: maybe even a finite-dimensional Hilbert space.  In which case, all these ‘irreversible amplifications’ that you lay so much stress on wouldn’t be irreversible at all: eventually all the Everett branches would recohere; in fact they’d decohere and recohere infinitely many times.  So by your lights, how could anything be conscious inside the box?”

My response to this involves one last speculation.  I speculate that the fact that we don’t appear to live in AdS space—that we appear to live in (something evolving toward) a de Sitter space, with a positive cosmological constant—might be deep and important and relevant.  I speculate that, in our universe, “irreversible decoherence” means: the records of what you did are now heading toward our de Sitter horizon at the speed of light, and for that reason alone—even if for no others—you can’t put Humpty Dumpty back together again.  (Here I should point out, as several workshop attendees did to me, that Bousso and Susskind explored something similar in their paper The Multiverse Interpretation of Quantum Mechanics.)

Does this mean that, if cosmologists discover tomorrow that the cosmological constant is negative, or will become negative, then it will turn out that none of us were ever conscious?  No, that’s stupid.  What it would suggest is that the attempt I’m now making on the Pretty-Hard Problem had smacked into a wall (an AdS wall?), so that I, and anyone else who stressed in-principle irreversibility, should go back to the drawing board.  (By analogy, if some prescription for getting rid of Boltzmann brains fails, that doesn’t mean we are Boltzmann brains; it just means we need a new prescription.  Tempting as it is to skewer our opponents’ positions with these sorts of strawman inferences, I hope we can give each other the courtesy of presuming a bare minimum of sense.)

Another question: am I saying that, in order to be absolutely certain of whether some entity satisfied the postulated precondition for consciousness, one might, in general, need to look billions of years into the future, to see whether the “decoherence” produced by the entity was really irreversible?  Yes (pause to gulp bullet).  I am saying that.  On the other hand, I don’t think it’s nearly as bad as it sounds.  After all, the category of “consciousness” might be morally relevant, or relevant for anthropic reasoning, but presumably we all agree that it’s unlikely to play any causal role in the fundamental laws of physics.  So it’s not as if we’ve introduced any teleology into the laws of physics by this move.

Let me end by pointing out what I’ll call the “Tegmarkian slippery slope.”  It feels scientific and rational—from the perspective of many of us, even banal—to say that, if we’re conscious, then any sufficiently-accurate computer simulation of us would also be.  But I tried to convince you that this view depends, for its aura of obviousness, on our agreeing not to probe too closely exactly what would count as a “sufficiently-accurate” simulation.  E.g., does it count if the simulation is done in heavily-encrypted form, or encoded as a giant lookup table?  Does it matter if anyone actually runs the simulation, or consults the lookup table?  Now, all the way at the bottom of the slope is Max Tegmark, who asks: to produce consciousness, what does it matter if the simulation is physically instantiated at all?  Why isn’t it enough for the simulation to “exist” mathematically?  Or, better yet: if you’re worried about your infinitely-many Boltzmann brain copies, then why not worry equally about the infinitely many descriptions of your life history that are presumably encoded in the decimal expansion of π?  Why not hold workshops about how to avoid the prediction that we’re infinitely likelier to be “living in π” than to be our “real” selves?

From this extreme, even most scientific rationalists recoil.  They say, no, even if we don’t yet know exactly what’s meant by “physical instantiation,” we agree that you only get consciousness if the computer program is physically instantiated somehow.  But now I have the opening I want.  I can say: once we agree that physical existence is a prerequisite for consciousness, why not participation in the Arrow of Time?  After all, our ordinary ways of talking about sentient beings—outside of quantum mechanics, cosmology, and maybe theology—don’t even distinguish between the concepts “exists” and “exists and participates in the Arrow of Time.”  And to say we have no experience of reversible, clonable, coherently-executable, atemporal consciousnesses is a massive understatement.

Of course, we should avoid the sort of arbitrary prejudice that Turing warned against in Computing Machinery and Intelligence.  Just because we lack experience with extraterrestrial consciousnesses, doesn’t mean it would be OK to murder an intelligent extraterrestrial if we met one tomorrow.  In just the same way, just because we lack experience with clonable, atemporal consciousnesses, doesn’t mean it would be OK to … wait!  As we said before, clonability, and aloofness from time’s arrow, call severely into question what it even means to “murder” something.  So maybe this case isn’t as straightforward as the extraterrestrials after all.

At this point, I’ve probably laid out enough craziness, so let me stop and open things up for discussion.

“How Might Quantum Information Transform Our Future?”

Tuesday, July 22nd, 2014

So, the Templeton Foundation invited me to write a 1500-word essay on the above question.  It’s like a blog post, except they pay me to do it!  My essay is now live, here.  I hope you enjoy my attempt at techno-futurist prose.  You can comment on the essay either here or over at Templeton’s site.  Thanks very much to Ansley Roan for commissioning the piece.

Randomness Rules in Quantum Mechanics

Monday, June 16th, 2014

So, Part II of my two-part series for American Scientist magazine about how to recognize random numbers is now out.  This part—whose original title was the one above, but was changed to “Quantum Randomness” to fit the allotted space—is all about quantum mechanics and the Bell inequality, and their use in generating “Einstein-certified random numbers.”  I discuss the CHSH game, the Free Will Theorem, and Gerard ’t Hooft’s “superdeterminism” (just a bit), before explaining the striking recent protocols of Colbeck, Pironio et al., Vazirani and Vidick, Coudron and Yuen, and Miller and Shi, all of which expand a short random seed into additional random bits that are “guaranteed to be random unless Nature resorted to faster-than-light communication to bias them.”  I hope you like it.

[Update: See here for Hacker News thread]

In totally unrelated news, President Obama’s commencement speech at UC Irvine, about climate change and the people who still deny its reality, is worth reading.

The NEW Ten Most Annoying Questions in Quantum Computing

Tuesday, May 13th, 2014

Eight years ago, I put up a post entitled The Ten Most Annoying Questions in Quantum Computing.  One of the ten wasn’t a real question—it was simply a request for readers to submit questions—so let’s call it nine.  I’m delighted to say that, of the nine questions, six have by now been completely settled—most recently, my question about the parallel-repeated value of the CHSH game, which Andris Ambainis pointed out to me last week can be answered using a 2008 result of Barak et al. combined with a 2013 result of Dinur and Steurer.

To be clear, the demise of so many problems is exactly the outcome I wanted. In picking problems, my goal wasn’t to shock and awe with difficulty—as if to say “this is how smart I am, that whatever stumps me will also stump everyone else for decades.” Nor was it to showcase my bottomless profundity, by proffering questions so vague, multipartite, and open-ended that no matter what progress was made, I could always reply “ah, but you still haven’t addressed the real question!” Nor, finally, was my goal to list the biggest research directions for the entire field, the stuff everyone already knows about (“is there a polynomial-time quantum algorithm for graph isomorphism?”). My interest was exclusively in “little” questions, in weird puzzles that looked (at least at the time) like there was no deep obstruction to just killing them one by one, whichever way their answers turned out. What made them annoying was that they hadn’t succumbed already.

So, now that two-thirds of my problems have met the fate they deserved, at Andris’s suggestion I’m presenting a new list of Ten Most Annoying Questions in Quantum Computing—a list that starts with the three still-unanswered questions from the old list, and then adds seven more.

But we’ll get to that shortly. First, let’s review the six questions that have been answered.


CLOSED, NO-LONGER ANNOYING QUESTIONS IN QUANTUM COMPUTING

1. Given an n-qubit pure state, is there always a way to apply Hadamard gates to some subset of the qubits, so as to make all 2^n computational basis states have nonzero amplitudes?  Positive answer by Ashley Montanaro and Dan Shepherd, posted to this blog in 2006.
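
Just to make the statement of the question concrete, here’s a small brute-force check (a sketch only—nothing to do with the Montanaro–Shepherd argument) on a 3-qubit GHZ state, which starts out with six zero amplitudes:

```python
import itertools
import numpy as np

H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
I2 = np.eye(2)

def hadamard_subset(state, subset, n):
    """Apply H to the qubits in `subset` of an n-qubit state vector."""
    op = np.array([[1.0]])
    for q in range(n):
        op = np.kron(op, H if q in subset else I2)
    return op @ state

def find_subset(state, n):
    """Return the first subset of qubits whose Hadamards kill all zero amplitudes."""
    for k in range(n + 1):
        for subset in itertools.combinations(range(n), k):
            out = hadamard_subset(state, subset, n)
            if np.all(np.abs(out) > 1e-12):
                return subset, np.abs(out).min()
    return None

n = 3
ghz = np.zeros(2 ** n)
ghz[0] = ghz[-1] = 1 / np.sqrt(2)    # (|000> + |111>)/sqrt(2)

print(find_subset(ghz, n))
```

It reports that Hadamards on qubits 0 and 1 already suffice here, with every amplitude equal to 1/(2√2) ≈ 0.354 in absolute value (the empty and single-qubit subsets all fail).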

3. Can any QMA(2) (QMA with two unentangled yes-provers) protocol be amplified to exponentially small error probability?  Positive answer by Aram Harrow and Ashley Montanaro, from a FOCS’2010 paper.

4. If a unitary operation U can be applied in polynomial time, then can some square root of U also be applied in polynomial time?  Positive answer by Lana Sheridan, Dmitri Maslov, and Michele Mosca, from a 2008 paper.

5. Suppose Alice and Bob are playing n parallel CHSH games, with no communication or entanglement. Is the probability that they’ll win all n games at most p^n, for some p bounded below 0.853?

OK, let me relay what Andris Ambainis told me about this question, with Andris’s kind permission. First of all, we’ve known for a while that the optimal success probability is not the (3/4)^n that Alice and Bob could trivially achieve by just playing all n games separately. I observed in 2006 that, by correlating their strategies between pairs of games in a clever way, Alice and Bob can win with probability (√10 / 4)^n ~ 0.79^n. And Barak et al. showed in 2008 that they can win with probability ((1+√5)/4)^n ~ 0.81^n. (Unfortunately, I don’t know the actual strategy that achieves the latter bound!  Barak et al. say they’ll describe it in the full version of their paper, but the full version hasn’t yet appeared.)

Anyway, Dinur-Steurer 2013 gave a general recipe to prove that the value of a repeated projection game is at most α^n, where α is some constant that depends on the game in question. When Andris followed their recipe for the CHSH game, he obtained the result α=(1+√5)/4—thereby showing that Barak et al.’s strategy, whatever it is, is precisely optimal! Andris also observes that, for any two-prover game G, the Dinur-Steurer bound α(G) is always strictly less than the entangled value ω*(G), unless the classical and entangled values are the same for one copy of the game (i.e., unless ω(G)=ω*(G)). This implies that parallel repetition can never completely eliminate a quantum advantage.
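
Since several constants are flying around here, a quick numeric sanity check may help (the 0.853 in the question is presumably standing in for the entangled value cos²(π/8) ≈ 0.854 of a single CHSH game):

```python
import math

classical_single = 3 / 4                       # best classical strategy for one CHSH game
aaronson_2006    = math.sqrt(10) / 4           # correlated-pairs strategy, ~0.7906 per game
barak_et_al_2008 = (1 + math.sqrt(5)) / 4      # ~0.8090 per game; optimal by Dinur-Steurer
entangled_value  = math.cos(math.pi / 8) ** 2  # ~0.8536, quantum value of one game

print(f"(3/4)         = {classical_single:.4f}")
print(f"sqrt(10)/4    = {aaronson_2006:.4f}")
print(f"(1+sqrt(5))/4 = {barak_et_al_2008:.4f}")
print(f"cos^2(pi/8)   = {entangled_value:.4f}")

# The classical per-game base stays strictly below the entangled value,
# so 0.809^n decays strictly faster than 0.854^n.
n = 10
print(f"n={n}: classical {barak_et_al_2008**n:.3e} vs entangled {entangled_value**n:.3e}")
```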

6. Forget about an oracle relative to which BQP is not in PH (the Polynomial Hierarchy). Forget about an oracle relative to which BQP is not in AM (Arthur-Merlin). Is there an oracle relative to which BQP is not in SZK (Statistical Zero-Knowledge)?  Positive answer by me, posted to this blog in 2006.  See also my BQP vs. PH paper for a different proof.

9. Is there an n-qubit pure state that can be prepared by a circuit of size n^3, and that can’t be distinguished from the maximally mixed state by any circuit of size n^2?  A positive answer follows from this 2009 paper by Richard Low—thanks very much to Fernando Brandao for bringing that to my attention a few months ago.


OK, now on to:

THE NEW TEN MOST ANNOYING QUESTIONS IN QUANTUM COMPUTING

1. Can we get any upper bound whatsoever on the complexity class QMIP—i.e., quantum multi-prover interactive proofs with unlimited prior entanglement? (Since I asked this question in 2006, Ito and Vidick achieved the breakthrough lower bound NEXP⊆QMIP, but there’s been basically no progress on the upper bound side.)

2. Given any n-qubit unitary operation U, does there exist an oracle relative to which U can be (approximately) applied in polynomial time? (Since 2006, my interest in this question has only increased. See this paper by me and Greg Kuperberg for background and related results.)

3. How many mutually unbiased bases are there in non-prime-power dimensions?

4. Since Chris Fuchs was so thrilled by my including one of his favorite questions on my earlier list (question #3 above), let me add another of his favorites: do SIC-POVMs exist in arbitrary finite dimensions?

5. Is there a Boolean function f:{0,1}^n→{0,1} whose bounded-error quantum query complexity is strictly greater than n/2?  (Thanks to Shelby Kimmel for this question!  Note that this paper by van Dam shows that the bounded-error quantum query complexity never exceeds n/2+O(√n), while this paper by Ambainis et al. shows that it’s at least n/2-O(√n) for almost all Boolean functions f.)

6. Is there a “universal disentangler”: that is, a superoperator S that takes n^O(1) qubits as input; that produces a 2n-qubit bipartite state (with n qubits on each side) as output; whose output S(ρ) is always close in variation distance to a separable state; and that given an appropriate input state, can produce as output an approximation to any desired separable state?  (See here for background about this problem, originally posed by John Watrous. Note that if such an S existed and were computationally efficient, it would imply QMA=QMA(2).)

7. Suppose we have explicit descriptions of n two-outcome POVM measurements—say, as d×d Hermitian matrices E_1,…,E_n—and are also given k=(log(nd))^O(1) copies of an unknown quantum state ρ in d dimensions.  Is there a way to measure the copies so as to estimate the n expectation values Tr(E_1ρ),…,Tr(E_nρ), each to constant additive error?  (A forthcoming paper of mine on private-key quantum money will contain some background and related results.)

8. Is there a collection of 1- and 2-qubit gates that generates a group of unitary matrices that is (a) not universal for quantum computation, (b) not just conjugate to permuted diagonal matrices or one-qubit gates plus swaps, and (c) not conjugate to a subgroup of the Clifford group?

9. Given a partial Boolean function f:S→{0,1} with S⊆{0,1}^n, is the bounded-error quantum query complexity of f always polynomially related to the smallest degree of any polynomial p:{0,1}^n→R such that (a) p(x)∈[0,1] for all x∈{0,1}^n, and (b) |p(x)-f(x)|≤1/3 for all x∈S?

10. Is there a quantum finite automaton that reads in an infinite sequence of i.i.d. coin flips, and whose limiting probability of being found in an “accept” state is at least 2/3 if the coin is fair and at most 1/3 if the coin is unfair?  (See this paper by me and Andy Drucker for background and related results.)

Is There Anything Beyond Quantum Computing?

Friday, April 11th, 2014

So I’ve written an article about the above question for PBS’s website—a sort of tl;dr version of my 2005 survey paper NP-Complete Problems and Physical Reality, but updated with new material about the simulation of quantum field theories and about AdS/CFT.  Go over there, read the article (it’s free), then come back here to talk about it if you like.  Thanks so much to Kate Becker for commissioning the article.

In other news, there’s a profile of me at MIT News (called “The Complexonaut”) that some people might find amusing.

Oh, and anyone who thinks the main reason to care about quantum computing is that, if our civilization ever manages to surmount the profound scientific and technological obstacles to building a scalable quantum computer, then that little padlock icon on your web browser would no longer represent ironclad security?  Ha ha.  Yeah, it turns out that, besides factoring integers, you can also break OpenSSL by (for example) exploiting a memory bug in C.  The main reason to care about quantum computing is, and has always been, science.

Waiting for BQP Fever

Tuesday, April 1st, 2014

Update (April 5): By now, three or four people have written in asking for my reaction to the preprint “Computational solution to quantum foundational problems” by Arkady Bolotin.  (See here for the inevitable Slashdot discussion, entitled “P vs. NP Problem Linked to the Quantum Nature of the Universe.”)  It gives me no pleasure to respond to this sort of thing—it would be far better to let papers this gobsmackingly uninformed about the relevant issues fade away in quiet obscurity—but since that no longer seems to be possible in the age of social media, my brief response is here.


(note: sorry, no April Fools post, just a post that happens to have gone up on April Fools)

This weekend, Dana and I celebrated our third anniversary by going out to your typical sappy romantic movie: Particle Fever, a documentary about the Large Hadron Collider.  As it turns out, the movie was spectacularly good; anyone who reads this blog should go see it.  Or, to offer even higher praise:

If watching Particle Fever doesn’t cause you to feel in your bones the value of fundamental science—the thrill of discovery, unmotivated by any application—then you are not truly human.  You are a barnyard animal who happens to walk on its hind legs.

Indeed, I regard Particle Fever as one of the finest advertisements for science itself ever created.  It’s effective precisely because it doesn’t try to tell you why science is important (except for one scene, where an economist asks a physicist after a public talk about the “return on investment” of the LHC, and is given the standard correct answer, about “what was the return on investment of radio waves when they were first discovered?”).  Instead, the movie simply shows you the lives of particle physicists, of people who take for granted the urgency of knowing the truth about the basic constituents of reality.  And in showing you the scientists’ quest, it makes you feel as they feel.  Incidentally, the movie also shows footage of Congressmen ridiculing the uselessness of the Superconducting Supercollider, during the debates that led to the SSC’s cancellation.  So, gently, implicitly, you’re invited to choose: whose side are you on?

I do have a few, not quite criticisms of the movie, but points that any viewer should bear in mind while watching it.

First, it’s important not to come away with the impression that Particle Fever shows “what science is usually like.”  Sure, there are plenty of scenes that any scientist would find familiar: sleep-deprived postdocs; boisterous theorists correcting each other’s statements over Chinese food; a harried lab manager walking to the office oblivious to traffic.  On the other hand, the decades-long quest to find the Higgs boson, the agonizing drought of new data before the one big money shot, the need for an entire field to coalesce around a single machine, the whole careers hitched to specific speculative scenarios that this one machine could favor or disfavor—all of that is a profoundly abnormal situation in the history of science.  Particle physics didn’t used to be that way, and other parts of science are not that way today.  Of course, the fact that particle physics became that way makes it unusually suited for a suspenseful movie—a fact that the creators of Particle Fever understood perfectly and exploited to the hilt.

Second, the movie frames the importance of the Higgs search as follows: if the Higgs boson turned out to be relatively light, like 115 GeV, then that would favor supersymmetry, and hence an “elegant, orderly universe.”  If, on the other hand, the Higgs turned out to be relatively heavy, like 140 GeV, then that would favor anthropic multiverse scenarios (and hence a “messy, random universe”).  So the fact that the Higgs ended up being 125 GeV means the universe is coyly refusing to tell us whether it’s orderly or random, and more research is needed.

In my view, it’s entirely appropriate for a movie like this one to relate its subject matter to big, metaphysical questions, to the kinds of questions anyone can get curious about (in contrast to, say, “what is the mechanism of electroweak symmetry breaking?”) and that the scientists themselves talk about anyway.  But caution is needed here.  My lay understanding, which might be wrong, is as follows: while it’s true that a lighter Higgs would tend to favor supersymmetric models, the only way to argue that a heavier Higgs would “favor the multiverse,” is if you believe that a multiverse is automatically favored by a lack of better explanations.  More broadly, I wish the film had made clearer that the explanation for (some) apparent “fine-tunings” in the Standard Model might be neither supersymmetry, nor the multiverse, nor “it’s just an inexplicable accident,” but simply some other explanation that no one has thought of yet, but that would emerge from a better understanding of quantum field theory.  As one example, on reading up on the subject after watching the film, I was surprised to learn that a very conservative-sounding idea—that of “asymptotically safe gravity”—was used in 2009 to predict the Higgs mass right on the nose, at 126.3 ± 2.2 GeV.  Of course, it’s possible that this was just a lucky guess (there were, after all, lots of Higgs mass predictions).  But as an outsider, I’d love to understand why possibilities like this don’t seem to get discussed more (there might, of course, be perfectly good reasons that I don’t know).

Third, for understandable dramatic reasons, the movie focuses almost entirely on the “younger generation,” from postdocs working on ATLAS and CMS detectors, to theorists like Nima Arkani-Hamed who are excited about the LHC because of its ability to test scenarios like supersymmetry.  From the movie’s perspective, the creation of the Standard Model itself, in the 60s and 70s, might as well be ancient history.  Indeed, when Peter Higgs finally appears near the end of the film, it’s as if Isaac Newton has walked onstage.  At several points, I found myself wishing that some of the original architects of the Standard Model, like Steven Weinberg or Sheldon Glashow, had been interviewed to provide their perspectives.  After all, their model is really the one that’s been vindicated at the LHC, not (so far) any of the newer ideas like supersymmetry or large extra dimensions.

OK, but let me come to the main point of this post.  I confess that my overwhelming emotion on watching Particle Fever was one of regret—regret that my own field, quantum computing, has never managed to make the case for itself the way particle physics and cosmology have, in terms of the human urge to explore the unknown.

See, from my perspective, there’s a lot to envy about the high-energy physicists.  Most importantly, they don’t perceive any need to justify what they do in terms of practical applications.  Sure, they happily point to “spinoffs,” like the fact that the Web was invented at CERN.  But any time they try to justify what they do, the unstated message is that if you don’t see the inherent value of understanding the universe, then the problem lies with you.

Now, no marketing consultant would ever in a trillion years endorse such an out-of-touch, elitist sales pitch.  But the remarkable fact is that the message has more-or-less worked.  While the cancellation of the SSC was a setback, the high-energy physicists did succeed in persuading the world to pony up the $11 billion needed to build the LHC, and to gain the information that the mass of the Higgs boson is about 125 GeV.

Now contrast that with quantum computing.  To hear the media tell it, a quantum computer would be a powerful new gizmo, sort of like existing computers except faster.  (Why would it be faster?  Something to do with trying both 0 and 1 at the same time.)  The reasons to build quantum computers are things that could make any buzzword-spouting dullard nod in recognition: cracking uncrackable encryption, finding bugs in aviation software, sifting through massive data sets, maybe even curing cancer, predicting the weather, or finding aliens.  And all of this could be yours in a few short years—or some say it’s even commercially available today.  So, if you check back in a few years and it’s still not on store shelves, probably it went the way of flying cars or moving sidewalks: another technological marvel that just failed to materialize for some reason.

Foolishly, shortsightedly, many academics in quantum computing have played along with this stunted vision of their field—because saying this sort of thing is the easiest way to get funding, because everyone else says the same stuff, and because after you’ve repeated something on enough grant applications you start to believe it yourself.  All in all, then, it’s just easier to go along with the “gizmo vision” of quantum computing than to ask pointed questions like:

What happens when it turns out that some of the most-hyped applications of quantum computers (e.g., optimization, machine learning, and Big Data) were based on wildly inflated hopes—that there simply isn’t much quantum speedup to be had for typical problems of that kind, that yes, quantum algorithms exist, but they aren’t much faster than the best classical randomized algorithms?  What happens when it turns out that the real applications of quantum computing—like breaking RSA and simulating quantum systems—are nice, but not important enough by themselves to justify the cost?  (E.g., when the imminent risk of a quantum computer simply causes people to switch from RSA to other cryptographic codes?  Or when the large polynomial overheads of quantum simulation algorithms limit their usefulness?)  Finally, what happens when it turns out that the promises of useful quantum computers in 5-10 years were wildly unrealistic?

I’ll tell you: when this happens, the spigots of funding that once flowed freely will dry up, and the techno-journalists and pointy-haired bosses who once sang our praises will turn to the next craze.  And they’re unlikely to be impressed when we protest, “no, look, the reasons we told you before for why you should support quantum computing were never the real reasons!  and the real reasons remain as valid as ever!”

In my view, we as a community have failed to make the honest case for quantum computing—the case based on basic science—because we’ve underestimated the public.  We’ve falsely believed that people would never support us if we told them the truth: that while the potential applications are wonderful cherries on the sundae, they’re not and have never been the main reason to build a quantum computer.  The main reason is that we want to make absolutely manifest what quantum mechanics says about the nature of reality.  We want to lift the enormity of Hilbert space out of the textbooks, and rub its full, linear, unmodified truth in the face of anyone who denies it.  Or if it isn’t the truth, then we want to discover what is the truth.

Many people would say it’s impossible to make the latter pitch, that funders and laypeople would never understand it or buy it.  But there’s an $11-billion, 17-mile ring under Geneva that speaks against their cynicism.

Anyway, let me end this “movie review” with an anecdote.  The other day a respected colleague of mine—someone who doesn’t normally follow such matters—asked me what I thought about D-Wave.  After I’d given my usual spiel, he smiled and said:

“See Scott, but you could imagine scientists of the 1400s saying the same things about Columbus!  He had no plan that could survive academic scrutiny.  He raised money under the false belief that he could reach India by sailing due west.  And he didn’t understand what he’d found even after he’d found it.  Yet for all that, it was Columbus, and not some academic critic on the sidelines, who discovered the new world.”

With this one analogy, my colleague had eloquently summarized the case for D-Wave, a case often leveled against me much more verbosely.  But I had an answer.

“I accept your analogy!” I replied.  “But to me, Columbus and the other conquerors of the Americas weren’t heroes to be admired or emulated.  Motivated by gold and spices rather than knowledge, they spread disease, killed and enslaved millions in one of history’s greatest holocausts, and burned the priceless records of the Maya and Inca civilizations so that the world would never even understand what was lost.  I submit that, had it been undertaken by curious and careful scientists—or at least people with a scientific mindset—rather than by swashbucklers funded by greedy kings, the European exploration and colonization of the Americas could have been incalculably less tragic.”

The trouble is, when I say things like that, people just laugh at me knowingly.  There he goes again, the pie-in-the-sky complexity theorist, who has no idea what it takes to get anything done in the real world.  What an amusingly contrary perspective he has.

And that, in the end, is why I think Particle Fever is such an important movie.  Through the stories of the people who built the LHC, you’ll see how it really is possible to reach a new continent without the promise of gold or the allure of lies.

Recent papers by Susskind and Tao illustrate the long reach of computation

Sunday, March 2nd, 2014

Most of the time, I’m a crabby, cantankerous ogre, whose only real passion in life is using this blog to shoot down the wrong ideas of others.  But alas, try as I might to maintain my reputation as a pure bundle of seething negativity, sometimes events transpire that pierce my crusty exterior.  Maybe it’s because I’m in Berkeley now, visiting the new Simons Institute for Theory of Computing during its special semester on Hamiltonian complexity.  And it’s tough to keep up my acerbic East Coast skepticism of everything new in the face of all this friggin’ sunshine.  (Speaking of which, if you’re in the Bay Area and wanted to meet me, this week’s the week!  Email me.)  Or maybe it’s watching Lily running around, her face wide with wonder.  If she’s so excited by her discovery of (say) a toilet plunger or some lint on the floor, what right do I have not to be excited by actual scientific progress?

Which brings me to the third reason for my relatively-sunny disposition: two long and fascinating recent papers on the arXiv.  What these papers have in common is that they use concepts from theoretical computer science in unexpected ways, while trying to address open problems at the heart of “traditional, continuous” physics and math.  One paper uses quantum circuit complexity to help understand black holes; the other uses fault-tolerant universal computation to help understand the Navier-Stokes equations.

Recently, our always-pleasant string-theorist friend Luboš Motl described computational complexity theorists as “extraordinarily naïve” (earlier, he also called us “deluded” and “bigoted”).  Why?  Because we’re obsessed with “arbitrary, manmade” concepts like the set of problems solvable in polynomial time, and especially because we assume things we haven’t yet proved such as P≠NP.  (Jokes about throwing stones from a glass house—or a stringy house—are left as exercises for the reader.)  The two papers that I want to discuss today reflect a different perspective: one that regards computation as no more “arbitrary” than other central concepts of mathematics, and indeed, as something that shows up even in contexts that seem incredibly remote from it, from the AdS/CFT correspondence to turbulent fluid flow.


Our first paper is Computational Complexity and Black Hole Horizons, by Lenny Susskind.  As readers of this blog might recall, last year Daniel Harlow and Patrick Hayden made a striking connection between computational complexity and the black-hole “firewall” question, by giving complexity-theoretic evidence that performing the measurement of Hawking radiation required for the AMPS experiment would require an exponentially-long quantum computation.  In his new work, Susskind makes a different, and in some ways even stranger, connection between complexity and firewalls.  Specifically, given an n-qubit pure state |ψ〉, recall that the quantum circuit complexity of |ψ〉 is the minimum number of 2-qubit gates needed to prepare |ψ〉 starting from the all-|0〉 state.  Then for reasons related to black holes and firewalls, Susskind wants to use the quantum circuit complexity of |ψ〉 as an intrinsic clock, to measure how long |ψ〉 has been evolving for.  Last week, I had the pleasure of visiting Stanford, where Lenny spent several hours explaining this stuff to me.  I still don’t fully understand it, but since it’s arguable that no one (including Lenny himself) does, let me give it a shot.

My approach will be to divide into two questions.  The first question is: why, in general (i.e., forgetting about black holes), might one want to use quantum circuit complexity as a clock?  Here the answer is: because unlike most other clocks, this one should continue to tick for an exponentially long time!

Consider some standard, classical thermodynamic system, like a box filled with gas, with the gas all initially concentrated in one corner.  Over time, the gas will diffuse across the box, in accord with the Second Law, until it completely equilibrates.  Furthermore, if we know the laws of physics, then we can calculate exactly how fast this diffusion will happen.  But this implies that we can use the box as a clock!  To do so, we’d simply have to measure how diffused the gas was, then work backwards to determine how much time had elapsed since the gas started diffusing.

But notice that this “clock” only works until the gas reaches equilibrium—i.e., is equally spread across the box.  Once the gas gets to equilibrium, which it does in a reasonably short time, it just stays there (at least until the next Poincaré recurrence time).  So, if you see the box in equilibrium, there’s no measurement you could make—or certainly no “practical” measurement—that would tell you how long it’s been there.  Indeed, if we model the collisions between gas particles (and between gas particles and the walls of the box) as random events, then something even stronger is true.  Namely, the probability distribution over all possible configurations of the gas particles will quickly converge to an equilibrium distribution.  And if all you knew was that the particles were in the equilibrium distribution, then there’s no property of their distribution that you could point to—not even an abstract, unmeasurable property—such that knowing that property would tell you how long the gas had been in equilibrium.
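If it helps, here’s a throwaway numpy toy illustrating the point (the particle number, step size, and reflecting-wall box are arbitrary choices of mine, not anything from Lenny’s paper).  The spread of the cloud pins down the elapsed time early on, but saturates and becomes useless as a clock once the gas equilibrates:

    # A toy "gas clock": particles random-walk in a box; the spread of the cloud
    # can be inverted to estimate elapsed time, but only until equilibration.
    import numpy as np

    rng = np.random.default_rng(0)
    N, steps, sigma = 5000, 800, 0.03

    x = np.zeros(N)                      # all particles start in one "corner"
    spread = []
    for t in range(steps):
        x += sigma * rng.standard_normal(N)
        x = np.abs(x)                    # reflect off the wall at x = 0
        x = 1 - np.abs(1 - x)            # reflect off the wall at x = 1
        spread.append(x.std())

    # Early on the spread grows roughly like sqrt(t), so it determines t; after
    # equilibration it just hovers near 1/sqrt(12), the value for a uniform gas.
    for t in (10, 100, 400, 799):
        print(t, round(spread[t], 3))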

Interestingly, something very different happens if we consider a quantum pure state, in complete isolation from its environment.  If you have some quantum particles in a perfectly-isolating box, and you start them out in a “simple” state (say, with all particles unentangled and in a corner), then they too will appear to diffuse, with their wavefunctions spreading out and getting entangled with each other, until the system reaches “equilibrium.”  After that, there will once again be no “simple” measurement you can make—say, of the density of particles in some particular location—that will give you any idea of how long the box has been in equilibrium.  On the other hand, the laws of unitary evolution assure us that the quantum state is still evolving, rotating serenely through Hilbert space, just like it was before equilibration!  Indeed, in principle you could even measure that the evolution was still happening, but to do so, you’d need to perform an absurdly precise and complicated measurement—one that basically inverted the entire unitary transformation that had been applied since the particles started diffusing.
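At the risk of belaboring that last point, here’s a little numpy sketch I cooked up (8 qubits and 200 roughly-Haar random two-qubit gates, numbers chosen arbitrarily; this is just an illustration of the principle, not anything from the papers under discussion).  A simple local measurement on the scrambled state is about as uninformative as possible, yet running the exact inverse circuit recovers the all-|0〉 state, so the unitary evolution really was “still happening” the whole time:

    # After enough random 2-qubit gates, a local measurement looks equilibrated,
    # yet inverting the entire circuit recovers the all-|0> state exactly.
    import numpy as np

    rng = np.random.default_rng(1)
    n = 8
    psi = np.zeros(2**n, dtype=complex)
    psi[0] = 1.0                                   # the all-|0> state

    def rand_u4():                                 # roughly-Haar random 2-qubit gate
        m = rng.standard_normal((4, 4)) + 1j * rng.standard_normal((4, 4))
        q, _ = np.linalg.qr(m)
        return q

    def apply(u, state, i, j):                     # act with u on qubits i and j
        t = np.moveaxis(state.reshape((2,) * n), (i, j), (0, 1)).reshape(4, -1)
        t = (u @ t).reshape((2, 2) + (2,) * (n - 2))
        return np.moveaxis(t, (0, 1), (i, j)).reshape(-1)

    gates = []
    for _ in range(200):                           # scramble with random gates
        i, j = rng.choice(n, size=2, replace=False)
        u = rand_u4()
        psi = apply(u, psi, i, j)
        gates.append((u, i, j))

    # "Simple" observable: probability that qubit 0 reads |0>.  It sits near 1/2.
    print(np.sum(np.abs(psi.reshape((2,) * n)[0]) ** 2))

    # But nothing was lost: undo the whole circuit, gate by gate.
    for u, i, j in reversed(gates):
        psi = apply(u.conj().T, psi, i, j)
    print(abs(psi[0]) ** 2)                        # back to 1, up to rounding error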

Lenny now asks the question: if the quantum state of the particles continues to evolve even after “equilibration,” then what physical quantity can we point to as continuing to increase?  By the argument above, it can’t be anything simple that physicists are used to talking about, like coarse-grained entropy.  Indeed, the most obvious candidate that springs to mind, for a quantity that should keep increasing even after equilibration, is just the quantum circuit complexity of the state!  If there’s no “magic shortcut” to simulating this system—that is, if the fastest way to learn the quantum state at time T is just to run the evolution equations forward for T time steps—then the quantum circuit complexity will continue to increase linearly with T, long after equilibration.  Eventually, the complexity will “max out” at ~c^n, where n is the number of particles, simply because (neglecting small multiplicative terms) the dimension of the Hilbert space is always an upper bound on the circuit complexity.  After even longer amounts of time—like ~c^{c^n}—the circuit complexity will dip back down (sometimes even to 0), as the quantum state undergoes recurrences.  But both of those effects only occur on timescales ridiculously longer than anything normally relevant to physics or everyday life.

Admittedly, given the current status of complexity theory, there’s little hope of proving unconditionally that the quantum circuit complexity continues to rise until it becomes exponential, when some time-independent Hamiltonian is continuously applied to the all-|0〉 state.  (If we could prove such a statement, then presumably we could also prove PSPACE⊄BQP/poly.)  But maybe we could prove such a statement modulo a reasonable conjecture.  And we do have suggestive weaker results.  In particular (and as I just learned this Friday), in 2012 Brandão, Harrow, and Horodecki, building on earlier work due to Low, showed that, if you apply S≫n random two-qubit gates to n qubits initially in the all-|0〉 state, then with high probability, not only do you get a state with large circuit complexity, you get a state that can’t even be distinguished from the maximally mixed state by any quantum circuit with at most ~S^{1/6} gates.

OK, now on to the second question: what does any of this have to do with black holes?  The connection Lenny wants to make involves the AdS/CFT correspondence, the “duality” between two completely different-looking theories that’s been the rage in string theory since the late 1990s.  On one side of the ring is AdS (Anti de Sitter), a quantum-gravitational theory in D spacetime dimensions—one where black holes can form and evaporate, etc., but on the other hand, the entire universe is surrounded by a reflecting boundary a finite distance away, to help keep everything nice and unitary.  On the other side is CFT (Conformal Field Theory): an “ordinary” quantum field theory, with no gravity, that lives only on the (D-1)-dimensional “boundary” of the AdS space, and not in its interior “bulk.”  The claim of AdS/CFT is that despite how different they look, these two theories are “equivalent,” in the sense that any calculation in one theory can be transformed to a calculation in the other theory that yields the same answer.  Moreover, we get mileage this way, since a calculation that’s hard on the AdS side is often easy on the CFT side and vice versa.

As an example, suppose we’re interested in what happens inside a black hole—say, because we want to investigate the AMPS firewall paradox.  Now, figuring out what happens inside a black hole (or even on or near the event horizon) is a notoriously hard problem in quantum gravity; that’s why people have been arguing about firewalls for the past two years, and about the black hole information problem for the past forty!  But what if we could put our black hole in an AdS box?  Then using AdS/CFT, couldn’t we translate questions about the black-hole interior to questions about the CFT on the boundary, which don’t involve gravity and which would therefore hopefully be easier to answer?

In fact people have tried to do that—but frustratingly, they haven’t been able to use the CFT calculations to answer even the grossest, most basic questions about what someone falling into the black hole would actually experience.  (For example, would that person hit a “firewall” and die immediately at the horizon, or would she continue smoothly through, only dying close to the singularity?)  Lenny’s paper explores a possible reason for this failure.  It turns out that the way AdS/CFT works, the closer to the black hole’s event horizon you want to know what happens, the longer you need to time-evolve the quantum state of the CFT to find out.  In particular, if you want to know what’s going on at distance ε from the event horizon, then you need to run the CFT for an amount of time that grows like log(1/ε).  And what if you want to know what’s going on inside the black hole?  In line with the holographic principle, it turns out that you can express an observable inside the horizon by an integral over the entire AdS space outside the horizon.  Now, that integral will include a part where the distance ε from the event horizon goes to 0—so log(1/ε), and hence the complexity of the CFT calculation that you have to do, diverges to infinity.  For some kinds of calculations, the ε→0 part of the integral isn’t very important, and can be neglected at the cost of only a small error.  For other kinds of calculations, unfortunately—and in particular, for the kind that would tell you whether or not there’s a firewall—the ε→0 part is extremely important, and it makes the CFT calculation hopelessly intractable.
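Schematically (my notation, not the paper’s): if resolving the physics at distance ε outside the horizon requires evolving the CFT for a time

\[ t_{\mathrm{CFT}}(\varepsilon) \;\sim\; \log\frac{1}{\varepsilon}, \]

then an observable behind the horizon, written as an integral over the whole exterior, picks up contributions from arbitrarily small ε, and the required evolution time (and hence the complexity of the calculation) diverges.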

Note that yes, we even need to continue the integration for ε much smaller than the Planck length—i.e., for so-called “transplanckian” distances!  As Lenny puts it, however:

For most of this vast sub-planckian range of scales we don’t expect that the operational meaning has anything to do with meter sticks … It has more to do with large times than small distances.

One could give this transplanckian blowup in computational complexity a pessimistic spin: darn, so it’s probably hopeless to use AdS/CFT to prove once and for all that there are no firewalls!  But there’s also a more positive interpretation: the interior of a black hole is “protected from meddling” by a thick armor of computational complexity.  To explain this requires a digression about firewalls.

The original firewall paradox of AMPS could be phrased as follows: if you performed a certain weird, complicated measurement on the Hawking radiation emitted from a “sufficiently old” black hole, then the expected results of that measurement would be incompatible with also seeing a smooth, Einsteinian spacetime if you later jumped into the black hole to see what was there.  (Technically, because you’d violate the monogamy of entanglement.)  If what awaited you behind the event horizon wasn’t a “classical” black hole interior with a singularity in the middle, but an immediate breakdown of spacetime, then one says you would’ve “hit a firewall.”

Yes, it seems preposterous that “firewalls” would exist: at the least, it would fly in the face of everything people thought they understood for decades about general relativity and quantum field theory.  But crucially—and here I have to disagree with Stephen Hawking—one can’t “solve” this problem by simply reiterating how physically absurd firewalls would be, or by constructing scenarios where firewalls “self-evidently” don’t arise.  Instead, as I see it, solving the problem means giving an account of what actually happens when you do the AMPS experiment, or of what goes wrong when you try to do it.

On this last question, it seems to me that Susskind and Juan Maldacena achieved a major advance in their much-discussed ER=EPR paper last year.  Namely, they presented a picture where, sure, a firewall arises (at least temporarily) if you do the AMPS experiment—but no firewall arises if you don’t do the experiment!  In other words, doing the experiment sends a nonlocal signal to the interior of the black hole (though you do have to jump into the black hole to receive the signal, so causality outside the black hole is still preserved).  Now, how is it possible for your measurement of the Hawking radiation to send an instantaneous signal to the black hole interior, which might be light-years away from you when you measure?  On Susskind and Maldacena’s account, it’s possible because the entanglement between the Hawking radiation and the degrees of freedom still in the black hole, can be interpreted as creating wormholes between the two.  Under ordinary conditions, these wormholes (like most wormholes in general relativity) are “non-traversable”: they “pinch off” if you try to send signals through them, so they can’t be used for faster-than-light communication.  However, if you did the AMPS experiment, then the wormholes would become traversable, and could carry a firewall (or an innocuous happy-birthday message, or whatever) from the Hawking radiation to the black hole interior.  (Incidentally, ER stands for Einstein and Rosen, who wrote a famous paper on wormholes, while EPR stands for Einstein, Podolsky, and Rosen, who wrote a famous paper on entanglement.  “ER=EPR” is Susskind and Maldacena’s shorthand for their proposed connection between wormholes and entanglement.)

Anyway, these heady ideas raise an obvious question: how hard would it be to do the AMPS experiment?  Is sending a nonlocal signal to the interior of a black hole via that experiment actually a realistic possibility?  In their work a year ago on computational complexity and firewalls, Harlow and Hayden already addressed that question, though from a different perspective than Susskind.  In particular, Harlow and Hayden gave strong evidence that carrying out the AMPS experiment would require solving a problem believed to be exponentially hard even for a quantum computer: specifically, a complete problem for QSZK (Quantum Statistical Zero Knowledge).  In followup work (not yet written up, though see my talk at KITP and my PowerPoint slides), I showed that the Harlow-Hayden problem is actually at least as hard as inverting one-way functions, which is even stronger evidence for hardness.

All of this suggests that, even supposing we could surround an astrophysical black hole with a giant array of perfect photodetectors, wait ~10^69 years for the black hole to (mostly) evaporate, then route the Hawking photons into a super-powerful, fault-tolerant quantum computer, doing the AMPS experiment (and hence, creating traversable wormholes to the black hole interior) still wouldn’t be a realistic prospect, even if the equations formally allow it!  There’s no way to sugarcoat this: computational complexity limitations seem to be the only thing protecting the geometry of spacetime from nefarious experimenters.

Anyway, Susskind takes that amazing observation of Harlow and Hayden as a starting point, but then goes off on a different tack.  For one thing, he isn’t focused on the AMPS experiment (the one involving monogamy of entanglement) specifically: he just wants to know how hard it is to do any experiment (possibly a different one) that would send nonlocal signals to the black hole interior.  For another, unlike Harlow and Hayden, Susskind isn’t trying to show that such an experiment would be exponentially hard.  Instead, he’s content if the experiment is “merely” polynomially hard—but in the same sense that (say) unscrambling an egg, or recovering a burned book from the smoke and ash, are polynomially hard.  In other words, Susskind only wants to argue that creating a traversable wormhole would be “thermodynamics-complete.”  A third, related, difference is that Susskind considers an extremely special model scenario: namely, the AdS/CFT description of something called the “thermofield double state.”  (This state involves two entangled black holes in otherwise-separated spacetimes; according to ER=EPR, we can think of those black holes as being connected by a wormhole.)  While I don’t yet understand this point, apparently the thermofield double state is much more favorable for firewall production than a “realistic” spacetime—and in particular, the Harlow-Hayden argument doesn’t apply to it.  Susskind wants to show that even so (i.e., despite how “easy” we’ve made it), sending a signal through the wormhole connecting the two black holes of the thermofield double state would still require solving a thermodynamics-complete problem.

So that’s the setup.  What new insights does Lenny get?  This, finally, is where we circle back to the view of quantum circuit complexity as a clock.  Briefly, Lenny finds that the quantum state getting more and more complicated in the CFT description—i.e., its quantum circuit complexity going up and up—directly corresponds to the wormhole getting longer and longer in the AdS description.  (Indeed, the length of the wormhole increases linearly with time, growing like the circuit complexity divided by the total number of qubits.)  And the wormhole getting longer and longer is what makes it non-traversable—i.e., what makes it impossible to send a signal through.
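In symbols, and only as a schematic paraphrase of the statement above (with C denoting quantum circuit complexity and n the number of qubits in the CFT description):

\[ \mathrm{length}(t) \;\propto\; \frac{C\big(|\psi(t)\rangle\big)}{n} \;\propto\; t, \qquad \text{until } C \text{ finally saturates at } \sim c^{n}. \]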

Why has quantum circuit complexity made a sudden appearance here?  Because in the CFT description, the circuit complexity continuing to increase is the only thing that’s obviously “happening”!  From a conventional physics standpoint, the quantum state of the CFT very quickly reaches equilibrium and then just stays there.  If you measured some “conventional” physical observable—say, the energy density at a particular point—then it wouldn’t look like the CFT state was continuing to evolve at all.  And yet we know that the CFT state is evolving, for two extremely different reasons.  Firstly, because (as we discussed early on in this post) unitary evolution is still happening, so presumably the state’s quantum circuit complexity is continuing to increase.  And secondly, because in the dual AdS description, the wormhole is continuing to get longer!

From this connection, at least three striking conclusions follow:

  1. That even when nothing else seems to be happening in a physical system (i.e., it seems to have equilibrated), the fact that the system’s quantum circuit complexity keeps increasing can be “physically relevant” all by itself.  We know that it’s physically relevant, because in the AdS dual description, it corresponds to the wormhole getting longer!
  2. That even in the special case of the thermofield double state, the geometry of spacetime continues to be protected by an “armor” of computational complexity.  Suppose that Alice, in one half of the thermofield double state, wants to send a message to Bob in the other half (which Bob can retrieve by jumping into his black hole).  In order to get her message through, Alice needs to prevent the wormhole connecting her black hole to Bob’s from stretching uncontrollably—since as long as it stretches, the wormhole remains non-traversable.  But in the CFT picture, stopping the wormhole from stretching corresponds to stopping the quantum circuit complexity from increasing!  And that, in turn, suggests that Alice would need to act on the radiation outside her black hole in an incredibly complicated and finely-tuned way.  For “generically,” the circuit complexity of an n-qubit state should just continue to increase, the longer you run unitary evolution for, until it hits its exp(n) maximum.  To prevent that from happening would essentially require “freezing” or “inverting” the unitary evolution applied by nature—but that’s the sort of thing that we expect to be thermodynamics-complete.  (How exactly do Alice’s actions in the “bulk” affect the evolution of the CFT state?  That’s an excellent question that I don’t understand AdS/CFT well enough to answer.  All I know is that the answer involves something that Lenny calls “precursor operators.”)
  3. The third and final conclusion is that there can be a physically-relevant difference between pseudorandom n-qubit pure states and “truly” random states—even though, by the definition of pseudorandom, such a difference can’t be detected by any small quantum circuit!  Once again, the way to see the difference is using AdS/CFT.  It’s easy to show, by a counting argument, that almost all n-qubit pure states have nearly-maximal quantum circuit complexity.  (I sketch that counting argument just after this list, for anyone who wants it.)  But if the circuit complexity is already maximal, that means in particular that it’s not increasing!  Lenny argues that this corresponds to the wormhole between the two black holes no longer stretching.  But if the wormhole is no longer stretching, then it’s “vulnerable to firewalls” (i.e., to messages going through!).  It had previously been argued that random CFT states almost always correspond to black holes with firewalls—and since the CFT states formed by realistic physical processes will look “indistinguishable from random states,” black holes that form under realistic conditions should generically have firewalls as well.  But Lenny rejects this argument, on the ground that the CFT states that arise in realistic situations are not random pure states.  And what distinguishes them from random states?  Simply that they have non-maximal (and increasing) quantum circuit complexity!
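For anyone who wants it, here’s the counting argument mentioned in point 3, in back-of-the-envelope form (constants and ε-net details suppressed, so treat it as a sketch rather than a proof).  A circuit built from T two-qubit gates, each drawn from a fixed finite gate set G and applied to one of the ~n² qubit pairs, can reach at most roughly (n²|G|)^T distinct states, whereas an ε-net for the set of all n-qubit pure states needs roughly (1/ε)^{2^{n+1}} points.  So covering more than a vanishing fraction of states to within ε requires

\[ \big(n^{2}\,|G|\big)^{T} \;\gtrsim\; \left(\frac{1}{\varepsilon}\right)^{2^{n+1}}, \qquad \text{i.e.} \qquad T \;\gtrsim\; \frac{2^{n}}{\log n}, \]

and hence all but a vanishing fraction of n-qubit pure states have circuit complexity exponential in n.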

I’ll leave you with a question of my own about this complexity / black hole connection: one that I’m unsure how to think about, but that perhaps interests me more than any other here.  My question is: could you ever learn the answer to an otherwise-intractable computational problem by jumping into a black hole?  Of course, you’d have to really want the answer—so much so that you wouldn’t mind dying moments after learning it, or not being able to share it with anyone else!  But never mind that.  What I have in mind is first applying some polynomial-size quantum circuit to the Hawking radiation, then jumping into the black hole to see what nonlocal effect (if any) the circuit had on the interior.  The fact that the mapping between interior and exterior states is so complicated suggests that there might be complexity-theoretic mileage to be had this way, but I don’t know what.  (It’s also possible that you can get a computational speedup in special cases like the thermofield double state, but that a Harlow-Hayden-like obstruction prevents you from getting one with real astrophysical black holes.  I.e., that for real black holes, you’ll just see a smooth, boring, Einsteinian black hole interior no matter what polynomial-size quantum circuit you applied to the Hawking radiation.)


If you’re still here, the second paper I want to discuss today is Finite-time blowup for an averaged three-dimensional Navier-Stokes equation by Terry Tao.  (See also the excellent Quanta article by Erica Klarreich.)  I’ll have much, much less to say about this paper than I did about Susskind’s, but that’s not because it’s less interesting: it’s only because I understand the issues even less well.

Navier-Stokes existence and smoothness is one of the seven Clay Millennium Problems (alongside P vs. NP, the Riemann Hypothesis, etc).  The problem asks whether the standard, classical differential equations for three-dimensional fluid flow are well-behaved, in the sense of not “blowing up” (e.g., concentrating infinite energy on a single point) after a finite amount of time.

Expanding on ideas from his earlier blog posts and papers about Navier-Stokes (see here for the gentlest of them), Tao argues that the Navier-Stokes problem is closely related to the question of whether or not it’s possible to “build a fault-tolerant universal computer out of water.”  Why?  Well, it’s not the computational universality per se that matters, but if you could use fluid flow to construct general enough computing elements—resistors, capacitors, transistors, etc.—then you could use those elements to recursively shift the energy in a given region into a region half the size, and from there to a region a quarter the size, and so on, faster and faster, until you got infinite energy density after a finite amount of time.
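To see why such a cascade would blow up in finite time, here’s the back-of-the-envelope arithmetic, with made-up constants (the actual construction is of course vastly more delicate).  Suppose stage k of the “machine” confines a fixed energy E to a region of linear size L/2^k, and takes time τ/2^k to hand that energy off to stage k+1.  Then the total elapsed time stays finite even as the energy density diverges:

\[ \sum_{k=0}^{\infty} \frac{\tau}{2^{k}} \;=\; 2\tau \;<\; \infty, \qquad\qquad \frac{E}{(L/2^{k})^{3}} \;=\; \frac{E}{L^{3}}\,8^{k} \;\longrightarrow\; \infty. \]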

Strikingly, building on an earlier construction by Katz and Pavlovic, Tao shows that this is actually possible for an “averaged” version of the Navier-Stokes equations!  So at the least, any proof of existence and smoothness for the real Navier-Stokes equations will need to “notice” the difference between the real and averaged versions.  In his paper, though, Tao hints at the possibility (or dare one say likelihood?) that the truth might go the other way.  That is, maybe the “universal computer” construction can be ported from the averaged Navier-Stokes equations to the real ones.  In that case, we’d have blowup in finite time for the real equations, and a negative solution to the Navier-Stokes existence and smoothness problem.  Of course, such a result wouldn’t imply that real, physical water was in any danger of “blowing up”!  It would simply mean that the discrete nature of water (i.e., the fact that it’s made of H2O molecules, rather than being infinitely divisible) was essential to understanding its stability given arbitrary initial conditions.

So, what are the prospects for such a blowup result?  Let me quote from Tao’s paper:

Once enough logic gates of ideal fluid are constructed, it seems that the main difficulties in executing the above program [to prove a blowup result for the "real" Navier-Stokes equations] are of a “software engineering” nature, and would be in principle achievable, even if the details could be extremely complicated in practice.  The main mathematical difficulty in executing this “fluid computing” program would thus be to arrive at (and rigorously certify) a design for logical gates of inviscid fluid that has some good noise tolerance properties.  In this regard, ideas from quantum computing (which faces a unitarity constraint somewhat analogous to the energy conservation constraint for ideal fluids, albeit with the key difference of having a linear evolution rather than a nonlinear one) may prove to be useful.

One minor point that I’d love to understand is, what happens in two dimensions?  Existence and smoothness are known to hold for the 2-dimensional analogues of the Navier-Stokes equations.  If they also held for the averaged 2-dimensional equations, then it would follow that Tao’s “universal computer” must be making essential use of the third dimension. How?  If I knew the answer to that, then I’d feel for the first time like I had some visual crutch for understanding why 3-dimensional fluid flow is so complicated, even though 2-dimensional fluid flow isn’t.

I see that, in blog comments here and here, Tao says that the crucial difference between the 2- and 3-dimensional Navier-Stokes equations arises from the different scaling behavior of the dissipation term: basically, you can ignore it in 3 or more dimensions, but you can’t ignore it in 2.  But maybe there’s a more doofus-friendly explanation, which would start with some 3-dimensional fluid logic gate, and then explain why the gate has no natural 2-dimensional analogue, or why dissipation causes its analogue to fail.
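Here’s my attempt, possibly garbled, to translate that scaling comment into a formula.  If u(x,t) solves the Navier-Stokes equations, then so does the rescaled field u_λ(x,t) = λu(λx, λ²t), and the kinetic energy transforms as

\[ E(u_{\lambda}) \;=\; \int |u_{\lambda}|^{2}\,dx \;=\; \lambda^{2-d}\,E(u), \]

so in d=2 the energy supplies a scale-invariant constraint that still controls the fine scales, whereas in d=3 the constraint becomes weaker and weaker as λ→∞ (i.e., at finer and finer scales).  I take that to be the sense in which the dissipation/energy control “can be ignored” in three or more dimensions, though I’d defer to the experts here.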


Obviously, there’s much more to say about both papers (especially the second…) than I said in this post, and many people more knowledgeable than I am to say those things.  But that’s what the comments section is for.  Right now I’m going outside to enjoy the California sunshine.

Umesh Vazirani responds to Geordie Rose

Thursday, February 6th, 2014

You might recall that Shin, Smith, Smolin, and Vazirani posted a widely-discussed preprint a week ago, questioning the evidence for large-scale quantum behavior in the D-Wave machine.  Geordie Rose responded here.   Tonight, in a Shtetl-Optimized exclusive scoop, I bring you Umesh Vazirani’s response to Geordie’s comments. Without further ado:


Even a cursory reading of our paper will reveal that Geordie Rose is attacking a straw man. Let me quickly outline the main point of our paper and the irrelevance of Rose’s comments:

To date the Boixo et al paper was the only serious evidence in favor of large scale quantum behavior by the D-Wave machine. We investigated their claims and showed that there are serious problems with their conclusions. Their conclusions were based on the close agreement between the input-output data from D-Wave and quantum simulated annealing, and their inability despite considerable effort to find any classical model that agreed with the input-output data. In our paper, we gave a very simple classical model of interacting magnets that closely agreed with the input-output data. We stated that our results implied that “it is premature to conclude that D-Wave machine exhibits large scale quantum behavior”.

Rose attacks our paper for claiming that “D-Wave processors are inherently classical, and can be described by a classical model with no need to invoke quantum mechanics.”  A reading of our paper will make it perfectly clear that this is not a claim that we make.  We state explicitly “It is worth emphasizing that the goal of this paper is not to provide a classical model for the D-Wave machine, … The classical model introduced here is useful for the purposes of studying the large-scale algorithmic features of the D-Wave machine. The task of finding an accurate model for the D-Wave machine (classical, quantum or otherwise), would be better pursued with direct access, not only to programming the D-Wave machine, but also to its actual hardware.”

Rose goes on to point to a large number of experiments conducted by D-Wave to prove small scale entanglement over 2-8 qubits and criticizes our paper for not trying to model those aspects of D-Wave. But such small scale entanglement properties are not directly relevant to prospects for a quantum speedup. Therefore we were specifically interested in claims about the large scale quantum behavior of D-Wave. There was exactly one such claim, which we duly investigated, and it did not stand up to scrutiny.

TIME’s cover story on D-Wave: A case study in the conventions of modern journalism

Thursday, February 6th, 2014

This morning, commenter rrtucci pointed me to TIME Magazine’s cover story about D-Wave (yes, in today’s digital media environment, I need Shtetl-Optimized readers to tell me what’s on the cover of TIME…).  rrtucci predicted that, soon after reading the article, I’d be hospitalized with a severe stress-induced bleeding ulcer.  Undeterred, I gritted my teeth, paid the $5 to go behind the paywall, and read the article.

The article, by Lev Grossman, could certainly be a lot worse.  If you get to the end, it discusses the experiments by Matthias Troyer’s group, and it makes clear the lack of any practically-relevant speedup today from the D-Wave devices.  It also includes a few skeptical quotes:

“In quantum computing, we have to be careful what we mean by ‘utilizing quantum effects,'” says Monroe, the University of Maryland scientist, who’s among the doubters. “This generally means that we are able to store superpositions of information in such a way that the system retains its ‘fuzziness,’ or quantum coherence, so that it can perform tasks that are impossible otherwise. And by that token there is no evidence that the D-Wave machine is utilizing quantum effects.”

One of the closest observers of the controversy has been Scott Aaronson, an associate professor at MIT and the author of a highly influential quantum-computing blog [aww, shucks --SA]. He remains, at best, cautious. “I’m convinced … that interesting quantum effects are probably present in D-Wave’s devices,” he wrote in an email. “But I’m not convinced that those effects, right now, are playing any causal role in solving any problems faster than we could solve them with a classical computer. Nor do I think there’s any good argument that D-Wave’s current approach, scaled up, will lead to such a speedup in the future. It might, but there’s currently no good reason to think so.”

Happily, the quote from me is something that I actually agreed with at the time I said it!  Today, having read the Shin et al. paper—which hadn’t yet come out when Grossman emailed me—I might tone down the statement “I’m convinced … that interesting quantum effects are probably present” to something like: “there’s pretty good evidence for quantum effects like entanglement at a ‘local’ level, but at the ‘global’ level we really have no idea.”

Alas, ultimately I regard this article as another victim (through no fault of the writer, possibly) of the strange conventions of modern journalism.  Maybe I can best explain those conventions with a quickie illustration:

MAGIC 8-BALL: THE RENEGADE MATH WHIZ WHO COULD CHANGE NUMBERS FOREVER

An eccentric billionaire, whose fascinating hobbies include nude skydiving and shark-taming, has been shaking up the scientific world lately with his controversial claim that 8+0 equals 17  [... six more pages about the billionaire redacted ...]  It must be said that mathematicians, who we reached for comment because we’re diligent reporters, have tended to be miffed, skeptical, and sometimes even sarcastic about the billionaire’s claims.  Not surprisingly, though, the billionaire and his supporters have had some dismissive comments of their own about the mathematicians.  So, which side is right?  Or is the truth somewhere in the middle?  At this early stage, it’s hard for an outsider to say.  In the meantime, the raging controversy itself is reason enough for us to be covering this story using this story template.  Stay tuned for more!

As shown (for example) by Will Bourne’s story in Inc. magazine, it’s possible for a popular magazine to break out of the above template when covering D-Wave, or at least bend it more toward reality.  But it’s not easy.

More detailed comments:

  • The article gets off on a weird foot in the very first paragraph, describing the insides of D-Wave’s devices as “the coldest place in the universe.”  Err, 20mK is pretty cold, but colder temperatures are routinely achieved in many other physics experiments.  (Are D-Wave’s the coldest current, continuously-operating experiments, or something like that?  I dunno: counterexamples, anyone?  I’ve learned from experts that they’re not, not even close.  I heard from someone who had a bunch of dilution fridges running at 10mK in the lab he was emailing me from…)
  • The article jumps enthusiastically into the standard Quantum Computing = Exponential Parallelism Fallacy (the QC=EPF), which is so common to QC journalism that I don’t know if it’s even worth pointing it out anymore (but here I am doing so).
  • Commendably, the article states clearly that QCs would offer speedups only for certain specific problems, not others; that D-Wave’s devices are designed only for adiabatic optimization, and wouldn’t be useful (e.g.) for codebreaking; and that even for optimization, “D-Wave’s hardware isn’t powerful enough or well enough understood to show serious quantum speedup yet.”  But there’s a crucial further point that the article doesn’t make: namely, that we have no idea yet whether adiabatic optimization is something where quantum computers can give any practically-important speedup.  In other words, even if you could implement adiabatic optimization perfectly—at zero temperature, with zero decoherence—we still don’t know whether there’s any quantum speedup to be had that way, for any of the nifty applications that the article mentions: “software design, tumor treatments, logistical planning, the stock market, airlines schedules, the search for Earth-like planets in other solar systems, and in particular machine learning.”  In that respect, adiabatic optimization is extremely different from (e.g.) Shor’s factoring algorithm or quantum simulation: things where we know how much speedup we could get, at least compared to the best currently-known classical algorithms.  But I better stop now, since I feel myself entering an infinite loop (and I didn’t even need the adiabatic algorithm to detect it).