Archive for the ‘CS/Physics Deathmatch’ Category

The Complexity of Quantum States and Transformations: From Quantum Money to Black Holes

Sunday, July 17th, 2016

On February 21-25, I taught a weeklong mini-course at the Bellairs Research Institute in Barbados, where I tried to tell an integrated story about everything from quantum proof and advice complexity classes to quantum money to AdS/CFT and the firewall problem—all through the unifying lens of quantum circuit complexity.  After a long effort—on the part of me, the scribes, the guest lecturers, and the organizers—the 111-page lecture notes are finally available, right here.

Here’s the summary:

This mini-course will introduce participants to an exciting frontier for quantum computing theory: namely, questions involving the computational complexity of preparing a certain quantum state or applying a certain unitary transformation. Traditionally, such questions were considered in the context of the Nonabelian Hidden Subgroup Problem and quantum interactive proof systems, but they are much broader than that. One important application is the problem of “public-key quantum money” – that is, quantum states that can be authenticated by anyone, but only created or copied by a central bank – as well as related problems such as copy-protected quantum software. A second, very recent application involves the black-hole information paradox, where physicists realized that for certain conceptual puzzles in quantum gravity, they needed to know whether certain states and operations had exponential quantum circuit complexity. These two applications (quantum money and quantum gravity) even turn out to have connections to each other! A recurring theme of the course will be the quest to relate these novel problems to more traditional computational problems, so that one can say, for example, “this quantum money is hard to counterfeit if that cryptosystem is secure,” or “this state is hard to prepare if PSPACE is not in PP/poly.” Numerous open problems and research directions will be suggested, many requiring only minimal quantum background. Some previous exposure to quantum computing and information will be assumed, but a brief review will be provided.

If you still haven’t decided whether to tackle this thing: it’s basically a quantum complexity theory textbook (well, a textbook for certain themes within quantum complexity theory) that I’ve written and put on the Internet for free.  It has explanations of lots of published results both old and new, but also some results of mine (e.g., about private-key quantum money, firewalls, and AdS/CFT) that I shamefully haven’t yet written up as papers, and that therefore aren’t currently available anywhere else.  If you’re interested in certain specific topics—for example, only quantum money, or only firewalls—you should be able to skip around in the notes without too much difficulty.

Thanks so much to Denis Thérien for organizing the mini-course, Anil Ada for managing the scribe notes effort, my PhD students Adam Bouland and Luke Schaeffer for their special guest lecture (the last one), and finally, the course attendees for their constant questions and interruptions, and (of course) for scribing.

And in case you were wondering: yes, I’ll do absolutely anything for science, even if it means teaching a weeklong course in Barbados!  Lest you consider this a pure island boondoggle, please know that I spent probably 12-14 hours per day either lecturing (in two 3-hour installments) or preparing for the lectures, with little sleep and just occasional dips in the ocean.

And now I’m headed to the Perimeter Institute for their It from Qubit summer school, not at all unrelated to my Barbados lectures.  This time, though, it’s thankfully other people’s turns to lecture…

Entanglement without end

Monday, June 20th, 2016

Today we take a break from this blog’s usual round of topics—free will, consciousness, the Singularity, social justice, Donald Trump—to talk about something really crazy and left-field.  Namely, recent research in quantum information.

Earlier this month, William Slofstra, currently a Research Assistant Professor at the IQC in Waterloo, posted a breakthrough paper on the arXiv (yeah, I’m using the b-word again—sue me), which solves one version of a ten-year-old problem in entanglement theory called Tsirelson’s Problem.  The problem, in one sentence, asks whether all quantum-mechanical correlations that can be achieved using commuting measurements, can also be achieved using measurements on separate parts of a tensor-product Hilbert space.  The answer turns out to be no.  (We’ve long known that the two kinds of correlations are identical as long as you stick to finite-dimensional Hilbert spaces, but Slofstra shows that they can differ in infinite-dimensional spaces.)

One implication of Slofstra’s result can be stated much more concretely, in terms of two-prover games: those things like the famous Bell/CHSH experiment, where Alice and Bob are put in separate rooms, and get inputs x and y respectively, and then without communicating, have to produce outputs a and b respectively satisfying some relation V(x,y,a,b).  We’ve long known examples of two-prover games, like the Mermin-Peres magic square game, that can be won 100% of the time if Alice and Bob share quantum entanglement, but that can’t be won 100% of the time in a classical universe.  Slofstra gives the first example of something different: namely, a two-prover game that can be won 100% of the time using commuting measurements in an infinite-dimensional Hilbert space—something “formally within the rules of quantum mechanics”—but that can’t be won 100% of the time using any finite number of qubits of entanglement.

(Previously, Leung, Toner, and Watrous had given a simpler example of such a game, but theirs required the referee to exchange quantum messages with Alice and Bob.)
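
To make the magic square example concrete, here’s a quick numpy sketch (my own illustration, nothing from Slofstra’s paper) verifying the operator identities that power the Mermin-Peres game: nine two-qubit observables arranged in a 3×3 grid, where every row multiplies to +I and every column multiplies to +I except the last, which multiplies to −I, while the observables within any single row or column commute and so can be measured together.  No fixed assignment of ±1 values to the nine cells can reproduce that pattern of signs, which is exactly why classical players can’t match the quantum success probability.

    import numpy as np

    I2 = np.eye(2, dtype=complex)
    X = np.array([[0, 1], [1, 0]], dtype=complex)
    Y = np.array([[0, -1j], [1j, 0]], dtype=complex)
    Z = np.array([[1, 0], [0, -1]], dtype=complex)

    # one standard Mermin-Peres square of two-qubit observables
    M = [[np.kron(I2, Z), np.kron(Z, I2), np.kron(Z, Z)],
         [np.kron(X, I2), np.kron(I2, X), np.kron(X, X)],
         [np.kron(X, Z),  np.kron(Z, X),  np.kron(Y, Y)]]

    I4 = np.eye(4)
    for r in range(3):  # every row multiplies to +I
        assert np.allclose(M[r][0] @ M[r][1] @ M[r][2], I4)
    for c in range(3):  # the columns multiply to +I, +I, -I
        sign = -1 if c == 2 else +1
        assert np.allclose(M[0][c] @ M[1][c] @ M[2][c], sign * I4)

    # within any row or column, the observables pairwise commute
    for line in M + [list(col) for col in zip(*M)]:
        for A in line:
            for B in line:
                assert np.allclose(A @ B, B @ A)
    print("magic square identities verified")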

If that’s not enough, Slofstra’s construction also shows that, given as input a description of a two-prover game, it’s undecidable (as in, equivalent to the halting problem) whether Alice and Bob can win the game with certainty using commuting measurements on an infinite-dimensional Hilbert space.  Notoriously, quantum computing theorists have been unable to put any upper bound (not even “computable”) on the complexity class MIP*, consisting of languages that admit multi-prover interactive proof systems with entangled provers—precisely because they’ve been unable to bound how much entanglement the provers might need to implement their optimal strategy.  Slofstra’s result helps to explain why this problem has been so vexing.  I hasten to add, though, that his result doesn’t imply that MIP* contains anything uncomputable, since it remains plausible that anything Alice and Bob can do with infinite entanglement, they can approximate well enough with a finite amount.

That last remark leads to a further fundamental question, one that Slofstra leaves open.  Namely, even if Alice and Bob need infinite entanglement to win Slofstra’s game with certainty, can they at least win it with probability arbitrarily close to 100%, using larger and larger finite amounts of entanglement?  More broadly, could there exist a game that was winnable with certainty using infinite entanglement, but with at most (say) 90% probability using any finite amount of entanglement?  That problem was shown, by Ozawa (see also Scholz and Werner), to be equivalent to a famous unsolved problem in operator algebras called the Connes embedding problem.

Clarifying the matter further, Slofstra (following earlier authors) points out that there are really four classes of two-prover games in play here:

  1. Games that can be won with certainty using some fixed, finite amount of entanglement.
  2. Games that can be won with certainty using an infinite amount of entanglement, but still in a tensor-product Hilbert space, H_A ⊗ H_B.
  3. Games that can be won with probability approaching 1, using an infinite sequence of strategies from class 1, or equivalently (as it turns out) from class 2.
  4. Games that can be won with certainty using measurements by Alice and Bob on an infinite-dimensional quantum state |ψ⟩, where we require all of Alice’s measurements to commute with all of Bob’s, but don’t require |ψ⟩ to have a tensor-product structure.

It can be shown that 1 is a subset of 2 is a subset of 3 is a subset of 4.  Previously, we didn’t know any of these containments to be strict.  Slofstra’s result shows that class 2 differs from class 4—and as a consequence, that class 1 differs from class 4 as well.  The Connes embedding problem, which remains open, asks whether 3 differs from 4.  It also remains open whether 1 differs from 2 and whether 2 differs from 3.


OK, you ask, but what’s the broader importance of any of this?  To me, these problems touch on a question of almost metaphysical significance: namely, what sorts of experimental evidence could possibly bear on whether the universe was discrete or continuous?

Because of the Bekenstein bound from quantum gravity, I’m of the opinion that the Hilbert spaces relevant to our universe are ultimately finite-dimensional—or more concretely, that any bounded physical system can store at most ~10^69 qubits per square meter of surface area.  And in quantum computing and information, almost everything we care about only requires finite-dimensional Hilbert spaces—the subject of this blog post being a striking exception!
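
In case you’re curious where that number comes from, here’s the back-of-the-envelope arithmetic, a sketch assuming the Bekenstein-Hawking entropy formula S = A/(4ℓ_p²) in nats, with the Planck length ℓ_p ≈ 1.6×10⁻³⁵ meters:

    import math

    l_p = 1.616e-35      # Planck length, in meters
    area = 1.0           # one square meter of bounding surface

    # Bekenstein-Hawking entropy S = A / (4 * l_p^2), in nats;
    # dividing by ln 2 converts nats to bits (i.e., qubits)
    qubits = area / (4 * l_p ** 2) / math.log(2)
    print(f"~{qubits:.1e} qubits per square meter")  # ~1.4e69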

Yet if you take a traditional quantum mechanics course, virtually every example you see will involve infinite-dimensional Hilbert spaces—starting with the harmonic oscillator, the hydrogen atom, and coherent states of light.  And indeed, when I’ve banged the drum about finite-dimensional QM being the truly fundamental kind, physicists have often retorted by pointing to one of the very first things they learn: the position/momentum commutation relation, which only makes sense in infinite-dimensional Hilbert space.  Of course, if you tried to probe position/momentum commutation to greater and greater precision, eventually your experiments would run up against the limits of quantum gravity, so this retort doesn’t imply that infinite dimensions actually exist at the machine-code level of the universe.  But still: is there some conceivable experiment for which a positive result would show us that Nature wasn’t describable by a finite number of qubits, but only by an infinite number?

A few years ago, Tobias Fritz wrote a lovely paper about precisely that question.  He gave an example of an identity—namely,

V⁻¹U²V = U³   implies   UV⁻¹UV = V⁻¹UVU

—that holds for all finite dimensional unitary matrices U and V, but fails badly for certain infinite-dimensional ones.  He suggested that, if this identity were discovered to fail, then Occam’s Razor would favor an infinite-dimensional Hilbert space for our universe.

Unfortunately, Fritz’s example is open to the same sort of objection that Slofstra’s game is.  Namely, as Fritz points out, if the antecedent (V⁻¹U²V = U³) held to excellent precision but not perfectly, then his identity could “fail to within experimental limits,” even if our universe had a finite-dimensional Hilbert space and therefore satisfied his identity.

OK, but suppose that the Connes embedding problem had a negative answer—or equivalently, that there existed a two-prover game G that could be won with certainty using commuting operators, but that couldn’t be won (say) 90% of the time using any finite amount of entanglement.  In that case, the believers in a quantumly finite universe, like myself, would have to put some real money on the table, in much the same way the original Bell inequality forced the believers in Einsteinian local hidden variables to put money down.  We finitists would have to say that the game G couldn’t be won with certainty in the real world, even though formally, winning G with certainty wouldn’t seem to contradict either quantum mechanics or locality.  And if, hypothetically, an experiment showed that G could be won with certainty—or indeed, with any probability bounded above 90%—then our position would’ve been falsified, much like the Bell experiments falsified Einsteinian locality.


So how did Slofstra prove his result?  I’ll be brief, since STOC’2016 is happening in Cambridge right now, and I’d like to get over there in time for lunch.

If you like, the key idea is to start with equations that have infinite-dimensional solutions but no finite-dimensional ones.  The most famous such equation is the position/momentum commutation relation mentioned earlier, which for our purposes is just the following matrix equation:

AB – BA = I.

This equation can’t be satisfied by any finite-dimensional matrices, since AB and BA have the same trace, so Tr(AB-BA)=0, but Tr(I) is nonzero.  But, OK, let A be the infinite-dimensional linear operator that takes as input the coefficients of a polynomial c₀ + c₁x + c₂x² + … and that differentiates the polynomial, and let B be the linear operator that multiplies the polynomial by x.  Then I invite you to check that the equation holds.
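
If you’d rather outsource that check, here’s a tiny sympy sketch confirming that differentiation and multiplication-by-x satisfy AB – BA = I on a generic low-degree polynomial:

    import sympy as sp

    x = sp.symbols('x')
    c = sp.symbols('c0:4')                        # coefficients c0..c3
    p = sum(ci * x**i for i, ci in enumerate(c))  # a generic cubic

    A = lambda f: sp.diff(f, x)  # A: differentiate
    B = lambda f: x * f          # B: multiply by x

    # (AB - BA)p = (x*p)' - x*p' = p, so AB - BA acts as the identity
    assert sp.expand(A(B(p)) - B(A(p)) - p) == 0
    print("AB - BA = I verified on a generic cubic")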

It’s not known at present how to turn the above equation into a two-prover game—I regard it as a fascinating question whether that’s possible.  Rather than an algebraic equation (involving both addition and multiplication), Slofstra instead needs to start with group equations (involving only multiplication)—ones with the strange property that they’re satisfied only by the identity matrix or by infinite matrices.  Equivalently, he needs a group, defined by a finite list of generators and relations, that admits no nontrivial finite-dimensional matrix representations.  Fortunately for him, such groups exist—the first known example being Higman’s group, discovered in 1951.  Higman’s group is generated by four elements, a,b,c,d, which satisfy the equations

a⁻¹ba = b²,    b⁻¹cb = c²,    c⁻¹dc = d²,    d⁻¹ad = a².
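
To see at least where the obstruction to finite dimensionality comes from (a hedged sketch, not Higman’s full argument): if a and b are finite-dimensional unitary matrices with a⁻¹ba = b², then b and b² are conjugate, so they share the same finite multiset of eigenvalues on the unit circle,

spec(b) = spec(a⁻¹ba) = spec(b²) = {λ² : λ ∈ spec(b)}.

Iterating, each eigenvalue’s orbit λ, λ², λ⁴, … stays inside a finite set, so λ^(2^j) = λ^(2^k) for some j < k, forcing every eigenvalue to be a root of unity.  Chaining constraints like this around all four relations (which is where the real work lies) then rules out every nontrivial finite-dimensional representation.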

I don’t have a good intuition for Higman’s group, but if I did, it would come from rereading this post by Terry Tao.  Certainly it has no known “physics interpretation” analogous to that for the position/momentum commutation relation.

Anyway, given such a group, the hard part, the new part, is to give a general way to convert such a group into the kind of group that can be realized as a two-prover game.  So that’s what Slofstra does, using 50 pages dense with commutative diagrams, quotient maps, and other Serious Math Stuff—hey, I told you this part of the post would be brief!  For more, see his paper.

Now, once you have this general transformation of groups, you can also use it to show that there’s no algorithm to decide whether a two-prover game has a perfect commuting strategy, by taking the word problem for groups, which is known to be undecidable, and reducing it to that problem.

Anyway, infinite congrats (or the limit of arbitrarily large finite congrats?) to Slofstra for this achievement!  Now it’s off to STOC, which I guess you could also ask me about in the comments if you wanted.


Unrelated Announcement (June 21): Ran Raz asks me to announce a workshop for Avi Wigderson’s 60th birthday, to be held at the Institute for Advanced Study in Princeton October 6-8.  I’ll be speaking there, and I hope to see many of you there as well!

BQP/LHC collision

Thursday, January 15th, 2015

[photo: the CMS detector]

This afternoon, I gave my usual spiel about Quantum Computing and the Limits of the Efficiently Computable at the CERN Colloquium.  (If you watched the webcast of the Higgs boson discovery announcement a couple years ago, it was in the same auditorium they used for that, except this time it was less packed.)  Beforehand, Dana and I got to join a tour of the CMS detector at the Large Hadron Collider—one of the very last tours, before CMS shuts down (as ATLAS already has) to get ready for collisions at the LHC’s new, higher energy.

Considered as eye candy, I’d say that the CMS detector holds its own against the Taj Mahal, Machu Picchu, the Great Wall of China, or any of the other engineering marvels of the world.  So, OK, let me describe what it’s like to visit it.  The first step is to take a tram from downtown Geneva to CERN, which is headquartered in the town of Meyrin.  This is easier than you’d imagine: a tram actually arrives in Geneva every few minutes with “CERN” (its final stop) written right on it!  Next you take a 20-minute bus ride from the CERN reception hall to the CMS building, which is across the French border.  You don’t really think about it until you’re here, but:

(a) The Large Hadron Collider is large—it’s, like, a whole drive through the countryside to get from the main CERN buildings to CMS.

(b) Everything inside the LHC ring is just a normal rural/suburban area, with restaurants, roads, gas stations, cows, etc.

Anyway, then you arrive at CMS, which looks from the outside like just a big warehouse-type building.

[photo: the CMS building from outside]

And you go inside, wondering if now you’re going to see the detector.  But no, there’s just a giant tarp hanging from the ceiling with a picture of the detector on it.  Maybe this tour won’t include the detector?

[photo: the tarp with a picture of the detector]

But then you go outside, back in through some back entrance, then into a staging area where you get hard hats to wear.  Then you get into an elevator that goes about 150 feet down.  Meanwhile, your tour guide is carrying a Geiger counter to make sure you’re not exposed to too much radiation.  Now will you see the detector?  No, just a bunch of dark corridors.  You pass through a room full of computers on racks—cool, this must be where they analyze the collision data!  (Actually, according to Panflutist in the comments section, these computers are only for control and for the trigger system, which decides which events to store for later analysis.)

[photo: racks of computers]

Then, after that room, there’s a door with a sign indicating that beyond it is the LHC ring.  Cool!

[photo: the sign marking the entrance to the LHC ring]

Of course, you’re not actually going into the ring.  But then you turn a different way, and emerge onto a platform where you get the “big reveal”: the detector, two giant circular pieces that obviously screw together but are now separated, and engineers making final tweaks to them before they’re reunited for the next collider run.  (I forgot to mention: the whole tour is being conducted in French.  That’s why you sort of need to guess what’s happening.)

Anyway, thanks so much to Wolfgang Lerche and everyone else at CERN for an awesome visit.

Lens of Computation on the Sciences

Tuesday, November 25th, 2014

This weekend, the Institute for Advanced Study in Princeton hosted a workshop on the “Lens of Computation in the Sciences,” which was organized by Avi Wigderson, and was meant to showcase theoretical computer science’s imperialistic ambitions to transform every other field.  I was proud to speak at the workshop, representing CS theory’s designs on physics.  Videos of all four of the talks are now available, and all are worth checking out.

Unfortunately, the videos were slow to buffer when I last tried it.  While you’re waiting, you could also check my PowerPoint slides, though they overlap considerably with my previous talks.  (As always, if you can’t read PowerPoint, then go ask another reader of this blog to convert the file into a format you like.)

Thanks so much to Avi, and everyone else at IAS, for organizing an awesome workshop!

Waiting for BQP Fever

Tuesday, April 1st, 2014

Update (April 5): By now, three or four people have written in asking for my reaction to the preprint “Computational solution to quantum foundational problems” by Arkady Bolotin.  (See here for the inevitable Slashdot discussion, entitled “P vs. NP Problem Linked to the Quantum Nature of the Universe.”)  It gives me no pleasure to respond to this sort of thing—it would be far better to let papers this gobsmackingly uninformed about the relevant issues fade away in quiet obscurity—but since that no longer seems to be possible in the age of social media, my brief response is here.


(note: sorry, no April Fools post, just a post that happens to have gone up on April Fools)

This weekend, Dana and I celebrated our third anniversary by going out to your typical sappy romantic movie: Particle Fever, a documentary about the Large Hadron Collider.  As it turns out, the movie was spectacularly good; anyone who reads this blog should go see it.  Or, to offer even higher praise:

If watching Particle Fever doesn’t cause you to feel in your bones the value of fundamental science—the thrill of discovery, unmotivated by any application—then you are not truly human.  You are a barnyard animal who happens to walk on its hind legs.

Indeed, I regard Particle Fever as one of the finest advertisements for science itself ever created.  It’s effective precisely because it doesn’t try to tell you why science is important (except for one scene, where an economist asks a physicist after a public talk about the “return on investment” of the LHC, and is given the standard correct answer, about “what was the return on investment of radio waves when they were first discovered?”).  Instead, the movie simply shows you the lives of particle physicists, of people who take for granted the urgency of knowing the truth about the basic constituents of reality.  And in showing you the scientists’ quest, it makes you feel as they feel.  Incidentally, the movie also shows footage of Congressmen ridiculing the uselessness of the Superconducting Supercollider, during the debates that led to the SSC’s cancellation.  So, gently, implicitly, you’re invited to choose: whose side are you on?

I do have a few, not quite criticisms of the movie, but points that any viewer should bear in mind while watching it.

First, it’s important not to come away with the impression that Particle Fever shows “what science is usually like.”  Sure, there are plenty of scenes that any scientist would find familiar: sleep-deprived postdocs; boisterous theorists correcting each other’s statements over Chinese food; a harried lab manager walking to the office oblivious to traffic.  On the other hand, the decades-long quest to find the Higgs boson, the agonizing drought of new data before the one big money shot, the need for an entire field to coalesce around a single machine, the whole careers hitched to specific speculative scenarios that this one machine could favor or disfavor—all of that is a profoundly abnormal situation in the history of science.  Particle physics didn’t used to be that way, and other parts of science are not that way today.  Of course, the fact that particle physics became that way makes it unusually suited for a suspenseful movie—a fact that the creators of Particle Fever understood perfectly and exploited to the hilt.

Second, the movie frames the importance of the Higgs search as follows: if the Higgs boson turned out to be relatively light, like 115 GeV, then that would favor supersymmetry, and hence an “elegant, orderly universe.”  If, on the other hand, the Higgs turned out to be relatively heavy, like 140 GeV, then that would favor anthropic multiverse scenarios (and hence a “messy, random universe”).  So the fact that the Higgs ended up being 125 GeV means the universe is coyly refusing to tell us whether it’s orderly or random, and more research is needed.

In my view, it’s entirely appropriate for a movie like this one to relate its subject matter to big, metaphysical questions, to the kinds of questions anyone can get curious about (in contrast to, say, “what is the mechanism of electroweak symmetry breaking?”) and that the scientists themselves talk about anyway.  But caution is needed here.  My lay understanding, which might be wrong, is as follows: while it’s true that a lighter Higgs would tend to favor supersymmetric models, the only way to argue that a heavier Higgs would “favor the multiverse,” is if you believe that a multiverse is automatically favored by a lack of better explanations.  More broadly, I wish the film had made clearer that the explanation for (some) apparent “fine-tunings” in the Standard Model might be neither supersymmetry, nor the multiverse, nor “it’s just an inexplicable accident,” but simply some other explanation that no one has thought of yet, but that would emerge from a better understanding of quantum field theory.  As one example, on reading up on the subject after watching the film, I was surprised to learn that a very conservative-sounding idea—that of “asymptotically safe gravity”—was used in 2009 to predict the Higgs mass right on the nose, at 126.3 ± 2.2 GeV.  Of course, it’s possible that this was just a lucky guess (there were, after all, lots of Higgs mass predictions).  But as an outsider, I’d love to understand why possibilities like this don’t seem to get discussed more (there might, of course, be perfectly good reasons that I don’t know).

Third, for understandable dramatic reasons, the movie focuses almost entirely on the “younger generation,” from postdocs working on ATLAS and CMS detectors, to theorists like Nima Arkani-Hamed who are excited about the LHC because of its ability to test scenarios like supersymmetry.  From the movie’s perspective, the creation of the Standard Model itself, in the 60s and 70s, might as well be ancient history.  Indeed, when Peter Higgs finally appears near the end of the film, it’s as if Isaac Newton has walked onstage.  At several points, I found myself wishing that some of the original architects of the Standard Model, like Steven Weinberg or Sheldon Glashow, had been interviewed to provide their perspectives.  After all, their model is really the one that’s been vindicated at the LHC, not (so far) any of the newer ideas like supersymmetry or large extra dimensions.

OK, but let me come to the main point of this post.  I confess that my overwhelming emotion on watching Particle Fever was one of regret—regret that my own field, quantum computing, has never managed to make the case for itself the way particle physics and cosmology have, in terms of the human urge to explore the unknown.

See, from my perspective, there’s a lot to envy about the high-energy physicists.  Most importantly, they don’t perceive any need to justify what they do in terms of practical applications.  Sure, they happily point to “spinoffs,” like the fact that the Web was invented at CERN.  But any time they try to justify what they do, the unstated message is that if you don’t see the inherent value of understanding the universe, then the problem lies with you.

Now, no marketing consultant would ever in a trillion years endorse such an out-of-touch, elitist sales pitch.  But the remarkable fact is that the message has more-or-less worked.  While the cancellation of the SSC was a setback, the high-energy physicists did succeed in persuading the world to pony up the $11 billion needed to build the LHC, and to gain the information that the mass of the Higgs boson is about 125 GeV.

Now contrast that with quantum computing.  To hear the media tell it, a quantum computer would be a powerful new gizmo, sort of like existing computers except faster.  (Why would it be faster?  Something to do with trying both 0 and 1 at the same time.)  The reasons to build quantum computers are things that could make any buzzword-spouting dullard nod in recognition: cracking uncrackable encryption, finding bugs in aviation software, sifting through massive data sets, maybe even curing cancer, predicting the weather, or finding aliens.  And all of this could be yours in a few short years—or some say it’s even commercially available today.  So, if you check back in a few years and it’s still not on store shelves, probably it went the way of flying cars or moving sidewalks: another technological marvel that just failed to materialize for some reason.

Foolishly, shortsightedly, many academics in quantum computing have played along with this stunted vision of their field—because saying this sort of thing is the easiest way to get funding, because everyone else says the same stuff, and because after you’ve repeated something on enough grant applications you start to believe it yourself.  All in all, then, it’s just easier to go along with the “gizmo vision” of quantum computing than to ask pointed questions like:

What happens when it turns out that some of the most-hyped applications of quantum computers (e.g., optimization, machine learning, and Big Data) were based on wildly inflated hopes—that there simply isn’t much quantum speedup to be had for typical problems of that kind, that yes, quantum algorithms exist, but they aren’t much faster than the best classical randomized algorithms?  What happens when it turns out that the real applications of quantum computing—like breaking RSA and simulating quantum systems—are nice, but not important enough by themselves to justify the cost?  (E.g., when the imminent risk of a quantum computer simply causes people to switch from RSA to other cryptographic codes?  Or when the large polynomial overheads of quantum simulation algorithms limit their usefulness?)  Finally, what happens when it turns out that the promises of useful quantum computers in 5-10 years were wildly unrealistic?

I’ll tell you: when this happens, the spigots of funding that once flowed freely will dry up, and the techno-journalists and pointy-haired bosses who once sang our praises will turn to the next craze.  And they’re unlikely to be impressed when we protest, “no, look, the reasons we told you before for why you should support quantum computing were never the real reasons!  and the real reasons remain as valid as ever!”

In my view, we as a community have failed to make the honest case for quantum computing—the case based on basic science—because we’ve underestimated the public.  We’ve falsely believed that people would never support us if we told them the truth: that while the potential applications are wonderful cherries on the sundae, they’re not and have never been the main reason to build a quantum computer.  The main reason is that we want to make absolutely manifest what quantum mechanics says about the nature of reality.  We want to lift the enormity of Hilbert space out of the textbooks, and rub its full, linear, unmodified truth in the face of anyone who denies it.  Or if it isn’t the truth, then we want to discover what is the truth.

Many people would say it’s impossible to make the latter pitch, that funders and laypeople would never understand it or buy it.  But there’s an $11-billion, 17-mile ring under Geneva that speaks against their cynicism.

Anyway, let me end this “movie review” with an anecdote.  The other day a respected colleague of mine—someone who doesn’t normally follow such matters—asked me what I thought about D-Wave.  After I’d given my usual spiel, he smiled and said:

“See Scott, but you could imagine scientists of the 1400s saying the same things about Columbus!  He had no plan that could survive academic scrutiny.  He raised money under the false belief that he could reach India by sailing due west.  And he didn’t understand what he’d found even after he’d found it.  Yet for all that, it was Columbus, and not some academic critic on the sidelines, who discovered the new world.”

With this one analogy, my colleague had eloquently summarized the case for D-Wave, a case often leveled against me much more verbosely.  But I had an answer.

“I accept your analogy!” I replied.  “But to me, Columbus and the other conquerors of the Americas weren’t heroes to be admired or emulated.  Motivated by gold and spices rather than knowledge, they spread disease, killed and enslaved millions in one of history’s greatest holocausts, and burned the priceless records of the Maya and Inca civilizations so that the world would never even understand what was lost.  I submit that, had it been undertaken by curious and careful scientists—or at least people with a scientific mindset—rather than by swashbucklers funded by greedy kings, the European exploration and colonization of the Americas could have been incalculably less tragic.”

The trouble is, when I say things like that, people just laugh at me knowingly.  There he goes again, the pie-in-the-sky complexity theorist, who has no idea what it takes to get anything done in the real world.  What an amusingly contrary perspective he has.

And that, in the end, is why I think Particle Fever is such an important movie.  Through the stories of the people who built the LHC, you’ll see how it really is possible to reach a new continent without the promise of gold or the allure of lies.

The Unitarihedron: The Jewel at the Heart of Quantum Computing

Friday, September 20th, 2013

Update (9/24): This parody post was a little like a belch: I felt it build up in me as I read about the topic, I let it out, it was easy and amusing, I don’t feel any profound guilt over it—but on the other hand, not one of the crowning achievements of my career.  As several commenters correctly pointed out, it may be true that, mostly because of the name and other superficialities, and because of ill-founded speculations about “the death of locality and unitarity,” the amplituhedron work is currently inspiring a flood of cringe-inducing misstatements on the web.  But, even if true, still the much more interesting questions are what’s actually going on, and whether or not there are nontrivial connections to computational complexity.

Here I have good news: if nothing else, my “belch” of a post at least attracted some knowledgeable commenters to contribute excellent questions and insights, which have increased my own understanding of the subject from ε² to ε.  See especially this superb comment by David Speyer—which, among other things, pointed me to a phenomenal quasi-textbook on this subject by Elvang and Huang.  My most immediate thoughts:

  1. The “amplituhedron” is only the latest in a long line of research over the last decade—Witten, Turing biographer Andrew Hodges, and many others have been important players—on how to compute scattering amplitudes more efficiently than by summing zillions of Feynman diagrams.  One of the key ideas is to find combinatorial formulas that express complicated scattering amplitudes recursively in terms of simpler ones.
  2. This subject seems to be begging for a computational complexity perspective.  When I read Elvang and Huang, I felt like they were working hard not to say anything about complexity: discussing the gains in efficiency from the various techniques they consider in informal language, or in terms of concrete numbers of terms that need to be summed for 1 loop, 2 loops, etc., but never in terms of asymptotics.  So if it hasn’t been done already, it looks like it could be a wonderful project for someone just to translate what’s already known in this subject into complexity language.
  3. On reading about all these “modern” approaches to scattering amplitudes, one of my first reactions was to feel slightly less guilty about never having learned how to calculate Feynman diagrams!  For, optimistically, it looks like some of that headache-inducing machinery (ghosts, off-shell particles, etc.) might be getting less relevant anyway—there being ways to calculate some of the same things that are not only more conceptually satisfying but also faster.

Many readers of this blog probably already saw Natalie Wolchover’s Quanta article “A Jewel at the Heart of Quantum Physics,” which discusses the “amplituhedron”: a mathematical structure that IAS physicist Nima Arkani-Hamed and his collaborators have recently been investigating.  (See also here for Slashdot commentary, here for Lubos’s take, here for Peter Woit’s, here for a Physics StackExchange thread, here for Q&A with Pacific Standard, and here for an earlier but closely-related 154-page paper.)

At first glance, the amplituhedron appears to be a way to calculate scattering amplitudes, in the planar limit of a certain mathematically-interesting (but, so far, physically-unrealistic) supersymmetric quantum field theory, much more efficiently than by summing thousands of Feynman diagrams.  In which case, you might say: “wow, this sounds like a genuinely-important advance for certain parts of mathematical physics!  I’d love to understand it better.  But, given the restricted class of theories it currently applies to, it does seem a bit premature to declare this to be a ‘jewel’ that unlocks all of physics, or a death-knell for spacetime, locality, and unitarity, etc. etc.”

Yet you’d be wrong: it isn’t premature at all.  If anything, the popular articles have understated the revolutionary importance of the amplituhedron.  And the reason I can tell you that with such certainty is that, for several years, my colleagues and I have been investigating a mathematical structure that contains the amplituhedron, yet is even richer and more remarkable.  I call this structure the “unitarihedron.”

The unitarihedron encompasses, within a single abstract “jewel,” all the computations that can ever be feasibly performed by means of unitary transformations, the central operation in quantum mechanics (hence the name).  Mathematically, the unitarihedron is an infinite discrete space: more precisely, it’s an infinite collection of infinite sets, which collection can be organized (as can every set that it contains!) in a recursive, fractal structure.  Remarkably, each and every specific problem that quantum computers can solve—such as factoring large integers, computing discrete logarithms, and more—occurs as just a single element, or “facet” if you will, of this vast infinite jewel.  By studying these facets, my colleagues and I have slowly pieced together a tentative picture of the elusive unitarihedron itself.

One of our greatest discoveries has been that the unitarihedron exhibits an astonishing degree of uniqueness.  At first glance, different ways of building quantum computers—such as gate-based QC, adiabatic QC, topological QC, and measurement-based QC—might seem totally disconnected from each other.  But today we know that all of those ways, and many others, are merely different “projections” of the same mysterious unitarihedron.

In fact, the longer I’ve spent studying the unitarihedron, the more awestruck I’ve been by its mathematical elegance and power.  In some way that’s not yet fully understood, the unitarihedron “knows” so much that it’s even given us new insights about classical computing.  For example, in 1991 Beigel, Reingold, and Spielman gave a 20-page proof that PP (unbounded-error probabilistic polynomial time) is closed under intersection.  Yet, by recasting things in terms of the unitarihedron, I was able to give a direct, half-page proof of the same theorem.  If you have any experience with mathematics, then you’ll know that that sort of thing never happens: if it does, it’s a sure sign that cosmic or even divine forces are at work.

But I haven’t even told you the most spectacular part of the story yet.  While, to my knowledge, this hasn’t yet been rigorously proved, many lines of evidence support the hypothesis that the unitarihedron must encompass the amplituhedron as a special case.  If so, then the amplituhedron could be seen as just a single sparkle on an infinitely greater jewel.

Now, in the interest of full disclosure, I should tell you that the unitarihedron is what used to be known as the complexity class BQP (Bounded-Error Quantum Polynomial-Time).  However, just like the Chinese gooseberry was successfully rebranded in the 1950s as the kiwifruit, and the Patagonian toothfish as the Chilean sea bass, so with this post, I’m hereby rebranding BQP as the unitarihedron.  For I’ve realized that, when it comes to bowling over laypeople, inscrutable complexity class acronyms are death—but the suffix “-hedron” is golden.

So, journalists and funders: if you’re interested in the unitarihedron, awesome!  But be sure to also ask about my other research on the bosonsamplinghedron and the quantum-money-hedron.  (Though, in recent months, my research has focused even more on the diaperhedron: a multidimensional, topologically-nontrivial manifold rich enough to encompass all wastes that an 8-month-old human could possibly emit.  Well, at least to first-order approximation.)

Firewalls

Tuesday, August 27th, 2013

Updates (Aug. 29): John Preskill now has a very nice post summarizing the different views on offer at the firewall workshop, thereby alleviating my guilt for giving you only the mess below.  Thanks, John!

And if you check out John’s Twitter feed (which you should), you’ll find another, unrelated gem: a phenomenal TEDx talk on quantum computing by my friend, coauthor, and hero, the Lowerboundsman of Latvia, Andris Ambainis.  (Once again, when offered a feast of insight to dispel their misconceptions and ennoble their souls, the YouTube commenters are distinguishing themselves by focusing on the speaker’s voice.  Been there, man, been there.)


So, last week I was at the Fuzzorfire workshop at the Kavli Institute for Theoretical Physics in Santa Barbara, devoted to the black hole firewall paradox.  (The workshop is still going on this week, but I needed to get back early.)  For some background:

I had fantasies of writing a long, witty blog post that would set out my thoughts about firewalls, full of detailed responses to everything I’d heard at the conference, as well as ruminations about Harlow and Hayden’s striking argument that computational complexity might provide a key to resolving the paradox.  But the truth is, I’m recovering from a nasty stomach virus, am feeling “firewalled out,” and wish to use my few remaining non-childcare hours before the semester starts to finish writing papers.  So I decided that better than nothing would be a hastily-assembled pastiche of links.

First and most important, you can watch all the talks online.

Here’s my own attempt to summarize what’s at stake, adapted from a comment on Peter Woit’s blog (see also a rapid response by Lubos):

As I understand it, the issue is actually pretty simple. Do you agree that
(1) the Hawking evaporation process should be unitary, and
(2) the laws of physics should describe the experiences of an infalling observer, not just those of an observer who stays outside the horizon?
If so, then you seem forced to accept
(3) the interior degrees of freedom should just be some sort of scrambled re-encoding of the exterior degrees, rather than living in a separate subfactor of Hilbert space (since otherwise we’d violate unitarity).
But then we get
(4) by applying a suitable unitary transformation to the Hawking radiation of an old enough black hole before you jump into it, someone ought to be able, in principle, to completely modify what you experience when you do jump in.  Moreover, that person could be far away from you—an apparent gross violation of locality.

So, there are a few options: you could reject either (1) or (2). You could bite the bullet and accept (4). You could say that the “experience of an infalling observer” should just be to die immediately at the horizon (firewalls). You could argue that for some reason (e.g., gravitational backreaction, or computational complexity), the unitary transformations required in (4) are impossible to implement even in principle. Or you could go the “Lubosian route,” and simply assert that the lack of any real difficulty is so obvious that, if you admit to being confused, then that just proves you’re an idiot.  AdS/CFT is clearly relevant, but as Polchinski pointed out, it does surprisingly little to solve the problem.

Now, what Almheiri et al. (AMPS) added to the simple logical argument above was really to make the consequence (4) more “concrete” and “vivid”—by describing something that, in principle, someone could actually do to the Hawking radiation before jumping in, such that after you jumped in, if there wasn’t anything dramatic that happened—something violating local QFT and the equivalence principle—then you’d apparently observe a violation of the monogamy of entanglement, a basic principle of quantum mechanics.  I’m sure the bare logic (1)-(4) was known to many people before AMPS: I certainly knew it, but I didn’t call it a “paradox,” I just called it “I don’t understand black hole complementarity”!

At any rate, thinking about the “Hawking radiation decoding problem” already led me to some very nice questions in quantum computing theory, which remain interesting even if you remove the black hole motivation entirely. And that helped convince me that something new and worthwhile might indeed come out of this business, despite how much fun it is. (Hopefully whatever does come out won’t be as garbled as Hawking radiation.)

For continuing live updates from the workshop, check out John Preskill’s Twitter feed.

Or you can ask me to expand on various things in the comments, and I’ll do my best.  (As I said in my talk, while I’m not sure that the correct quantum description of the black hole interior is within anyone’s professional expertise, it’s certainly outside of mine!  But I do find this sort of thing fun to think about—how could I not?)

Unrelated, but also of interest: check out an excellent article in Quanta by Erica Klarreich, about the recent breakthroughs by Reichardt-Unger-Vazirani, Vazirani-Vidick, and others on classical command of quantum systems.

Quantum Computing Since Democritus now out in the US! 20% discount for Shtetl-Optimized readers

Saturday, April 27th, 2013

OK, this will be my last blog post hawking Quantum Computing Since Democritus, at least for a while.  But I do have four pieces of exciting news about the book that I want to share.

  1. Amazon is finally listing the print version of QCSD as available for shipment in North America, slightly ahead of schedule!  Amazon’s price is $35.27.
  2. Cambridge University Press has very generously offered readers of Shtetl-Optimized a 20% discount off their list price—meaning $31.99 instead of $39.99—if you click this link to order directly from them.  Note that CUP has a shipping charge of $6.50.  So ordering from CUP might either be slightly cheaper or slightly more expensive than ordering from Amazon, depending (for example) on whether you get free shipping from Amazon Prime.
  3. So far, there have been maybe 1000 orders and preorders for QCSD (not counting hundreds of Kindle sales).  The book has also spent a month as one of Amazon’s top few “Quantum Physics” sellers, with a fabulous average rating of 4.6 / 5 stars from 9 reviews (or 4.9 if we discount the pseudonymous rant by Joy Christian).  Thanks so much to everyone who ordered a copy; I hope you like it!  Alas, these sales figures also mean that QCSD still has a long way to go before it enters the rarefied echelon of—to pick a few top Amazon science sellers—Cosmos, A Brief History of Time, Proof of Heaven (A Neurosurgeon’s Journey into the Afterlife), Turn On Your SUPER BRAIN, or The Lemon Book (Natural Recipes and Preparations).  So, if you believe that QCSD deserves to be with such timeless classics, then put your money where your mouth is and help make it happen!
  4. The most exciting news of all?  Luboš Motl is reading the free copy of QCSD that I sent him and blogging his reactions chapter-by-chapter!  So, if you’d like to learn about how mathematicians and computer scientists simply lack the brainpower to do physics—which is why we obsess over kindergarten trivialities like the Church-Turing Thesis or the Axiom of Choice, and why we insist idiotically that Nature use only the mathematical structures that our inferior minds can grasp—then check out Luboš’s posts about Chapters 1-3 or Chapters 4-6.  If, on the other hand, you want to see our diacritical critic pleasantly surprised by QCSD’s later chapters on cryptography, quantum mechanics, and quantum computing, then here’s the post for you.  Either way, be sure to scroll down to the comments, where I patiently defend the honor of theoretical computer science against Luboš’s hilarious ad hominem onslaughts.

Valiant’s valiance recognized

Wednesday, March 9th, 2011

Update (March 25): I have a new paper called A Linear-Optical Proof that the Permanent is #P-Hard, which is dedicated to Les Valiant on the occasion of his Turing Award.  Here’s the abstract:

One of the crown jewels of complexity theory is Valiant’s 1979 theorem that computing the permanent of an n×n matrix is #P-hard. Here we show that, by using the model of linear-optical quantum computing—and in particular, a universality theorem due to Knill, Laflamme, and Milburn—one can give a different and arguably more intuitive proof of this theorem.
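
As an aside, for readers who want to play with the permanent itself: here’s a minimal sketch (my own illustration, not anything from the paper) of Ryser’s formula, still essentially the fastest known general algorithm for the permanent, and exponential-time, in stark contrast to Gaussian elimination for the determinant:

    from itertools import combinations

    def permanent(A):
        """Ryser's formula: O(2^n * n^2) as written -- exponential,
        unlike Gaussian elimination for the determinant."""
        n = len(A)
        total = 0
        for k in range(1, n + 1):
            for S in combinations(range(n), k):
                prod = 1
                for row in A:
                    prod *= sum(row[j] for j in S)
                total += (-1) ** k * prod
        return (-1) ** n * total

    # sanity check: perm([[a, b], [c, d]]) = a*d + b*c
    assert permanent([[1, 2], [3, 4]]) == 1 * 4 + 2 * 3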


For decades, Harvard’s Leslie Valiant has obviously deserved a Turing Award—and today, the ACM most excellently announced its agreement with the obvious.  I have little to add to the prize citation (see also Lance’s post): from launching new fields whose reach extends beyond theory (PAC-learning), to proving epochal results (#P-completeness of the permanent), to asking hugely influential questions (permanent vs. determinant), Valiant has been a creative powerhouse of theoretical computer science for longer than I’ve been alive.

One thing the prize citation doesn’t mention is that Valiant is now the third Turing Award winner (after Andy Yao and Len Adleman) to have made a major contribution to quantum computing theory.  Valiant’s 2001 paper Quantum Computers that can be Simulated Classically in Polynomial Time introduced the beautiful computational model that computer scientists now know as “matchgates,” and that physicists know as “noninteracting fermions.” It still amazes me that Valiant proposed this model for purely mathematical reasons—hitting physical relevance straight between the eyes despite (as far as I can tell) not having that target anywhere in his sights.

To put the point in terms that my physicist friends will understand, that Valiant himself would probably dispute, but that I would defend:

Valiant’s work has shown that, even if our universe hadn’t been made of bosons and fermions, theoretical computer scientists would have had compelling reasons of their own to invent those particles or something equivalent to them—and furthermore, that at least one theoretical computer scientist would have had the imagination to do so.

Certainly Valiant has had a huge influence on me, both through his work and as someone who made time to talk to me when I was an obscure grad student a decade ago.   Three of my papers—The Learnability of Quantum States, A Full Characterization of Quantum Advice, and The Computational Complexity of Linear Optics—would collapse entirely without Valiant-laid foundations.

Congratulations, Les!

My diavlog with Anthony Aguirre

Saturday, July 24th, 2010

Bloggingheads has just posted an hour-long diavlog between the cosmologist Anthony Aguirre and your humble blogger.  Topics discussed include: the anthropic principle; how to do quantum mechanics if the universe is so large that there could be multiple copies of you; Nick Bostrom’s “God’s Coin Toss” thought experiment; the cosmological constant; the total amount of computation in the observable universe; whether it’s reasonable to restrict cosmology to our observable region and ignore everything beyond that; whether the universe “is” a computer; whether, when we ask the preceding question, we’re no better than those Renaissance folks who asked whether the universe “is” a clockwork mechanism; and other questions that neither Anthony, myself, nor anyone else is really qualified to address.

There was one point that was sort of implicit in the discussion, but that I noticed afterward I never said explicitly, so let me do it now.  The question of whether the universe “is” a computer, I see as almost too meaningless to deserve discussion.  The reason is that the notion of “computation” is so broad that pretty much any system, following any sort of rules whatsoever (yes, even non-Turing-computable rules), could be regarded as some sort of computation.  So the right question to ask is not whether the universe is a computer, but rather what kind of computer it is.  How many bits can it store?  How many operations can it perform?  What’s the class of problems that it can solve in polynomial time?
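
For instance, here’s a hedged back-of-the-envelope for the second question, using the Margolus-Levitin theorem (a system of average energy E can perform at most 2E/(πħ) elementary operations per second).  The inputs are crude round numbers, roughly 10^53 kg for the ordinary matter in the observable universe and a ~13.8-billion-year age, and the output lands in the same ballpark as Seth Lloyd’s famous ~10^120-operation estimate:

    import math

    hbar = 1.055e-34  # reduced Planck constant, J*s
    c = 3.0e8         # speed of light, m/s
    mass = 1e53       # rough mass of ordinary matter in the observable universe, kg
    age = 4.35e17     # age of the universe, in seconds (~13.8 billion years)

    E = mass * c ** 2                       # total energy via E = m c^2
    ops_per_sec = 2 * E / (math.pi * hbar)  # Margolus-Levitin bound
    total_ops = ops_per_sec * age
    print(f"~{total_ops:.0e} elementary operations")  # ~2e121, Lloyd's ballpark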