Archive for the ‘Complexity’ Category

Entanglement without end

Monday, June 20th, 2016

Today we take a break from this blog’s usual round of topics—free will, consciousness, the Singularity, social justice, Donald Trump—to talk about something really crazy and left-field.  Namely, recent research in quantum information.

Earlier this month, William Slofstra, currently a Research Assistant Professor at the IQC in Waterloo, posted a breakthrough paper on the arXiv (yeah, I’m using the b-word again—sue me), which solves one version of a ten-year-old problem in entanglement theory called Tsirelson’s Problem.  The problem, in one sentence, asks whether all quantum-mechanical correlations that can be achieved using commuting measurements, can also be achieved using measurements on separate parts of a tensor-product Hilbert space.  The answer turns out to be no.  (We’ve long known that the two kinds of correlations are identical as long as you stick to finite-dimensional Hilbert spaces, but Slofstra shows that they can differ in infinite-dimensional spaces.)

One implication of Slofstra’s result can be stated much more concretely, in terms of two-prover games: those things like the famous Bell/CHSH experiment, where Alice and Bob are put in separate rooms, and get inputs x and y respectively, and then without communicating, have to produce outputs a and b respectively satisfying some relation V(x,y,a,b).  We’ve long known examples of two-prover games, like the Mermin-Peres magic square game, that can be won 100% of the time if Alice and Bob share quantum entanglement, but that can’t be won 100% of the time in a classical universe.  Slofstra gives the first example of something different: namely, a two-prover game that can be won 100% of the time using commuting measurements in an infinite-dimensional Hilbert space—something “formally within the rules of quantum mechanics”—but that can’t be won 100% of the time using any finite number of qubits of entanglement.

(Previously, Leung, Toner, and Watrous had given a simpler example of such a game, but theirs required the referee to exchange quantum messages with Alice and Bob.)
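To make the two-prover setup concrete, here's a tiny brute-force calculation (my own illustration, nothing to do with Slofstra's construction) of the classical value of the CHSH game, where the winning relation is a⊕b = x∧y.  No classical strategy wins more than 75% of the time, whereas sharing entanglement gets Alice and Bob up to cos^2(π/8) ≈ 85%:

```python
from itertools import product

# Brute-force the classical value of the CHSH game: Alice answers a = f(x),
# Bob answers b = g(y), and they win iff a XOR b == x AND y.  Shared randomness
# can't beat the best deterministic strategy, so this search is exhaustive.
best = 0.0
for f in product([0, 1], repeat=2):        # Alice's deterministic strategy
    for g in product([0, 1], repeat=2):    # Bob's deterministic strategy
        wins = sum(f[x] ^ g[y] == (x & y) for x in (0, 1) for y in (0, 1))
        best = max(best, wins / 4)
print(best)   # 0.75; entanglement raises this to cos^2(pi/8) ~ 0.854
```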

If that’s not enough, Slofstra’s construction also shows that, given as input a description of a two-prover game, it’s undecidable (as in, equivalent to the halting problem) whether Alice and Bob can win the game with certainty using commuting measurements on an infinite-dimensional Hilbert space.  Notoriously, quantum computing theorists have been unable to put any upper bound (not even “computable”) on the complexity class MIP*, consisting of languages that admit multi-prover interactive systems with entangled provers—precisely because they’ve been unable to bound how much entanglement the provers might need to implement their optimal strategy.  Slofstra’s result helps to explain why this problem has been so vexing.  I hasten to add, though, that his result doesn’t imply that MIP* contains anything uncomputable, since it remains plausible that anything Alice and Bob can do with infinite entanglement, they can approximate well enough with a finite amount.

That last remark leads to a further fundamental question, one that Slofstra leaves open.  Namely, even if Alice and Bob need infinite entanglement to win Slofstra’s game with certainty, can they at least win it with probability arbitrarily close to 100%, using larger and larger finite amounts of entanglement?  More broadly, could there exist a game that was winnable with certainty using infinite entanglement, but with at most (say) 90% probability using any finite amount of entanglement?  That problem was shown, by Ozawa (see also Scholz and Werner), to be equivalent to a famous unsolved problem in operator algebras called the Connes embedding problem.

Clarifying the matter further, Slofstra (following earlier authors) points out that there are really four classes of two-prover games in play here:

  1. Games that can be won with certainty using some fixed, finite amount of entanglement.
  2. Games that can be won with certainty using an infinite amount of entanglement, but still in a tensor-product Hilbert space, H_A⊗H_B.
  3. Games that can be won with probability approaching 1, using an infinite sequence of strategies from class 1, or equivalently (as it turns out) from class 2.
  4. Games that can be won with certainty using measurements by Alice and Bob on an infinite-dimensional quantum state |ψ〉, where we require all of Alice’s measurements to commute with all of Bob’s, but don’t require |ψ〉 to have a tensor-product structure.

It can be shown that 1 is a subset of 2 is a subset of 3 is a subset of 4.  Previously, we didn’t know any of these containments to be strict.  Slofstra’s result shows that class 2 differs from class 4—and as a consequence, that class 1 differs from class 4 as well.  The Connes embedding problem, which remains open, asks whether 3 differs from 4.  It also remains open whether 1 differs from 2 and whether 2 differs from 3.


OK, you ask, but what’s the broader importance of any of this?  To me, these problems touch on a question of almost metaphysical significance: namely, what sorts of experimental evidence could possibly bear on whether the universe was discrete or continuous?

Because of the Bekenstein bound from quantum gravity, I’m of the opinion that the Hilbert spaces relevant to our universe are ultimately finite-dimensional—or more concretely, that any bounded physical system can store at most ~10^69 qubits per square meter of surface area.  And in quantum computing and information, almost everything we care about only requires finite-dimensional Hilbert spaces—the subject of this blog post being a striking exception!

Yet if you take a traditional quantum mechanics course, virtually every example you see will involve infinite-dimensional Hilbert spaces—starting with the harmonic oscillator, the hydrogen atom, and coherent states of light.  And indeed, when I’ve banged the drum about finite-dimensional QM being the truly fundamental kind, physicists have often retorted by pointing to one of the very first things they learn: the position/momentum commutation relation, which only makes sense in infinite-dimensional Hilbert space.  Of course, if you tried to probe position/momentum commutation to greater and greater precision, eventually your experiments would run up against the limits of quantum gravity, so this retort doesn’t imply that infinite dimensions actually exist at the machine-code level of the universe.  But still: is there some conceivable experiment for which a positive result would show us that Nature wasn’t describable by a finite number of qubits, but only by an infinite number?

A few years ago, Tobias Fritz wrote a lovely paper about precisely that question.  He gave an example of an identity—namely,

V^{-1}U^2V = U^3   implies   UV^{-1}UV = V^{-1}UVU

—that holds for all finite-dimensional unitary matrices U and V, but fails badly for certain infinite-dimensional ones.  He suggested that, if this identity were discovered to fail, then Occam’s Razor would favor an infinite-dimensional Hilbert space for our universe.

Unfortunately, Fritz’s example is open to the same sort of objection that Slofstra’s game is.  Namely, as Fritz points out, if the antecedent (V^{-1}U^2V = U^3) held to excellent precision but not perfectly, then his identity could “fail to within experimental limits,” even if our universe had a finite-dimensional Hilbert space and therefore satisfied his identity.

OK, but suppose that the Connes embedding problem had a negative answer—or equivalently, that there existed a two-prover game G that could be won with certainty using commuting operators, but that couldn’t be won (say) 90% of the time using any finite amount of entanglement.  In that case, the believers in a quantumly finite universe, like myself, would have to put some real money on the table, in much the same way the original Bell inequality forced the believers in Einsteinian local hidden variables to put money down.  We finitists would have to say that the game G couldn’t be won with certainty in the real world, even though formally, winning G with certainty wouldn’t seem to contradict either quantum mechanics or locality.  And if, hypothetically, an experiment showed that G could be won with certainty—or indeed, with any probability bounded above 90%—then our position would’ve been falsified, much like the Bell experiments falsified Einsteinian locality.


So how did Slofstra prove his result?  I’ll be brief, since STOC’2016 is happening in Cambridge right now, and I’d like to get over there in time for lunch.

If you like, the key idea is to start with equations that have infinite-dimensional solutions but no finite-dimensional ones.  The most famous such equation is the position/momentum commutation relation mentioned earlier, which for our purposes is just the following matrix equation:

AB – BA = I.

This equation can’t be satisfied by any finite-dimensional matrices, since AB and BA have the same trace, so Tr(AB-BA)=0, but Tr(I) is nonzero.  But, OK, let A be the infinite-dimensional linear operator that takes as input the coefficients of a polynomial c_0 + c_1x + c_2x^2 + … and that differentiates the polynomial, and let B be the linear operator that multiplies the polynomial by x.  Then I invite you to check that the equation holds.
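If you’d like to see the finite-dimensional obstruction concretely, here’s a quick numpy check (my own toy illustration): truncate A and B to polynomials of degree less than N, and the commutator AB-BA comes out as the identity everywhere except a single corner entry, which is forced to equal 1-N precisely so that the trace can vanish.

```python
import numpy as np

N = 6   # truncate to polynomials of degree < N (any cutoff shows the same effect)

# A = differentiation acting on coefficient vectors (c_0, c_1, ..., c_{N-1});
# B = multiplication by x, with the top-degree term falling off the end.
A = np.diag(np.arange(1.0, N), k=1)
B = np.diag(np.ones(N - 1), k=-1)

C = A @ B - B @ A
print(C)            # identity, except the bottom-right entry is 1 - N
print(np.trace(C))  # 0.0, as the trace of any finite commutator must be
```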

It’s not known at present how to turn the above equation into a two-prover game—I regard it as a fascinating question whether that’s possible.  Rather than an algebraic equation (involving both addition and multiplication), Slofstra instead needs to start with group equations (involving only multiplication)—ones with the strange property that they’re satisfied only by the identity matrix or by infinite matrices.  Equivalently, he needs a group, defined by a finite list of generators and relations, that admits no nontrivial finite-dimensional matrix representations.  Fortunately for him, such groups exist—the first known example being Higman’s group, discovered in 1951.  Higman’s group is generated by four elements, a,b,c,d, which satisfy the equations

a^{-1}ba = b^2,    b^{-1}cb = c^2,    c^{-1}dc = d^2,    d^{-1}ad = a^2.

I don’t have a good intuition for Higman’s group, but if I did, it would come from rereading this post by Terry Tao.  Certainly it has no known “physics interpretation” analogous to that for the position/momentum commutation relation.

Anyway, given such a group, the hard part, the new part, is to give a general way to convert it into the kinds of groups that can be realized as two-prover games.  So that’s what Slofstra does, using 50 pages dense with commutative diagrams, quotient maps, and other Serious Math Stuff—hey, I told you this part of the post would be brief!  For more, see his paper.

Now, once you have this general transformation of groups, you can also use it to show that there’s no algorithm to decide whether a two-prover game has a perfect commuting strategy, by taking the word problem for groups, which is known to be undecidable, and reducing it to that problem.

Anyway, infinite congrats (or the limit of arbitrarily large finite congrats?) to Slofstra for this achievement!  Now it’s off to STOC, which I guess you could also ask me about in the comments if you wanted.


Unrelated Announcement (June 21): Ran Raz asks me to announce a workshop for Avi Wigderson’s 60th birthday, to be held at the Institute for Advanced Study in Princeton October 6-8.  I’ll be speaking there, and I hope to see many of you there as well!

Me interviewed by John Horgan (the author of “The End of Science”)

Thursday, April 21st, 2016

You can read it here.

It’s long (~12,000 words).  Rather than listing what this interview covers, it would be easier to list what it doesn’t cover.  (My favorite soda flavors?)

If you read this blog, much of what I say there will be old hat, but some of it will be new.  I predict that you’ll enjoy the interview iff you enjoy the blog.  Comments welcome.

Quantum. Crypto. Things happen. I blog.

Sunday, March 6th, 2016

1. A bunch of people emailed me to ask about the paper “Realization of a scalable Shor algorithm”: a joint effort by the groups of my MIT colleague Ike Chuang and of Innsbruck’s Rainer Blatt.  The paper has been on the arXiv since July, but last week everyone suddenly noticed it because it appeared in Science.  See also the articles in MIT News and IEEE Spectrum.

Briefly, the new work uses Kitaev’s version of Shor’s factoring algorithm, running on an ion-trap quantum computer with five calcium ions, to prove that, with at least 90% confidence, 15 equals 3×5.  Now, one might object that the “15=3×5 theorem” has by now been demonstrated many times using quantum computing (indeed, Chuang himself led the historic first such demonstration, back in 2001).  Furthermore, if one counts demonstrations not based on quantum computing, some people have claimed even earlier precedents for that theorem.

Nevertheless, as far as I can tell, the new work is a genuine milestone in experimental QC, because it dispenses with most of the precompilation tricks that previous demonstrations of Shor’s algorithm used.  “Precompilation tricks” are a fancier term for “cheating”: i.e., optimizing a quantum circuit in ways that would only make sense if you already assumed that 15 was, indeed, 3×5.  So, what’s new is that a QC has now factored 15 “scalably”: that is, with much less cheating than before.
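For context, here’s the purely classical half of Shor’s algorithm, the part nobody needs to cheat on (a textbook sketch, not anything from the Chuang-Blatt paper): once the quantum order-finding step hands you the order r of a base a modulo N, the factors fall out of two gcd computations.

```python
from math import gcd

# Classical post-processing for Shor: given the (even) order r of a mod N,
# the factors of N come from gcd(a^(r/2) - 1, N) and gcd(a^(r/2) + 1, N).
N, a = 15, 7
r = next(k for k in range(1, N) if pow(a, k, N) == 1)   # r = 4; found classically here,
                                                        # but this is the quantum step
p, q = gcd(pow(a, r // 2) - 1, N), gcd(pow(a, r // 2) + 1, N)
print(p, q)   # 3 5
```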

Of course, as I’m sure the authors would acknowledge, the word “scalable” in their title admits multiple interpretations, rather like the word “possible.”  (It’s possible to buy strawberry Mentos, and it’s also possible to convert the Sun into computronium, but for different senses of “possible.”)  As I wrote in the comments section of my last post:

There are still all the difficulties of integrating a huge number of qubits—which, in ion-trap implementations, would almost certainly mean having many traps that can communicate with each other using gate teleportation—as well as implementing quantum fault-tolerance (meaning: doing 2-qubit gates at the fault-tolerance threshold, moving qubits around to the right places, pumping in fresh qubits, pumping out dirty ones, etc).  Those all remain major engineering problems for the future.

See also this comment by Vaughan Pratt, who remarks: “the MIT press release … would appear to have translated [‘scalable’] to mean that RSA was now approaching its best-by date, although the paper itself makes no such claim.”

In any case, regardless of how long it takes until we can factor enormous numbers like 91, congratulations to the MIT and Innsbruck groups on what’s certainly progress toward scalable ion-trap QC!

2. Other people wrote to ask about a striking recent preprint of Kaplan, Leurent, Leverrier, and Naya-Plasencia, which points out how Simon’s algorithm—i.e., the forerunner of Shor’s algorithm—can be used to break all sorts of practical private-key authentication schemes in quantum polynomial time, assuming the adversary can query the scheme being attacked on a coherent superposition of inputs.  In practice, this assumption is unlikely to hold, unless the adversary gets the actual obfuscated code of the scheme being attacked (in which case it holds).  Also, this is not the first time Simon’s algorithm has been used to attack cryptography; previous work in the same spirit by Kuwakado and Morii showed how to use Simon’s algorithm to break the 3-round Feistel scheme and the Even-Mansour scheme, again if we assume superposition queries.
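To see where the hidden Simon structure comes from in the Even-Mansour case, here’s a toy sketch (4-bit keys and a random permutation invented purely for illustration; the real attacks are in the Kuwakado-Morii and Kaplan et al. papers): with E(x) = P(x⊕k1)⊕k2, the function f(x) = E(x)⊕P(x) is periodic under XOR by k1, which is exactly the period that Simon’s algorithm extracts from superposition queries.

```python
import random

n = 4
P = list(range(2 ** n))
random.shuffle(P)                 # the public permutation P
k1, k2 = 0b1011, 0b0110           # hypothetical secret keys

E = lambda x: P[x ^ k1] ^ k2      # Even-Mansour encryption
f = lambda x: E(x) ^ P[x]         # satisfies f(x XOR k1) == f(x) for every x

assert all(f(x) == f(x ^ k1) for x in range(2 ** n))
print("hidden XOR-period = k1 =", bin(k1))
```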

Even so, Kaplan et al. seem to pretty dramatically expand the range of “practical” cryptosystems that are known to be vulnerable to Simon attacks in the superposed-query model.  I suspect this will force a revision in how we talk about Simon’s algorithm: from “useless, but theoretically important, and historically important because it led to Shor’s algorithm” to “actually maybe not that useless.”  (See here for a previous attempt of mine to give an interesting “explicit” problem that Simon’s algorithm solves in polynomial time, but that’s classically hard.  Alas, my candidate problem turned out to be classically easy.)  This is analogous to the revision that “Einstein-certified randomness” and the RUV theorem recently forced in how we talk about Bell’s inequality: we can no longer tell students that Bell’s work was important because of the conceptual point it proved about local hidden variables, and because of all the other stuff it led to, even though it obviously has no applications in and of itself.  Now it does have applications in and of itself.

To a quantum complexity theorist like me, who doesn’t know nearly as much applied crypto as he should, the real news in the Kaplan et al. paper is not that Simon’s algorithm can break the sorts of systems they study.  Rather, it’s that so many systems that are vulnerable to Simon attack exist and are used in the first place!  Once people understand the problem, I doubt it will be hard to design schemes of similar efficiency that remain quantum-secure even in the superposed-query model (under some plausible assumption, like that an underlying one-way function is quantum-secure).  Indeed, recent work of Boneh and Zhandry, among others, has already taken significant steps in that direction.  So the situation doesn’t seem “as bad” as it was with public-key crypto, where once Shor’s algorithm comes along, the plausibly quantum-secure alternatives that we currently know (like lattice-based crypto and quantum key distribution) are either much less efficient than RSA and Diffie-Hellman, or else require new hardware.  Still, the new observations about Simon’s algorithm show us how the history of quantum computing could have unfolded differently: rather than Simon → Shor → everyone gets excited (because their crypto is now vulnerable), people could’ve gotten cryptographically excited immediately after Simon.

3. Speaking of Diffie-Hellman, belated congratulations to Whitfield Diffie and Martin Hellman for an extremely well-deserved Turing Award!

4. At MIT’s weekly quantum information group meeting, Aram Harrow spoke about his new paper with Ed Farhi, “Quantum Supremacy through the Quantum Approximate Optimization Algorithm.”  Using the same arguments developed around 2010 by me and Alex Arkhipov, and (independently) by Bremner, Jozsa, and Shepherd, this paper shows that, even though the recently-developed QAOA/Quinoa quantum optimization algorithm turns out not to beat the best classical algorithms on the Max E3LIN2 problem (see here and here)—still, whatever that algorithm does do, at least there’s no polynomial-time classical algorithm that samples from the same distribution over outputs, unless the polynomial hierarchy collapses.

In other words: even if the algorithm fails at its original goal, it’s still hard for a classical computer to reproduce its exact pattern of failure!  Hence: Quantum Supremacy.

A secondary goal of Aram and Eddie’s paper is to make the Aaronson-Arkhipov and Bremner et al. arguments more accessible to physicists, by decreasing the amount of “weird complexity theory” invoked.  (I suppose I’ve asked for this—for physicists to de-complexify complexity theory—by telling everyone for years how easy quantum mechanics becomes once you take away the physics!)  I’ll leave it to physicists to judge how well Aram and Eddie succeed at their pedagogical goal, but I’m thrilled by any such effort to communicate across fields.  Aram’s talk would surely have served that same educational purpose, had it not gotten derailed partway through by Donald Trump jokes from the audience.  (My contribution: “Aram, will you disavow support from quantum supremacists?”)


Unrelated Update: Some people might be interested in this brief interview with Michael Cerullo, who read The Ghost in the Quantum Turing Machine and wanted to ask me about “the relevance of quantum mechanics to brain preservation, uploading, and identity.”

Edging in: the biggest science news of 2015

Sunday, January 3rd, 2016

For years, I was forced to endure life with my nose up against the glass of the Annual Edge Question.  What are you optimistic about?  Ooh! ooh! Call on me!  I’m optimistic about someday being able to prove my pessimistic beliefs (like P≠NP).  How is the Internet changing the way you think?  Ooh, ooh! I know! Google and MathOverflow are saving me from having to think at all!  So then why are they only asking Steven Pinker, Freeman Dyson, Richard Dawkins, David Deutsch, some random other people like that?

But all that has changed.  This year, I was invited to participate in Edge for the first time.  So, OK, here’s the question:

What do you consider the most interesting recent [scientific] news?  What makes it important?

My response is here.  I wasn’t in love with the question, because of what I saw as an inherent ambiguity in it: the news that’s most interesting to me, that I have a comparative advantage in talking about, and that people probably want to hear me talk about (e.g., progress in quantum computing), is not necessarily what I’d regard as the most important in any objective sense (e.g., climate change).  So, I decided to write my answer precisely about my internal tension in what I should consider most interesting: should it be the recent progress by John Martinis and others toward building a quantum computer?  Or should it be the melting glaciers, or something else that I’m confident will affect the future of the world?  Or possibly the mainstream attention now being paid to the AI-risk movement?  But if I really want to nerd out, then why not Babai’s graph isomorphism algorithm?  Or if I actually want to be honest about what excited me, then why not the superquadratic separations between classical and quantum query complexities for a total Boolean function, by Ambainis et al. and my student Shalev Ben-David?  On the other hand, how can I justify even caring about such things while the glaciers are melting?

So, yeah, my response tries to meditate on all those things.  My original title was “How nerdy do you want it?,” but John Brockman of Edge had me change it to something blander (“How widely should we draw the circle?”), and made a bunch of other changes from my usual style.  Initially I chafed at having an editor for what basically amounted to a blog post; on the other hand, I’m sure I would’ve gotten in trouble much less often on this blog had I had someone to filter my words for me.

Anyway, of course I wasn’t the only person to write about the climate crisis.  Robert Trivers, Laurence Smith, and Milford Wolpoff all wrote about it as well (Trivers most chillingly and concisely), while Max Tegmark wrote about the mainstreaming of AI risk.  John Naughton even wrote about Babai’s graph isomorphism breakthrough (though he seems unaware that the existing GI algorithms were already extremely fast in practice, and therefore makes misleading claims about the new algorithm’s practical applications).  Unsurprisingly, no one else wrote about breakthroughs in quantum query complexity: you’ll need to go to my essay for that!  A bit more surprisingly, no one besides me wrote about progress in quantum computing at all (if we don’t count the loophole-free Bell test).

Anyway, on reflection, 2015 actually was a pretty awesome year for science, no matter how nerdy you want it or how widely you draw the circle.  Here are other advances that I easily could’ve written about but didn’t:

I’ve now read all (more or less) of this year’s Edge responses.  Even though some of the respondents pushed personal hobbyhorses like I’d feared, I was impressed by how easy it was to discern themes: advances that kept cropping up in one answer after another and that one might therefore guess are actually important (or at least, are currently perceived to be important).

Probably at the top of the list was a new gene-editing technique called CRISPR: Randolph Nesse, Paul Dolan, Eric Topol, Mark Pagel, and Stuart Firestein among others all wrote about this, and about its implications for creating designer humans.

Also widely-discussed was the discovery that most psychology studies fail to replicate (I’d long assumed as much, but apparently this was big news in psychology!): Nicholas Humphrey, Stephen Kosslyn, Jonathan Schooler, Ellen Winner, Judith Rich Harris, and Philip Tetlock all wrote about that.

Then there was the Pluto flyby, which Juan Enriquez, Roger Highfield, and Nicholas Christakis all wrote about.  (As Christakis, Master of Silliman College at Yale, was so recently a victim of a social-justice mob, I found it moving how he simply ignored those baying for his head and turned his attention heavenward in his Edge answer.)

Then there was progress in deep learning, including Google’s Deep Dream (those images of dogs in nebulae that filled your Facebook wall) and DeepMind (the program that taught itself how to play dozens of classic video games).  Steve Omohundro, Andy Clark, Jamshed Bharucha, Kevin Kelly, David Dalrymple, and Alexander Wissner-Gross all wrote about different aspects of this story.

And recent progress in SETI, which Yuri Milner (who’s given $100 million for it) and Mario Livio wrote about.

Unsurprisingly, a bunch of high-energy physicists wrote about high-energy physics at the LHC: how the Higgs boson was found (still news?), how nothing other than the Higgs boson was found (the biggest news?), but how there’s now the slightest hint of a new particle at 750 GeV.  See Lee Smolin, Garrett Lisi, Sean Carroll, and Sarah Demers.

Finally, way out on the Pareto frontier of importance and disgustingness was the recently-discovered therapeutic value of transplanting one person’s poop into another person’s intestines, which Joichi Ito, Pamela Rosenkranz, and Alan Alda all wrote about (it also, predictably, featured in a recent South Park episode).

Without further ado, here are 27 other answers that struck me in one way or another:

  • Steven Pinker on how things are getting better (and we can measure it)
  • Freeman Dyson on the Dragonfly astronomical observatory
  • Jonathan Haidt on how prejudice against people of differing political opinions was discovered to have surpassed racial, gender, and religious prejudice
  • S. Abbas Raza on Piketty’s r>g
  • Rebecca Newberger Goldstein, thoughtful as usual, on the recent study that said it’s too simple to say female participation is lower in STEM fields—rather, female participation is lower in all and only those fields, STEM or non-STEM, whose participants believe (rightly or wrongly) that “genius” is required rather than just conscientious effort
  • Bill Joy on recent advances on reducing CO2 emissions
  • Paul Steinhardt on recent observations saying that, not only were the previous “B-modes from inflation” just galactic dust, but there are no real B-modes to within the current detection limits, and this poses a problem for inflation (I hadn’t heard about this last part)
  • Aubrey de Grey on new antibiotics that are grown in the soil rather than in lab cultures
  • John Tooby on the evolutionary rationale for germline engineering
  • W. Tecumseh Fitch on the coming reality of the “Jurassic Park program” (bringing back extinct species through DNA splicing—though probably not dinosaurs, whose DNA is too degraded)
  • Keith Devlin on the new prospect of using massive datasets (from MOOCs, for example) to actually figure out how students learn
  • Richard Muller on how air pollution in China has become one of the world’s worst problems (imagine every child in Beijing being force-fed two packs of cigarettes per day)
  • Ara Norenzayan on the demographic trends in religious belief
  • James Croak on amazing advances in battery technology (which were news to me)
  • Buddhini Samarasinghe on (among other things) the power of aspirin to possibly prevent cancer
  • Todd Sacktor on a new treatment for Parkinson’s
  • Charles Seife on the imminent availability of data about pretty much everything in our lives
  • Susan Blackmore on “that dress” and what it revealed about the human visual system
  • Brian Keating on experiments that should soon tell us the neutrinos’ masses (again, I hadn’t heard about these)
  • Michael McCullough on something called “reproductive religiosity theory,” which posits that the central purpose of religions is to enforce social norms around mating and reproduction (for what it’s worth, I’d always regarded that as obvious; it’s even expounded in the last chapter of Quantum Computing Since Democritus)
  • Greg Cochran on the origin of Europeans
  • David Buss on the “mating crisis among educated women”
  • Ed Regis on how high-fat diets are better (except, isn’t this the principle behind Atkins, and isn’t this pretty old news by now?)
  • Melanie Swan on blockchain-based cryptography, such as Bitcoin (though it wasn’t entirely clear to me what point Swan was making about it)
  • Paul Davies on LIGO getting ready to detect its first gravitational waves
  • Samuel Arbesman on how weather prediction has gotten steadily better (rendering our culture’s jokes about the perpetually-wrong weatherman outdated, with hardly anyone noticing)
  • Alison Gopnik on how the ubiquity of touchscreen devices like the iPad means that toddlers can now master computers, and this is something genuinely new under the sun (I can testify from personal experience that she’s onto something)

Then there were three answers for which the “progress” being celebrated, seemed to me to be progress racing faster into WrongVille:

  • Frank Tipler on how one can conclude a priori that there must be a Big Crunch to our future (and hence, the arena for Tiplerian theology) in order to prevent the black hole information paradox from arising, all recent cosmological evidence to the contrary be damned.
  • Ross Anderson on an exciting conference whose participants aim to replace quantum mechanics with local realistic theories.  (Anderson, in particular, is totally wrong that you can get Bell inequality violation from “a combination of local action and global correlation,” unless the global correlation goes as far as a ’t Hooft-like superdeterministic conspiracy.)
  • Gordon Kane on how the big news is that the LHC should soon see superparticles.  (This would actually be fine except that Kane omits the crucial context, that he’s been predicting superparticles just around the corner again and again for the past twenty years and they’ve never shown up)

Finally, two responses by old friends that amused me.  The science-fiction writer Rudy Rucker just became aware of the discovery of dark energy back in 1998, and considers that to be exciting scientific news (yes, Rudy, so it was!).  And Michael Vassar—the Kevin Bacon or Paul Erdős of the rationalist world, the guy who everyone’s connected to somehow—writes something about a global breakdown of economic rationality, $20 bills on the sidewalk getting ignored, that I had trouble understanding (though the fault is probably mine).

6.S899 Student Project Showcase!

Tuesday, December 22nd, 2015

As 2015 winds down, I thought I’d continue my tradition of using this blog to showcase some awesome student projects from my graduate class.  (For the previous project showcases from Quantum Complexity Theory, see here, here, and here.  Also see here for the showcase from Philosophy and Theoretical Computer Science.)

This fall, I taught 6.S899, a one-time “Seminar on Physics and Computation” that focused on BosonSampling, complexity and quantum gravity, and universality of physical systems.  There were also lots of guest lectures and student presentations.  Unfortunately, we didn’t do any notes or recordings.

Fortunately, though, the students did do projects, which were literature reviews (some of which ventured into original research), and all nine have agreed to share their project reports here!  So enjoy, thanks so much to the students for making it a great class, and happy holidays.


Update (Dec. 23): Here are two conference announcements that I’ve been asked to make: Innovations in Theoretical Computer Science (ITCS) 2016, January 14-16 in Cambridge MA, and the Fifth Women in Theory Workshop, at the Simons Institute in Berkeley, May 22-25, 2016.

Ask an unbounded question, get an uncomputable answer

Friday, December 11th, 2015

Just when I thought I could relax, as the waters slowly receded from the latest D-Tsunami, my inbox and Facebook feed once again lit up with inquiries—this time, asking me to confirm or deny that “A Paradox at the Heart of Mathematics Makes a Physics Problem Unanswerable.”

Uh-oh!

Luckily for my blood pressure, though, this one turned out to refer to something that more-or-less deserves the hype.  In particular, it’s about a phenomenal 146-page paper by Cubitt, Perez-Garcia, and Wolf, which just appeared this week in Nature (in condensed form, of course).  Incidentally, yeah, his name really is Toby Cubitt, pronounced like “qubit.”  He’s a good guy.

To those in quantum computing, Cubitt et al.’s breakthrough is old news, having already been on the arXiv for almost a year (we’ve also had a talk at MIT about it).  The arXiv has created a funny phenomenon, where you learn something new and cool, assimilate it, move on, and then a year later, everyone is suddenly asking you have you seen this thing, is it for real, etc. etc., just because the thing got some rubber stamp like acceptance to Nature that caused the press to pick it up.  Like, dude, I was into the undecidability of the spectral gap way before it went mainstream.

One more amusing anecdote before we dive into the math.  In his Nature News piece popularizing Cubitt et al.’s result, the writer Davide Castelvecchi quotes Rebecca Goldstein, the brilliant novelist and biographer of Kurt Gödel, as saying: “Turing thought more clearly about the relationship between physics and logic than Gödel did.”  Here’s what happened: Nature News wrote to Rebecca to ask what Gödel’s own thoughts were about the relation between undecidability and physics.  Rebecca passed the request along to me.  So I wrote back to her, arguing that they might just as well ask what Turing thought, since the Cubitt et al. result is “really” about Turing-undecidability (with Gödel-undecidability just an automatic corollary), and at any rate:

I also think that Turing thought more clearly about the relationship between logic and physics than Gödel did (indeed, Gödel himself said that it was only Turing‘s analysis of the notion of computability, in terms of actual physical machines that one could imagine building, that convinced him that computability had been properly defined).

Rebecca passed that back to Nature News, agreeing with it, and then at some point the quote became hers.  Far from being miffed about this, I consider having my forgettable words attributed to a genius like Rebecca to be one of the great honors of my life.  (By pure coincidence, she and I are having lunch next week; hopefully this will butter her up.)

So, OK, let me restate Cubitt et al.’s great theorem in less pop-sciencey terms than Nature News used.  (You could also just read the paper‘s intro, which is exceedingly clear, but what the hell—I’m here to serve.)

Suppose you have a two-dimensional material made of a bunch of stationary particles, each with local Hilbert space dimension d, which are arranged on an L×L square grid (so, there are L^2 particles in all).  And suppose there’s some fixed d^2-dimensional Hamiltonian h, with a local copy h_{i,j}=h acting on each neighboring pair of particles (i,j).  (I.e., the material is translationally invariant, with the same laws of physics acting throughout.)  Let H be the total Hamiltonian: that is, the sum of the h_{i,j}’s over all the neighboring (i,j)’s.

Then a huge fraction of all of physics—quantum field theory, condensed-matter physics, you name it—can be summarized as, you’re trying to figure out the eigenvalues and eigenvectors of H.  The lowest eigenvalue, λ_0, tells you your material’s ground energy, while the higher eigenvalues, λ_1, λ_2, …, tell you the next discrete energy levels that the material can jump up to.  The corresponding eigenvectors tell you which quantum states the material is sitting in when it has these energies: the ground state v_0, and the excited states v_1, v_2, …  Those, in turn, determine basically everything you could want to know about the material: whether it superconducts, etc. etc.

Of course, the eigenvalues and eigenvectors will depend on the lattice size L.  Equally obviously, for any fixed L, you could in principle compute all the eigenvalues and eigenvectors by just diagonalizing some huge-ass matrix.  (That matrix being H.)  But physicists are usually more interested in the limiting behavior as L goes to infinity.  One of their most basic distinctions is: the material is gapped if λ_1 - λ_0, the difference between the first excited energy and the ground energy, converges to some positive value or even grows with L as L→∞.  It’s gapless if λ_1 - λ_0 converges to 0 as L→∞.  (Actually, Cubitt et al. use more technical definitions of both of these concepts, but we’ll ignore that.)
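Just to ground the definitions (a toy numpy sketch of my own, with nothing like Cubitt et al.’s construction in it): here’s the brute-force route for a fixed lattice, a 2×2 grid of qubits (d=2) with a Heisenberg-type h on each nearest-neighbor edge, where you build H as a sum of local terms and just diagonalize it.

```python
import numpy as np
from functools import reduce

I = np.eye(2, dtype=complex)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)

def embed(op1, op2, i, j, n=4):
    """Put op1 on site i and op2 on site j of an n-site system (identity elsewhere)."""
    ops = [I] * n
    ops[i], ops[j] = op1, op2
    return reduce(np.kron, ops)

# h = XX + YY + ZZ on each nearest-neighbor edge of the 2x2 grid (sites 0 1 / 2 3)
edges = [(0, 1), (2, 3), (0, 2), (1, 3)]
H = sum(embed(P, P, i, j) for (i, j) in edges for P in (X, Y, Z))

evals = np.linalg.eigvalsh(H)      # the "huge-ass matrix" is only 16 x 16 here
print("ground energy:", evals[0], "  gap:", evals[1] - evals[0])
```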

Cubitt et al.’s theorem now says the following: for some fixed, constant local dimension d, there is no algorithm that takes as input the local Hamiltonian h (say, as a d^2×d^2 matrix of algebraic numbers), and that decides whether the material is gapped or gapless.  Indeed, you can reduce the halting problem to that problem, in such a way that the material will be gapped if your Turing machine halts, or gapless if it runs forever.

As an immediate corollary, there’s some 2D material—characterized by a translationally-invariant local Hamiltonian h on particles of local dimension d—such that whether the material is gapped or gapless is independent of the axioms of ZF set theory, or whatever else your favorite axioms might be.  (Proof: build a Turing machine M that halts if and only if it finds an inconsistency in set theory, then run Cubitt et al.’s reduction from the halting problem.  By Gödel, if set theory is consistent then it can’t prove whether M halts or not.)

Cubitt et al. never bother to work out the local dimension d that suffices for them, but it could be worked out, and it’s probably at least in the tens of thousands.  Thus, their result leaves open the possibility that there’s an algorithm to decide gaplessness for 2D lattices of qubits (i.e., the special case d=2), or other “reasonably low-dimensional” quantum systems.  We simply don’t know right now.  Another tantalizing open question is whether there’s an algorithm to decide gaplessness for one-dimensional spin chains—again, even in the special case d=2.  Right now, the best we have in that direction is a difficult recent result of Bravyi and Gosset, which gives an algorithm to decide gaplessness for one-dimensional, frustration-free chains of qubits.  (Here “frustration-free,” an amusing term that does not well describe this subject as a whole, means that you can minimize the energy H by minimizing the energies of each h_{i,j} individually.  Or, if you think of H as a SAT instance, it’s satisfiable.)

But while the exact value of d where uncomputability kicks in is still up for grabs, it’s extremely important that d is some fixed, universal constant, independent of the Turing machine.  Indeed, as Cubitt et al. point out in their paper, this is the only feature that makes their new result not a trivial corollary of the uncomputability of Wang tiling.  The latter is a famous result from 1966, which says that there’s no algorithm that takes as input a finite set of tiles, and that tells you whether, using unlimited copies of each tile, you could cover the entire plane (or equivalently, arbitrarily large finite regions of the plane).  I.e., this is yet another “natural” math problem that secretly encodes the halting problem.

The fact that d is fixed also means that, in order to encode larger and larger Turing machines into the local Hamiltonian h (as you must, if you want to embed the halting problem), you need to use more and more bits of precision (!) in the ~d^4 real numbers that define h.  This then raises a question: how do you actually extract a description of a Turing machine from the binary expansions of the real numbers that define your Hamiltonian?  To do this, Cubitt et al. use Kitaev’s phase estimation algorithm—which, interestingly, is the only part of their construction that uses quantum mechanics in any way.  One thing that I’d love to understand better is whether the phase estimation is really essential here, or whether the analogous classical question, with the “Hamiltonian” given by a probability distribution over classical constraints, could also be proved to be undecidable for some fixed value of d—thereby showing that Cubitt et al.’s discovery had nothing to do with quantum mechanics.

(It’s possible that the answer to this is obvious; I didn’t think about it deeply.  Note that if the “classical Hamiltonian” is also deterministic, then the problem must be decidable for every fixed d, since there are only finitely many possible h’s, and we could cache all the answers in a lookup table.)

Anyway, it’s now my professional duty, as the prickly, curmudgeonly blogger I am, to end the post by shooing you away from two tempting misinterpretations of the Cubitt et al. result.

First, the result does not say—or even suggest—that there’s any real, finite physical system whose behavior is Gödel- or Turing-undecidable.  Thus, it gives no support to speculations like Roger Penrose’s, about “hypercomputing” that would exceed the capabilities of Turing machines.  The reason, again, is that as soon as you fix a lattice size L, everything becomes computable.  The Cubitt et al. result applies only to questions about the limiting behavior, as the number of particles goes to infinity.  But we already knew lots of examples of physical systems for which predicting their behavior in some infinite limit is at least as hard as the halting problem: for instance, the Wang tiles discussed earlier, or Post rewrite systems, or even Turing machines themselves.  Local Hamiltonians are a profound, nontrivial addition to that list—one that will be particularly striking to physicists, many of whom calculate the spectral gaps of at least 50 Hamiltonians between dinner and dessert.  But in some sense, there was no a-priori reason why a problem this general, about physical systems of unbounded size, ought to have been computable.

Second, the result does not say that any particular question physicists want an answer to—for example, the million-dollar Yang-Mills mass gap problem—is Gödel-undecidable.  “All it says,” is that the possibility that some real-world question of that kind could be undecidable isn’t totally closed off.  The Nature News piece stresses this latter implication a lot—as, admittedly, do Cubitt et al. themselves.  But to put things in perspective: four logicians proved around 1970 that there’s no algorithm to decide whether an arbitrary polynomial equation has an integer solution, thereby giving a negative solution to Hilbert’s Tenth Problem.  Yet with few exceptions, “working number theorists” barely even noticed this development, nor was (say) Andrew Wiles dissuaded from proving Fermat’s Last Theorem, by the absence of a general algorithm to do things like what he was trying to do.  (Indeed, the absence of a general algorithm was shown even earlier for equations like FLT, which have variables in the exponent.)  So I doubt the mathematical physicists who calculate spectral gaps for a living will be any more terrified than the number theorists were, to learn that they’ve been laboring their entire lives on the shores of the halting problem.  “Good for us, then!” they could rightly reply.  “Maybe our jobs won’t be so easy to automate.”

Update (Dec. 20): My colleague Seth Lloyd calls my attention to a PRL paper of his from 1993, which also discusses the construction of physical systems that are gapped if a given Turing machine halts and gapless if it runs forever.  So this basic idea has been around for a while.  As I explained in the post, the main contribution of the Cubitt et al. paper is just to get undecidability into “the sort of system physicists could plausibly care about” (or for which they could’ve plausibly hoped for an analytic solution): in this case, 2D translationally-invariant nearest-neighbor Hamiltonians with bounded local dimension.

G. Phi. Fo. Fum.

Wednesday, November 4th, 2015

Update (Dec. 14): The long wait is over!  Here’s Laci’s paper on the arXiv.  So far, I’ve read it only deeply enough to note that it contains the following sentence:

A group G ≤ S(Ω) defines the category of G-isomorphisms of strings on the domain Ω; the natural notation for this category, the central object of study in this paper, would seem to be “G-Strings.”

With that, I believe Laci himself has outshone even reddit’s attempt to mine his breakthrough result for juvenile humor.

See also a nice Quanta article about Laci’s algorithm by Erica Klarreich.  There’s only one statement in the article that I disagree with: namely that, if graph isomorphism were inherently quasipolynomial time, then it would be the first natural example of such a problem.  We know other natural problems, like approximating free games and socially-optimal Nash equilibria, that are solvable in n^{O(log n)} time but that can’t be in P unless 3SAT is solvable in ~exp(√n) time.

Update (Nov. 17): Video of Laci’s first talk is now available.

Breaking News (Nov. 12): Jeremy Kun has written up a phenomenal summary of Babai’s first lecture.  I haven’t carefully studied all of it, and in any case, there are many missing details to be filled in later (Babai told Kun that the preprint will be available “soon, soon!”).  But from the summary, four points stood out to me:

  1. Babai actually claims a quasipolynomial-time algorithm for an interestingly more general problem than graph isomorphism, called string isomorphism.  This was already in the abstract, but googling didn’t reveal what string isomorphism was.  So, OK, here’s what it is: you’re given two strings x and y over some finite alphabet, as well as the generators of a group G of permutations of the string indices.  The problem is to determine whether you can transform x to y by applying a permutation in G.  Or even more generally: given a string x, find a full set of generators for the subgroup of G that fixes x.  See Kun’s post for the straightforward reductions from GI to these group-theoretic problems.  (For a brute-force toy illustration of the decision version, see the sketch right after this list.)
  2. As was hinted in the abstract, in Babai’s analysis of his algorithm, there’s one step that relies on a statement whose only known proof depends on the Classification of Finite Simple Groups.  (Thus, it’s not that the algorithm itself requires iterating through all the sporadic simple groups or anything like that; this only shows up in the correctness proof.)  This is not the first-ever computer-science application of the Classification of Finite Simple Groups (indeed, Babai himself has some previous ones), but it’s certainly the most dramatic.
  3. In previous work on GI, the Johnson graph emerged over and over as a forehead-bangingly hard case that caused numerous algorithms to fail.  In the new work, it looks like Babai’s central technical innovation is to show that, in some sense, the Johnson graph is the only obstruction to taking the divide-and-conquer approaches that people had tried before, and making them run in quasipolynomial time.  I.e., in each step of the recursion, either you can find a Johnson graph on a large fraction of the vertices and handle it specially, or else you can do something that works whenever there’s not a Johnson graph on a large fraction of the vertices.  Babai calls this “split-or-Johnson.”
  4. Babai stressed that in some sense, his new algorithm is the culmination of a program laid out by Eugene Luks in 1982.  Now, the Classification of Finite Simple Groups was also more-or-less completed in the early 1980s.  To my mind, this raises a fascinating socio-mathematical question: which aspects of the new work, if any, could not have been done in the early 80s, possibly by Babai or Luks themselves?  What is it that needed another 30 years?  If the answer turns out to be “nothing,” then to me that’s an astounding illustration of the role of the individual in mathematical progress.  As in: Laci was nice enough to take a third-of-a-century break between his and Luks’ work in the early 80s, and the “natural next step” in their program … and still no one managed to use that break to beat him to the next step!
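Here’s the toy illustration of the decision version of string isomorphism promised in point 1 (a brute-force sketch of mine that expands the whole group, so it only works for tiny examples; Babai’s achievement is precisely doing this without brute force):

```python
def generate_group(gens, n):
    """Close {identity} plus the generators under composition (tiny groups only!)."""
    group = {tuple(range(n))} | set(gens)
    frontier = list(group)
    while frontier:
        g = frontier.pop()
        for h in gens:
            gh = tuple(g[h[i]] for i in range(n))   # the permutation "apply h, then g"
            if gh not in group:
                group.add(gh)
                frontier.append(gh)
    return group

def string_isomorphic(x, y, gens):
    """Is there a g in the group generated by gens with y[g[i]] == x[i] for all i?"""
    G = generate_group(gens, len(x))
    return any(all(y[g[i]] == x[i] for i in range(len(x))) for g in G)

# G = cyclic shifts of 4 indices: "abab" and "baba" are G-isomorphic, "abab" and "aabb" aren't.
shift = (1, 2, 3, 0)
print(string_isomorphic("abab", "baba", [shift]))   # True
print(string_isomorphic("abab", "aabb", [shift]))   # False
```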

Earlier today, I was tipped off to what might be the theoretical computer science result of the decade.  My source asked me not to break the news on this blog—but since other theory bloggers (and twitterers) are now covering the story, I guess the graph is out of the Babai.

According to the University of Chicago’s theory seminar calendar, on Tuesday of next week (November 10), the legendary Laszlo Babai will be giving a talk about a new algorithm that solves the graph isomorphism problem in quasipolynomial time.  The previous fastest algorithm to decide whether two n-vertex graphs G and H are isomorphic—by Babai and Luks, back in 1983—ran in exp(√(n log n)) time.  If we credit the announcement, Babai has now gotten that down to exp(polylog(n)), putting one of the central problems of computer science “just barely above P.”  (For years, I’ve answered questions on this blog about the status of graph isomorphism—would I bet that it’s in BQP? in coNP? etc.—by saying that, as far as I and many others are concerned, it might as well just be in P.  Of course I’m happy to reaffirm that conjecture tonight.)

Next week, I assume, Laci will lecture to a packed house; then the experts will race to unpack the details.  Until then, we probably need to sit tight; I don’t know any more than what’s in the abstract.  For now, I’m delighted if commenters want to share general thoughts or questions about graph isomorphism (and I’ll try to answer what I can), but I won’t allow uninformed speculations or rumors about the details of the new result—not until Laci has had a chance to speak.


Update (Nov. 5): While we all wait with bated breath for more details, you can amuse yourself with the talk I gave at Laci’s 60th birthday conference five years ago.

Also, a comment of mine that I should probably promote to the main post:

Dana points out to me that non-native English speakers might not get the staggeringly clever pun in this post’s title (hey, it was the best I could do on a deadline).

So, alright, fee fi fo fum is what the approaching giant bellows in Jack and the Beanstalk. It means something big is on the horizon. Also, G is a graph, and Phi is an isomorphism.


Update (Nov. 12): So, Laci gave his talk. Video was made but does not appear to be available yet. However, Gabriel Gaster, who was in attendance, graciously live-tweeted everything. Here’s a Storify of the live-tweets. (What’s a “Storify”?)

A breakthrough on QMA(2)?

Friday, October 30th, 2015

Last night, Martin Schwarz posted a preprint to the arXiv that claims to show the new complexity class containment QMA(2) ⊆ EXP.  (See also his brief blog post about this result.)  Here QMA(2) means Quantum Merlin-Arthur with two Merlins—i.e., the set of languages for which a “yes” answer can be witnessed by two unentangled quantum states, |ψ〉⊗|φ〉, on polynomially many qubits each, which are checked by a polynomial-time quantum algorithm—while EXP means deterministic exponential time.  Previously, the best upper bound we had was the trivial QMA(2) ⊆ NEXP (Nondeterministic Exponential Time), which comes from guessing exponential-size classical descriptions of the two quantum proofs.

Whether QMA(2) is contained in EXP is a problem that had fascinated me for a decade.  With Salman Beigi, Andy Drucker, Bill Fefferman, and Peter Shor, we discussed this problem in our 2008 paper The Power of Unentanglement.  That paper (with an additional ingredient supplied by Harrow and Montanaro) shows how to prove that a 3SAT instance of size n is satisfiable, using two unentangled quantum proofs with only Õ(√n) qubits each.  This implies that searching over all n-qubit unentangled proofs must take at least exp(n^2) time, unless 3SAT is solvable in 2^{o(n)} time (i.e., unless the Exponential Time Hypothesis is false).  However, since EXP is defined as the set of problems solvable in 2^{p(n)} time, for any polynomial p, this is no barrier to QMA(2) ⊆ EXP being true—it merely constrains the possible techniques that could prove such a containment.

In trying to prove QMA(2) ⊆ EXP, the fundamental difficulty is that you need to optimize over unentangled quantum states only.  That might sound easier than optimizing over all states (including the entangled ones), but ironically, it’s harder!  The reason why it’s harder is that optimizing over all quantum states (say, to find the one that’s accepted by some measurement with the maximum probability) is a convex optimization problem: in fact, it boils down to finding the principal eigenvector of a Hermitian matrix.  By contrast, optimizing over only the separable states is a non-convex optimization problem, which is NP-hard to solve exactly (treating the dimension of the Hilbert space as the input size n)—meaning that the question shifts to what sorts of approximations are possible.
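To see that contrast in action, here’s a toy numpy sketch (mine, and emphatically not Martin’s algorithm): the optimum over all unit vectors is a single call to an eigensolver, while over product vectors u⊗w the natural move is an alternating “seesaw” heuristic, which climbs to a local optimum but comes with no global guarantee; that missing guarantee is exactly where the hardness lives.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 4
M = rng.normal(size=(n * n, n * n)) + 1j * rng.normal(size=(n * n, n * n))
A = (M + M.conj().T) / 2                     # a random Hermitian stand-in for the measurement
A4 = A.reshape(n, n, n, n)                   # A4[i, k, j, l] = A[i*n + k, j*n + l]

best_any = np.linalg.eigvalsh(A)[-1]         # optimum over ALL unit vectors: top eigenvalue

u = np.ones(n) / np.sqrt(n)                  # seesaw over product vectors u (x) w
w = np.ones(n) / np.sqrt(n)
for _ in range(50):
    Bw = np.einsum('k,ikjl,l->ij', w.conj(), A4, w)   # best u for this w: top eigenvector of Bw
    u = np.linalg.eigh(Bw)[1][:, -1]
    Cu = np.einsum('i,ikjl,j->kl', u.conj(), A4, u)   # best w for this u: top eigenvector of Cu
    w = np.linalg.eigh(Cu)[1][:, -1]

uw = np.kron(u, w)
print(best_any, (uw.conj() @ A @ uw).real)   # separable value <= value over all states
```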

Last week, I had the pleasure of speaking with Martin in person, when I visited Vienna, Austria to give a public lecture at the wonderful new research institute IST.  Martin was then ironing out some final wrinkles in his proof, and I got to watch him in action—in particular, to see the care and detachment with which he examined the possibility that his proof might imply too much (e.g., that NP-complete problems are solvable in quasipolynomial time).  Fortunately, his proof turned out not to imply anything of the kind.

The reason why it didn’t is directly related to the most striking feature of Martin’s proof—namely, that it’s non-relativizing, leaving completely open the question of whether QMA(2)^A ⊆ EXP^A relative to all oracles A.  To explain how this is possible requires saying a bit about how the proof works.  The obvious way to prove QMA(2) ⊆ EXP—what I had assumed from the beginning was the only realistic way—would be to give a quasipolynomial-time approximation algorithm for the so-called Best Separable State or BSS problem.  The BSS problem, as defined in this paper by Russell Impagliazzo, Dana Moshkovitz, and myself (see also this one by Barak et al.), is as follows: you’re given as input an n^2×n^2 Hermitian matrix A, with all its eigenvalues in [0,1].  Your goal is to find length-n unit vectors, u and w, that maximize

(u^T⊗w^T)A(u⊗w),

to within an additive error of ±ε, for some constant ε.

Of course, if we just asked for a length-n^2 unit vector v that maximized v^TAv, we’d be asking for the principal eigenvector of A, which is easy to find in polynomial time.  By contrast, from the ABDFS and Harrow-Montanaro results, it follows that the BSS problem, for constant ε, cannot be solved in poly(n) time, unless 3SAT is solvable in 2^{o(n)} time.  But this still leaves the possibility that BSS is solvable in n^{log(n)} time—and that possibility would immediately imply QMA(2) ⊆ EXP.  So, as I and others saw it, the real challenge here was to find a quasipolynomial-time approximation algorithm for BSS—something that remained elusive, although Brandao-Christandl-Yard made partial progress towards it.

But now Martin comes along, and proves QMA(2) ⊆ EXP in a way that sidesteps the BSS problem.  The way he does it is by using the fact that, if a problem is in QMA(2), then we don’t merely know a Hermitian operator A corresponding to the measurement of |ψ〉⊗|φ〉: rather, we know an actual polynomial-size sequence of quantum gates that get multiplied together to produce A.  Using that fact, Chailloux and Sattath showed that a natural variant of the QMA-complete Local Hamiltonians problem, which they call Separable Sparse Hamiltonians, is complete for QMA(2).  Thus, it suffices for Martin to show how to solve the Separable Sparse Hamiltonians problem in singly-exponential time.  This he does by using perturbation theory gadgets to reduce Separable Sparse Hamiltonians to Separable Local Hamiltonians with an exponentially-small promise gap, and then using a result of Shi and Wu to solve the latter problem in singly-exponential time.  All in all, given the significance of the advance, Martin’s paper is remarkably short; a surprising amount of it boils down to deeply understanding some not-especially-well-known results that were already in the literature.

One obvious problem left open is whether the full BSS problem—rather than just the special case of it corresponding to QMA(2)—is solvable in quasipolynomial time after all.  A second obvious problem is whether the containment QMA(2) ⊆ EXP can be improved to QMA(2) ⊆ PSPACE, or even (say) QMA(2) ⊆ PP.  (By comparison, note that QMA ⊆ PP, by a result of Kitaev and Watrous.)


Update (Nov. 10): I thought I should let people know that a serious concern has been raised by an expert about the correctness of the proof—and in particular, about the use of perturbation theory gadgets. Martin tells me that he’s working on a fix, and I very much hope he’ll succeed, but not much to do for now except let the scientific process trundle along (which doesn’t happen at blog-speed).

6-photon BosonSampling

Wednesday, August 19th, 2015

The news is more-or-less what the title says!

In Science, a group led by Anthony Laing at Bristol has now reported BosonSampling with 6 photons, beating their own previous record of 5 photons, as well as the earlier record of 4 photons achieved a few years ago by the Walmsley group at Oxford (as well as the 3-photon experiments done by groups around the world).  I only learned the big news from a commenter on this blog, after the paper was already published (protip: if you’ve pushed forward the BosonSampling frontier, feel free to shoot me an email about it).

As several people explain in the comments, the main advance in the paper is arguably not increasing the number of photons, but rather the fact that the device is completely reconfigurable: you can try hundreds of different unitary transformations with the same chip.  In addition, the 3-photon results have an unprecedentedly high fidelity (about 95%).

The 6-photon results are, of course, consistent with quantum mechanics: the transition amplitudes are indeed given by permanents of 6×6 complex matrices.  Key sentence:

After collecting 15 sixfold coincidence events, a confidence of P = 0.998 was determined that these are drawn from a quantum (not classical) distribution.

No one said scaling BosonSampling would be easy: I’m guessing that it took weeks of data-gathering to get those 15 coincidence events.  Scaling up further will probably require improvements to the sources.

There’s also a caveat: their initial state consisted of 2 modes with 3 photons each, as opposed to what we really want, which is 6 modes with 1 photon each.  (Likewise, in the Walmsley group’s 4-photon experiments, the initial state consisted of 2 modes with 2 photons each.)  If the number of modes stayed 2 forever, then the output distributions would remain easy to sample with a classical computer no matter how many photons we had, since we’d then get permanents of matrices with only 2 distinct rows.  So “scaling up” needs to mean increasing not only the number of photons, but also the number of sources.
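For a sense of scale, here’s the brute-force permanent a classical computer faces at n=6 (my own throwaway snippet): 6! = 720 terms, which a laptop chews through instantly; the whole point of BosonSampling is that the cost of this calculation explodes like n! (or ~2^n with Ryser’s formula) as the photon number grows.

```python
from itertools import permutations
import numpy as np

def permanent(M):
    """Naive n!-term permanent; Ryser's formula gets this down to ~2^n, still exponential."""
    n = M.shape[0]
    total = 0
    for p in permutations(range(n)):
        term = 1
        for i in range(n):
            term *= M[i, p[i]]
        total += term
    return total

rng = np.random.default_rng(0)
U = rng.normal(size=(6, 6)) + 1j * rng.normal(size=(6, 6))   # a stand-in 6x6 submatrix
print(permanent(U))   # 720 terms: nothing for a laptop
```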

Nevertheless, this is an obvious step forward, and it came sooner than I expected.  Huge congratulations to the authors on their accomplishment!

But you might ask: given that 6×6 permanents are still pretty easy for a classical computer (the more so when the matrices have only 2 distinct rows), why should anyone care?  Well, the new result has major implications for what I’ve always regarded as the central goal of quantum computing research, much more important than breaking RSA or Grover search or even quantum simulation: namely, getting Gil Kalai to admit he was wrong.  Gil is on record, repeatedly, on this blog as well as his own (see for example here), as saying that he doesn’t think BosonSampling will ever be possible even with 7 or 8 photons.  I don’t know whether the 6-photon result is giving him second thoughts (or sixth thoughts?) about that prediction.

Introducing some British people to P vs. NP

Wednesday, July 22nd, 2015

Here’s a 5-minute interview that I did with The Naked Scientists (a radio show syndicated by the BBC, and no, I’m not sure why it’s called that), explaining the P vs. NP problem.  For readers of this blog, there won’t be anything new here, but, well … you might enjoy the rest of the hour-long programme [sic], which also includes segments about a few other Clay Millennium problems (the Riemann Hypothesis, Navier-Stokes, and Poincaré), as well as a segment about fat fish that live in caves and gorge themselves on food on the rare occasions when it becomes available, and which turn out to share a gene variant with humans with similar tendencies.