## Scattershot BosonSampling: A new approach to scalable BosonSampling experiments

**Update (12/2)**: Jeremy Hsu has written a fantastic piece for *IEEE Spectrum*, entitled “D-Wave’s Year of Computing Dangerously.”

**Update (11/13)**: See here for video of a fantastic talk that Matthias Troyer gave at Stanford, entitled “Quantum annealing and the D-Wave devices.” The talk includes the results of experiments on the 512-qubit machine. (Thanks to commenter jim for the pointer. I attended the talk when Matthias gave it last week at Harvard, but I don’t think that one was videotaped.)

**Update (11/11)**: A commenter named RaulGPS has offered yet another great observation that, while forehead-slappingly obvious in retrospect, somehow hadn’t occurred to us. Namely, Raul points out that the argument given in this post, for the hardness of Scattershot BosonSampling, can also be applied to answer open question #4 from my and Alex’s paper: namely, how hard is BosonSampling with Gaussian inputs and number-resolving detectors? Raul points out that the latter, in general, is certainly *at least* as hard as Scattershot BS. For we can embed Scattershot BS into “ordinary” BS with Gaussian inputs, by first generating a bunch of entangled 2-mode Gaussian states (which are highly attenuated, so that with high probability *none* of them have 2 or more photons per mode), and then applying a Haar-random unitary U to the “right halves” of these Gaussian states while doing nothing to the left halves. Then we can measure the left halves to find out which of the input states contained a photon *before* we applied U. This is precisely equivalent to Scattershot BS, except for the unimportant detail that our measurement of the “herald” photons has been deferred till the end of the experiment instead of happening at the beginning. And therefore, since (as I explain in the post) a fast classical algorithm for approximate Scattershot BosonSampling would let us estimate the permanents of i.i.d. Gaussian matrices in BPP^{NP}, we deduce that a fast classical algorithm for approximate *Gaussian* BosonSampling would have the same consequence. In short, approximate Gaussian BS can be argued to be hard under precisely the same complexity assumption as can approximate *ordinary* BS (and approximate Scattershot BS). Thus, in the table in Section 1.4 of our paper, the entries “Gaussian states / Adaptive, demolition” and “Gaussian states / Adaptive, nondemolition” should be “upgraded” from “Exact sampling hard” to “Apx. sampling hard?”

One other announcement: following a suggestion by commenter Rahul, I hereby invite guest posts on *Shtetl-Optimized* by experimentalists working on BosonSampling, offering your personal views about the prospects and difficulties of scaling up. Send me email if you’re interested. (Or if you don’t feel like writing a full post, of course you can also just leave a comment on this one.)

*[Those impatient for a cool, obvious-in-retrospect new idea about BosonSampling, which I learned from the quantum optics group at Oxford, should scroll to the end of this post. Those who don’t even know what BosonSampling is, let alone Scattershot BosonSampling, should start at the beginning.]*

BosonSampling is a proposal by me and Alex Arkhipov for a rudimentary kind of quantum computer: one that would be based entirely on generating single photons, sending them through a network of beamsplitters and phaseshifters, and then measuring where they ended up. BosonSampling devices are not thought to be capable of universal quantum computing, or even universal *classical* computing for that matter. And while they might be a stepping-stone toward universal optical quantum computers, they themselves have a grand total of zero known practical applications. However, even if the task performed by BosonSamplers is **useless**, the task is of some scientific interest, by virtue of apparently being **hard!** In particular, Alex and I showed that, if a BosonSampler can be simulated exactly in polynomial time by a classical computer, then P^{#P}=BPP^{NP}, and hence the polynomial hierarchy collapses to the third level. Even if a BosonSampler can only be *approximately* simulated in classical polynomial time, the polynomial hierarchy would still collapse, if a reasonable-looking conjecture in classical complexity theory is true. For these reasons, BosonSampling *might* provide an experimental path to testing the Extended Church-Turing Thesis—i.e., the thesis that all natural processes can be simulated with polynomial overhead by a classical computer—that’s more “direct” than building a universal quantum computer. (As an asymptotic claim, *obviously* the ECT can never be decisively proved or refuted by a finite number of experiments. However, if one could build a BosonSampler with, let’s say, 30 photons, then while it would still be feasible to verify the results with a classical computer, it would be fair to say that the BosonSampler was working “faster” than any known algorithm running on existing digital computers.)

In arguing for the hardness of BosonSampling, the crucial fact Alex and I exploited is that the amplitudes for n-photon processes are given by the *permanents* of nxn matrices of complex numbers, and Leslie Valiant proved in 1979 that the permanent is #P-complete (i.e., as hard as any combinatorial counting problem, and probably even “harder” than NP-complete). To clarify, this doesn’t mean that a BosonSampler lets you *calculate* the permanent of a given matrix—that would be too good to be true! (See the tagline of this blog.) What you could do with a BosonSampler is weirder: you could sample from a probability distribution over matrices, in which matrices with large permanents are more likely to show up than matrices with small permanents. So, what Alex and I had to do was to argue that even that sampling task is *still* probably intractable classically—in the sense that, if it weren’t, then there would also be unlikely classical algorithms for more “conventional” problems.
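To make the central quantity concrete, here is a minimal Python sketch (illustrative, not from the paper) of Ryser's 1963 inclusion-exclusion formula, which computes the permanent exactly in O(2^n·n^2) time. Up to small improvements, that exponential scaling is the best known, which is why verifying a ~30-photon BosonSampler is feasible but already nontrivial classically:

```python
from itertools import combinations

def permanent(A):
    """Exact permanent via Ryser's inclusion-exclusion formula, O(2^n * n^2).

    perm(A) = (-1)^n * sum over nonempty column subsets S of
              (-1)^{|S|} * prod_i sum_{j in S} A[i][j]
    """
    n = len(A)
    total = 0.0
    for r in range(1, n + 1):
        for cols in combinations(range(n), r):
            prod = 1.0
            for row in A:
                prod *= sum(row[j] for j in cols)
            total += (-1) ** r * prod
    return (-1) ** n * total

# Unlike the determinant, no polynomial-time algorithm is known:
# Valiant (1979) showed the permanent is #P-complete.
print(permanent([[1, 2], [3, 4]]))   # 1*4 + 2*3 = 10
```

The same code works unchanged for complex matrices, which is the case relevant to n-photon amplitudes.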

Anyway, that’s my attempt at a 2-paragraph summary of something we’ve been thinking about on and off for four years. See here for my and Alex’s original paper on BosonSampling, here for a recent followup paper, here for PowerPoint slides, here and here for MIT News articles by Larry Hardesty, and here for my blog post about the first (very small, 3- or 4-photon) demonstrations of BosonSampling by quantum optics groups last year, with links to the four experimental papers that came out then.

In general, we’ve been thrilled by the enthusiastic reaction to BosonSampling by quantum optics people—especially given that the idea started out as pure complexity theory, with the connection to optics coming as an “unexpected bonus.” But not surprisingly, BosonSampling has also come in for its share of criticism: e.g., that it’s impractical, unscalable, trivial, useless, oversold, impossible to verify, and probably some other things. A few people have even claimed that, in expressing support and cautious optimism about the recent BosonSampling experiments, I’m guilty of the same sort of quantum computing hype that I complain about in others. (I’ll let you be the judge of that. Reread the paragraphs above, or anything else I’ve ever written about this topic, and then compare to, let’s say, this video.)

By far the most important criticism of BosonSampling—one that Alex and I have openly acknowledged and worried a lot about almost from the beginning—concerns the proposal’s *scalability*. The basic problem is this: in BosonSampling, your goal is to measure a pattern of quantum interference among n identical, non-interacting photons, where n is as large as possible. (The special case n=2 is called the Hong-Ou-Mandel dip; conversely, BosonSampling can be seen as just “Hong-Ou-Mandel on steroids.”) The bigger n gets, the harder the experiment ought to be to simulate using a classical computer (with the difficulty increasing at least like ~2^{n}). The trouble is that, to detect interference among n photons, the various quantum-mechanical paths that your photons could take, from the sources, through the beamsplitter network, and finally to the detectors, have to get them there at *exactly the same time*—or at any rate, close enough to “the same time” that the wavepackets overlap. Yet, while that ought to be possible in theory, the photon sources that actually exist today, and that *will* exist for the foreseeable future, just don’t seem good enough to make it happen, for anything more than a few photons.

The reason—well-known for decades as a bane to quantum information experiments—is that there’s no known process in nature that can serve as a *deterministic single-photon source*. What you get from an attenuated laser is what’s called a coherent state: a particular kind of superposition of 0 photons, 1 photon, 2 photons, 3 photons, etc., rather than just 1 photon with certainty (the latter is called a Fock state). Alas, coherent states behave essentially like classical light, which makes them pretty much useless for BosonSampling, and for many other quantum information tasks besides. For that reason, a large fraction of modern quantum optics research relies on a process called Spontaneous Parametric Down-Conversion (SPDC). In SPDC, a laser (called the “pump”) is used to stimulate a crystal to produce further photons. The process is inefficient: most of the time, no photon comes out. But crucially, any time a photon *does* come out, its arrival is “heralded” by a partner photon flying out in the opposite direction. Once in a while, 2 photons come out simultaneously, in which case they’re heralded by 2 partner photons—and even more rarely, 3 photons come out, heralded by 3 partner photons, and so on. Furthermore, there exists something called a *number-resolving detector*, which can tell you (today, sometimes, with as good as ~95% reliability) when one or more partner photons have arrived, and how many of them there are. The result is that SPDC lets us build what’s called a *nondeterministic single-photon source*. I.e., you can’t control exactly when a photon comes out—that’s random—but eventually one (and only one) photon *will* come out, and when that happens, you’ll *know* it happened, without even having to measure and destroy the precious photon. The reason you’ll know is that the partner photon heralds its presence.

Alas, while SPDC sources have enabled demonstrations of a large number of cool quantum effects, there’s a fundamental problem with using them for BosonSampling. The problem comes from the requirement that n—the number of single photons fired off simultaneously into your beamsplitter network—should be *big* (say, 20 or 30). Suppose that, in a given instant, the probability that your SPDC source succeeds in generating a photon is p. Then what’s the probability that *two* SPDC sources will *both* succeed in generating a photon at that instant? p^{2}. And the probability that three sources will succeed is p^{3}, etc. In general, with n sources, the probability that they’ll succeed simultaneously falls off exponentially with n, and the amount of time you’ll need to sit in the lab waiting for the lucky event *increases* exponentially with n. Sure, when it finally *does* happen, it will be “heralded.” But if you need to wait exponential time for it to happen, then there would seem to be no advantage over classical computation. This is the reason why so far, BosonSampling has only been demonstrated with 3-4 photons.
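The arithmetic above is worth seeing in actual numbers. A quick sketch, using a hypothetical per-trial success probability of p = 0.1 (generous for an SPDC source):

```python
p = 0.1  # hypothetical probability that one SPDC source fires in a given trial

for n in (2, 5, 10, 20, 30):
    p_all = p ** n  # probability that all n independent sources fire at once
    print(f"n = {n:2d}: P(simultaneous) = {p_all:.0e}, "
          f"expected trials to wait = {1 / p_all:.0e}")
```

Even at a million trials per second, waiting for n = 30 simultaneous successes (~10^30 trials) would take on the order of 10^16 years, which is why heralding alone doesn't rescue ordinary BosonSampling.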

At least three solutions to the scaling problem suggest themselves, but each one has problems of its own. The first solution is simply to use general methods for quantum fault-tolerance: it’s not hard to see that, if you had a fault-tolerant universal quantum computer, then you could simulate BosonSampling with as many photons as you wanted. The trouble is that this requires a fault-tolerant universal quantum computer! And if you had that, then you’d probably just skip BosonSampling and use Shor’s algorithm to factor some 10,000-digit numbers. The second solution is to invent some specialized fault-tolerance method that would apply directly to quantum optics. Unfortunately, we don’t know how to do that. The third solution—until recently, the one that interested me and Alex the most—would be to argue that, even if your sources are so cruddy that you have no idea which ones generated a photon and which didn’t in any particular run, the BosonSampling distribution is *still* intractable to simulate classically. After all, the great advantage of BosonSampling is that, unlike with (say) factoring or quantum simulation, we don’t actually care which problem we’re solving! All we care about is that we’re doing *something* that we can argue is hard for classical computers. And we have enormous leeway to change what that “something” is, to match the capabilities of current technology. Alas, yet again, we *don’t* know how to argue that BosonSampling is hard to simulate approximately in the presence of realistic amounts of noise—at best, we can argue that it’s hard to simulate approximately in the presence of *tiny* amounts of noise, and hard to simulate *super*-accurately in the presence of realistic noise.

When faced with these problems, until recently, all we could do was

- shrug our shoulders,
- point out that none of the difficulties added up to a principled argument that scalable BosonSampling was *not* possible,
- stress, again, that all we were asking for was to scale to 20 or 30 photons, not 100 or 1000 photons, and
- express hope that technologies for single-photon generation currently on the drawing board—most notably, something called “optical multiplexing”—could be used to get up to the 20 or 30 photons we wanted.

Well, I’m pleased to announce, with this post, that there’s now a better idea for how to scale BosonSampling to interesting numbers of photons. The idea, which I’ve taken to calling **Scattershot BosonSampling**, is not mine or Alex’s. I learned of it from Ian Walmsley’s group at Oxford, where it’s been championed in particular by Steve Kolthammer. *(Update: A commenter has pointed me to a preprint by Lund, Rahimi-Keshari, and Ralph from May of this year, which I hadn’t seen before, and which contains substantially the same idea, albeit with an unsatisfactory argument for computational hardness. In any case, as you’ll see, it’s not surprising that this idea would’ve occurred to multiple groups of experimentalists independently; what’s surprising is that we didn’t think of it!)* The minute I heard about Scattershot BS, I kicked myself for failing to think of it, and for getting sidetracked by much more complicated ideas. Steve and others are working on a paper about Scattershot BS, but in the meantime, Steve has generously given me permission to share the idea on this blog. I suggested a blog post for two reasons: first, as you’ll see, this idea really is “blog-sized.” Once you make the observation, there’s barely any theoretical analysis that needs to be done! And second, I was impatient to get the word out to the “experimental BosonSampling community”—not to mention to the critics!—that there’s now a better way to BosonSample, and one that’s incredibly simple to boot.

OK, so what *is* the idea? Well, recall from above what an SPDC source does: it produces a photon with only a small probability, but whenever it does, it “heralds” the event with a second photon. So, let’s imagine that you have an array of 200 SPDC sources. And imagine that, these sources being unpredictable, only (say) 10 of them, on average, produce a photon at any given time. Then what can you do? Simple: just *define* those 10 sources to be the inputs to your experiment! Or to say it more carefully: instead of sampling only from a probability distribution over *output* configurations of your n photons, now you’ll sample from a joint distribution over inputs *and* outputs: one where the input is uniformly random, and the output depends on the input (and also, of course, on the beamsplitter network). So, this idea could also be called “Double BosonSampling”: now, not only do you not control which output will be observed (but only the probability distribution over outputs), you don’t control which input either—yet this lack of control is not a problem! There are two key reasons why it isn’t:

- As I said before, SPDC sources have the crucial property that they *herald* a photon when they produce one. So, even though you can’t control which 10 or so of your 200 SPDC sources will produce a photon in any given run, you *know* which 10 they were.
- In my and Alex’s original paper, the “hardest” case of BosonSampling that we were able to find—the case we used for our hardness reductions—is simply the one where the mxn “scattering matrix,” which describes the map between the n input modes and the m>>n output modes, is a Haar-random matrix whose columns are orthonormal vectors. But now suppose we have m input modes *and* m output modes, and the mxm unitary matrix U mapping inputs to outputs is Haar-random. Then any mxn submatrix of U will simply be an instance of the “original” hard case that Alex and I studied!
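A single Scattershot run can be sketched in a few lines of Python (the numbers m = 200 and p = 0.05 are illustrative, not taken from any experiment). The point to notice is that the heralds pick out a random subset of columns of a fixed Haar-random unitary, which is exactly the submatrix structure the hardness argument needs:

```python
import numpy as np

rng = np.random.default_rng(0)
m, p = 200, 0.05   # 200 SPDC sources, each firing with probability ~0.05

# The heralds reveal exactly which sources produced a photon this run.
fired = np.flatnonzero(rng.random(m) < p)
n = len(fired)     # ~10 photons on average

# Haar-random m x m unitary: QR-decompose a complex Gaussian matrix, then
# fix the phases so the distribution is exactly Haar (Mezzadri's recipe).
z = (rng.standard_normal((m, m)) + 1j * rng.standard_normal((m, m))) / np.sqrt(2)
q, r = np.linalg.qr(z)
U = q * (np.diagonal(r) / np.abs(np.diagonal(r)))

# The heralded columns of U form the m x n scattering matrix: a fresh
# instance of the original column-orthonormal hard case on every run.
A = U[:, fired]
```

Each run samples a new `fired`, so a Scattershot experiment effectively BosonSamples over many different submatrices of the same fixed network U.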

More formally, what can we say about the computational complexity of Scattershot BS? Admittedly, I don’t know of a reduction from ordinary BS to Scattershot BS (though it’s easy to give a reduction in the other direction). However, under exactly the same assumption that Alex and I used to argue that ordinary BosonSampling was hard—our so-called Permanent of Gaussians Conjecture (PGC)—one can show that Scattershot BS is hard also, and by essentially the same proof. The only difference is that, instead of talking about the permanents of nxn submatrices of an mxn Haar-random, column-orthonormal matrix, now we talk about the permanents of nxn submatrices of an mxm Haar-random unitary matrix. Or to put it differently: where before we fixed the columns that defined our nxn submatrix and only varied the rows, now we vary both the rows *and* the columns. But the resulting nxn submatrix is still close in variation distance to a matrix of i.i.d. Gaussians, for exactly the same reasons it was before. And we can still check whether submatrices with large permanents are more likely to be sampled than submatrices with small permanents, in the way predicted by quantum mechanics.

Now, everything above assumed that each SPDC source produces either 0 or 1 photon. But what happens when the SPDC sources produce 2 or more photons, as they sometimes do? It turns out that there are two good ways to deal with these “higher-order terms” in the context of Scattershot BS. The first way is by using number-resolving detectors to count how many herald photons each SPDC source produces. That way, at least you’ll *know* exactly which sources produced extra photons, and how many extra photons each one produced. And, as is often the case in BosonSampling, a devil you know is a devil you can deal with. In particular, a few known sources producing extra photons, just means that the amplitudes of the output configurations will now be permanents of matrices with a few repeated rows in them. But the permanent of an otherwise-random matrix with a few repeated rows should *still* be hard to compute! Granted, we don’t know how to derive that as a consequence of our original hardness assumption, but this seems like a case where one is perfectly justified to stick one’s neck out and make a new assumption.

But there’s also a more elegant way to deal with higher-order terms. Namely, suppose m>>n^{2} (i.e., the number of input modes is at least quadratically greater than the average number of photons). That’s an assumption that Alex and I typically made *anyway* in our original BosonSampling paper, because of our desire to avoid what we called the “Bosonic Birthday Paradox” (i.e., the situation where two or more photons congregate in the same output mode). What’s wonderful is that exactly the *same* assumption also implies that, in Scattershot BS, two or more photons will almost never be found in the same *input* mode! That is, when you do the calculation, you find that, once you’ve attenuated your SPDC sources enough to avoid the Bosonic Birthday Paradox at the output modes, you’ve *also* attenuated them enough to avoid higher-order terms at the input modes. Cool, huh?
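The calculation is easy to check directly. Here is a sketch using classical balls-in-boxes as a stand-in for the bosonic case (bosonic bunching is somewhat more likely, but has the same leading-order n²/2m scaling), for n = 10 photons:

```python
def collision_prob(n, m):
    """Exact probability that n uniformly random modes out of m are not all
    distinct (classical stand-in for the bosonic case; ~ n^2 / (2m) when
    m >> n^2)."""
    p_distinct = 1.0
    for i in range(n):
        p_distinct *= 1 - i / m
    return 1 - p_distinct

n = 10
print(collision_prob(n, n ** 2))        # m = n^2:    ~0.37, collisions common
print(collision_prob(n, 10 * n ** 2))   # m = 10 n^2: ~0.04, collisions rare
```

So once m comfortably exceeds n², both the input modes and the output modes are almost always collision-free.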

Are there any *drawbacks* to Scattershot BS? Well, Scattershot BS certainly requires more SPDC sources than ordinary BosonSampling does, for the same average number of photons. A little less obviously, Scattershot BS also requires a larger-depth beamsplitter network. In our original paper, Alex and I showed that for ordinary BosonSampling, it suffices to use a beamsplitter network of depth O(n log m), where n is the number of photons and m is the number of output modes (or equivalently detectors). However, our construction took advantage of the fact that we *knew* exactly which n<<m sources the photons were going to come from, and could therefore optimize for those. For Scattershot BS, the depth bound increases to O(m log m): since the n photons could come from any possible subset of the m input modes, we no longer get the savings based on knowing where they originate. But this seems like a relatively minor issue.

I don’t want to give the impression that Scattershot BS is a silver bullet that will immediately let us BosonSample with 30 photons. The most obvious limiting factor that remains is the efficiency of the photon *detectors*—both those used to detect the photons that have passed through the beamsplitter network, and those used to detect the herald photons. Because of detector inefficiencies, I’m told that, without further technological improvements (or theoretical ideas), it will still be quite hard to push Scattershot BS beyond about 10 photons. Still, as you might have noticed, 10 is greater than 4 (the current record)! And certainly, Scattershot BS itself—a simple, obvious-in-retrospect idea that was under our noses for years, and that immediately pushes forward the number of photons a BosonSampler can handle—should make us exceedingly reluctant to declare there can’t be any *more* such ideas, and that our current ignorance amounts to a proof of impossibility.

Comment #1 November 8th, 2013 at 2:50 pm

Has anyone considered doing some sort of feedback Boson Sampling? I have in mind putting a beamsplitter right before the detector, and probabilistically shuttling photons back to their input channel (possibly even the input channel they originated from in the case of scattershot boson sampling).

I don’t know enough quantum optics to know if it’s experimentally feasible, nor do I have enough intuition to say anything about how the probability distribution on your output channel is modified, but these sorts of feedback ideas allow you to do all kinds of tricks from the perspective of quantum control.

Comment #2 November 8th, 2013 at 3:00 pm

Daniel #1:

> Has anyone considered doing some sort of feedback Boson Sampling?

I guess you have, just now! 🙂

Seriously, that’s a very interesting idea, which we should think more about. Certainly it could serve as a cheap way to increase the “effective depth” of an optical network. But the most immediate problem I can see is that, if you probabilistically route some of your photons to the detectors and others back to the beginning of the beamsplitter network, then you’ve ipso facto made those two sets of photons distinguishable from each other, and destroyed the possibility of interference involving *all* your photons.

Comment #3 November 8th, 2013 at 9:35 pm

For dealing with the case where a source generates 2 or more photons – can’t you simply notice when that happens (since the herald photons will also be doubled) and then ignore that trial?

Comment #4 November 8th, 2013 at 11:15 pm

Dear Scott,

Fantastic post. I am looking forward to seeing which of the quantum optics labs does the first scattershot BosonSampling experiment! This post also let me understand BosonSampling better. Thanks for the awesome blog.

Comment #5 November 9th, 2013 at 4:08 am

Can we ever get to sampling say 10^6 photons? Does the BosonSampling thing have anything to do with factoring?

Comment #6 November 9th, 2013 at 5:36 am

Isn’t it basically the same idea as in this paper

http://arxiv.org/abs/1305.4346v1

which was since retracted?

Comment #7 November 9th, 2013 at 7:38 am

Anthony #6: Thanks so much for pointing me to that paper! I somehow never saw it before (even though I’ve had some contact with its authors), or if I saw it then I’ve forgotten. I’ll certainly add a comment about it to the original post.

Two remarks:

(1) If I *had* seen the paper, I would’ve been thrown off by an unsatisfactory argument they make for why Scattershot BS should be hard: namely, because a Scattershot BS distribution contains a bunch of “ordinary” BosonSampling distributions as sub-distributions (each one with exponentially small probability). That connection between the two problems does give you a reduction, but it’s in the wrong direction. Alternatively, from the fact that a scaled version of some *particular* BosonSampling distribution occurs in the Scattershot BS distribution, it’s clear that Scattershot BS should be hard to simulate *exactly* unless PH collapses, but that’s not really what we want either. It’s far better just to say that, under the same assumption (Permanent of Gaussians Conjecture) we used to get that ordinary BS was hard to approximate, one can *also* get that Scattershot BS is hard to approximate, by trivial modifications to our original proof.

Now, you could argue that, as a complexity theorist, *I* should’ve been able myself to correct the problem with their argument—that if I couldn’t, it would be more my fault than the authors’. That’s true, but I simply don’t have that much confidence in my own ability to realize obvious-in-retrospect things! 🙂

(2) The authors write: “This paper has been withdrawn by the authors whilst a non-technical issue is resolved.” I have no idea what the “non-technical issue” is that caused them to temporarily withdraw. In any case, the only thing I could see that was unsatisfactory on a *technical* level was the argument for hardness, and that I just corrected for them! 🙂

Comment #8 November 9th, 2013 at 7:44 am

Lou Scheffer:

> For dealing with the case where a source generates 2 or more photons – can’t you simply notice when that happens (since the herald photons will also be doubled) and then ignore that trial?

Yes, absolutely, that’s exactly what you can and should do, provided the multi-photon case occurs with probability bounded away from 1. In my “Solution #1,” however, I was assuming that the multi-photon case occurs *almost all the time*, so that you don’t have the luxury of ignoring it: you have to try to take advantage of it, and argue that the problem it solves is still hard (otherwise, you’ll simply be waiting too long for collision-free trials).

Comment #9 November 9th, 2013 at 8:17 am

J #5:

> Can we ever get to sampling say 10^6 photons?

Probably not, but I don’t know a good reason to want to, anyway!

As Alex and I have often pointed out, with 10^{6} photons it’s not even obvious how you’d efficiently *verify* that a BosonSampling device was working correctly, let alone simulate it classically! (For more about verification of BosonSampling, see this paper.) Furthermore, the only known “application” of BosonSampling is as a proof-of-principle, showing that a quantum device can do something “faster” than a classical simulation, and for that 30-40 photons should be perfectly sufficient.

> Does the BosonSampling thing have anything to do with factoring?

Well, a BosonSampling device could certainly be a useful stepping-stone toward a KLM quantum computer (to get from one to the other, you just need to add adaptive measurements). And a KLM QC would be universal, so could certainly factor.

On the other hand, BosonSampling *itself* has no known cryptographic or number-theoretic applications. (Though, if all you care about is evidence for classical hardness, not usefulness, then the evidence for hardness is arguably *stronger* for BosonSampling than it is for factoring.)

Comment #10 November 9th, 2013 at 12:36 pm

Scott, has the possibility of a Twitter War with Lockheed Martin crossed your mind?

Comment #11 November 9th, 2013 at 12:53 pm

rrtucci #10: Before your comment, no, it hadn’t! And I think it just exited my mind out the other end.

Comment #12 November 9th, 2013 at 2:08 pm

Possibly a very ignorant question on my part since I know even less physics than I do TCS, but does the experiment have to be done with photons, or could it be done with composite bosons like helium-4?

Comment #13 November 9th, 2013 at 2:26 pm

Vadim #12: In principle you could use *any* excitations that behaved as identical, non-interacting bosons—even Higgs bosons! (Indeed, it’s for precisely this reason that we decided to call the problem BosonSampling rather than, say, PhotonSampling.)

Photons are merely an “obvious” choice, one for which much of the required technology is already standard. But sure, people have also discussed using (e.g.) bosonic excitations in solid state. And just a couple weeks ago, a preprint appeared on the arXiv (which is still on my to-read pile) proposing to implement BosonSampling using trapped ions. I haven’t seen helium-4 discussed yet, and have no idea what its experimental advantages or disadvantages might be.

Comment #14 November 9th, 2013 at 4:22 pm

OMG what a load of BS!!!

Comment #15 November 9th, 2013 at 6:42 pm

Along the lines of comment #14, Scattershot BS is a great way to get into Science and Nature!

Comment #16 November 9th, 2013 at 8:10 pm

I can’t resist #14 and aram #15: Glad people are taking me up on what amounted to an open invitation to BS jokes…

Comment #17 November 9th, 2013 at 10:46 pm

Naive question probably: But how does the efficiency of the current photon detectors make Scattershit (ok, shot) BS harder? Can someone explain?

Comment #18 November 9th, 2013 at 11:13 pm

A simulationist’s dual to Scott’s assertion is

History assists us as follows. Quantum chemist John Pople’s *Gaussian* algorithm teaches that algebraically natural state-spaces are preferred; physicist Roy Glauber’s optics teaches that coherent states are physically natural; Dorje Brody and Eva-Maria Graefe’s “Coherent states and rational surfaces” (arXiv:1001.1754) reconciles Pople’s algebraic insight with Glauber’s physical insight; and Joseph Landsberg’s “Geometry and the complexity of matrix multiplication” (arXiv:cs/0703059) concretely teaches that a suitable state-space manifold is

$$\sigma_r\,\text{Segre}\big(\mathbb{G}^k\times\mathbb{G}^k\times\dots (n\,\text{times})\big) \subset \mathbb{P}^{(k+1)^n-1}$$

where \(n\) is the number of dynamically coupled photon modes, \(k\) is an \(\text{O}(n)\) Fock-number cut-off, the Segre map is natural, \(\sigma_r\) is a rank-\(r\) secant join (per Landsberg’s notation), and \(\mathbb{G}^k\) is the Glauber/Pople coherent state that the Veronese map associates to points in \(\mathbb{P}^{k}\):

$$\begin{aligned}\mathbb{G}^k(a,b)=&\textstyle{\big[{{k}\choose{0}}\,{a^0}{b^{k}},{{k}\choose{1}}\,{a^1}{b^{k-1}},\,\dots\,,}\\ &\textstyle{\quad{{k}\choose{k-1}}\,{a^{k-1}}{b^{1}},{{k}\choose{k}}\,{a^{k}}{b^0}}\big] \mapsto \mathbb{P}^{k}\end{aligned}$$

where \(a,b\) are complex coherent-state coordinates (for basic background see Lecture 2 of Joe Harris’ *Algebraic Geometry*; for physical applications see eq. 15 of Brody and Graefe; for algebraic examples see Section 5.5 of Landsberg’s *Tensors: Geometry and Applications*). The resulting varietal state-spaces have algebraic dimension \(r(n+1)-1\) (that is, polynomial-in-\(n\) dimension).
Advantageously for simulation purposes, these state-spaces are “foamy”, in that they “fill” the usual quantum Hilbert space of dimension \((k+1)^n\), and moreover they preserve *almost* all of the usual Hamiltonian, thermodynamical, and quantum “goodness” of that Hilbert space (that is, conserved quantities are conserved, thermodynamical laws Zero through Three are respected, and the standard quantum uncertainty principles are respected too).

The missing “quantum goodness” resides (of course) in the high-order quantum coherences that are essential to both quantum computing and BosonSampling.

Suppose therefore that hostile aliens have come to Earth, and, inspired by Erdős’ Ramsey(5,5) problem, these aliens command us to “show data from a 100-photon BosonSampling experiment, or be destroyed.” Absent a scalable BosonSampling experiment, we pretend that Earth’s sole option is to simulate data by unravelling trajectories on the above Segre/Veronese foamy-yet-thermodynamical varietal state-spaces. Fortunately the optimistic teaching of Pople, Glauber, and Landsberg (and this year’s Chemistry Nobelists too) is that quantum simulation methods have sometimes worked very much better than simple arguments suggest.

**The Open BosonSimulation Question**: What computational tests (if any) can the hostile aliens apply to Earth’s varietally simulated dataset, to demonstrate that the data are simulated rather than real? And do these tests require exponential computing resources and/or exponentially large data samples?

Thank you Scott and Alex (Arkhipov), for so ably inspiring *both* experimentalists and simulationists!

Comment #19 November 10th, 2013 at 12:47 am

If this was my field, I would look at ways to engineer “buckyballs”, or buckminsterfullerene cages to emit single photons. Seems to me a possible way to control a single atom and get it to emit photons with some precision. Just an idea!

Comment #20 November 10th, 2013 at 4:37 am

Scott:

This is a very nice development.

My only quibble is with the statement, “…there’s no known process in nature that can serve as a deterministic single-photon source.” This statement is overly strong, and that, along with the tone of the post, gives the impression that heralded SPDC is the only single-photon-source game in town.

A great deal of effort has been invested in developing single photon sources for the past 20 years for use in quantum key distribution and linear optical quantum information processing. This work has led to the development of a number of single photon sources primarily using quantum dots in optical cavities, microwave qubits in strip-line cavities, and atoms in microwave or optical cavities. Often these are more of a headache than SPDC, but they do exist.

These sources are not perfect yet; they do not always produce a single photon on demand and sometimes they produce more than one. But improving these sources is a technological matter, not some fundamental physical roadblock. They are currently much better than SPDC as single-photon sources.

What would be interesting is to examine these other single photon sources in scatter-shot BosonSampling, since the input statistics will be much closer to what you and Alex originally proposed.

Cheers,

Jon

Comment #21 November 10th, 2013 at 6:39 am

Rahul #17:

how does the efficiency of the current photon detectors make Scattershit (ok, shot) BS harder? Can someone explain?

Well, suppose (for example) that you fail to detect some of your n photons as they emerge from the m×m beamsplitter network. Then you don’t know exactly what the output state was. Therefore, you don’t know the complete set of rows of the n×n submatrix of your m×m unitary matrix, whose squared permanent you should calculate and add to your probability histogram, to compare to the predictions of quantum mechanics. At best, you can narrow the set of rows down to choose(m-(n-k),k) possibilities, where k is the number of lost photons. Likewise, if you fail to detect k of the herald photons, then you don’t know exactly what the *input* state was. Therefore you don’t know the complete set of *columns* of your n×n submatrix, and can at best narrow it down to choose(m-(n-k),k) possibilities.

Comment #22 November 10th, 2013 at 6:51 am

Jon #20: Thanks for the highly informative comment! I knew, of course, that there were other ways to produce single photons besides SPDC, but I didn’t know how much further along they were toward being deterministic sources. (That might be because, every time I’ve met with optics people, almost every sentence out of their mouths has been SPDC this and SPDC that!)

Of course, reliable *enough* single-photon sources would obviate the need for Scattershot BS. But it seems more likely that one will need to combine every trick available…

Comment #23 November 10th, 2013 at 8:21 am

The former proposition perennially inspires discourse, while the latter proposition is (among experimentalists at least) “a truth universally acknowledged.”

Oft-cited articles like Pittman et al.’s “Can Two-Photon Interference be Considered the Interference of Two Photons?” (1996) and recent advances like McMillan et al.’s “Two-photon interference between disparate sources for quantum networking” (2013) show us that these considerations are mathematically challenging and physically subtle. These experiments help us, too, to appreciate the robust common sense of Willis Lamb’s oft-cited sardonic diatribe “Anti-photon” (Applied Physics B, 1995):

A Google search finds all three of these outstanding articles.

**Summary**: The existing literature provides ample reasons that we should be (as Scott says) “exceedingly reluctant to declare that our current ignorance amounts to proofs” in regard to foreseeing the feasibility-versus-infeasibility of scalable QIT technologies, and similarly in regard to foreseeing the fundamental limits to efficient computational simulation of those technologies. Carlton Caves’ advice is well-considered too:

“Physicists are really obliged to give every explanation they can think of.” It is Caves’ path that has led us most reliably toward fundamental math-and-physics advances, particularly in regard to quantum technologies.

**Conclusion**: The present era’s abundance of yet-unexplored mathematical paths for explaining and computationally simulating large-scale quantum systems is good news for young STEM professionals, and augurs well for continued quantum advances, adventures, excitement, and enterprises.

Comment #24 November 10th, 2013 at 10:17 am

John referred to the following story told by Joel Spencer:

“Erdős asks us to imagine an alien force, vastly more powerful than us, landing on Earth and demanding the value of R(5,5) or they will destroy our planet. In that case, he claims, we should marshal all our computers and all our mathematicians and attempt to find the value. But suppose, instead, that they ask for R(6,6). In that case, he believes, we should attempt to destroy the aliens.”

(Interestingly, these Ramsey numbers are of some relevance to our recent computational complexity discussions over here. Even more interestingly, a recent paper on a D-Wave experiment attempts to compute Ramsey numbers.)

Going out on a limb, I would rephrase Erdős’ statement as follows: “Imagine an alien force, vastly more powerful than us, landing on Earth and demanding that we exhibit (genuine) BosonSampling capability for 5 bosons, or they will destroy our planet. In that case, we should marshal all our quantum engineering forces and attempt to carry it out. But suppose, instead, that they ask for BosonSampling for 10 bosons. In that case, we should attempt to destroy the aliens.”

I would not expect much difference between genuine BosonSampling and genuine FermionSampling.

Comment #25 November 10th, 2013 at 10:57 am

Scott, sorry for the suggestion of a Twitter War; I don’t much like Twitter, so I don’t blame you for not wanting to air your views about this that way. One last idea: how about if you write a periodic blog post (every six months or so) assessing the progress of the Lockheed Martin/NASA/Google adiabatic quantum computing efforts as you see them (or maybe as you and some collaborators like Matthias Troyer see them)? It would be a civilized way of fostering competition and airing differences, and a public service of sorts, similar to how the president’s state of the union speech is followed by a state of the union rebuttal. Oh, and by the way, sorry for being off topic. I think the BosonSampling stuff is cool, but I have nothing useful to add to it.

Comment #26 November 10th, 2013 at 12:04 pm

@Scott, thank you for this crystal clear post

@Gil, thanks I was wondering what your thoughts were. Just to clarify: will you change your mind entirely if experimentalists actually do the job for 10 bosons? Also, don’t you think if aliens were asking for that, 10’d be low enough we could compute it classically?

Comment #27 November 10th, 2013 at 12:40 pm

All the alien-forces talk is making me want to ask: can quantum computing or BosonSampling say anything about verifying the impossible quickly? That is, can it answer the NP = coNP question?

Comment #28 November 10th, 2013 at 12:45 pm

Dear Jay, good questions! I did not refer to the computational task of BosonSampling, but to actually creating 20-level bosons and a bosonic state for 10 bosons based on a 10-by-20 complex matrix. (This is why I said that I expect the same for FermionSampling, which can be classically simulated even for 1000 fermions without difficulty.)

Regarding changing my mind, there are experimental efforts which are much closer, and which could cause me to change my mind. Here I refer to obtaining robust qubits based on implementation of surface codes (or other quantum codes). There are two strong experimental groups attempting to achieve it with the surface code, and other groups with related experimental endeavors. One group, led by John Martinis, expects to create robust qubits based on a 35-qubit surface code in a few months, even more robust qubits based on a little over 100 physical qubits (in 2-3 years), and extremely robust qubits based on 1000 or so physical qubits (longer-term, with sufficient resources). I would expect my conjectures to “show up” even in the 35-qubit case.

Success in making logical qubits substantially more robust than “raw” physical qubits, which John’s and various other experimental groups are attempting now, will smash my conjectures to pieces.

Comment #29 November 10th, 2013 at 1:17 pm

Even with Scattershot BS, you still need to wait for a synchronized photon emission from at least, say, 10 of your 200 SPDC sources, right?

If so, what’s the probability of that event, do we have an estimate? So far as I remember doesn’t this probability sharply fall as n increases?

What’s the largest number of simultaneous photons we have yet seen from an SPDC experiment, and what was the frequency in MHz of that event combination?

Again, perhaps naive questions.

Comment #30 November 10th, 2013 at 1:33 pm

Rahul #29: You’ll have to ask the experimentalists for estimates involving the SPDC sources that exist today. What I can tell you is that the big difference with Scattershot BS + SPDC sources, compared to ordinary BS + SPDC sources, is that the asymptotics are actually working in your favor rather than against you. Thus, suppose you have (let’s say) n^2 SPDC sources. Then even if each one has only a 1/n probability of producing a photon in any given (synchronized) run, you’ll still get ~n photons on average—not in an exponentially small fraction of runs, but in *almost every* run. The main drawbacks are:

(1) Ironically, in order for this to work, your beamsplitter network needs to be *large* enough (!).

(2) You’re still limited by the efficiency of your photodetectors.
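Scott’s asymptotic point is easy to see in a small Monte Carlo sketch. Purely for illustration, assume ideal independent sources that each herald a photon with probability 1/n per synchronized run; the helper name `scattershot_counts` is hypothetical, not anyone’s actual analysis code:

```python
import random

def scattershot_counts(n, runs=2000, seed=0):
    """Simulate n^2 independent SPDC sources, each heralding a photon with
    probability 1/n in a given synchronized run; return per-run photon counts."""
    rng = random.Random(seed)
    m = n * n          # number of sources
    p = 1.0 / n        # heralding probability per source per run
    return [sum(rng.random() < p for _ in range(m)) for _ in range(runs)]

counts = scattershot_counts(n=10)
mean = sum(counts) / len(counts)
# Binomial(n^2, 1/n) has mean n = 10, so almost every run yields ~n photons,
# rather than n photons appearing only in an exponentially rare coincidence.
print(round(mean, 1))
print(sum(c >= 5 for c in counts) / len(counts))  # close to 1
```

The contrast with ordinary BS is that there, all n *fixed* sources must fire at once, which happens with probability p^n; here the n photons can come from any of the n^2 sources.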

Comment #31 November 10th, 2013 at 1:36 pm

J #27: No, I don’t know any good reason to think that a universal quantum computer, much less a BosonSampler, would help us answer the NP vs. coNP question. Alas, that one will probably require the computers between our ears!

Comment #32 November 10th, 2013 at 1:39 pm

rrtucci #25: Err, I *do* write a blog post commenting on the latest out of the “D-Wave archipelago” roughly every 6 months or so. (Usually, not because I want to, but because other people spur me into it!)

Comment #33 November 10th, 2013 at 1:45 pm

Gil #24: Wow, I guess my risk assessment is different than yours. If it came down to BosonSampling with n=10 photons, versus a war to the death against a superior alien race, I’d certainly at least want to try the former before opting for the latter. *Especially* if the aliens were gracious enough to allow Scattershot BosonSampling. 🙂

As for n=5 photons, that could be done today, if anyone had the motivation and funding. (Recall that the Oxford group has already done 4 photons.)

Comment #34 November 10th, 2013 at 1:53 pm

Scott #30:

I agree with you about the asymptotics, but my suspicion is that for the BS experimental sizes of interest, the probability of a 10-detector coincidence is going to be minuscule.

My intuitive fear is that desynchronization is going to throw a spanner in the works analogous to decoherence for conventional QCs. Of course, this is speculative so I’d love to see empirical synch data for 3,4,5 simultaneous photons.

I could be totally wrong. I’d love to see an experimentalist comment.

Your theory seems impeccable, it’s the messy experimental reality that I’m worried about.

Comment #35 November 10th, 2013 at 3:52 pm

> I would expect my conjectures to “show up” even in the 35-qubit case. (…) [otherwise] smash my conjectures to pieces.

Fair and exciting! Not sure if I wish you’re wrong or if I wish you’re right

> I did not refer to the computational task of BosonSampling but to actually creating 20-levels bosons and a bosonic state for 10-bosons based on 10 by 20 complex matrix.

Ok so some Aliens put a gun on our heads and will pull the trigger if we don’t manufacture a “quantum black box”, but the black box is allowed to compute an easy task.

No problem, we’re safe! Just put Dwave in charge. 🙂

Comment #36 November 10th, 2013 at 4:26 pm

Scattershot BS seems to be a subset of a family of BosonSampling problems that I like to call Gaussian Boson Sampling (sampling the modes of a non-trivial Gaussian state in the photon-number basis), for which it is easy to prove exact sampling to be hard [as you mention in your BosonSampling paper in *Theory of Computing*, page 236, open question (4)], but hardness of approximation is not known. If it is true that you (Scott and Alex) “can also get that Scattershot BS is hard to approximate” (by trivial modifications to your original proof), it is plausible that you can extend it to Gaussian Boson Sampling, which could be potentially interesting for implementation purposes.

Scattershot BS is a subset of Gaussian Boson Sampling because you can always see Scattershot BS as a Gaussian Boson Sampling where you prepare 2N two-mode squeezed vacuum states, apply a random linear-optics circuit [unitary U(N)] over the odd-numbered modes and the identity on the even-numbered modes, and finally sample in the photon-number basis on all 2N outputs. The heralding detectors are simulated by the even-mode detectors.
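The only linear-optics ingredient in this construction is a 2N × 2N transformation that acts as U on the signal modes and as the identity on the interleaved herald modes. A minimal sketch of building that matrix (helper name and mode-numbering convention are illustrative assumptions, not from the comment):

```python
import cmath

def embed_on_odd_modes(U):
    """Build the 2N x 2N linear-optics transformation of the construction above:
    U acts on the signal modes (1-indexed odd modes, i.e. Python indices
    0, 2, 4, ...) while the interleaved herald modes are left alone."""
    N = len(U)
    V = [[0j] * (2 * N) for _ in range(2 * N)]
    for i in range(N):
        V[2 * i + 1][2 * i + 1] = 1 + 0j   # herald mode: identity
        for j in range(N):
            V[2 * i][2 * j] = U[i][j]      # signal modes: entries of U
    return V

# Example: U = a 50/50 beamsplitter (2-mode DFT), so N = 2 and V is 4 x 4
s = 1 / cmath.sqrt(2)
U = [[s, s], [s, -s]]
V = embed_on_odd_modes(U)
```

Since the herald modes are untouched, measuring them commutes with everything else, which is exactly why heralding can be deferred to the end of the experiment.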

Comment #37 November 10th, 2013 at 5:30 pm

“Ironically, in order for this to work, your beamsplitter network needs to be large enough”

Scott, can you explain why this is the case? This doesn’t seem obvious to me.

Comment #38 November 10th, 2013 at 10:36 pm

Joshua #37: Maybe it was a confusing way to put things. All I meant was that in Scattershot BS, if you have m SPDC sources, and each one has a probability p of producing a photon in a given run, then you’ll get m*p heralded single photons on average (ignoring the higher-order terms—which we can if p<<1/√m). By contrast, if you do *ordinary* BosonSampling, the probability that each of your SPDC sources spits out a single photon at once decreases like p^m. So there’s a sense in which, whereas with ordinary BS the situation gets worse the more sources you add, with Scattershot BS it gets better.

Comment #39 November 10th, 2013 at 11:32 pm

Ah, gotcha. That makes sense.

Comment #40 November 11th, 2013 at 12:20 am

Scott, re #32:

Given that you are likely to be goaded into doing this about every six months anyway, consider announcing a series of special SO editions on the topic. For example, each Jan 1 and July 1, everyone with a report on the topic can post.

Comment #41 November 11th, 2013 at 3:41 am

Scott #9:

I thought BosonSampling approximates the Permanent. If you can approximate an $n \times n$ permanent, you can approximate $n!$ in $\log_2^k{n}$ steps (I think according to http://math-www.uni-paderborn.de/agpb/work/6dich.pdf)

Then this essentially can be used to verify your BosonSampling device no?

Comment #42 November 11th, 2013 at 3:53 am

Continuation: one must be able to approximate $n!$ modulo any $\log{n}$-bit number efficiently.

Comment #43 November 11th, 2013 at 5:41 am

Recently, I’ve been finding myself drawn more and more toward this way of thinking about quantum computing, which makes quantum gate sets look almost exactly like topological quantum field theories. In this analogy, a circuit diagram is like a spacetime region, and its inputs and outputs are like the inward- and outward-oriented boundaries of the region. Instead of thinking of the circuit as describing a transformation from inputs to outputs, you can think of it as describing a state on the boundary—and instead of thinking about experimentally “setting the inputs and then measuring the outputs,” you can think about probing the boundary state by testing how likely the circuit is to attach to various other circuits along the boundary.

Comment #44 November 11th, 2013 at 6:49 am

J #41: No, the problem is that, as I said in the post, BosonSampling doesn’t let you calculate the permanent of a given matrix, only sample from a certain probability distribution in which the probabilities are defined in terms of permanents.
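For concreteness, the permanents in question can be computed exactly, just not efficiently at scale: Ryser’s inclusion-exclusion formula takes ~2^n steps, which is why even verifying a modest-size BosonSampler is expensive. A minimal sketch (not from the paper; `permanent` is an illustrative helper):

```python
def permanent(A):
    """Permanent of an n x n matrix via Ryser's inclusion-exclusion formula:
    per(A) = (-1)^n * sum over nonempty column subsets S of
             (-1)^|S| * prod_i (sum over j in S of A[i][j]).
    Runs in O(2^n * n^2) time, versus n! terms for the naive expansion."""
    n = len(A)
    total = 0
    for mask in range(1, 1 << n):          # nonempty subsets of columns
        size = bin(mask).count("1")
        prod_rows = 1
        for i in range(n):
            prod_rows *= sum(A[i][j] for j in range(n) if mask >> j & 1)
        total += (-1) ** size * prod_rows
    return (-1) ** n * total

# per([[1,1],[1,1]]) = 2;  per([[1,2],[3,4]]) = 1*4 + 2*3 = 10
print(permanent([[1, 1], [1, 1]]), permanent([[1, 2], [3, 4]]))  # 2 10
```

The same routine works unchanged on complex entries, as in the n×n submatrices of the interferometer’s unitary, but the 2^n cost is the whole point: sampling is feasible physically while the underlying numbers stay hard classically.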

Comment #45 November 11th, 2013 at 7:28 am

An almost off-topic comment: every time I ask my experimentalists about doing something involving number-resolving photon detectors, they look back at me in despair and ask: “Do you \emph{really} need it?”

Comment #46 November 11th, 2013 at 11:09 am

Mateus #45: Yes, I’ve experienced that as well!

With BosonSampling, the basic story is that you *don’t* need number-resolving detectors, if you instead have a large number of very reliable non-number-resolving detectors.

For simplicity, first consider “ordinary” (non-scattershot) BS. Then provided m>>n^2 (i.e., the number of output modes is at least quadratically greater than the number of photons), and provided your scattering matrix is Haar-random, with high probability you’ll avoid the “bosonic birthday paradox,” meaning that all n of your photons will land in separate output modes. So then w.h.p., all your occupation numbers are either 0 or 1, and you just need detectors that can distinguish those two cases. (Provided, of course, that your detectors are actually reliable enough to let you account for all n of the photons—so that, if you see fewer than n, you can figure it was probably a collision rather than a photon loss.)

Likewise, in the case of Scattershot BS, if m>>n^2 then you can attenuate your SPDC sources to the point where, in a 1-ε fraction of runs, every source produces either 0 or 1 photons. So then distinguishing those cases again suffices (though not to detect the ε fraction of cases where you *did* get higher-order terms).

Comment #47 November 11th, 2013 at 11:32 am

RaulGPS #36: That’s an *awesome* observation—thanks so much!! Yes, just as you said, we can use Scattershot BS to show that “ordinary” BosonSampling with Gaussian inputs and number-resolving detectors is hard to simulate even approximately, under exactly the same assumption (the Permanent-of-Gaussians Conjecture) that works for Fock-state inputs. And that resolves one of the open problems from our paper.

Incidentally, sorry for the delay! I wanted a few quiet minutes while Lily was napping to think about your comment and verify its correctness. Now that I have, though, I think I’ll add something about it to the original post.

Comment #48 November 11th, 2013 at 12:55 pm

Scott:

You should have one of your experimental collaborators guest blog on this topic.

I’d really love to hear more experimental perspectives. e.g. how feasible is this? How far from n=10 are we? What are potential roadblocks? etc.

Comment #49 November 12th, 2013 at 12:45 am

Just wanted to point out that the Lund et al. arXiv paper has actually been updated (i.e., unwithdrawn), with a new version 2 days ago at http://arxiv.org/abs/1305.4346v3

Comment #50 November 12th, 2013 at 9:33 am

Scott #46: Thanks for taking your time to give more details! But I had understood that number-resolving photon detectors were unnecessary; that’s why I said my comment was almost off-topic. I just thought it would be interesting to point out that using them is a pain in the ass.

But your sentence made me curious: when you see fewer than n photons, do you just discard the run? Does this imply a hard lower bound on the detection efficiency of, say, \(\eta \ge (2/3)^{1/n}\)?

Comment #51 November 12th, 2013 at 10:22 am

Mateus #50: Well, if you have non-number-resolving detectors and you see fewer than n photons, then at least one photon is unaccounted for. So the probability of the outcome is not a squared permanent, but a sum of many squared permanents (one for every possible combination of lost photons). And unfortunately, it becomes iffy to argue for the computational hardness of estimating such sums—we have some ideas about how to do it, but we’re still working out the details, and in any case our arguments will probably only be convincing for a small number of lost photons. So sure, you can keep those runs, but they won’t be as interesting computationally as the ones where you can account for all of the photons.
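The combinatorial blow-up behind “a sum of many squared permanents” (and the choose(m-(n-k),k) count from comment #21) can be made concrete in two lines; the helper name and the particular m, n values below are illustrative only:

```python
from math import comb

def candidate_row_sets(m, n, k):
    """Number of possible row-sets of the n x n submatrix when k of the n
    photons go undetected: the n-k observed output modes are fixed, and the
    k missing photons could be in any of the remaining m-(n-k) modes,
    giving C(m-(n-k), k) possibilities, as in comment #21."""
    return comb(m - (n - k), k)

# With (say) m = 400 modes and n = 20 photons, each lost photon multiplies
# the number of squared permanents to be summed by roughly m:
for k in range(4):
    print(k, candidate_row_sets(400, 20, k))
```

So with zero lost photons there is a single permanent, while even a few losses turn the outcome probability into a sum over thousands of squared permanents, which is what makes the hardness argument iffy there.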

Comment #52 November 12th, 2013 at 11:41 pm

This seems to raise another question: Is there any unexpected collapse we can show occurs if BosonSampling is enough to simulate BQP? I.e., if one has an oracle O that simulates boson computers in the sense of Theorem 1 from your original paper with Alex Arkhipov, can we then get some unexpected inclusion of the form BQP <= C^O for some complexity class C where it is not known that BQP <= C? Such a result would suggest that BosonSampling does encompass some non-trivial amount of BQP.

Alternatively, are there any collapses that can occur for examples of C that are unlikely? It would be neat, for example, if one could have some result of the form: if BQP <= P^O then the polynomial hierarchy collapses (or some other unexpected collapse occurs), or alternatively, something like BQP <= PH^O. Both of these sound like they might be difficult.

Comment #53 November 13th, 2013 at 2:09 am

For those of you interested in more D-Wave news, Matthias Troyer gave the general Physics and Applied Physics Colloquium at Stanford today.

The full taped video of the colloquium is available for download: http://nitsche.mobi/2013/troyer/

Comment #54 November 13th, 2013 at 4:20 am

This month’s (December 2013) issue of the *Notices of the AMS* includes a short account by Lockheed-Martin’s Richard H. Warren titled “Numeric Experiments on the Commercial Quantum Computer.”

Historically inclined *Shtetl-Optimized* readers will recall that the great algebraic geometer Solomon Lefschetz played a founding role in Lockheed-Martin’s celebrated mathematical institute of the 1950s-1960s, namely the Research Institute for Advanced Studies (RIAS).

Two of the Clay Institute’s *Millennium Prize* problems, namely the Hodge Conjecture and the Navier-Stokes problem, owe much to this celebrated era of Lockheed-Martin/Solomon Lefschetz RIAS research.

It is natural to ask: is history repeating itself? And it is natural too to reflect that D-Wave’s devices implement a computational dynamic flow (per Navier-Stokes) upon an algebraic state-space (per Lefschetz/Hodge), such that a profound class of mathematical problems is in play.

**Conclusion**: It is mathematically natural to *hope* that Navier/Stokes/Lefschetz/Hodge/Tate (etc.) history is repeating itself at D-Wave/Lockheed-Martin!

Comment #55 November 13th, 2013 at 8:46 am

Joshua #52: You ask some excellent questions.

I don’t know how to prove that any unlikely complexity-theoretic consequence would follow, if BosonSampling were universal for BQP. I agree that such a result would be great.

My strongest evidence that BosonSampling is *not* universal for BQP just comes from considering the sizes of the respective Lie groups: the group of unitary transformations acting on n qubits is ~2^{2n}-dimensional, whereas the group of unitary transformations acting on n optical modes is merely ~n^2-dimensional (it then gets lifted, via homomorphism, to a tiny subgroup of the group of unitary transformations on an exponentially-larger Hilbert space). By the usual standards of complexity theory, this sort of argument doesn’t count for much. But it does show that there can’t be any “faithful mapping” from the gates in a qubit-based quantum circuit to the linear-optical gates, and it seems pretty convincing intuitively.

Ironically, I *do* know an unlikely-seeming complexity consequence that would follow if the converse held, and BosonSampling were reducible to BQP! How is that possible, you ask, since clearly a BosonSampling machine can be efficiently simulated by a universal quantum computer? Well, it simply has to do with the fact that BosonSampling is a sampling problem, whereas BQP is defined as a class of decision problems. In my and Alex’s paper, we note the following strange corollary of our results: if exact BosonSampling could be performed by a probabilistic polynomial-time algorithm with an oracle for BQP decision problems, then P^{#P} would be contained in BPP^{NP^PromiseBQP}. We added a remark like, “this seems unlikely, though we admit to lacking a strong intuition here.”

Comment #56 November 14th, 2013 at 4:59 am

jim #53: Thanks for the link. The same talk held at ETH is available online here (smaller file size, one file).

Direct link for the ~300 MB file: Link

Comment #57 November 15th, 2013 at 6:53 pm

It won’t be easy to devise active/error-correcting qubit protection methods that better the extraordinarily long many-minute phase-decoherence times that are reported in this week’s *Science* by Saeedi et al. in “Room-Temperature Quantum Bit Storage Exceeding 39 Minutes Using Ionized Donors in Silicon-28.”

These authors apply protection means — chiefly high Debye temperatures, isotopic purification, and decoupling pulses — that are entirely passive, and thus would serve to protect the phase coherence of classical Bloch spins equally well as quantum spins.

It would be wonderful (obviously) if we could induce high-order quantum coherences among individual spins, protect these coherences by means both passive and active, interact them by tunable Hamiltonians, and read them out with high fidelity. After all, individually these capabilities have been demonstrated convincingly; now the great quest is to demonstrate them simultaneously and scalably.

One indication that Gil Kalai’s conjectures may be correct is that optimizing any one of these capabilities degrades the rest. It would be terrific if we had a deeper understanding of why this coupling is ubiquitous. Hundreds of instances are available for study; why is a unified appreciation so elusive?

Comment #58 November 17th, 2013 at 2:15 am

rrtucci #25: May be misreading your comment somewhat. But it seems to me you almost position Scott as a kind of spokesperson for Matthias Troyer. Don’t think that’s fair.

Before moving on to Harvard, he came to Waterloo, and we arranged to meet over lunch. He is great to talk to, and of course very knowledgeable. To me it seems he approaches D-Wave with unbiased academic curiosity. For instance, he originally was sceptical of their chip truly performing quantum annealing, but backed that conclusion in a later paper when the data strongly suggested it.

Matthias won’t partake in any blog discussion until his latest paper is published, so to me it makes sense to wait with revisiting the benchmark discussion until then.

Comment #59 November 17th, 2013 at 8:13 am

Henning #58: For the record, I was also skeptical of the presence of quantum annealing behavior in the device, back when they hadn’t published anything to show it—and then I changed my mind immediately when they *did* publish something! Which is, y’know, how science is supposed to work. Here’s the relevant blog post from 2.5 years ago.

On the other hand, I agree with you that it was inaccurate for rrtucci to call me a “collaborator” of Matthias. At most, I’m Paul to his Jesus. 🙂

I share your admiration for Matthias’s objectivity. That’s why I’ve happily reported on this blog everything that he and his collaborators have found: both the evidence for quantum annealing behavior in the device, and the evidence for no scaling advantage whatsoever in any of the datasets that they tried.

Comment #60 November 17th, 2013 at 10:18 am

“it was inaccurate for rrtucci to call me a “collaborator” of Matthias”

Sorry, with hindsight, I should have been more clear. I meant that Scott and Matthias could collaborate in doing a blog post, or Matthias could do a guest post in Scott’s blog. I didn’t mean to imply that they had collaborated in writing a paper in the past.

I like Rahul’s idea of Jan 1 and July 4, two fireworks days. It could be the state of the union speech by the King Kong party versus the rebuttal by the Godzilla party. By making this a bit more scheduled, more people will begin to see it as just healthy, necessary debate.

Comment #61 November 17th, 2013 at 10:51 am

Hi – Just came across a five year old segment of your site re arguments about quantum computing. I think an interesting way to look at this issue is via what’s often called the de Broglie/Bohm perspective: i.e., if QM is interpreted to be deterministic, as it can be, how does a quantum computer work? Leads to some interesting considerations…

Comment #62 November 17th, 2013 at 2:54 pm

Scott #59: didn’t mean to imply that you differ from Matthias on the ability to change your mind when presented with new facts 🙂

As I wrote, I don’t want to rekindle the benchmarking discussion before Matthias can freely chime in, but just for clarification: when you mention scaling, do you mean scaling with the number of qubits on the chip?

Comment #63 November 17th, 2013 at 7:52 pm

Henning #62: Yes.

Comment #64 November 17th, 2013 at 7:57 pm

Ted #61: dBB makes the same predictions as standard QM for what a quantum computer would do, but unfortunately, it doesn’t really provide new or different insight. For more see my Physics StackExchange answer on the subject.

Comment #65 November 18th, 2013 at 3:05 pm

Off topic, sorry. But is Eliezer Yudkowsky’s talk available online?

Comment #66 November 18th, 2013 at 8:08 pm

nathan #65: Sorry, no. Read the update on my last post.

Comment #67 November 19th, 2013 at 4:23 am

$14$ to $10000$ and for $39$ minutes

http://www.zdnet.com/scientists-produce-most-stable-qubit-largest-quantum-circuit-yet-7000023293/

http://www.sciencemag.org/content/342/6160/830

Is this good news and if so for what?

Comment #68 November 19th, 2013 at 10:24 pm

This doesn’t seem to have been mentioned yet:

November 4, 2013

Polymath9: P=NP? (The Discretized Borel Determinacy Approach)

http://polymathprojects.org/2013/11/04/polymath9-pnp/

Blog post: http://gowers.wordpress.com/2013/11/03/dbd1-initial-post

I remember some of Scott’s comments about underestimating the difficulty of P vs NP but in any case I wish all the best to this project.

Comment #69 November 21st, 2013 at 6:48 am

“The nincompoops who we paid to record the talk wrote down November instead of October for the date, didn’t show up, then stalled for a month before finally admitting what had happened.”

Didn’t anyone just look to see if the videographer was around? Sounds funny. 🙂

Comment #70 November 22nd, 2013 at 11:14 am

This afternoon Gil Kalai will speak at the the University of Washington (UW) Pacific Institute for the Mathematical Science (PIMS) colloquium on the topic

Why quantum computers cannot work.This is a continuation of the wonderful Gil Kalai/Aram Harrow debate, and would be great to see quantum folks there!As a tribute to Gil and Aram (and Scott Aaronson too), I have composed for MathOverflow a question

Do quantum “Sure-Shor separators” have a natural Veronese/Segre classification? (question inspired by Gil Kalai and Aram Harrow)

Responses are invited, needless to say (and later this week I will offer a bounty on this question).

Comment #71 November 27th, 2013 at 3:06 pm

Why is the extended Church-Turing thesis defined by you as “the thesis that all natural processes can be simulated with polynomial overhead by a classical computer”, as opposed to “the thesis that all natural processes can be simulated with polynomial overhead by a BUILDABLE computer” (which could be a quantum computer)?

Or would that be the “physical” Church-Turing thesis?

Comment #72 November 28th, 2013 at 8:25 am

In quantum simulation the following problem is natural: given a set of Haar-random \(m\times m\) unitary transforms \(U\) on an input state of \(m\) modes, an \(n\times n\) submatrix \(U'\) is selected from each \(U\); moreover, each \(U'\) is multiplicatively corrupted by Gaussian amplitude and phase noise, each of relative amplitude \(\epsilon\ll 1\), yielding noisy submatrices \(U''\).
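A minimal stdlib-Python sketch of this setup (the names `haar_unitary` and `noisy_submatrix` are illustrative inventions, not from any library; Gram-Schmidt on a complex Gaussian, i.e. Ginibre, matrix yields a Haar-random unitary, and for simplicity the submatrix here is taken top-left rather than from randomly heralded modes):

```python
import cmath
import random

def haar_unitary(m, rng):
    """Draw an m x m Haar-random unitary: Gram-Schmidt (i.e. QR)
    orthonormalization of a complex Gaussian (Ginibre) matrix."""
    Z = [[complex(rng.gauss(0, 1), rng.gauss(0, 1)) for _ in range(m)]
         for _ in range(m)]
    U = []
    for i in range(m):
        v = list(Z[i])
        for u in U:
            # subtract the projection of v onto the earlier row u
            ip = sum(a.conjugate() * b for a, b in zip(u, v))
            v = [b - ip * a for a, b in zip(u, v)]
        norm = sum(abs(x) ** 2 for x in v) ** 0.5
        U.append([x / norm for x in v])
    return U

def noisy_submatrix(U, n, eps, rng):
    """Top-left n x n submatrix U', multiplicatively corrupted by Gaussian
    amplitude and phase noise of relative size eps, yielding U''."""
    return [[U[i][j] * (1.0 + eps * rng.gauss(0, 1))
                     * cmath.exp(1j * eps * rng.gauss(0, 1))
             for j in range(n)] for i in range(n)]

rng = random.Random(0)
U = haar_unitary(6, rng)                # m = 6 modes (tiny, for illustration)
Upp = noisy_submatrix(U, 3, 0.05, rng)  # n = 3, eps = 0.05
```

This is only a sketch of the problem statement, not of any particular experiment’s mode-selection rule.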

**Question**  If the noise-free submatrices \(U'\) are ranked by permanent \(|P(U')|^2\), can that permanent-rank be approximated by PTIME measures applied to the noisy submatrices \(U''\)?

**A physics solution**  Rank-order the (noise-free) \(U'\) by the rate at which its associated (noisy) \(U''\) generates entropy in its \(n\) output modes, for classical light of uniform phase and amplitude incident on its \(n\) input modes.

**Example**  For a typical case of \(m = 200\) total modes and \(n=15\) submatrix modes (i.e., 15 heralded-and-detected photons in a Scattershot BosonSampling experiment), with relative amplitude-and-phase error \(\epsilon=0.05\), the Spearman \(\rho\)-value associated to the (quantum) permanent-ordering versus the (classical) entropy-ordering is \(\rho_\text{Spearman}\simeq 0.14\) (sample data here, color-coded by entropy-rate rank).

**Remarks**  Computing the entropy rate requires \(\mathcal{O}(n^2)\) resources, whereas computing the determinant (for example) is \(\mathcal{O}(n^{2.37})\) (by Coppersmith-Winograd). This motivates the following (somewhat jocular) conjecture:

**The Permanent-of-Gaussians Sorting Conjecture**  Approximately sorting a noise-corrupted list of Gaussian matrices by permanent is computationally easier than exactly sorting noise-free matrices by determinant.

More seriously (and broadly), these optical intuitions lead us to appreciate that permanent-associated quantities may be in some respects intrinsically *easier* to compute (approximately) than determinant-associated quantities.

The physical intuition is “noise is our friend,” in the following sense. If the \(U'\) matrices had elements in a finite field, then the notion of a “nearby” noisy matrix \(U''\) would cease to be defined, in that finite-field elements have no natural ordering (AFAICT). Conversely, the observed robustness to noise of \(U'\)-elements in \(\mathbb{C}\) acts (somehow) to facilitate rank-ordering, and thus estimation, of \(\mathbb{C}\)-valued permanents.
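For readers who want to experiment, here is a minimal stdlib-Python sketch of two computational ingredients referred to above: an exact permanent via Ryser’s formula (exponential in \(n\), so only for small matrices) and the Spearman \(\rho\) rank-correlation statistic. The entropy-rate measure itself is not reproduced here.

```python
from itertools import combinations

def permanent(A):
    """Exact permanent via Ryser's formula, O(2^n) subsets.
    Works for real or complex entries; for |Per|^2 use abs(...)**2."""
    n = len(A)
    total = 0.0
    for k in range(1, n + 1):
        sign = (-1) ** k
        for cols in combinations(range(n), k):
            prod = 1.0
            for row in A:
                prod *= sum(row[c] for c in cols)
            total += sign * prod
    return (-1) ** n * total

def spearman_rho(x, y):
    """Spearman rank-correlation coefficient (assumes no tied values)."""
    def ranks(v):
        order = sorted(range(len(v)), key=lambda i: v[i])
        r = [0] * len(v)
        for rank, i in enumerate(order):
            r[i] = rank
        return r
    rx, ry = ranks(x), ranks(y)
    n = len(x)
    d2 = sum((a - b) ** 2 for a, b in zip(rx, ry))
    return 1.0 - 6.0 * d2 / (n * (n * n - 1))

# example: permanent([[1, 2], [3, 4]]) -> 10.0  (= 1*4 + 2*3)
# to test a PTIME proxy, compute spearman_rho between the |Per|^2
# ordering and the proxy's ordering over a batch of random matrices
```

Ryser’s formula is the standard \(\mathcal{O}(2^n n)\)-ish exact method; anything materially faster for general matrices is not known.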

**Open questions** (intended for StackExchange later in the week)  What PTIME measures perform better than the entropy rate in rank-ordering by \(|\,P\,|^2\) the submatrices that are associated to Scattershot BosonSampling? What is the upper bound on the Spearman \(\rho\)-value that is achievable in PTIME for approximate rank-ordering of (possibly noisy) matrices?

Comment #73 December 1st, 2013 at 2:32 am

I might not be able to find a proof of the impossibility of classical sampling (using some nontrivial approach), for example for permanents with negative entries. In such a case, noise may prevent us from calculating the permanent effectively, but sampling may be possible anyway.

Comment #74 December 2nd, 2013 at 9:19 am

Trapdoor BosonSampling: Twenty Years After Scattershot BosonSampling

(a theorems-from-physics fantasy)

**Alice** (an experimentalist): We quantum experimentalists have good news! After decades of work, scalable methods for Scattershot BosonSampling have been conceived, and experimental datasets for \(n=20\) heralded photons scattering into \(m=200\) output modes are now available.

**Carl** (a mathematician): That’s terrific news, Alice, because after decades of work, we mathematicians have recently proved the Aaronson/Arkhipov *Permanent of Gaussians Conjecture* in full generality and rigor, and so it is now called the *Permanent of Gaussians Theorem*.

**Bob** (a simulationist): We quantum simulationists have good news too! After decades of work, PTIME methods for *simulating* Scattershot BosonSampling have been conceived, and simulated datasets for \(n=20\) heralded photons scattering into \(m=200\) output modes are now available.

**Carl**: That’s terrific news, Bob, because after decades of work, we mathematicians have recently proven, in full generality and rigor, that no statistical test can distinguish Bob-style PTIME-simulated data from Alice-style experimental data.

**Alice and Bob** (in unison): What!? How can that be?

**Carl**: The answer is “forehead-slappingly obvious in retrospect” (in Scott Aaronson’s amusing phrase). Bob’s PTIME simulation algorithm samples quantum trajectories, and each trajectory is associated to a data record. Crucially, the map \(\text{(trajectory)}\to\text{(data record)}\) is a trapdoor function (that is, not invertible with PTIME resources); thus Bob’s trapdoor BosonSampling simulation algorithm *cannot* be adapted to computationally verify that Alice’s experimental data records are associated to large permanents.

**An Unanswered Math-and-Physics Question**  Does Nature generate Alice’s data records using Bob’s PTIME algorithms?

**Alice and Bob and Carl** (in operatic chorus): Scalable BosonSampling experiments *and* a rigorously proven Permanent of Gaussians Theorem *and* efficient large-scale quantum simulation algorithms *and* deep open questions: surely the 21st century is gonna be *terrific*!

Comment #75 December 3rd, 2013 at 9:05 am

To assist computational experiments relating to BosonSampling experiments, permanent estimation and rank-ordering, the Permanent of Gaussians Conjecture (etc.), the good citizens of the Mathematica StackExchange have provided various code optimizations relating to

“Can (compiled) matrix permanent evaluation be further sped up?”

that on a modest laptop allow 10×10 matrix permanents to be evaluated in less than a millisecond, and 20×20 permanents to be evaluated in less than one second.

Comment #76 December 3rd, 2013 at 9:39 am

Michael Gogins #71:

Why is the extended Church-Turing thesis defined by you as “the thesis that all natural processes can be simulated with polynomial overhead by a classical computer”, as opposed to “the thesis that all natural processes can be simulated with polynomial overhead by a BUILDABLE computer” (which could be a quantum computer)?

Sorry for the delay! It’s a very good question, whose boring answer is that this is the usual terminology I’ve seen. We need *some* name for the (probably false) thesis that all natural processes can be efficiently simulated by a classical computer, and ECT is the name that many papers have used. Certainly it has some historical basis, in that it *is* what many computer scientists and others assumed or believed for decades (usually without even explicitly *formulating* the assumption), and it’s what some still believe.

I should point out that David Deutsch has used the name “Church-Turing principle” (or, according to Wikipedia, the “Church-Turing-Deutsch principle” 🙂 ) to refer to the weaker thesis you mention: that all natural processes can be efficiently simulated by *some* buildable universal device.

Comment #77 December 3rd, 2013 at 9:46 am

Rahul #69:

Didn’t anyone just look to see if the videographer was around?

If a videographer *had* been there, he or she would’ve been in a glass alcove overlooking the lecture room, where we wouldn’t have easily seen them. But yes, we should’ve checked. As it was, we were completely preoccupied with *other* technical issues, like finding a VGA/HDMI converter for Eliezer’s laptop.

Comment #78 December 3rd, 2013 at 2:44 pm

Combining Scott’s two sensible statements, and juxtaposing them with the sustained Moore’s-Law progress in multi-scale dynamical simulation capability, helps us to appreciate that the notoriously tricky concepts of “probably,” “natural,” “efficiently,” and “impossibility” are associated to richly interacting mathematical nuances that have *already* provided ample room for classical simulation of large-scale quantum dynamics to become (as the Nobel Committee calls it) “our Virgil in the world of atoms.”

**Conclusion**  Our current ignorance (in Scott’s phrase) obscures the practical limits to continued improvements in classical Virgil’s quantum guidance, and so it is healthy for the 21st-century STEM enterprise that opinions regarding these unknown limits vary widely.

Comment #79 December 4th, 2013 at 9:02 am

Scott #76

May we summarize as follows?

*Church-Turing thesis: any computation can be carried out by a Turing machine/lambda calculus. [equivalent formulations proposed by Church and Turing around 1935; final name by Kleene 1952]

*Church-Turing principle: every physical process can be simulated by a universal computing device. [proposition and name due to Deutsch, 1985; sometimes referred to as the Church-Turing-Deutsch principle; equivalent to Michael Gogins #71’s formulation]

*Deutsch’s actual 1985 thesis: every physical process can be simulated by a quantum universal computing device. [sometimes also referred to as the Church-Turing-Deutsch principle]

*Extended Church-Turing thesis: every physical process can be simulated by a classical universal computing device. [unclear origin for the name; unclear if anyone actually proposed it; sometimes confused with the CT thesis]

Comment #80 December 4th, 2013 at 9:45 am

Jay #79: Looks OK, except that in your statement of the ECT, you forgot to say, “…can be simulated *with polynomial overhead*.” (Otherwise you just get a restatement of the ordinary CT.)

Also, I think Ian Parberry and some others explicitly formulated the ECT, and it was certainly at least “implicit” in most complexity theory textbooks. And if nothing else, Leonid Levin and Oded Goldreich staunchly defend the ECT today. 🙂

Comment #81 December 4th, 2013 at 11:44 am

Three references in regard to evolving definitions of the “ECT”:

• Dershowitz and Falkovich, “A formalization and proof of the Extended Church-Turing Thesis” (arXiv:1207.7148 [cs.LO]) provide in-depth references that include Ian Parberry’s 1986 work (as mentioned in Scott #80).

Regrettably, Dershowitz and Falkovich overlook what is (AFAICT) a key early reference to the (seemingly equivalent?) notion of the *polynomial-time Church-Turing Thesis*, which is set forth plainly and explicitly in

• Aaronson, “Multilinear formulas and skepticism of quantum computing” (arXiv:quant-ph/0311039v4)

And finally, the practical math-and-physics nuances that are associated to PTIME/trapdoor quantum simulations (per comment #74) are regrettably not accounted for in recent references that include:

• Shchesnovich, “A sufficient fidelity bound for scalability of the boson sampler computer” (arXiv:1311.6796 [quant-ph])

In particular, conclusions like Shchesnovich’s implicitly embrace a trapdoor-excluding definition of *simulation* that regrettably (as it seems to me) bears little relation to the eminently practical, trapdoor-exploiting (and generically thermodynamical) PTIME realities of real-world large-scale quantum dynamical simulation.

**Conclusion**  The plausibility of the ECT hinges substantially upon definitional nuances that are associated to the inclusion-versus-exclusion of trapdoor PTIME algorithms as “simulations” of quantum dynamical systems.

Comment #82 December 4th, 2013 at 4:58 pm

Yep, I forgot “efficiently” from definitions 2-4. Thanks to you and John Sidles for the refs.

However, Parberry et al. 1986 hardly defined the ECT. They instead cite a 1983 textbook by Leslie Goldschlager and use a different name:

“the sequential computation thesis [10] states that time on all “reasonable” sequential machine models is related by a polynomial.”

So it’s the same idea, but at least the name does not come from this path.

Digging further, I couldn’t find older references to this “sequential computation thesis,” but it may have followed as a consequence of the “parallel computation thesis,” which was likely defined by Chandra and Stockmeyer 1976.

A.K. Chandra, L.J. Stockmeyer, “Alternation”, Proc. 17th FOCS, Oct. 1976 (98-108). [cited in Goldschlager, STOC 1978]

Comment #83 December 4th, 2013 at 10:05 pm

John Sidles, I had a look at your comment #74, but it does not seem a consistent story, or maybe I just don’t get your idea. In short, you’re considering a fantasy in which:

1) Alice constructs a BosonSampler using physical resources

2) Carl fills some minor holes in the demonstration that it is hard to emulate BosonSampling unless P^#P=BPP^NP

3) Bob finds a polynomial time algorithm for BosonSampling

So far, no problem. Fantasy routinely presents faster-than-light travel; by this standard it’s definitely OK to consider an incomplete collapse of the polynomial hierarchy.

4) Carl demonstrates that we cannot distinguish Bob’s algorithm from Alice’s physical BosonSampler

Here I don’t get your point. To me statement 4) is the same as statement 3) plus statement 1). Why should anyone be surprised?

Comment #84 December 5th, 2013 at 4:43 am

Indeed, the point is that no one should be surprised at Bob’s simulational success, because perfect PTIME/trapdoor simulation of Scattershot BosonSampling experiments would *not* violate any standard assumption of complexity theory (as appreciated by me, at any rate).

Close reading of the Aaronson/Arkhipov article “The computational complexity of linear optics” (2011) is strongly recommended, and all definitions in the following discussion (including the ECT in particular) are as given in that well-reasoned and wonderful (as it seems to me) article.

In particular:

• Alice’s scalable BosonSampling experimental capability would *not* disprove the Extended Church-Turing thesis (in its Aaronson/Arkhipov form), because …

• Bob’s PTIME/perfect simulation capability of Alice’s BosonSampling experiment would *not* disprove the Aaronson/Arkhipov *Permanent of Gaussians Conjecture*, and therefore …

• Bob’s efficient/perfect BosonSampling simulation capability would *not* imply the collapse of any portion of the polynomial-time hierarchy.

A concise overview of the mathematical *reasoning* associated to the above conclusions (reasoning that is “forehead-slappingly obvious in retrospect,” in Scott’s phrase) is generated by substituting “Shor Factoring” for “BosonSampling”:

**Summary**  For quantum simulationists, the key difference between quantum factoring and Scattershot BosonSampling is that AKS-based PTIME/trapdoor sampling algorithms exist for factoring, whereas for Scattershot BosonSampling the analogous PTIME/trapdoor sampling algorithms are not *yet* known to exist.

Fortunately (even excitingly!), neither complexity theory nor the supposed falsity of the Extended Church-Turing Thesis provides any reason to foresee that efficient Scattershot BosonSampling algorithms *don’t* exist. Discovering algorithms in this PTIME/trapdoor simulation class is (as it seems to me) a wonderful open problem, having innumerable practical applications, that in the memorable phrase of Scott’s original post “is a simple, obvious-in-retrospect idea that has been under our noses for years.”

**Conclusion**  Carl-the-optimistic-mathematician’s favorite joke has the joyous and justly celebrated punch-line: “You know, you’re absolutely right!”

Comment #85 December 5th, 2013 at 7:27 am

“simulation of Scattershot BosonSampling experiments would not violate any standard assumption of complexity theory (as appreciated by me at any rate).”

Do you disagree that this would collapse the polynomial hierarchy to the third level, or do you disagree that this is a standard assumption?

Comment #86 December 5th, 2013 at 7:54 am

Sorry, I tend to stop at the first bullet, so I didn’t read “…BosonSampling simulation capability would not imply the collapse of any portion of the polynomial-time hierarchy,” which answers my last question.

So, you have a story in which Carl demonstrates one thing and the opposite is true. Yes, that’s surprising; you’re absolutely right.

Comment #87 December 5th, 2013 at 8:21 am

Jay, your second response is correct (as it seems to me), and here are some of the details that your first response requested.

—————-

Aaronson/Arkhipov “Computational complexity of linear optics” begins with (what amounts to) the clearest succinct definition of the Extended Church-Turing Thesis (ECT) (that is known to me):

There follow 110 pages of analysis that work through the multitudes of nuances, subtleties, complexities, and implications that are associated to this statement.

With comparable brevity, the following is asserted:

Here the Permanent of Gaussians Conjecture is defined per Aaronson/Arkhipov section 1.2.3, Problem 1.4 and Conjecture 1.5.

This argument *for* the ECT presumes the same technical assumptions as the Aaronson/Arkhipov article; in particular, the polynomial hierarchy is respected. More subtly yet realistically, the classical simulation must have access to a Kolmogorov-incompressible random-number generator (so that distinguishing simulated data from experimental data cannot be accomplished by guessing the initial state of the random-number generator). Reconciling PTIME/perfect simulation with the Permanent of Gaussians Conjecture is achieved via what might be called the *trapdoor loophole*: namely, the simulated data do *not* provide a PTIME estimator for any Per(X), where “estimator” has the concrete definition of Problem 1.4, and in particular the matrix X is given as *input*.

**Conclusion**  According to our present understanding, the standard complexity-theoretic postulates do not obstruct the fondest hopes of *both* BosonSamplers *and* quantum simulationists.

Comment #88 December 5th, 2013 at 10:01 am

“PH may indeed collapse, in which case BosonSampling’d say nothing about the ECT”.

Comment #89 December 5th, 2013 at 11:52 pm

A less controversial source of plausibility for the ECT (as it seems to me) is the richly-intertwining postulates and definitions that are associated to the key theorems of “Computational complexity of linear optics” (section 5 in particular); postulates and definitions that (as it seems to me) leave ample room to satisfy — simultaneously — the most ardent BosonSamplers and BosonSimulators.

**Example**  Even today, the most-used, most-efficient primality tests and linear-programming solvers are formally unreliable and/or require EXP resources, yet in practice these algorithms compute results reliably in PTIME.

Very plausibly, we’ll need much more *practical* experience than we now have, in simulating BosonSampling experiments and dealing concretely with tricky issues of verification and validation, before we can be confident that we’ve specified the best postulates and definitions. This generically enormous gap between worst-case and average-case algorithmic complexity (for fields of characteristic zero especially) leaves ample room for optimism that well-considered tuning of postulates and definitions can yield complexity-theoretic narratives in which “everyone is absolutely right.”

Comment #90 November 1st, 2015 at 10:11 pm

Scott,

Despite this post now being really old, I’m going to comment belatedly anyway, since it has started being cited in papers in the literature as the definitive reference on Scattershot Boson Sampling.

This is an aside, but I know you like for things to be correct and not misleading, even if they’re not the main show. In your post, you say, “there’s no known process in nature that can serve as a deterministic single-photon source”. If by “deterministic single-photon source”, you mean one that produces the Fock state |1> on-demand, with 100% fidelity, then no experimentalist should argue; achieving 100% is always going to be impossible.

However, so long as we use a more relaxed condition that we just want very high fidelity, I think it’s fair to argue that many systems offer a natural deterministic single-photon generation process, and it’s largely a matter of engineering for us to get close to 100% fidelity. For example, in optically-active quantum dots, deterministic generation of single photons (with very low probability of generating two photons in a single shot; <1%) is achievable with both pulsed electrical and optical excitation.

The big catch with using atoms or artificial atoms (like quantum dots) as single-photon sources is not that they don't deterministically generate single photons; it's that collecting the photon that is produced is tricky. For example, when an atom emits a photon, to first approximation the photon can and will be emitted in any direction. By putting an atom in a cavity, one can enhance the probability of emission into the cavity mode, and in this way have most of the photons that are emitted come out in only one direction, and be collected and collimated by a lens.

For example, this work http://www.nature.com/ncomms/journal/v4/n2/pdf/ncomms2434.pdf demonstrates a single-photon source with which one will have a single photon be collected by a lens in 80% of attempts at triggering it. In my understanding, the source is close to 100% deterministic, and there is just 80% collection efficiency because sometimes the emitted photon goes in the wrong direction (out the side, or into the back substrate of the device).

Given that such amazing sources exist, why doesn't everyone use them? Mostly it's a question of expense and practicality, not physics: these high-efficiency quantum-dot-based single-photon sources are currently still difficult to fabricate, and need to be operated at ~4 K. In contrast, SPDC just requires you to shine a laser at a crystal (that you can buy for ~$300) at room temperature, and this simplicity in operation has made the technique very popular; it's also very convenient that the source is heralded.