Despite this post now being really old, I’m going to comment belatedly anyway, since it has started being cited in papers in the literature as the definitive reference on Scattershot Boson Sampling.

This is an aside, but I know you like for things to be correct and not misleading, even when they’re not the main show. In your post, you say, “there’s no known process in nature that can serve as a deterministic single-photon source”. If by “deterministic single-photon source” you mean one that produces the Fock state |1> on demand with 100% fidelity, then no experimentalist should argue; achieving 100% is always going to be impossible. However, under the more relaxed condition that we just want very high fidelity, I think it’s fair to argue that many systems offer a natural deterministic single-photon generation process, and it’s largely a matter of engineering to get close to 100% fidelity. For example, in optically active quantum dots, deterministic generation of single photons (with <1% probability of generating two photons in a single shot) is achievable with both pulsed electrical and pulsed optical excitation.

The big catch with using atoms or artificial atoms (like quantum dots) as single-photon sources is not that they fail to deterministically generate single photons; it’s that collecting the photon that is produced is tricky. When an atom emits a photon, to first approximation the photon can and will be emitted in any direction. By putting the atom in a cavity, one can enhance the probability of emission into the cavity mode, so that most of the emitted photons come out in only one direction and can be collected and collimated by a lens. For example, this work http://www.nature.com/ncomms/journal/v4/n2/pdf/ncomms2434.pdf demonstrates a single-photon source for which a single photon is collected by a lens in 80% of triggering attempts. In my understanding, the source itself is close to 100% deterministic, and the 80% figure reflects collection efficiency: sometimes the emitted photon goes in the wrong direction (out the side, or into the back substrate of the device).
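As a back-of-the-envelope aside (my numbers, not from the comment above): even with a source this good, the probability that an n-photon experiment collects *all* n photons in a given shot falls off geometrically with the per-source collection efficiency, which is why the difference between 80% and ~100% matters so much for scaling:

```python
# Per-shot probability that all n photons are collected, assuming
# independent sources each with collection efficiency eta = 0.8
# (the figure quoted above).
eta = 0.8
for n in (1, 5, 10, 20, 30):
    print(f"n = {n:2d}: all-photon collection probability = {eta**n:.4f}")
```

By this estimate the all-photon collection probability drops below 1% once n exceeds about 20, so per-source losses compound quickly.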

Given that such amazing sources exist, why doesn't everyone use them? Mostly it's a question of expense and practicality, not physics: these high-efficiency quantum-dot-based single-photon sources are currently still difficult to fabricate, and need to be operated at ~4 K. In contrast, SPDC just requires you to shine a laser at a crystal (that you can buy for ~$300) at room temperature, and this simplicity in operation has made the technique very popular; it's also very convenient that the source is heralded.

**Jay** remarks “PH may indeed collapse, in which case BosonSampling’d say nothing about the ECT.”

A less controversial source of plausibility for the ECT (as it seems to me) is the richly intertwining postulates and definitions that are associated with the key theorems of “Computational complexity of linear optics” (section 5 in particular); postulates and definitions that (as it seems to me) leave ample room to satisfy — simultaneously — the most ardent BosonSamplers and BosonSimulators.

**Example** Even today, the most-used and most-efficient primality tests and linear-programming solvers are formally unreliable (randomized, with nonzero error probability) and/or have exponential worst-case running time, yet in practice these algorithms compute results reliably in PTIME.
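As a concrete illustration of the primality-test half of this example, here is a minimal Miller-Rabin sketch (my illustration): each round can wrongly accept a composite with probability below 1/4, so the test is formally unreliable, yet a handful of rounds makes it reliable for all practical purposes:

```python
import random

def is_probable_prime(n, rounds=20):
    """Miller-Rabin: a randomized PTIME test. Each round can err
    (accept a composite) with probability < 1/4, so 20 rounds leave
    error probability < 4**-20 -- formally unreliable, reliable in
    practice."""
    if n < 2:
        return False
    for p in (2, 3, 5, 7, 11, 13):
        if n % p == 0:
            return n == p
    # write n - 1 = d * 2**s with d odd
    d, s = n - 1, 0
    while d % 2 == 0:
        d //= 2
        s += 1
    for _ in range(rounds):
        a = random.randrange(2, n - 1)
        x = pow(a, d, n)
        if x in (1, n - 1):
            continue
        for _ in range(s - 1):
            x = pow(x, 2, n)
            if x == n - 1:
                break
        else:
            return False  # a is a witness that n is composite
    return True
```

Note that even Carmichael numbers such as 561, which fool the simpler Fermat test, are rejected here.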

Very plausibly, we’ll need much more *practical* experience than we now have, in simulating BosonSampling experiments and dealing concretely with tricky issues of verification and validation, before we can be confident that we’ve specified the best postulates and definitions.

This generically enormous gap between worst-case and average-case algorithmic complexity (for fields of characteristic zero especially) leaves ample room for optimism that well-considered tuning of postulates and definitions can yield complexity-theoretic narratives in which “Everyone is absolutely right.”

—————-

Aaronson/Arkhipov’s “Computational complexity of linear optics” begins with (what amounts to) the clearest succinct statement of the Extended Church-Turing Thesis (ECT) known to me:

**Argument against the ECT** (per Aaronson/Arkhipov 2011): “Predicting the results of a given quantum-mechanical experiment, to finite accuracy, cannot be done by a classical computer in probabilistic polynomial time, unless factoring integers can as well.”

There follow 110 pages of analysis working through the multitude of nuances, subtleties, complexities, and implications associated with this statement.

With comparable brevity, the following is asserted:

**Argument for the ECT**: “Simulating the results of a quantum-mechanical Scattershot BosonSampling experiment, by a classical computer in probabilistic polynomial time, such that the simulated data are indistinguishable by any statistical test from experimental data, would not disprove the Permanent of Gaussians Conjecture.”

Here the Permanent of Gaussians Conjecture is defined per Aaronson/Arkhipov section 1.2.3, Problem 1.4 and Conjecture 1.5.
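For concreteness (my illustration, not from the article): Per(X) is the matrix permanent, whose exact computation is #P-hard in general. Even the best known exact algorithms, such as Ryser’s formula sketched below, take exponential time:

```python
from itertools import combinations

def permanent(A):
    """Ryser's formula: exact Per(A) in O(2**n * n**2) time --
    exponential, consistent with the permanent being #P-hard to
    compute exactly in general."""
    n = len(A)
    total = 0
    # sum over all nonempty column subsets S, with sign (-1)**|S|
    for k in range(1, n + 1):
        for cols in combinations(range(n), k):
            prod = 1
            for row in A:
                prod *= sum(row[j] for j in cols)
            total += (-1) ** k * prod
    return (-1) ** n * total

# e.g. Per([[1,2],[3,4]]) = 1*4 + 2*3 = 10
```

The Permanent of Gaussians Conjecture concerns something weaker still: merely *estimating* Per(X) for Gaussian random X, per Problem 1.4.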

This argument *for* the ECT presumes the same technical assumptions as the Aaronson/Arkhipov article; in particular, that the polynomial hierarchy does not collapse. More subtly yet realistically, the classical simulation must have access to a Kolmogorov-incompressible random-number generator (so that distinguishing simulated data from experimental data cannot be accomplished by guessing the initial state of the random-number generator). Reconciling PTIME/perfect simulation with the Permanent of Gaussians Conjecture is achieved via what might be called the *trapdoor* loophole: namely, the simulated data do *not* provide a PTIME estimator for any Per(X), where “estimator” has the concrete definition of Problem 1.4, and in particular the matrix X is given as an *input*.
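The role of the incompressible random-number generator can be illustrated with a toy sketch (`simulate` below is a stand-in for any fixed sampling procedure, not an actual BosonSampling simulator): if the simulator’s randomness comes from a guessable seed, an adversary can identify simulated data exactly by replaying candidate seeds:

```python
import random

def simulate(seed, shots=32):
    # Stand-in for a PTIME simulator: any deterministic sampling
    # procedure driven by a pseudo-random seed.
    rng = random.Random(seed)
    return [rng.randrange(8) for _ in range(shots)]

def guess_seed(data, seed_space=10_000):
    # Adversary replays every candidate seed. A compressible
    # (guessable) seed makes simulated data trivially
    # distinguishable from genuinely random experimental data.
    for s in range(seed_space):
        if simulate(s) == data:
            return s
    return None

data = simulate(1234)
assert guess_seed(data) == 1234  # the simulation is unmasked
```

With a Kolmogorov-incompressible random source, no such replay attack exists, which is exactly the condition imposed above.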

**Conclusion** According to our present understanding, the standard complexity-theoretic postulates do not obstruct the fondest hopes of *both* BosonSamplers *and* quantum simulationists.

So, you have a story in which Carl demonstrates one thing and the opposite is true. Yes, that’s surprising; you’re absolutely right.

Do you disagree that this would collapse the polynomial hierarchy at the third level, or do you disagree that this is a standard assumption?

**Jay** asks (#83) “Why should anyone be surprised?” [at Bob’s perfect PTIME/trapdoor simulation of Alice’s scalable Scattershot BosonSampling experiment, per **comment #74**]

Indeed the point is that no-one should be surprised at Bob’s simulational success, because perfect PTIME/trapdoor simulation of Scattershot BosonSampling experiments would *not* violate any standard assumption of complexity theory (as appreciated by me at any rate).

Close reading of the Aaronson/Arkhipov article “The computational complexity of linear optics” (2011) is strongly recommended, and all definitions in the following discussion — including the ECT in particular — are as given in that well-reasoned and wonderful (as it seems to me) article.

In particular:

• Alice’s scalable BosonSampling experimental capability would *not* disprove the Extended Church-Turing thesis (in its Aaronson/Arkhipov form) because …

• Bob’s PTIME/perfect BosonSampling simulation capability of Alice’s experiment would *not* disprove the Aaronson/Arkhipov *Permanent of Gaussians Conjecture* and therefore …

• Bob’s efficient/perfect BosonSampling simulation capability would *not* imply the collapse of any portion of the polynomial-time hierarchy.

A concise overview of the mathematical *reasoning* associated to the above conclusions — reasoning that is “forehead-slappingly obvious in retrospect” (in Scott’s phrase) — is generated by substituting “Shor Factoring” for “BosonSampling”:

**Alice** (an experimentalist) asserts: “I can scalably exhibit ‘Scattershot Shor Factoring’ of randomly selected integers.”

**Bob** (a simulationist) asserts: “I can scalably simulate ‘PTIME/Scattershot Shor Factoring’ of pseudo-randomly selected integers (via the AKS primality test), and no statistical test can reveal that my simulations use this trick to evade the hardness-of-factoring obstruction.”

**Carl** (a mathematician): “This is *terrific* news on four fronts: (1) Alice’s Scattershot Factoring experiment is scalable, and

(2) Bob’s simulated-factoring datasets are indistinguishable from Alice’s experimental datasets, and

(3) all of the standard hardness assumptions relating to classical and quantum factoring are respected, and

(4) the Aaronson/Arkhipov version of the Extended Church-Turing Thesis is upheld.
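Bob’s trapdoor trick can be sketched in a few lines (a toy sketch: trial division stands in for a real primality test, and the integers are tiny): rather than factoring a random N, sample the prime factors first and publish their product, so the “factoring instance” arrives with its factorization already known:

```python
import random

def is_prime(n):
    # Trial division -- fine for the small primes used here;
    # AKS or Miller-Rabin would stand in at realistic sizes.
    if n < 2:
        return False
    i = 2
    while i * i <= n:
        if n % i == 0:
            return False
        i += 1
    return True

def trapdoor_factoring_sample(bits=16, rng=random):
    """Bob's trick: sample the *factors* first, then publish their
    product. The instance (N, p, q) is produced in polynomial time
    with no factoring ever performed."""
    def rand_prime():
        while True:
            p = rng.randrange(2 ** (bits - 1), 2 ** bits) | 1
            if is_prime(p):
                return p
    p, q = rand_prime(), rand_prime()
    return p * q, (p, q)

N, (p, q) = trapdoor_factoring_sample()
assert p * q == N  # factorization known by construction
```

No statistical test on the stream of (N, p, q) triples reveals that the factors were chosen before the product, which is the sense in which the hardness-of-factoring obstruction is evaded rather than overcome.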

**Summary** For quantum simulationists, the key difference between quantum factoring and Scattershot BosonSampling is that AKS-based PTIME/trapdoor sampling algorithms exist for factoring, whereas for Scattershot BosonSampling the analogous PTIME/trapdoor sampling algorithms are not yet known to exist.

Fortunately (even excitingly!) neither complexity theory nor the supposed falsity of the Extended Church-Turing Thesis provides any reason to foresee that efficient Scattershot BosonSampling simulation algorithms *don’t* exist.

Discovering algorithms in this PTIME/trapdoor simulation class is (as it seems to me) a wonderful open problem, having innumerable practical applications, that in the memorable phrase of Scott’s original post “is a simple, obvious-in-retrospect idea that has been under our noses for years.”

**Conclusion** Carl-the-optimistic-mathematician’s favorite joke has the joyous and justly celebrated punch-line **“You know, you’re absolutely right!”**

1) Alice constructs a BosonSampler using physical resources

2) Carl fills some minor holes in the demonstration that it is hard to emulate BosonSampling unless P^#P=BPP^NP

3) Bob finds a polynomial time algorithm for BosonSampling

So far no problem. Fantasy fiction routinely presents faster-than-light travel; by this standard it’s definitely OK to consider an incomplete collapse of the Polynomial Hierarchy.

4) Carl demonstrates that we cannot distinguish Bob’s algorithm from Alice’s physical BosonSampler

Here I don’t get your point. To me, statement 4) is just statement 3) plus statement 1). Why should anyone be surprised?

However, Parberry et al. 1986 hardly defined the ECT. They instead cite a 1983 textbook by Leslie Goldschlager and use a different name:

“the sequential computation thesis [10] states that time on all “reasonable” sequential machine models is related by a polynomial.”

So it’s the same idea, but at least the name does not come from this path.

Digging further, I couldn’t find older references to this “sequential computation thesis”, but it may have followed as a consequence of the “parallel computation thesis”, which was likely introduced by Chandra and Stockmeyer in 1976.

A.K. Chandra, L.J. Stockmeyer, “Alternation”, Proc. 17th FOCS, Oct. 1976 (98-108). [cited in Goldschlager, STOC 1978]
