I recently came across papers that discuss using Boson Sampling to calculate Franck-Condon factors: https://arxiv.org/pdf/1412.8427.pdf

Does this really mean quantum supremacy in calculating these numbers? That seems strange, because it would mean a quantum computer is calculating permanents efficiently.

“If our BosonSampling matrix is Haar-random—or otherwise not too skewed to produce outcomes with huge probabilities—then it’s not hard to see that we can do approximate BosonSampling in O(n·2^n) time classically, by using rejection sampling.”
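For reference, here is the generic rejection-sampling scheme the quote presumably alludes to: propose outcomes from an easy distribution (uniform, say) and accept each proposal with probability proportional to its true probability. The function names and the toy distribution below are my own illustration, not anything from the paper; in the BosonSampling setting, evaluating `p_eval(x)` would mean computing a |permanent|², which is where the exponential cost enters.

```python
import random

def rejection_sample(p_eval, outcomes, M):
    """Draw one exact sample from p via a uniform proposal.

    Requires p_eval(x) <= M / len(outcomes) for every outcome x, so that
    the acceptance probability N * p(x) / M is at most 1. The expected
    number of proposals (hence calls to p_eval) is M, which is why a
    bound on the largest outcome probability matters.
    """
    N = len(outcomes)
    while True:
        x = random.choice(outcomes)            # propose uniformly
        if random.random() < N * p_eval(x) / M:
            return x                           # accept

# Toy check on a skewed 3-outcome distribution.
p = {0: 0.5, 1: 0.3, 2: 0.2}
M = 3 * max(p.values())                        # valid bound: N * max p(x)
random.seed(0)
counts = {k: 0 for k in p}
for _ in range(20000):
    counts[rejection_sample(p.get, list(p), M)] += 1
```

The empirical frequencies converge to p exactly (not approximately) because each accepted sample is distributed according to p; the cost of the method lives entirely in the number of rejected proposals and the cost of each `p_eval` call.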

I may not understand what you mean, but are you sure this is correct? A conjecture-free proof of this claim would be very interesting.

If you actually meant the result averaged over Haar-random matrices, then that isn’t very interesting, so I assume you really mean that, given a particular matrix drawn from the Haar distribution, we can (with high probability) perform approximate sampling in O(n·2^n) time using rejection sampling.
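For context on where the 2^n in that running time presumably comes from: the fastest known exact classical algorithm for the permanent is Ryser’s formula, a sum over all column subsets. A minimal sketch (this direct subset enumeration costs O(n²·2^n); the commonly quoted O(n·2^n) requires Gray-code updates between consecutive subsets):

```python
from itertools import combinations

def permanent_ryser(A):
    """Permanent of an n x n matrix via Ryser's inclusion-exclusion formula:
    perm(A) = (-1)^n * sum over nonempty column subsets S of
              (-1)^|S| * prod_i (sum_{j in S} A[i][j]).
    """
    n = len(A)
    total = 0.0
    for r in range(1, n + 1):
        for S in combinations(range(n), r):
            prod = 1.0
            for i in range(n):
                prod *= sum(A[i][j] for j in S)
            total += (-1) ** r * prod
    return (-1) ** n * total
```

Sanity checks: for [[1, 2], [3, 4]] the permanent is 1·4 + 2·3 = 10, and for the all-ones n×n matrix it is n!.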

Here is why I currently don’t believe it however.

First, and trivially, you need the approximation factor somewhere in the complexity, so there is a poly(1/eps) term missing in any case.

Second, to do this provably, wouldn’t we need bounds on the tail of the probability density function induced by a Haar-random matrix, bounds which we don’t currently have? As far as I know, we don’t even have good conjecture-free bounds for the mean! Not having good bounds for the mean also rules out a Markov-inequality-type approach.

Third, even if all this could be worked out, I am suspicious about the n in the n·2^n term. Do you mean poly(n)?

Or does your claim require unproven conjectures?

If adding one qubit introduces more problems than it solves, that doesn’t bode well for when we’ll be dealing with hundreds/thousands of them at once! 😛