## This Week’s BS

There are two pieces of BosonSampling-related news that people have asked me about this week.

First, a group in Shanghai, led by Chaoyang Lu and Jianwei Pan, has reported in *Nature Photonics* that they can do BosonSampling with a coincidence rate that’s higher than in previous experiments by a factor of several thousand. This, in particular, lets them do BosonSampling with 5 photons. Now, 5 might not sound like that much, especially since the group in Bristol previously did 6-photon BosonSampling. But to make their experiment work, the Bristol group needed to start its photons in the initial state |3,3〉: that is, two modes with 3 photons each. This gives rise to matrices with repeated rows, whose permanents are much easier to calculate than the permanents of arbitrary matrices. By contrast, the Shanghai group starts its photons in the “true BosonSampling initial state” |1,1,1,1,1〉: that is, five modes with 1 photon each. That’s the kind of initial state we ultimately want.
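One way to see why repeated rows make the permanent easy (a toy sketch, not anything from the paper; the function name and setup are mine): if an n×n matrix has only two distinct rows a and b, with a repeated k times, then Perm = k!(n−k)! times the coefficient of t^k in the product over j of (a_j t + b_j), which a short polynomial multiplication computes in O(n^2) time.

```python
from math import factorial

def perm_two_distinct_rows(a, b, k):
    """Permanent of the n x n matrix whose first k rows all equal a and
    whose remaining n-k rows all equal b, in O(n^2) time.

    Grouping permutations by which k columns the copies of row a land on gives
    Perm = k! * (n-k)! * [t^k] prod_j (a[j]*t + b[j]).
    """
    n = len(a)
    poly = [1]  # coefficients of the running product, lowest degree first
    for j in range(n):
        nxt = [0] * (len(poly) + 1)
        for i, c in enumerate(poly):
            nxt[i] += c * b[j]      # pick the b[j] term: degree unchanged
            nxt[i + 1] += c * a[j]  # pick the a[j]*t term: degree goes up by one
        poly = nxt
    return factorial(k) * factorial(n - k) * poly[k]
```

For a Bristol-style |3,3〉 input the relevant matrices have this kind of repetition (up to transposition, since Perm(A) = Perm(Aᵀ)), whereas a generic matrix arising from a |1,1,1,1,1〉 experiment admits no such shortcut.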

The second piece of news is that on Monday, a group at Bristol—overlapping with the group we mentioned before—submitted a preprint to the arXiv with the provocative title “No imminent quantum supremacy by boson sampling.” In this paper, they give numerical evidence that BosonSampling, with n photons and m modes, can be approximately simulated by a classical computer in “merely” about n2^{n} time (that is, the time needed to calculate a single n×n permanent), as opposed to the roughly m^{n} time that one would need if one had to calculate permanents corresponding to *all* the possible outcomes of the experiment. As a consequence of that, they argue that achieving quantum supremacy via BosonSampling would probably require at least ~50 photons—which would in turn require a “step change” in technology, as they put it.

I completely agree with the Bristol group’s view of the asymptotics. In fact, Alex Arkhipov and I ourselves repeatedly told experimentalists, in our papers and talks about BosonSampling (the question came up often…), that the classical complexity of the problem should only be taken to scale like 2^{n}, rather than like m^{n}. Despite not having a general proof that the problem could actually be solved in ~2^{n} time in the worst case, we said that for two main reasons:

- Even under the most optimistic assumptions, our hardness reductions, from Gaussian permanent estimation and so forth, only yielded ~2^{n} hardness, not ~m^{n} hardness. (Hardness reductions giving us important clues about the real world? Whuda thunk??)

- *If* our BosonSampling matrix is Haar-random—or otherwise not too skewed to produce outcomes with huge probabilities—then it’s not hard to see that we can do approximate BosonSampling in O(n2^{n}) time classically, by using rejection sampling.
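The ~2^{n} scaling is just the cost of one permanent. Here is a minimal sketch of Ryser’s inclusion-exclusion formula (Python chosen purely for illustration; a serious implementation would add Gray-code ordering to save another factor of n):

```python
from itertools import combinations

def permanent(a):
    """Permanent of an n x n matrix via Ryser's formula:
    Perm(A) = (-1)^n * sum over nonempty column subsets S of
              (-1)^{|S|} * prod_i sum_{j in S} a[i][j],
    i.e. ~2^n terms instead of the n! terms of the definition."""
    n = len(a)
    total = 0
    for r in range(1, n + 1):
        for cols in combinations(range(n), r):
            prod = 1
            for row in a:
                prod *= sum(row[j] for j in cols)
            total += (-1) ** r * prod
    return (-1) ** n * total
```

For n = 5 this is instant; for n = 50 it’s a job for a cluster; for n = 100 it’s hopeless, which is the whole point.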

Indeed, Alex and I insisted on these points despite some pushback from experimentalists, who were understandably hoping that they could get to quantum supremacy just by upping m, the number of modes, without needing to do anything heroic with n, the number of photons! So I’m happy to see that a more careful analysis supports the guess that Alex and I made.

On the other hand, what does this mean for the number of photons needed for “quantum supremacy”: is it 20? 30? 50? I confess that that sort of question interests me much less, since it all depends on the details of how you define the comparison (are we comparing against ENIAC? a laptop? a server farm? how many cores? etc etc). As I’ve often said, my real hope with quantum supremacy is to see a quantum advantage that’s so overwhelming—so duh-obvious to the naked eye—that we don’t have to squint or argue about the definitions.

Comment #1 May 5th, 2017 at 10:54 am

For calculating the permanent with BosonSampling, how would you convince me the answer is accurate when n = 50?

In the case of something like Shor’s algorithm, it’s easy to plug in the numbers. But it seems here I have to trust that the machine works for n ~ 10, and then trust the extrapolation to n = 50. You might give some special matrices for which I can calculate, but that also seems a little dodgy, since those matrices wouldn’t capture the difficult part of the space.

Comment #2 May 5th, 2017 at 12:06 pm

gerben #1: When n=50, you can still verify the results on your classical computer by just calculating some 50×50 permanents corresponding to the observed outputs (you just need a supercomputer or a large cluster, is all).
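A toy version of that verification, under the standard assumption that an ideal device with photons entering modes 0..n−1 of an m-mode interferometer U outputs the pattern S with probability |Perm(U_S)|^2 / ∏_j s_j!, where U_S takes row S[i] and columns 0..n−1 (all names here are mine; brute force, so only feasible for tiny n):

```python
import cmath
from itertools import combinations_with_replacement, permutations
from math import factorial

def permanent(a):
    """Brute-force n! permanent; fine for the tiny n of this toy."""
    n = len(a)
    total = 0
    for sigma in permutations(range(n)):
        p = 1
        for i in range(n):
            p *= a[i][sigma[i]]
        total += p
    return total

def outcome_prob(U, S, n):
    """Probability of output pattern S (a sorted tuple of occupied modes,
    repeats allowed) given single photons input to modes 0..n-1."""
    sub = [[U[row][col] for col in range(n)] for row in S]
    norm = 1
    for mode in set(S):
        norm *= factorial(S.count(mode))
    return abs(permanent(sub)) ** 2 / norm

# 4-mode discrete-Fourier interferometer, 2 photons (in modes 0 and 1)
m, n = 4, 2
U = [[cmath.exp(2j * cmath.pi * j * k / m) / m ** 0.5 for k in range(m)]
     for j in range(m)]
probs = {S: outcome_prob(U, S, n)
         for S in combinations_with_replacement(range(m), n)}
```

Unitarity forces these probabilities, over all patterns, to sum to 1, which is the sanity check here; in a real n = 50 run one would instead compute just the handful of observed 50×50 permanents with Ryser’s formula.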

When n=100, we no longer know how to verify the answer in any reasonable amount of time classically—we only know of various ways to *falsify* the hypothesis that BosonSampling is what was happening.

Comment #3 May 5th, 2017 at 1:42 pm

@gerben: We studied your question in this paper from 2012.

Comment #4 May 5th, 2017 at 1:44 pm

…did anyone else think that title was a pun and the paper was going to involve immanants? (Beyond the permanent, I mean?)

Comment #5 May 5th, 2017 at 3:43 pm

If I’m understanding your argument correctly, with m modes and n photons it would still be something like O(m^k2^n) for a constant k. Is this correct?

Comment #6 May 5th, 2017 at 4:47 pm

Joshua #5: Actually, I think it should be even better than that, O(n2^{n} + poly(m,n)).

Comment #7 May 5th, 2017 at 8:03 pm

[…] Scott Aaronson talked about the China quantum computer researchers achieving 5 photon boson sampling… […]

Comment #8 May 6th, 2017 at 5:44 am

What does ‘falsify the hypothesis’ mean here?

Comment #9 May 7th, 2017 at 8:44 am

I’ll start paying attention to this stuff when they factor the RSA numbers. That’s a perfectly well defined challenge and is what quantum computers were supposed to deliver. The rest of these fake problems are made up nonsense where there’s going to be ambiguity.

Comment #10 May 8th, 2017 at 12:09 am

Scott, something’s wrong with your blog’s reply feature, it remembers the wrong user.

I’ve been recognized as several different commenters in the last few days.

Comment #11 May 8th, 2017 at 8:56 am

hgfgygft #9: People outside a given field of course have the option to “start paying attention to it” only after it’s already succeeded at its most ambitious goals, to even the densest person’s satisfaction—“I’ll start paying attention to this heavier-than-air flying machine business when I can buy the tickets on Orbitz.” Those of us who are actually trying to help make this stuff happen don’t have that luxury.

Comment #12 May 8th, 2017 at 11:03 am

NotThatUser #10: Yes, you’re about the 10th person to tell me. But I don’t even know where to begin looking for a solution … can anyone help?

Comment #13 May 8th, 2017 at 12:37 pm

A |1,1,1,1,1> BosonSampling represents very impressive progress indeed!

Comment #14 May 9th, 2017 at 10:57 am

Is anyone planning on adding a single quantum bit to classical computers in order to generate a proper random function? 😛

Comment #15 May 9th, 2017 at 3:37 pm

fred #14, random number hardware is commonly available. cf https://en.wikipedia.org/wiki/RdRand . As with most simple obvious ideas in both computer security and quantum mechanics, they tend to introduce more problems than they solve.

Comment #16 May 12th, 2017 at 10:40 am

zalman #15

if adding one qubit is introducing more problems than it solves, that doesn’t bode well for when we’ll be dealing with hundreds/thousands of them at once! 😛

Comment #17 May 15th, 2017 at 11:19 pm

Factoring polynomials over $\Bbb F_q$ has a randomized $O(poly(deg\cdot \log q))$ time algorithm, while the best known deterministic algorithm needs $O(poly(deg\cdot q))$ time. If there were a uniform algorithm that derandomized this, and it also provided a randomized polynomial-time algorithm for factoring over $\Bbb Z$ (with no obvious method to derandomize it, essentially by design), would that change your BPP=P perspective?

Comment #18 May 16th, 2017 at 11:57 am

BPPPerspective: I don’t know. I don’t even really understand your hypothetical, and would need to see details before evaluating what impact, if any, it had on my view that most likely P=BPP.

Comment #19 May 24th, 2017 at 1:13 am

[…] 5-bosons 5-modes BosonSampling. High-efficiency multiphoton boson sampling : (Nature photonics); Blogpost on the Shtetl. However, UK team indicates over 50 photons needed for Boson Sampling quantum supremacy. (The […]

Comment #20 June 8th, 2017 at 3:51 am

Scott:

“If our BosonSampling matrix is Haar-random—or otherwise not too skewed to produce outcomes with huge probabilities—then it’s not hard to see that we can do approximate BosonSampling in O(n2^{n}) time classically, by using rejection sampling.”

I may not understand what you mean, but are you sure this is correct? If you had a conjecture-free proof of this claim, that would be very interesting.

If you actually meant the result averaged over Haar-random matrices, then it isn’t very interesting, so I assume you really meant that, given a particular matrix sampled from a Haar-random distribution, we can (with high probability) perform approximate sampling in O(n2^{n}) time using rejection sampling.

Here is why I currently don’t believe it however.

First and trivially, you need the approximation factor somewhere in the complexity. So there is a poly(1/eps) term missing in any case.

Second, to do this provably wouldn’t we need bounds on the tail of the probability density function that we get from a Haar-random distribution which we don’t currently have. As far as I know we don’t even have good conjecture-free bounds for the mean! Not having good bounds for the mean also rules out using a Markov inequality type approach.

Third, even if all this could be worked out I am suspicious about the n in the n2^n term. Do you mean poly(n)?

Or does your claim require unproven conjectures?

Comment #21 June 8th, 2017 at 1:44 pm

ernesto #20: I’ll be happy to answer the question, but first, who are you??? I emailed an answer to Ernesto Galvao, and he tells me that while he’s indeed interested in the question, he didn’t leave this comment, even though it has his name and email address attached.