## The Computational Complexity of Linear Optics

I usually avoid blogging about my own papers—since, as a tenure-track faculty member, I prefer to be known as a media-whoring clown who trashes D-Wave Sudoku claims, bets $200,000 against alleged P≠NP proofs, and complains about his lecture notes being appropriated by Australian actresses to sell printers. Any *research* that I happen to do is best kept secret, lest it interfere with that carefully-constructed persona.

Today, though, I’m making an exception. On Thursday, my PhD student Alex Arkhipov and I finally finished our mammoth ~~94~~ 95-page preprint on The Computational Complexity of Linear Optics, which we'd been writing for the past year. (Remarkably, this is Alex’s first paper. Congratulations, Alex!) Never before have I been involved in a project that forced me to learn so many unfamiliar things, from experimental quantum optics to random matrix theory to exotic complexity classes like BPP^{NP} and PostBQP. (Alright, that last one wasn’t particularly unfamiliar, but the others were.)

In one sentence, the goal of our paper is to help bridge the yawning gap between what complexity theorists believe is *true* about quantum mechanics—namely, that it’s exponentially-hard to simulate on a classical computer—and what experimentalists can currently demonstrate. To do so, we try to “meet the experimentalists halfway,” by proposing a linear-optical setup that seems significantly closer to practicality than (say) a universal quantum computer, but still solves a computational problem that we can show is intractable for classical computers, under plausible and clearly-stated hardness assumptions (which don’t just amount to “our system is hard to simulate”!).

Without further ado, here’s the abstract:

We give new evidence that quantum computers — moreover, rudimentary quantum computers built entirely out of linear-optical elements — cannot be efficiently simulated by classical computers. In particular, we define a model of computation in which identical photons are generated, sent through a linear-optical network, then nonadaptively measured to count the number of photons in each mode. This model is not known or believed to be universal for quantum computation, and indeed, we discuss the prospects for realizing the model using current technology. On the other hand, we prove that the model is able to solve sampling problems and search problems that are classically intractable under plausible assumptions.

Our first result says that, if there exists a polynomial-time classical algorithm that samples from the same probability distribution as a linear-optical network, then P^{#P}=BPP^{NP}, and hence the polynomial hierarchy collapses to the third level. Unfortunately, this result assumes an extremely accurate simulation.

Our main result suggests that even an approximate or noisy classical simulation would already imply a collapse of the polynomial hierarchy. For this, we need two unproven conjectures: the Permanent-of-Gaussians Conjecture, which says that it is #P-hard to approximate the permanent of a matrix A of independent N(0,1) Gaussian entries, with high probability over A; and the Permanent Anti-Concentration Conjecture, which says that |Per(A)| ≥ √(n!)/poly(n) with high probability over A. We present evidence for these conjectures, both of which seem interesting even apart from our application.

This paper does not assume knowledge of quantum optics. Indeed, part of its goal is to develop the beautiful theory of noninteracting bosons underlying our model, and its connection to the permanent function, in a self-contained way accessible to theoretical computer scientists.

As you can see from the abstract, there’s a huge amount still to be done—of which the most obvious is (1) proving our #P-hardness conjecture and (2) doing our experiment! I’m also hopeful that people will take up the challenge of proving similar hardness results for other “rudimentary” quantum systems, besides linear optics. In that vein, one immediate question is whether we can give evidence that the beautiful “commuting quantum computations” model of Bremner, Jozsa, and Shepherd is hard to simulate even approximately by a classical computer.
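To give a concrete feel for the Permanent Anti-Concentration Conjecture, here's a quick Monte Carlo sketch (my own illustration, not code from the paper): draw small matrices with real N(0,1) entries for simplicity, and see how often |Per(A)| clears a √(n!)/poly(n) threshold, taking poly(n)=n for concreteness. For tiny n, the brute-force permanent over all n! permutations is fine.

```python
import itertools
import math
import random

def permanent(M):
    """Brute-force permanent: sum over all permutations (OK for tiny n)."""
    n = len(M)
    total = 0.0
    for p in itertools.permutations(range(n)):
        prod = 1.0
        for i in range(n):
            prod *= M[i][p[i]]
        total += prod
    return total

rng = random.Random(42)
n, trials = 5, 500
# sqrt(n!)/poly(n), with poly(n) = n chosen for illustration
threshold = math.sqrt(math.factorial(n)) / n

hits = 0
for _ in range(trials):
    A = [[rng.gauss(0.0, 1.0) for _ in range(n)] for _ in range(n)]
    if abs(permanent(A)) >= threshold:
        hits += 1

print(f"fraction with |Per(A)| >= sqrt(n!)/n: {hits / trials:.3f}")
```

In small experiments this fraction tends to come out large, which is consistent with (though of course very far from a proof of) the conjecture, whose content is that such anti-concentration persists, for some polynomial, as n grows.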

Here are a few options for anyone who’s *slightly* curious about our work, but not curious enough to, y’know, download the paper:

- My PowerPoint slides.
- Alex’s PowerPoint slides, which I like better than mine.
- A MathOverflow question that I posted yesterday about the Permanent Anti-Concentration Conjecture, and which was quickly answered by Terry Tao.
- A CS Theory StackExchange question about our paper posted by András Salamon, which I answered this morning.
- **New:** An interesting blog post about our work by Gil Kalai.

Anyway, the main purpose of this post was simply to provide a place for people with questions about our paper to ask them. So, shoot!

Comment #1 November 13th, 2010 at 5:53 pm

Could you give a high-level explanation of why the result P^#P = BPP^NP implies the polynomial hierarchy will ‘only’ collapse to the third level, but not completely?

Comment #2 November 13th, 2010 at 6:06 pm

“I usually avoid blogging about my own papers—since, as a tenure-track faculty member”

Your webpage says you’re an Associate professor. Doesn’t that mean you’re already tenured?

Comment #3 November 13th, 2010 at 6:13 pm

Sid: No, I’m currently AWOT (Associate WithOut Tenure). But back to linear optics, please!

Comment #4 November 13th, 2010 at 6:11 pm

Ross: Probably the collapse goes somewhat further; it’s just that we can’t prove it does given the current state of knowledge!

In more detail: according to the Sipser-Lautemann Theorem, BPP is contained in NP^{NP}—in other words, the second level of the polynomial hierarchy. Since the Sipser-Lautemann Theorem relativizes, that means that BPP^{NP} is contained in NP^{NP^NP}, the *third* level of PH. Meanwhile, Toda’s Theorem says that PH is contained in P^{#P}. So, if P^{#P} = BPP^{NP}, then the classes BPP^{NP}, NP^{NP^NP}, PH, and P^{#P} all get smushed together.

Note that, under widely-believed derandomization hypotheses, BPP^{NP} = P^{NP}. If that’s true, then we get a deeper collapse to P^{NP} (which is the second or “1.5^{th}” level of the polynomial hierarchy, depending on how you count).

Let me stress that none of this is really important, as far as our conclusions are concerned. Basically, if a polynomial-time classical algorithm could sample from exactly the same distribution as the linear-optics experiment, then the hierarchy of complexity classes would explode in a spectacular, action-movie-like burst of flame. Asking whether the collapse would be to NP^{NP}, or “merely” (say) to NP^{NP^NP}, amounts to asking how much charred rubble would be left after the explosion.

Comment #5 November 13th, 2010 at 7:38 pm

Scott, thank you. That was a really nice explanation.

Comment #6 November 13th, 2010 at 7:53 pm

Scott, it’s a terrific preprint—perhaps even an historic preprint—for which my congratulations and appreciation are extended to you and Alex. Under separate (email) cover I’ll send some extended remarks to you and Alex, and this will be easier for me if some references could be provided to two key sentences.

— The first reference needed —

Page 1, paragraph 1, sentence 1:

> The Extended Church-Turing Thesis (ECT) says that all computational problems that are efficiently solvable by realistic physical devices, are efficiently solvable by a probabilistic Turing machine.

Here an explicit statement of the ECT would be helpful to non-experts. Moreover, your experiment (wonderfully IMHO!) illuminates areas of this hypothesis for which existing statements are imprecise (and even inequivalent), and so the precise statement of the thesis becomes material.

For example, one textbook (literally) ECT definition is provided by Ingo Wegener and Randall Pruim’s *Complexity Theory*:

> **Extended Church-Turing Thesis:** For any two models of computation R_1 and R_2, there is a polynomial p such that t computation steps of R_1 on an input of length n can be simulated by p(t,n) computation steps of R_2.

An alternative (differently named) definition is provided by Leslie Valiant’s article *Quantum Circuits That Can Be Simulated Classically in Polynomial Time*:

> **Polynomial Time Turing Hypothesis:** Any physical computing device can be simulated by a randomizing Turing machine that, as the input instances vary, takes a number of steps that grows as at most some fixed polynomial in the quantity T+S+E, where T, S, and E are the time, space and energy used by the computing device.

Now, the Aaronson-Arkhipov experiment (henceforth the “A-A experiment”) is naturally described in Valiant’s language as a physical computing device that computes a one-bit decision, namely: “Accept the quantum model of the experiment, or reject it.” Then in order to experimentally test Valiant’s version of the ECT—or even the Wegener-Pruim version—we need to assess the T, S, and E of computing that one-bit decision. And this leads us to ask the second question:

— The second reference needed —

On Step (6) of the experimental protocol of Section 6.1, the phrase “check for agreement with the frequencies” occurs. Here I think the article would be stronger if this were followed by “via an Anderson-Darling test (or an equivalent empirical data function)” … since this is what we experimentalists actually use. I will send you some references on this, together with examples of photon-counting data for which we compute the A-D test statistic.

Now comes the main point: the computational cost associated to the decision bit of the experiment is exponential in the number of photons detected, such that even if the experiment “works”—in the sense that the data conforms to Hilbert-space quantum dynamics—the experiment will nonetheless provide no grounds for rejecting the ECT (in its Valiant and/or Wegener-Pruim forms), due to the (exponentially large) computational cost associated to the output decision bit.

Hmmm … is this an immaterial quibble? Or is Nature telling us something, by ensuring that the A-A decision bit is so very hard to compute? It seems (to me) that these questions reside at the very heart of our evolving understanding of the ECT.

In summary, it seems to me that the A-A experiment is so novel in its conception, and (potentially) so sweeping in its implications, that some additional attention might be given to reviewing the various forms in which the ECT has been proposed, and considering carefully which forms of the ECT are most natural in light of your (wonderful) mathematical results and (seminal) physical ideas.

Comment #7 November 13th, 2010 at 10:19 pm

“we prove that the model is able to solve sampling problems and search problems that are classically intractable under plausible assumptions.”

Does this mean that there exist probability distributions which a QC can sample (to a small, predetermined precision) in polynomial time, but which would take a classical computer exponential time to sample (to the same precision)?

Does it also mean the same thing, but replacing “probability distribution” by “cost function” and “sample” by “optimize”?

Comment #8 November 13th, 2010 at 10:58 pm

rrtucci:

> Does this mean that there exist probability distributions which a QC can sample (to a small, predetermined precision) in polynomial time, but which would take a classical computer exponential time to sample (to the same precision)?

Assuming our conjectures, yes.

(Note that proving such a quantum/classical separation *unconditionally* is impossible given the current state of complexity theory, where we can’t even prove P≠PSPACE. However, we give *evidence* for a separation between quantum and classical approximate sampling, and much of the paper is about what needs to be done to improve that evidence.)

> Does it also mean the same thing, but replacing “probability distribution” by “cost function” and “sample” by “optimize”?

No. For our results, it was essential to talk about sampling problems, rather than (say) combinatorial optimization problems.

Now, if you want an *optimization* problem that quantum computers solve exponentially faster than classical computers, then we’ve known some good candidates since 1994. For example: given as input a positive integer N, maximize the cost function that outputs 1 on the prime factors of N and 0 on everything else.

(But of course, we don’t know how to implement Shor’s algorithm using linear optics with nonadaptive measurements; it seems to require building a universal quantum computer or something fairly close to one.)

Comment #9 November 14th, 2010 at 4:44 am

“To do so, we try to “meet the experimentalists halfway,” by proposing a linear-optical setup that seems significantly closer to practicality than (say) a universal quantum computer, but still solves a computational problem that we can show is intractable for classical computers, under plausible and clearly-stated hardness assumptions (which don’t just amount to “our system is hard to simulate”!).”

What has been the reaction from physicists?

Comment #10 November 14th, 2010 at 6:05 am

“What has been the reaction from physicists?”

Aaronson and Arkhipov finished writing the paper just two days ago.

Comment #11 November 14th, 2010 at 6:40 am

I guess one possible problem for experimentalists is the following: the original Hong-Ou-Mandel dip with 2 photons was both doable in practice and interesting from a theoretical point of view, the three photon case becomes more complicated to implement and less interesting theoretically, and this situation gets worse as the number of photons is increased.

In other words, the implementation quickly becomes really hard (and expensive!), and the insight gained by the experiment is relatively limited.

Of course, the results of the AA paper show that the results of the experiment become very interesting again when the number of photons reaches 20 for instance.

But the problem is the existence of this long plateau (between 4 and 20 photons) which is a necessary step, but which apparently does not provide any new insights.
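For readers who haven't met it: the Hong-Ou-Mandel dip mentioned above is a two-line calculation in the permanent language of the paper. Here's a minimal sketch (my own illustration, using one common 50/50 beamsplitter convention; conventions differ by phases):

```python
import math

s = 1 / math.sqrt(2)
U = [[s, s],
     [s, -s]]  # one common 50/50 beamsplitter unitary

def perm2(M):
    # Permanent of a 2x2 matrix: ad + bc
    return M[0][0] * M[1][1] + M[0][1] * M[1][0]

# One photon enters each input port.
# Outcome S = (1,1): the relevant submatrix is U itself.
p_coincidence = abs(perm2(U)) ** 2
# Outcome S = (2,0): take two copies of row 0, divide by 2! = 2.
p_both_in_mode_0 = abs(perm2([U[0], U[0]])) ** 2 / 2
p_both_in_mode_1 = abs(perm2([U[1], U[1]])) ** 2 / 2

print(p_coincidence)     # -> 0.0: the dip; the photons never exit separately
print(p_both_in_mode_0)  # ≈ 0.5
print(p_both_in_mode_1)  # ≈ 0.5
```

The coincidence amplitude is the permanent s·(−s) + s·s = 0, so the two photons bunch: that vanishing permanent is the n=2 case of the structure the experiment would scale up.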

Comment #12 November 14th, 2010 at 7:13 am

I haven’t read the meat of the paper, but this is like catnip for QOQC experimentalists. Scott’s heart is in the right place and it sounds like he has done his homework on what is actually practical. I don’t expect to see beamsplitters with 50.00000% split ratios, kilometer-long waveplates, or gigawatt-class lasers. Even so, if it turns out to require 10 optical tables and 20 lasers to do an interesting experiment, the point is to get ideas like this out there. If the paper is any good at all, there should be room for further development.

Comment #13 November 14th, 2010 at 10:49 am

Anonymous and Captain Obvious: Yes, we just finished the paper two days ago—but on the other hand, we’ve been giving talks about this work for a while, and have generally been thrilled by the interested response from experimentalists. (We’ve also, of course, learned a lot about the difficulties of doing our experiment.)

In general, the impression we’ve gotten is that scaling the Hong-Ou-Mandel dip from 2 photons to (say) 3, 4, or 5 photons should be well within current technology; it’s simply a matter of there not having been sufficient motivation before now. Scaling to a larger number of photons will be difficult, largely because (the way people know how to do these experiments today) the probability of an n-photon coincidence falls off exponentially with n. However, Alex and I think that even n = 3,4,5 would be an interesting step forward.

Pretty much the entire Section 6 of our paper draws on our conversations with experimentalists, and can therefore be read as an answer to the question of what their reaction was.

Comment #14 November 14th, 2010 at 10:56 am

Anthony: I disagree about the existence of a “boring plateau” between 4 and 20 photons. At the most obvious level, 4 photons are interesting *because* they’re a step toward 20 photons!

But also, you don’t necessarily have to get up to 20 photons to see something that’s interesting from a complexity standpoint. As we say in the paper, even if n=10, the best known classical algorithm for the n*n permanent requires about 200,000 floating-point operations. Personally, I think doing an experiment where the outcome probabilities depend upon such permanents would already be qualitatively different from (say) factoring 15 into 3*5.
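As a point of reference for that operation count, here's a sketch of Ryser's formula (my own illustration, not code from the paper), the best known general algorithm for the permanent: it runs in O(n²·2ⁿ) arithmetic operations, about 2×10⁵ for n = 10, versus roughly n·n! ≈ 3.6×10⁷ for the naive expansion.

```python
from itertools import combinations

def ryser_permanent(A):
    """Permanent of an n x n matrix via Ryser's formula, O(n^2 * 2^n)."""
    n = len(A)
    total = 0.0
    # Inclusion-exclusion over nonempty column subsets S:
    # Per(A) = sum over S of (-1)^(n-|S|) * prod_i (sum_{j in S} A[i][j])
    for k in range(1, n + 1):
        sign = (-1) ** (n - k)
        for cols in combinations(range(n), k):
            prod = 1.0
            for row in A:
                prod *= sum(row[j] for j in cols)
            total += sign * prod
    return total
```

For example, `ryser_permanent([[1, 2], [3, 4]])` gives 1·4 + 2·3 = 10. A Gray-code ordering of the subsets shaves the cost to O(n·2ⁿ), but either way it is exponential in the photon number, which is exactly the point.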

Comment #15 November 14th, 2010 at 1:01 pm

Yes, I was playing devil’s advocate here.

I, too, agree that the case n=10 would be quite an achievement! But I think that before experimentalists start investing a lot of time, effort and money on such experiments, they need to be convinced that the editors of Nature or Science (for instance) will be thrilled as well by their results.

One obvious solution to this problem would be to find other tasks that could be solved in the noninteracting-boson model. I guess your paper will trigger some research on this matter, and you should try to advertise your results in the quantum optics community. Maybe you could try writing a much shorter paper / summary aimed directly at this community, insisting less on complexity classes and mathematical proofs and more on the punchline (it is almost possible to prove experimentally that a quantum computer clearly outperforms a classical computer with almost-available technology). Indeed, I think that complexity classes are still a little bit frightening for most quantum opticians.

Comment #16 November 14th, 2010 at 1:06 pm

I’ve been working through the Aaronson-Arkhipov manuscript in considerable detail, and this is proving to be a fascinating exercise.

Altogether no less than 73 theorems, propositions, corollaries, and lemmas are proved; this is more theorems—both per-page and in total—than any other experiment-related manuscript of my acquaintance.

Of necessity, in order for Scott and Alex to state and prove their theorems with the necessary clarity, rigor, and concision, the terms *decision*, *sampling*, *searching*, and *simulation* are used throughout in their strict complexity-theoretic senses.

Of course, experimentalists and statisticians have their own conventional usages of *decision*, *sampling*, *searching*, and *simulation*, and their usages too have evolved (through many centuries) to respect clarity, rigor, and concision.

Unsurprisingly, the experimental usages sometimes differ pretty substantially from the complexity-theory usages. Thus, in order to appreciate the practical implications of Scott’s and Alex’s work, I have had to write out a reasonably detailed isomorphism/lexicon that translates these notions back-and-forth between these two languages.

This process isn’t finished, but as no doubt there are others who are engaged in the same exercise, I will mention that John Tukey’s book (yes, *the* John Tukey) titled *Exploratory Data Analysis* has proved to be a helpful resource, in the sense that Tukey’s methods for confirmatory data analysis can be naturally associated to the decision problems of complexity theory, whereas Tukey’s methods for exploratory data analysis can be broadly mapped onto complexity theory’s sampling, searching, and simulation problems.

A sense of humor is helpful too, because as Goethe said: “Die Mathematiker sind eine Art Franzosen: Redet man zu ihnen, so übersetzen sie es in ihre Sprache, und dann ist es alsbald etwas anderes.” (“Mathematicians are like Frenchmen: whatever you say to them they translate into their own language, and forthwith it is something entirely different.”)

Comment #17 November 14th, 2010 at 1:35 pm

> A sense of humor is helpful too, because as Goethe said “Die Mathematiker sind eine Art Franzosen: Redet man zu ihnen, so übersetzen sie es in ihre Sprache, und dann ist es alsbald etwas anderes.” (“Mathematicians are like Frenchmen: whatever you say to them they translate into their own language and forthwith it is something entirely different.”)

John, are you sure you don’t have any French ancestry?

Comment #18 November 14th, 2010 at 1:47 pm

Anthony #15: Yes, we’ve thought about writing a shorter paper aimed at experimentalists, and maybe we’ll do it (*especially* if we find something new to say that’s not in the longer paper). In the meantime, I’ve already given talks about this work at several places with great quantum-optics experimentalists (SQuInT, JQI, University of Rochester…); indeed, their feedback was essential for writing the paper.

As for *Science* and *Nature*: well, I’m not an editor at either place; all I can say is that both journals have published many experimental quantum information papers that seemed *much* less interesting to me than successful BosonSampling with n=10.

Comment #19 November 14th, 2010 at 3:07 pm

Right in the intersection of my job and my hobby! Aw, Scott, is that my birthday present? How sweet.

Comment #20 November 14th, 2010 at 3:14 pm

“Yes, we’ve thought about writing a shorter paper aimed at experimentalists”

Yeah! You could title it “Church-Turing is Dead”

Comment #21 November 14th, 2010 at 5:30 pm

From an experimental point of view, I think one of the most challenging parts of your proposal is to produce n indistinguishable single photons. The problem is not to produce single photons (this can be done very well today), but to make sure that they are truly indistinguishable. My guess is that they should come from a common source: for instance, from a non-linear process. This seems very tricky for n>2.

Using coherent states instead of single photons would make things much (much) more practical. Obviously one needs to keep measurements in the photon-number basis but this is not too bad experimentally.

In Section 6.2, you say that you don’t know much about this specific situation. Do you plan to look into it?

Comment #22 November 14th, 2010 at 5:50 pm

> In Section 6.2, you say that you don’t know much about this specific situation. Do you plan to look into it?

Sure, if we get a chance! But in the meantime, would *you* like to look into it?

Comment #23 November 14th, 2010 at 5:53 pm

Scott you should do this more often.

Comment #24 November 14th, 2010 at 6:07 pm

I can certainly think about it, and I’d be happy to exchange ideas … if I get any

Comment #25 November 14th, 2010 at 9:15 pm

Dick Lipton says:

> Scott you should do this more often.

Larry Stockmeyer’s 1984 article *The complexity of approximate counting* plays a key role in Scott’s/Alex’s preprint … and it’s fun to notice that Stockmeyer’s article concludes with the following acknowledgment:

> “I am grateful to Richard Lipton for suggesting the question of whether the size of a tree could be estimated using subtree samples and for many helpful discussions.”

Heck, this provides yet another reason to associate a “smiley” to this work!

Comment #26 November 15th, 2010 at 12:05 pm

Long time reader, first time commenter! In the grand tradition of internet commenters everywhere, I will now ask a question whose answer should be blindingly obvious if I had done more than skim the paper — how do you get around the Troyansky/Tishby result that you mention in the introduction? I had always interpreted that as saying “sure, bosonic statistics give you permanents, but as a necessary consequence any observable probing that structure gets exponentially growing fluctuations.” Why doesn’t this smear out the distribution \mathcal{D}_A you’re trying to sample from into something classically tractable?

Comment #27 November 15th, 2010 at 1:06 pm

Tom: The key point is that we don’t even *try* to calculate Per(A) for a given input matrix A. (As you say, that would require exponentially many samples—and indeed, we don’t expect there to be *any* efficient quantum algorithm for this problem, since such an algorithm would imply BQP = P^{#P}.)

Instead, we content ourselves to *sample from a probability distribution*, in which the probabilities are defined in terms of n*n permanents of various submatrices of a given unitary matrix. The whole point of most of the work we do is to show that even that sampling task is intractable for a classical computer, under plausible complexity assumptions.

And yeah, I *hope* that would’ve been fairly obvious if you’d read the paper, but we’ll know to put a blinking neon sign around this point when we revise. Thank you!

Comment #28 November 16th, 2010 at 11:50 pm

Gil says:

” The whole point with fault tolerance is to represent a “protected state” on m qubits by a “physical state” on more qubits. However, fault tolerance as is does not allow us to sample or even to approximately sample the protected state.”

Scott, do you agree with this?

Comment #29 November 16th, 2010 at 11:59 pm

rrtucci: No, I don’t agree. Of course you can only sample a noisy version of the state you want, but standard fault-tolerance methods would then allow you to correct that noisy sample to a sample from (something exponentially close in variation distance to) the desired probability distribution. So, from this perspective, the problem is “merely” that standard fault-tolerance methods are so hard to implement technologically — which motivates the search for alternative quantum computing proposals (ours and others) that might not require explicit fault-tolerance.

In fact, if you read just slightly further in Gil’s post, he writes:

> Question 4: Do noisy quantum computers allow QSAMPLE (on the nose)? (Answer: yes) … After a few conversations with Scott (e-mail) and Dorit, I realized that the answer is ‘Yes’ by rather routine FT methods.

While I don’t remember the email exchange in question, I do agree with its apparent conclusion.

Comment #30 November 17th, 2010 at 12:04 am

Wow, that’s what I call service!

Comment #31 November 17th, 2010 at 12:05 am

Next time I want my answer in 8 minutes.

Comment #32 November 17th, 2010 at 2:34 am

Scott, have you thought about what results of your proposed experiment might be indicative of having found new physics rather than the machinery not working right? It would be very unfortunate if people did the experiment and thought they were running into an engineering limit on their apparatus when in fact it was working fine and they’d actually encountered new physics.

(my money is still on this approach being a good way to find new physics rather than build supercomputers)

Comment #33 November 17th, 2010 at 8:51 am

Bram: Interesting question! I’ve thought a lot about it in general, but admittedly not in the context of this experiment.

The problem is that, after almost a century, we still don’t have *any* reasonable foil theory against which quantum mechanics can be compared, which agrees with QM in the regimes where QM has already been tested. (This is in sharp contrast to, say, the Standard Model and General Relativity, which *do* have foil theories.)

And since we don’t know *how* QM could break down while remaining mathematically consistent, consistent with special relativity, and consistent with its own former self in the “non-quantum-computational regimes,” it’s almost impossible to tell experimentalists what to look for.

(An analogy would be mathematicians in the year 1890 telling experimental physicists how they ought to check whether classical physics still holds at the atomic level. How much useful, specific guidance could a mathematician possibly provide, without basically having to *invent quantum mechanics*?)

For more about this stuff, you might enjoy my old papers Is Quantum Mechanics An Island In Theoryspace? and Multilinear Formulas and Skepticism of Quantum Computing.

Comment #34 November 17th, 2010 at 10:43 am

Scott says:

> The problem is that, after almost a century, we still don’t have any reasonable foil theory against which quantum mechanics can be compared, which agrees with QM in the regimes where QM has already been tested.

Hmmm … Scott, what a “Frenchman” would say is quite different.

**The French Thesis:** “The problem is that, after almost a century, we still have not accomplished any experiment that uniformly samples Hilbert space’s vast dimensionality.”

It seems (to me) that the Aaronson-Arkhipov manuscript, first, specifies experimental protocols and data-reduction algorithms for such an experiment and, second, analyzes their computational complexity … and these are the two seminal merits of this work.

What experiments to date *have* solidly established—especially NIST’s high-precision atomic beam experiments—is, first, the broad principle that Nature’s dynamical flows are symplectic. This principle ensures that thermodynamics “works” and (importantly) it also ensures that Nature’s computational capabilities are not extravagant. A second broad principle is also well-established, namely that Nature’s noise mechanisms are Lindbladian; this principle ensures that locality, causality, and collapse all “just work”.

Now we see that Bram Cohen’s comment about “finding new physics” was spot-on. As geometric dynamicists have long appreciated, it is natural to pullback onto complex state-spaces that have the requisite symplectic and Lindbladian dynamical properties, yet have a non-Hilbert geometry. Thus, an incredibly exciting aspect of Aaronson-Arkhipov experiments are that (potentially) they provide us with a new route to investigating the geometry of Nature’s state-space … a new route that promises to be both experimentally feasible and computationally illuminating.

If we allow ourselves to be guided not by dry mathematics, but by the immortal *Monty Python and the Holy Grail*, then we see that attitudes toward quantum dynamics can be broadly classified as British versus French.

**British Knights:** “The Lady of the Lake, [angels sing] her arm clad in the purest shimmering samite, held aloft Hilbert Space from the waters of Nature, signifying by Divine Providence that we, the Quantum Information Theorists, were to interpret its dynamics!”

**French Knights:** “No thanks, we’ve already got plenty of quantum state-spaces! Oh, yes, they’re very Kählerian ones. [whispers to other guards] I told him we already got some [guards all laugh].”

It is open to debate whether the 21st-century quest for the “Holy Grail of Quantum Dynamics” is best conducted in the style of the conservative (yet funny) British Knights of Hilbert Space, versus the progressive (yet also funny) French Knights of Geometry.

Either way, maintaining a sense of humor is exceedingly helpful in this great quest … because Nature is a shy and clever trickster, when it comes to revealing her dynamics.

Comment #35 November 17th, 2010 at 2:21 pm

Scott, I have a sneaking suspicion that Real Physics is weaker than general quantum computation but strong enough to demonstrate the experiment you’ve come up with. The difficulty of building an experiment to test more powerful computation might be hinting at something.

By the way, a big congratulations on this achievement, even if it isn’t yet possible to actually do the experiment. Figuring out doable experiments which call into question whether there really is a Church-Turing violation should be one of the central goals of quantum complexity theory right now.

Comment #36 November 17th, 2010 at 2:40 pm

Sorry, for the lay reader – can someone give a few examples of which kinds of “sampling and search problems” are possible with linear-optic QCs, as opposed to universal QCs?

Thanks!

Comment #37 November 17th, 2010 at 2:45 pm

Reading a comment from rrtucci, I wonder why the Extended Church-Turing Thesis is named that way. Of course the present paper doesn’t say anything against the Church-Turing thesis.

So wouldn’t it be more appropriate to call it the Bernstein-Vazirani thesis, for example?

At least Scott should agree: shooting down the thesis of one’s own supervisor is always fun.

Comment #38 November 17th, 2010 at 3:21 pm

Bram: Thanks!

Jiav: It’s not called the “Bernstein-Vazirani Thesis” because it didn’t originate with them: the ECT has been an implicit foundation of complexity theory since the 1960s, even though I don’t know of anyone in the “pre-quantum-computing era” explicitly defending the ECT by an appeal to properties of the physical world. (Does anyone else know of such a person, and if so, can you provide a reference? Thanks!) Rather, Bernstein and Vazirani were arguably the first people to ask directly, in a “modern” way, whether or not quantum mechanics *falsifies* the ECT.

Comment #39 November 17th, 2010 at 3:45 pm

Dave #36: The simplest example of a sampling problem that a boson computer can solve is this:

Sample from the probability distribution obtained by running this boson computer, as predicted by quantum mechanics.

However, there’s also a more self-contained, mathematical way to state this problem. We’re given as input an m×n matrix A of complex numbers (with m≥n), whose columns are orthogonal unit vectors. For any list of nonnegative integers S=(s_{1},…,s_{m}) summing to n, let A_{S} be the n×n matrix obtained by taking s_{i} copies of the i^{th} row of A, for all i∈[m]. Then the problem is to sample each S with probability equal to

|Per(A_{S})|^{2} / (s_{1}!…s_{m}!).

(It’s an amazing-but-true fact that these probabilities sum to 1 for any choice of A — try it!)
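For tiny cases, the amazing-but-true normalization fact can be checked numerically. The sketch below is my own illustration (not code from the paper): it builds a random m×n matrix A with orthonormal columns (the first n columns of a unitary obtained by QR decomposition), enumerates every occupation list S, and checks that the quantities |Per(A_{S})|^{2}/(s_{1}!…s_{m}!) sum to 1.

```python
import itertools
import math
import numpy as np

def permanent(M):
    # naive permanent via the n!-term permutation-sum definition (fine for tiny n)
    n = len(M)
    total = 0
    for perm in itertools.permutations(range(n)):
        prod = 1
        for i, j in enumerate(perm):
            prod *= M[i][j]
        total += prod
    return total

m, n = 4, 2
rng = np.random.default_rng(0)
# columns of Q are orthonormal; keep the first n of them as the m×n matrix A
Q, _ = np.linalg.qr(rng.normal(size=(m, m)) + 1j * rng.normal(size=(m, m)))
A = Q[:, :n]

total = 0.0
# each n-element multiset of modes corresponds to an occupation list S = (s_1, ..., s_m)
for modes in itertools.combinations_with_replacement(range(m), n):
    s = [modes.count(i) for i in range(m)]
    # A_S: take s_i copies of row i of A, giving an n×n matrix
    A_S = np.array([A[i] for i in range(m) for _ in range(s[i])])
    total += abs(permanent(A_S)) ** 2 / math.prod(map(math.factorial, s))

assert abs(total - 1.0) < 1e-9  # the probabilities sum to 1
```

Exhaustive enumeration is only feasible for a handful of photons, of course; the point is just to see the normalization identity in action.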

The search problem solvable by a boson computer is obtained “automatically” from the above sampling problem, by using my paper on The Equivalence of Sampling and Searching. Basically, the search problem will be to output a collection of S’s that have large probability according to the distribution above, and that also have large Kolmogorov complexity.

Comment #40 November 18th, 2010 at 10:45 am

Ahh – terrific! Thanks for the explanation and the link.

Comment #41 November 18th, 2010 at 12:37 pm

Scott asks:

“I don’t know of anyone in the ‘pre-quantum-computing era’ explicitly defending the ECT by an appeal to properties of the physical world. (Does anyone else know of such a person, and if so can you provide a reference? Thanks!)”

One starting point is the set of references given by Andrew Yao in his very interesting 2003 survey Classical physics and the Church-Turing thesis (BibTeX appended). Whether the authors that Yao cites “explicitly defend the ECT” is largely subject to how generously one interprets the word “explicitly.”

This relates to the first of five interesting questions that are raised (uhhh … implicitly? … explicitly? … naturally?) by your manuscript, namely: (1) “Can it be proved—subject to ‘reasonable’ assumptions of complexity theory—that classical physics forbids the construction of experiments whose distribution is provably infeasible to sample?”

As a concrete example, with reference to the Galton board shown in Fig. 1 of your manuscript, the observed distribution of 13 balls in 8 bins is {1,0,2,4,3,2,1,0}, and the probability G[{1,0,2,4,3,2,1,0}] of observing this data set is concretely 59181964850967943359375 / 154742504910672534362390528 ≈ 0.00038.
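The Galton-board number above can be checked with exact rational arithmetic. The sketch below is my own illustration (not code from the manuscript or the comment), assuming the standard model in which each ball takes 7 independent left/right bounces into one of 8 bins; it computes the exact multinomial probability of the pattern {1,0,2,4,3,2,1,0}, and includes the trivial peg-bouncing sampler that draws from G without ever computing it.

```python
from fractions import Fraction
from math import comb, factorial
import random

def galton_prob(counts, rows):
    """Exact probability of an occupancy pattern on a `rows`-row Galton board.

    Each ball independently lands in bin k (0..rows) with probability
    C(rows, k)/2^rows; the pattern probability is the multinomial term.
    """
    n = sum(counts)                 # total number of balls
    p = Fraction(factorial(n))      # multinomial coefficient numerator
    for k, c in enumerate(counts):
        p *= Fraction(comb(rows, k), 2 ** rows) ** c
        p /= factorial(c)
    return p

def galton_sample(n, rows, rng=random):
    """Sample a pattern by literally bouncing balls off pegs (BPP-style)."""
    counts = [0] * (rows + 1)
    for _ in range(n):
        counts[sum(rng.randint(0, 1) for _ in range(rows))] += 1
    return counts

p = galton_prob([1, 0, 2, 4, 3, 2, 1, 0], rows=7)
print(p, float(p))  # the exact fraction quoted above, ~0.00038
```

The sampler illustrates the point made below: one can draw from G in BPP without ever evaluating G itself.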

Of course, it is well-known that Galton board experiments can be simulated in BPP without ever calculating the Galton distribution G explicitly—simply by randomly bouncing balls off pegs (which is how physicists have sampled hard-to-compute distributions ever since Monte Carlo/Metropolis simulation methods were invented).

For the sake of argument, let us now imagine that computing G directly is in some harder-than-P complexity class (involving permanents, for example). We wonder whether the computational complexity of G places any limits at all upon our ability to sample from it.

This line of inquiry leads naturally to four more common-sense questions whose answers I am attempting to extract from the theorems of your manuscript: (2) “What is a sufficient level of computational complexity associated to G (the less, the better, obviously) such that all Monte Carlo/Metropolis-type sampling methods—that is, simulations in BPP—provably fail to sample from G?” (3) “Is the definition of that threshold ‘no-simulate’ complexity class for G free of non-ECT computational capabilities (e.g., post-selection and/or Kolmogorov incompressibility testing)?” (4) “If Françoise attempts to simulate a sample from G using BPP resources, what concrete tests (if any) can we apply to Françoise’s simulated data, such that her data will be sure to fail that test?” (5) “Are BPP resources sufficient to perform at least one concrete sure-to-fail test on Françoise’s simulated data?”

Scott, these questions are both mathematically tough and physically relevant (I hope anyway) … and a great virtue of your manuscript is that careful reading of it inspires such questions. For which, my thanks and appreciation are (as usual) extended to you and Alex.

———

@article{Yao2003, Author = {Yao, Andrew Chi-Chih}, Journal = {J. ACM}, Number = {1}, Pages = {100--105}, Title = {Classical physics and the {C}hurch-{T}uring thesis}, Volume = {50}, Year = {2003}}

Comment #42 November 18th, 2010 at 5:26 pm

One thing that struck me about the AA result (that QSAMPLE in P leads to the collapse of the polynomial hierarchy) is this: it is the first result that suggests that the power of quantum computers scratches the NP-hardness area.

(I am not sure if this is a correct interpretation, is it?)

Regarding BQP, it is a common belief that BQP does not contain NP. One argument that I remember from an old post here is based on oracle results in this direction. But I don’t remember the precise details.

Can we expect oracle results which will say that P equals BQP does not collapse the hierarchy?

(Perhaps we can even expect oracle-based results which show the negation of what AA proved?)

Comment #43 November 18th, 2010 at 6:25 pm

Hi Gil,

“One thing that struck me about the AA result (that QSAMPLE in P leads to the collapse of the polynomial hierarchy) is this: it is the first result that suggests that the power of quantum computers scratches the NP-hardness area. (I am not sure if this is a correct interpretation, is it?)”

As we discuss in the “Related Work” section, there were several previous results relating quantum computing to #P-complete problems. First, Fenner et al. showed that deciding whether a quantum computer accepts with zero or nonzero probability is #P-complete under NP-reductions (indeed, their paper was titled something like “Determining acceptance possibility for quantum computations collapses the polynomial hierarchy”). Second, I showed that BQP with postselected measurement outcomes equals PP (hence is #P-complete). Third, there was independent work by Bremner, Jozsa, and Shepherd, which proved the analogue of our “exact” result for a different type of weak QCs (called “commuting QCs”).

On the other hand, if our #P-hardness conjecture can be proved, then yes, it’s fair to say that that would yield the most convincing connection to date between quantum computing and #P-complete problems.

“Regarding BQP, it is a common belief that BQP does not contain NP. One argument that I remember from an old post here is based on oracle results in this direction. But I don’t remember the precise details.”

Bennett, Bernstein, Brassard, and Vazirani proved in 1994 that there exists an oracle relative to which NP⊄BQP—and more concretely, that the quadratic speedup of Grover’s algorithm is the best possible for black-box search. (By now, the extensions and generalizations of the BBBV Theorem are an entire field to themselves—in fact, the field I cut my teeth in!)

Long story short: you’re right, we certainly don’t expect a polytime quantum algorithm for NP-complete problems. Even (say) a 2^{√n} quantum algorithm for 3SAT, or a fast quantum algorithm for the Approximate Shortest Vector problem, would be a very dramatic development.

“Can we expect oracle results which will say that P equals BQP does not collapse the hierarchy?”

That’s a phenomenal question! As it turns out, Fortnow and Rogers constructed an oracle that makes P=BQP but PH infinite way back in 1998.

We’ll be sure to mention that when we revise our paper.

“(Perhaps we can even expect oracle-based results which show the negation of what AA proved?)”

Another phenomenal question!

Our result for the exact case is easily seen to relativize (as long as we replace “boson computers” by “arbitrary quantum computers”—since I don’t even know what it means to feed a boson computer an oracle). In other words, for every oracle A,

P^{#P^A} ⊆ BPP^{NP^{QSAMPLE^A}}.

On the other hand, our approximate result (assuming it works) relies on the random self-reducibility of a specific #P-complete problem. Therefore, one could try to construct an oracle relative to which our approximate result doesn’t work.

(Note that, in my BQP vs. PH paper, I constructed an oracle relative to which the approximate result provably does work. In fact, even a random oracle suffices here.)

Comment #44 November 19th, 2010 at 7:05 pm

Scott,

Could you please explain why complexity theorists think it implausible that the polynomial hierarchy might collapse? I find this difficult to understand, although I understand (at least, I think I understand) why P=NP, BPP⊃NP, BQP⊃NP, P/poly⊃NP are implausible.

Comment #45 November 19th, 2010 at 8:42 pm

syskill: OK, so you believe P≠NP?

Do you also believe that NP≠coNP (i.e., that there are no polynomial-size proofs for unsatisfiability)?

If you believe those two, then PH being infinite is pretty much just an inductive generalization of what you already believe. (I mean, if the hierarchy were going to collapse at the 17^{th} level or something, then why shouldn’t it already have collapsed at the first or second?)

Comment #46 November 21st, 2010 at 9:01 am

Thanks, Scott, for the illuminating answers. Here are a couple more questions. First: are there other problems for which an efficient solution was proved to imply a PH collapse, and which may be feasible for quantum computers?

(The rationale is simple: once we prove that some problem a QC can answer leads to a hierarchy collapse, maybe other problems which are not known to be NP-hard, but for which PH was proved to collapse, can serve as candidates for efficient QC algorithms.)

One of the things that drew my attention to Scott and Alex’s hierarchy collapse (which also applies to the result by Bremner, Jozsa, and Shepherd) is this. Let’s consider the “physics” version of the NP/P problem, asserting that no physical device can solve an NP-hard problem. This is a widely believed conjecture, and Scott is a strong advocate for this conjecture and the related computational complexity conjecture that quantum computers cannot solve NP-complete problems.

For the other examples of problems (known to me) that imply a polynomial hierarchy collapse, it looks very reasonable that the assertion of the physics NP/P conjecture continues to apply. For example, it seems reasonable to believe that we cannot construct physical devices (even in a non-uniform way) to crack NP-complete problems (re: the Karp-Lipton theorem). Also that we cannot build physical devices that will extract a bit of the permanent, for which the decision problem leads to a hierarchy collapse (re: the Feige-Lund theorem).

Therefore, I found the new hierarchy collapse results rather surprising. At first it looked possible that noisy quantum computers do not allow QSAMPLE. (This, I thought, was the motivation behind APPROXIMATE-QSAMPLE.) In the FT schemes that simulate a noiseless QC on a noisy one taken-as-is, we do not observe exact or even approximate QSAMPLE as part of the process. When I first thought about it, it looked like you need a new type of QEC to encode a distribution noiselessly using a large number of qubits, but later I realized (after consulting with some experts) that you can achieve it by applying standard FT techniques.

The question which looks rather unclear to me (but perhaps is addressed in parts of AA’s paper I did not read) is how approximate-QSAMPLE in the AA sense is going to be easier on the fragments of QC considered by AA and by BJS. Without applying fault tolerance it looks like even approximate QSAMPLE is out of reach, while with fault tolerance you can get much stronger approximations. I am quite interested in describing “quantum processes without FT” on formal grounds, and this can be relevant also to this last question.

Comment #47 November 21st, 2010 at 10:21 am

I have similar confusions to Gil.

In particular, Gil’s exceedingly useful phrase “approximate-QSAMPLE in the AA sense” has to be parsed exceedingly carefully, to ensure that the AA manuscript’s intricate web of 63 definitions, propositions, lemmas, and theorems is rigorously respected, while at the same time, the centuries-old scientific tradition of experimental verification and validation (V&V) also is respected.

Thus, if we are given a linear optics dataset that is claimed to arise as an approximate-QSAMPLE process—that is, if we are given a set of classical bits comprising the data record of a linear optics experiment—it is natural to ask these questions:

“What are the necessary-and-sufficient attributes of a linear optics dataset to qualify as what Gil calls ‘an approximate-QSAMPLE in the AA sense’? And what are the minimal computational resources required to verify and validate those attributes?”

The present AA manuscript does not address these natural questions; moreover, it isn’t a trivial task (in my experience) to answer them by parsing the AA manuscript’s definitions and theorems.

E.g., is the collapse of the polynomial hierarchy implied only if the (classical) QSAMPLE/AA strings output by a BPP simulation algorithm have (in an AA phrase that is associated to Theorem 12) “close-to-maximal Kolmogorov complexity”? If so … yikes … that’s a data attribute that only an oracle can verify.

In summary, the implications of the AA QSAMPLE result would be appreciably easier to grasp—for us experiment-minded and engineering-minded folks especially—if the existing Section 6: Experimental prospects were augmented with the additions of Section 6.4: Verification and validation (V&V) protocols for QSAMPLE experiments, followed by Section 6.5: The computational and oracular resources required for V&V testing of QSAMPLE experiments.

Comment #48 November 21st, 2010 at 2:02 pm

Gil: At this point you’re pretty much hitting the open problems section of our paper, one problem at a time…

1. Yes, I strongly believe there should be many other QC models that, in common with our model and Bremner-Jozsa-Shepherd’s, have the property that any fast exact classical sampling algorithm would imply P^{#P} = BPP^{NP}. Indeed, as we point out in the “Related Work” section, a third such example is already provided by constant-depth quantum circuits (quantum NC^{0}) without locality constraints, a family that was previously studied by Terhal and DiVincenzo.

2. As we say in the paper, whether one can implement our approximate BosonSampling experiment for “interestingly large” values of n without explicit fault-tolerance is the central challenge we leave for experimentalists.

You’re correct that, if one had explicit fault-tolerance, then one could do even better than approximate BosonSampling.

The factors that make us optimistic that interesting BosonSampling might be possible without explicit fault-tolerance are described on pages 11-12 of our paper. Basically, photons have extremely good coherence properties; the main reason they’re considered difficult to use as qubits is not that they decohere too fast but rather that

(a) they’re extremely difficult to store, and

(b) one needs to introduce either photon-photon couplings or else adaptive measurements to get universality.

However, our proposal doesn’t have any need for photon storage, photon-photon couplings, or adaptive measurements. So the experimental difficulties are reduced to “merely”

– reliably generating single-photon states,

– reliably measuring the photon number in each mode, and

– (most importantly) getting a reasonably-high probability of an n-photon coincidence.

I conjecture that even if one didn’t have n-photon coincidences, but merely (say) n/2-photon coincidences, one could still solve a hard approximate sampling problem. Getting evidence for that is a very interesting open problem.

Comment #49 November 21st, 2010 at 2:22 pm

John Sidles #47:

If you “merely” want to do our experiment with up to 40-50 photons, then Section 6.1 explains exactly what test you could run on your dataset, using a classical computer, to check for agreement with the predictions of quantum mechanics. (That test doesn’t involve Kolmogorov complexity, but it does involve computing n×n permanents.)

If this test passes, it wouldn’t prove that your computer was solving the approximate BosonSampling problem, but it would be an extremely encouraging sign.

With n>>50 photons, as we discuss in detail in the paper, we don’t know whether it’s possible for a classical computer to verify in time polynomial in n that a quantum computer’s outputs correspond to n×n permanents! This is an extremely interesting open problem. Thus, it’s no coincidence that we don’t offer much guidance to experimentalists in this case.

For perspective, though, the current world record for implementing the BosonSampling experiment appears to be n=2 photons. So there’s a long, long way to go before these interesting issues become relevant!
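For concreteness, the n×n permanents such a verification test requires needn’t be computed from the n!-term definition; Ryser’s inclusion-exclusion formula brings the cost down to roughly 2^n arithmetic operations, which is what makes n around 20-30 practical on a laptop. The sketch below is my own illustration (not code from the paper), in the simple O(2^n·n²) form, cross-checked against the definition.

```python
from itertools import combinations, permutations
import numpy as np

def permanent_ryser(M):
    """Per(M) via Ryser's inclusion-exclusion formula:
    Per(M) = (-1)^n * sum over nonempty column subsets S of
             (-1)^|S| * prod_i (sum_{j in S} M[i, j])."""
    n = M.shape[0]
    total = 0.0
    for k in range(1, n + 1):
        for cols in combinations(range(n), k):
            total += (-1) ** k * np.prod(M[:, cols].sum(axis=1))
    return (-1) ** n * total

def permanent_naive(M):
    # O(n!) permutation-sum definition, for cross-checking on tiny inputs
    n = M.shape[0]
    return sum(np.prod([M[i, p[i]] for i in range(n)])
               for p in permutations(range(n)))

M = np.arange(1.0, 10.0).reshape(3, 3)   # [[1,2,3],[4,5,6],[7,8,9]]
assert np.isclose(permanent_ryser(M), permanent_naive(M))
```

(A Gray-code traversal of the subsets improves this to O(2^n·n), but the version above is easier to check against the definition.)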

Comment #50 November 21st, 2010 at 2:55 pm

Scott, My first question referred to problems arising in computational complexity (not new questions about QC) for which it was already proved that they cause hierarchy collapse.

Your new results suggest that maybe these problems (or even one such problem) can also be solved using QC. What do you think?

(Your firm belief in the physical NP/P conjecture, and in QC being unable to solve NP-hard problems, suggests that it is unlikely that such problems can be tractable for QC.)

Regarding bosons: I offered in http://front.math.ucdavis.edu/0607.4021 a very simple experiment regarding 2n bosons, which goes as follows. Consider a state X of 2n bosons (n large), each having a ground state |0⟩ and an excited state |1⟩, so that |0⟩ has occupation number (precisely) n and |1⟩ has occupation number n. Can such a simple state be well approximated on a photon machine? I expected that the answer is no (for “realistic” noisy QC). I would not have a problem with a state Y where the occupation number has a binomial distribution. But it seems that very super-Gaussian concentration for the occupancy numbers goes against my conjectures about noisy systems. I wonder if this proposition has any relation to the type of states you try to create on photon machines. (It looks rather “easy” to create, but the experimentalists I talked to at the time were not sure.)

Comment #51 November 21st, 2010 at 4:21 pm

Gil: For your first question, I’d like to get clearer about which problems you have in mind. What (ordinary, decision or promise) problems do we know about that aren’t known to be NP-complete, but for which any efficient classical algorithm would nevertheless collapse PH?

The only plausible candidates that spring to mind would be problems known to be #P-complete, but only under nondeterministic reductions. The trouble is that, of the problems of that sort that I know about (computing the middle bit of the permanent, deciding whether a Boolean formula has exactly k satisfying assignments, etc.), I’m pretty sure that all of them are NP-hard as well. Do you know such an example that isn’t known to be NP-hard, even under adaptive randomized reductions?

Regarding your experimental challenge: yes, it would be extremely interesting to prepare reliable n-boson Fock states for large n! I don’t see that it’s either necessary or sufficient for my and Alex’s proposal, but it’s clearly related.

Comment #52 November 21st, 2010 at 9:37 pm

Scott, as you are pretty sure that all previous examples for which any efficient classical algorithm nevertheless collapses PH are NP-hard as well, aren’t you pretty sure that QSAMPLE is NP-hard as well? (I don’t know/remember what NP-hard under adaptive randomized reductions means. But is QSAMPLE like that?)

The argument that QSAMPLE leads to a PH collapse does not seem to suggest that QSAMPLE is different from other problems for which a PH collapse was proved.

(And another thing that confuses me: if QSAMPLE is NP-hard, doesn’t it imply that BQP contains NP?)

Comment #53 November 21st, 2010 at 10:08 pm

Hi Scott,

The first extended discussion of what you call the ECT that I know of is in the 1986 paper by Vergis, Steiglitz and Dickinson, “The complexity of analog computation.” Let me quote from the abstract:

Here, by analog, they pretty much mean “physical,” although they do explicitly say they are not considering probabilistic or quantum behavior. They do cite Feynman’s 1982 article on “Simulating physics with computers.”

Comment #54 November 21st, 2010 at 11:04 pm

Thanks, Peter!

Comment #55 November 21st, 2010 at 11:24 pm

As a follow-up to Peter Shor, Ker-I Ko’s 1991 textbook Complexity theory of real functions states on page 14 (emphasis as in the original text):

When we consider issues associated to experimental verification and validation of the ECT, the distinction between deterministic and nondeterministic models becomes material … and to date I have found no statement of the ECT in the literature that is sufficiently precise, sufficiently natural, and sufficiently foresighted, to provide adequate foundations for such an analysis.

Scott, your assertion in comment #49 that “there’s a long, long way to go before these interesting issues become relevant!” … well … heck … isn’t the central point of the AA manuscript precisely that these interesting issues *ARE* becoming experimentally relevant?

That is why it would be helpful to many readers if the manuscript included a Section 1.4: Deterministic, probabilistic, and oracular versions of the ECT. There is however (IMHO) zero chance that any such section could be the final word on the (immensely interesting and challenging) subject of defining the ECT … ’cuz the issues raised in Section 10: Open problems suffice to make *that* a mighty safe bet.

Comment #56 November 21st, 2010 at 11:24 pm

Gil, you keep wanting to talk about QSAMPLE as if it weren’t a sampling problem, but it is! One lesson I draw from our result is that many of our intuitions about decision problems simply don’t carry over to sampling problems.

Still, let’s press ahead, and ask: what would it mean for QSAMPLE to be NP-complete? Presumably, it would mean that

NP ⊆ BPP^{QSAMPLE},

possibly with the further restriction that the BPP machine gets to control the randomness used by the QSAMPLE oracle. But notice that, in the case that a BPP machine is making the queries, that further restriction can be removed without loss of generality! For any time the BPP machine submits a random string r to the QSAMPLE oracle that it’s submitted before, it already knows the outcome, while any time it submits a fresh random string, it gets a fresh sample, and these are the only two possibilities. Unlike in the case of a BPP^{NP} machine, there’s no possibility of approximate counting or anything like that.

However, the above implies that BPP^{QSAMPLE} = BQP! So if NP ⊆ BPP^{QSAMPLE}, then NP ⊆ BQP. And I certainly don’t think that’s the case.

What you seem to be asking is: how could QSAMPLE not be NP-complete, even though it shares a certain property with NP-complete problems (namely, that efficient classical solution collapses PH)? But that’s more a philosophical question than a mathematical one. Your answer is as good as mine…

Comment #57 November 21st, 2010 at 11:35 pm

Scott, John,

While we’re quoting extended Church-Turing Theses, it would be remiss of me not to quote the one in the Vergis-Steiglitz-Dickinson paper.

They also say

so they were maybe the first computer scientists to really investigate in depth the idea that the Church-Turing thesis is a statement about physics and not mathematics.

Comment #58 November 22nd, 2010 at 7:14 am

Dear Scott,

Indeed it also looked to me that if QSAMPLE is NP-hard then BQP contains NP. In other words, this implies that quantum computers (and even noisy quantum computers using QFT) can solve NP-complete problems.

(I cannot read “So if NP ⊆ BPPQSAMPLE, then NP ⊆ BQP” on my computer. I suppose it says “if NP ⊆ BPP^QSAMPLE then NP ⊆ BQP”.)

I am aware of the fact that our intuition about sampling problems should be different than for decision problems, and this was precisely what I wanted to ask you about. (Perhaps you can demonstrate such a difference for a purely classical example?) In any case the difference is still not clear to me.

(I remember some discussions about it after your talk with Dorit but I forgot all the details.)

“What you seem to be asking is: how could QSAMPLE not be NP-complete, even though it shares a certain property with NP-complete problems (namely, that efficient classical solution collapses PH)? But that’s more a philosophical question than a mathematical one. Your answer is as good as mine…”

Yes, I am aware that my answer is as good as your answer. I was just asking about your answer. Like you, I also tend to think that quantum computers cannot solve NP-complete problems. But I also think that the evidence for that is rather weak. (E.g., BBBV-type oracle results supporting it may also exist against some very plausible conjectures you pose.) The new results about PH collapse give some support to the opinion/hope/fear that quantum computers may be strong enough to solve BQP problem. You say that this support is also weak because of the differences between sampling problems and decision problems. Fine, this is the part I would be happy to understand better. (I don’t share your opinion that it is an outrageous view to think/hope that QC may solve NP-complete problems, and that this, if true, is by itself strong evidence against QC feasibility.)

I do not agree that this is a “philosophical problem”. Questions about the interpretation of mathematical results are at the heart of mathematics and applied mathematics. (And of course, TCS.) But, of course, I like (and it looks like you do too) questions of philosophical nature, and I think they are important.

Comment #59 November 22nd, 2010 at 7:31 am

Dear Peter,

Is the Vergis-Steiglitz-Dickinson paper you refer to their 1986 paper? There is a paper by Wolfram from 1985 which (I was told) is the first reference for the extended Church thesis. (I learned about it from Itamar Pitowsky, and he mentions and discusses it in his paper http://edelstein.huji.ac.il/staff/pitowsky/papers/Paper14.pdf .)

I wonder if there are earlier places where the extended Church thesis (in some version) was raised.

Comment #60 November 22nd, 2010 at 7:36 am

“The new results about PH collapse give some support to the opinion/hope/fear that quantum computers may be strong enough to solve BQP problem.”

I meant, of course

“The new results about PH collapse give some support to the opinion/hope/fear that quantum computers may be strong enough to solve NP-complete problem.”

Comment #61 November 22nd, 2010 at 9:30 am

These are wonderful points of discussion that Gil and Peter are raising! In preparation for this morning’s Litotica seminar (on large-scale quantum simulation codes), I have prepared the following Alice/Bob/Carl dialog, which illustrates how immensely tricky, and wonderfully interesting, it is to parse the complexity-theoretic claim that “polynomial-time classical simulation even of a noisy experiment would imply a collapse of the polynomial hierarchy.”

Alice: Alice is running a (say) 30-photon linear optics experiment, from which she collects 10^4 data samples per day.

Bob: Bob simulates 10^4 data samples per day, via computational resources in BPP. He provides this simulated data to Alice, which she analyzes using the same protocol(s) as for her physical data.

Carl: Carl is a complexity theorist who advises Alice and Bob on the implications of their experiment for the collapse (or not) of the polynomial hierarchy.

Bob: Good morning Alice! I hope your experiment continues to run well. Here is today’s simulated data.

Alice: Thanks Bob; your simulations have been immensely helpful in debugging our linear optics experiments. Carl, I have a question for you. Although Bob has access only to BPP resources, his algorithms are sufficiently ingenious that both Bob’s simulated samples and my experimental samples pass every verification and validation test that I can compute with BPP resources. And every time I increase the number of photons in my experiments, Bob matches this increase in his BPP simulations. According to your complexity theorems, does Bob’s simulation capability imply a collapse of the polynomial hierarchy?

Carl: That’s a great question, Alice. But the answer is no.

Alice (one month later): Carl, I’ve upgraded my apparatus with an NP oracle to help with data verification and validation, while Bob is still restricted to BPP resources for his simulations. Bob’s simulated samples and my experimental samples still pass every statistical test that I can compute, even with the help of my NP oracle. Does this imply a collapse of the polynomial hierarchy?

Carl: Sorry Alice … the answer is still no (or is it yes?).

Alice (one year later): Carl, now I’ve installed a #P oracle, while Bob is still using his old BPP resources. Bob’s simulated samples and my experimental samples STILL pass every statistical test that I can compute. Surely THIS implies a collapse of the polynomial hierarchy?

Carl: Sorry Alice … the answer is STILL no (or is it yes?).

Alice (20 years later): Carl, now I’ve installed an EXP oracle, while Bob is still using his old BPP resources. Bob’s simulated samples and my experimental samples STILL pass every statistical test that I can compute. Does THIS imply a collapse of the polynomial hierarchy?

Carl: Sorry Alice … the answer is STILL no (or is it yes?).

Alice (exasperated): Carl, if we grant all of your complexity-theoretic assumptions, do your theorems place any constraints at all upon the results of deterministic analysis of PSPACE-bounded sets of BPP-simulated and experimental data in linear optics experiments?

Carl: Regrettably Alice … no matter how many photons you succeed in observing, and no matter how much computing power you bring to bear on analyzing your data samples, and even though Bob’s simulation algorithms are restricted to BPP, the answer is ALWAYS no (or is it yes?).

———–

Questions for the Litotica seminar: What computational resources does Alice require, to shift Carl’s answer from no to yes? In what respects (if any) are the standard postulates of modern complexity theory incompatible with traditional experimental notions of verification and validation?

———–

AFAICT, these are questions that have *NEVER* been asked before … thus (for me) the immense virtue of the AA work is that it stimulates the STEM community to ask entirely new classes of questions, that are broadly relevant to science, technology, engineering/ enterprise, and mathematics.

Comment #62 November 22nd, 2010 at 10:32 am

Hi Gil,

I am indeed talking about the 1986 Vergis-Steiglitz-Dickinson paper.

If you’re talking about the extended Church-Turing thesis without reference to the laws of physics, there are lots of places it was discussed previously; the first is probably Cobham’s 1965 paper, which is one of the two papers credited with identifying the complexity class P, and which uses the thesis as one of the reasons for choosing polynomial time as the set of tractable functions (the other paper, by Jack Edmonds, does not discuss the ECT). If you’re talking about the extended Church-Turing thesis with reference to the laws of physics, then the 1985 Wolfram paper (where it is not quite presented explicitly) and the 1986 Vergis et al. paper are the two earliest that I know of.

You might want to look at the work of Toffoli and Fredkin, though, who in the early 80’s were thinking about simulating physics with cellular automata independently of Wolfram.

Comment #63 November 22nd, 2010 at 10:54 am

Peter, what about the Church-Turing thesis itself? Was it discussed as a question referring to the laws of physics from the beginning?

Comment #64 November 22nd, 2010 at 12:11 pm

Gil, my own (hugely over-simplified) understanding is that the historical development of the ECT proceeded in three broad stages.

First we have the stage of Spinoza, Leibniz, and Priestley, in which already in 1677 Spinoza explicitly associates the concept of automata to cognition. We then skip ahead to the 20th century, in which Wittgenstein and Russell instantiate the Spinoza/Leibniz/Priestley “automata” as truth-tables and tautologies, following which Turing and Church systematically investigate computation as a chain of logical inferences.

Now in the 21st century we have the new idea that perhaps the central activity that should be associated to quantum versions of the ECT is not logical inference, but rather is sampling and simulation. From a sampling-centric and simulation-centric point-of-view, the theorems associated to the ECT arise not so much from quantum dynamics—because dynamicists nowadays appreciate that classical and quantum dynamics differ scarcely at all—but rather arise from the tight, well-posed, physically natural constraints that Lindbladian QIT places upon noise-and-measurement processes; constraints that are absent and/or ad hoc in classical models.

Then we appreciate that an outstanding virtue of the AA manuscript (IMHO) is its demonstration that the ECT acquires deeper meaning in the expanded context that quantum sampling provides relative to classical sampling.

That is why the AA analysis of the ECT seems (to me) to be strikingly original … so original, that appeals to authority regarding the “true ECT” are not likely to be all that helpful. Moreover, this is why (IMHO) the AA manuscript would be substantially improved if it explicitly gave sampling-centric formulation(s) of the ECT.

By the way, I apologize for the run-on formatting of the Alice-and-Bob dialog (in post #61) … it looks OK in the preview and in our seminar notes.

Comment #65 November 22nd, 2010 at 1:04 pm

Peter & Gil, thanks for your insights.

For the physical version, I didn’t read Toffoli’s 1979 paper but at least David Deutsch beat Wolfram and Vergis-Steiglitz-Dickinson.

http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.125.4866&rep=rep1&type=pdf

Comment #66 November 22nd, 2010 at 1:46 pm

From Fredkin and Toffoli 1981:

Intuitively, the amount of “room” available as one moves away from a certain point in space increases as a power (rather than as an exponential) of the distance from that point, thus severely limiting the connectivity of a circuit. While we shall not explicitly deal with this constraint in presenting conservative logic here, our overall approach and in particular the billiard-ball model of computation (Section 6) will be consistent with it.

I don’t think this qualifies as… the Deutsch Thesis

Comment #67 November 22nd, 2010 at 3:22 pm

Gil: For whatever it’s worth, I think the intuitive case for NP⊄BQP is almost (not quite) as strong as the case for NP⊄BPP. And while I was also surprised for several days on realizing that P^{#P}⊆BPP^{NP^QSAMPLE}, this fact doesn’t make me “fearful” that a fast quantum algorithm for NP-complete problems is around the corner.

A central point is that our reduction from #P to QSAMPLE is nondeterministic—and this nondeterminism is not just some removable technicality; it’s really important! For it lets us reduce a #P-hard problem to estimating the probability of a single, exponentially-unlikely event. There were already hints (for example, the Fenner-Green-Homer-Pruim NQP=coC_{=}P theorem, and my own PostBQP=PP theorem) that when you combine quantum computing with postselection on exponentially-unlikely events, you can get #P-completeness.

This is surprising, and is already enough to have interesting consequences for the ability of classical computers to solve quantum sampling problems. As far as I can tell, though, it doesn’t say anything about the ability of quantum computers to solve decision problems, where you want a particular right answer with 1/poly(n) or Ω(1) probability.

Comment #68 November 22nd, 2010 at 3:27 pm

When I was in Russia recently, Leonid Fedichkin told me that the first person to have observed that quantum mechanics was hard to simulate on digital computers appears to have been A. Schlüter, in his 1970 paper “Electronische Rechenmaschinen in der Physik,” which appeared in Physikalische Blätter, the journal of the German Physical Society. He also said that it appeared that Poplavskii and Manin may have seen this before making the same observation in their papers. Unfortunately, the only copy of this I can get my hands on is a Russian version (http://ufn.ru/ru/authors/shlyuter_a/) which I cannot read. If anybody wants to translate the relevant section, or could send me an electronic copy of the German version, I would be very happy.
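[Editorial aside: the concrete kernel of that simulation-hardness in the AA linear-optics model is the matrix permanent—each output amplitude is the permanent of a submatrix of the network unitary, and computing permanents exactly is #P-hard. The fastest known exact algorithm, Ryser’s inclusion-exclusion formula, still takes time exponential in n. A minimal sketch, assuming NumPy is available; `ryser_permanent` is just an illustrative name:]

```python
import itertools
import numpy as np

def ryser_permanent(A):
    """Exact permanent of an n x n matrix via Ryser's formula:
    Per(A) = (-1)^n * sum over nonempty column sets S of
             (-1)^|S| * prod_i (sum_{j in S} A[i, j])."""
    A = np.asarray(A, dtype=float)
    n = A.shape[0]
    total = 0.0
    for r in range(1, n + 1):                       # |S| = r
        for cols in itertools.combinations(range(n), r):
            # Product over rows of the row-sums restricted to columns S.
            total += (-1) ** r * np.prod(A[:, list(cols)].sum(axis=1))
    return (-1) ** n * total

# Sanity check: for a 2x2 matrix, Per([[a,b],[c,d]]) = ad + bc.
```

Even this best-known exact method runs in Θ(2^n · n) time, which is exactly why permanent-based output probabilities resist classical computation.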

Comment #69 November 22nd, 2010 at 11:52 pm

Scott, what you say about properties that imply PH collapse makes sense and will probably apply to other cases (even ones not coming from QI) where the reduction is to a high level of the PH.

Comment #70 November 23rd, 2010 at 3:13 am

“For whatever it’s worth, I think the intuitive case for NP⊄BQP is almost (not quite) as strong as the case for NP⊄BPP.”

The three main reasons, in my opinion, to believe that NP is not P are: first, numerous explicit and implicit attempts to find efficient algorithms for NP-complete problems have failed; second, the beautiful mathematical theory based on NP not equal to P seems coherent, does not lead to contradictions, and gives good predictions; third, NP=P is likely to reflect a major, highly unexpected, and unprecedented change in our reality.

All these reasons are much weaker for believing that BQP does not contain NP. Much less effort has been made in developing quantum algorithms in general, and quantum algorithms for NP-complete problems in particular. NP being different from BQP is not the basis of such a large coherent mathematical construction. (I am not sure how central it is even in quantum computational complexity.) Finally, BQP=NP would not reflect a change in our reality but, at most, a change in an unfamiliar future reality.

Like perhaps most people (who have opinions on this matter) I also tend to think that BQP does not contain NP (partially because BQP is such a strange bird), and that factoring is not in P. I just don’t regard the evidence for these assertions as overwhelming as for NP not equal P.

BTW, I met a QC-enthusiastic rather-top official of a rather-large company with quite an interest in QC, who mentioned something to the effect that QC may solve NP-complete problems. Rather than immediately condemning him or her for this hope, or offering my own QC skepticism, I (sincerely) suggested that the company should double its efforts in this direction.

Comment #71 November 26th, 2010 at 2:07 pm

Scott,

I wrote for my blog this description of your paper by a dumb greenhorn. If it’s not too much to ask, could you please look at the last line

max_s(|P(s|\theta, 0)-P(s|\theta, \delta)|) < \epsilon

Writing this description made me wonder in my sleep:

Do you assume something about how epsilon and delta depend on n?

Comment #72 November 26th, 2010 at 6:29 pm

Like I said, I’m a dumb greenhorn. I’m trying to understand better how approximate sampling is defined.

Comment #73 November 26th, 2010 at 6:32 pm

rrtucci: Yes, our current result (even the approximate one) assumes that the variation distance ε between the sampled distribution and the ideal one is at least 1/p(n) for some polynomial p. Or, more precisely, that the “experimental effort” required to scale to n photons and error ε in variation distance goes like poly(n,1/ε).
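[Editorial aside: the “variation distance ε” here is the usual total variation distance between the sampled and ideal output distributions, i.e. half the L1 distance between probability vectors. A two-line sketch, assuming NumPy; `variation_distance` is just an illustrative name:]

```python
import numpy as np

def variation_distance(p, q):
    """Total variation distance between two probability vectors:
    half the L1 distance, so it always lies between 0 and 1."""
    p, q = np.asarray(p, dtype=float), np.asarray(q, dtype=float)
    return 0.5 * np.abs(p - q).sum()
```

With this definition, ε = 1/poly(n) means the experiment’s output distribution must track the ideal one to polynomially small total variation distance.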

Getting a hardness result that works for fixed ε is an extremely interesting open problem. However, maybe the more immediate problem is to prove our conjectures, and thereby confirm that at least the sampling problem with ε=1/poly(n) is hard!

Comment #74 November 27th, 2010 at 12:59 am

Thanks Scott. I think I understand it better now. I fixed my blog post with the information you just taught me.

Comment #75 November 28th, 2010 at 11:42 pm

Scott, your paper says

“The prescription suggested by our results is to choose U randomly, according to the Haar measure over m×m unitaries. Once U is chosen, one can then ‘hardwire’ a network of beamsplitters and phaseshifters that produces U.”

I think one way to realize this is as follows. Say m=50 and you want 13 gates. You could have 13 concentric 100-sided regular polygons. (The vertices of the polygons would all fall on 50 lines through the center of the polygons) Each side of each polygon could have a reflection coefficient which could be changed by applying a voltage to that side. By applying random voltages you could get random unitaries.

You could call this circuit Dante’s Inferno, with 13 gates to Hell.
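[Editorial aside: for numerical experiments, the Haar-random U quoted above can be sampled in software by QR-decomposing a complex Gaussian (Ginibre) matrix and then correcting the phases of the diagonal of R, per Mezzadri’s well-known recipe. A minimal sketch, assuming NumPy; `haar_unitary` is just an illustrative name:]

```python
import numpy as np

def haar_unitary(m, rng=None):
    """Sample an m x m unitary from the Haar measure.

    QR-decompose a complex Gaussian matrix, then multiply each column of Q
    by the phase of the corresponding diagonal entry of R, so the result
    is exactly Haar-distributed rather than QR-convention-dependent."""
    rng = np.random.default_rng() if rng is None else rng
    z = (rng.standard_normal((m, m)) + 1j * rng.standard_normal((m, m))) / np.sqrt(2.0)
    q, r = np.linalg.qr(z)
    d = np.diag(r)
    return q * (d / np.abs(d))   # rescale columns by unit phases
```

The phase correction matters: the raw Q from `np.linalg.qr` is unitary but not Haar-distributed, because QR routines fix the diagonal of R by convention.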

Comment #76 November 29th, 2010 at 12:19 am

Correction:

The above won’t work. What you can do instead is to have 13 of these polygons, non-overlapping, all lying in a plane. The light also is confined to the same plane. 50 beams enter the 50 input ports of polygon 1. The 50 exit ports of polygon 1 are connected to the 50 input ports of polygon 2 in a 1-1 fashion. The 50 exit ports of polygon 2 are connected to the 50 input ports of polygon 3. And so on up to polygon 13. As before, the transmission and reflection coefficients of each side of each polygon can be controlled independently by electrical means.

Comment #77 November 29th, 2010 at 12:50 am

Improvement:

You don’t really need 13 polygons. You can “reuse” a single polygon 13 times. Say your single 100-sided regular polygon has 50 sides on the x0 half-plane. Then connect the photon sources and detectors to the 50 ports with x0 half-plane with mirrors and delays. Change 13 times the properties of the polygon doing this while the photons are inside the delays.

Comment #78 November 29th, 2010 at 12:57 am

The last message got mangled. Try again

Improvement:

You don’t really need 13 polygons. You can “reuse” a single polygon 13 times. Say your single 100-sided regular polygon has 50 sides on the x<0 half-plane, and 50 sides in the x>0 half-plane. Then connect the photon sources and detectors to the 50 ports with x<0 and terminate the 50 ports in the x>0 half-plane with mirrors and delays. Change 13 times the properties of the polygon doing this while the photons are inside the delays.

Comment #79 November 29th, 2010 at 11:49 am

Scrap old idea. New idea. My last idea. I promise!

Suppose we want m=50 and 13 gates. Construct a “box” with 50 left (L) ports and 50 right (R) ports.

Inside the Box:

Each L port splits into 2 optical fibers, call them the NI (NI stands for non-interacting) and the INT-L (INT stands for interacting). The NI fibers go from the L ports to the R ports in a 1-1 fashion. All the INT-L fibers converge to a single point O that acts as a beam splitter. From O, 50 fibers, call them INT-R, fan out, and connect to the R ports in a 1-1 fashion. Hence, inside the box, each L port fans out into two fibers (NI and INT-L) and each R port is the convergence of two fibers (NI and INT-R).

Outside the box:

Reuse the box 13 times

Initially the photon sources are connected to the L ports of the box. The photons enter the box at the L ports and exit it at the R ports. Each time, except the last time, that the photons exit the box, they meet delays and mirrors which make them go through the box one more time. The last time the photons exit the box, they meet the detectors.

Each time the photons are outside the box, two of the 50 input ports are made “active”, and all others remain “inactive”. The active ports direct the photons onto INT fibers, whereas the inactive ports direct the photons onto NI fibers. Each time the photons are outside the box, two active ports are chosen at random out of the possible 50, and the reflection and transmission coefficient of the interaction region O are also changed at random.

Comment #80 November 30th, 2010 at 7:34 am

I really jumped the shark this time.

Scott and anyone else who might be interested,

I wrote a blog post describing an implementation, using optical fibers, of the Aaronson Arkhipov experiment. Here it is.

Comment #81 December 3rd, 2010 at 1:15 pm

Over on the Fortnow/GASARCH weblog, under the recent topic Complexity as Friction, in response to a post by Gil Kalai, I summarized a systems engineering perspective on the Extended Church-Turing Thesis (ECT). That post expresses a somewhat broader and more engineering-oriented version of the ECT than the narrower (extralusionary?) version that sometimes is asserted (implicitly or explicitly) by quantum information theorists.

It is evident that there is at present no statement of the ECT that satisfies the three criteria of being widely accepted across disciplines, mathematically rigorous, and experimentally testable.

In this regard, I have just checked that MathSciNet lists 837 articles having in their title the phrase “What is” … and almost all of these titles take the form of questions. And yes … at least some of these articles bear directly on the question “What is the ECT?”

So perhaps someday Scott might post a weblog topic “What is the ECT?” And it seems that this question might even be worthy of a research article in itself … certainly there is ample precedent for it! “What is the ECT?” would be a fine Shtetl Optimized topic … because upon closer examination, we appreciate that no-one (at present) has a clear idea of how best to pose this question … much less begin to answer the follow-on question “Is the ECT true of Nature?”.

Comment #82 December 14th, 2010 at 10:30 am

Hmmm … ten days with no comments? Heck … that’s no fun!

Here are a few (fairly serious) comments relating to the article’s abstract … the overall idea is to make it easier to parse the abstract and thus broadly grasp the implications of the work.

Hmmm … it’s mighty easy to parse the above statement as referring to generic sampling and search problems … which isn’t the case. Clearer might be:

Then the abstract should state plainly the key complexity-theoretic respect in which these distributions are intractable:

Then the abstract should state plainly the key physics respect in which these experiments are novel:

Scott and Alex, we can be reasonably confident that concurrently with experimental efforts to realize these ideas, there will be theoretical efforts to simulate the experiments, not on a full Hilbert space, but on lower-dimension immersions (product-state manifolds mainly). Thus, a key information-theoretic question—which might as well be asked in the abstract—is going to be the following:

Scott and Alex, it seems to me that your analysis has the great virtue of raising these key questions, which apply equally to experiments and to computational simulations of those experiments. And your analysis has the still greater virtue of providing powerful new theorems and complexity-theoretic insights that (partly) answer these questions.

Comment #83 December 24th, 2010 at 12:14 pm

For the benefit of Scott’s readers who are mainly interested in quantum physics and complexity theory—as contrasted with the politics of quantum physics and complexity theory—I have posted on Dave Bacon’s weblog an overview of post-Hilbert quantum dynamics as a comment under Dave’s topic “Consequence of the Concept of the Universe as a Computer”, with a view toward establishing a more natural link between Scott’s and Alex’s (immensely interesting IMHO) mathematical theorems and Dave’s (also immensely interesting IMHO) physical ideas.

Comment #84 December 25th, 2010 at 12:41 pm

I am upgrading the entries in my BibTeX database that relate to ⟨QIT/QSE⟩ ∩ ⟨geometric dynamics⟩, and have summarized a subset of these entries on Dave Bacon’s blog as a mathematical perspective on a series of on-line IIQC vignettes titled Seth Lloyd Talks Shop.

Seth’s quantum “Shop Talks” are terrific … and are illuminated by some terrific mathematical articles too.

Comment #85 April 16th, 2013 at 10:56 pm

The physicists have begun to react!