## Five announcements

**Update (Oct. 3):** OK, a sixth announcement. I just posted a question on CS Theory StackExchange, entitled “Overarching reasons why problems are in P or BPP.” If you have suggested additions or improvements to my rough list of “overarching reasons,” please post them over there — thanks!

1. I’m in Oxford right now, for a Clay Institute workshop on New Insights into Computational Intractability. The workshop is concurrent with three others, including one on Number Theory and Physics that includes an amplituhedron-related talk by Andrew Hodges. (Speaking of which, see here for a small but non-parodic observation about expressing amplitudes as volumes of polytopes.)

2. I was hoping to stay in the UK one more week, to attend the Newton Institute’s special semester on Mathematical Challenges in Quantum Information over in Cambridge. But alas I had to cancel, since my diaper-changing services are needed in the other Cambridge. So, if anyone in Cambridge (or anywhere else in the United Kingdom) *really* wants to talk to me, come to Oxford this week!

3. Back in June, Jens Eisert and three others posted a preprint claiming that the output of a BosonSampling device would be “indistinguishable from the uniform distribution” in various senses. Ever since then, people have been emailing me, leaving comments on this blog, and cornering me at conferences to ask whether Alex Arkhipov and I had any response to these claims. OK, so just this weekend, we posted our own 41-page preprint, entitled “BosonSampling Is Far From Uniform.” I hope it suffices by way of reply! (Incidentally, this is also the paper I hinted at in a previous post: the one where π^{2}/6 and the Euler-Mascheroni constant make cameo appearances.) To clarify, if we *just* wanted to answer the claims of the Eisert group, then I think a couple of paragraphs would suffice for that (see, for example, these PowerPoint slides). In our new paper, however, Alex and I take the opportunity to go further: we study lots of interesting questions about the statistical properties of Haar-random BosonSampling distributions, and about how one might test efficiently whether a claimed BosonSampling device worked, even with hundreds or thousands of photons.

4. Also on the arXiv last night, there was a phenomenal survey about the quantum PCP conjecture by Dorit Aharonov, Itai Arad, and my former postdoc Thomas Vidick (soon to be a professor at Caltech). I recommend reading it in the strongest possible terms, if you’d like to see how far people have come with this problem (but also, how far they still have to go) since my “Quantum PCP Manifesto” seven years ago.

5. Christos Papadimitriou asked me to publicize that the deadline for early registration and hotel reservations for the upcoming FOCS in Berkeley is fast approaching! Indeed, it’s October 4 (three days from now). See here for details, and here for information about student travel support. (The links were down when I just tried them, but hopefully the server will be back up soon.)

Comment #1 October 1st, 2013 at 9:24 am

The FOCS 2013 site is back up along with the rest of UC Berkeley after a power outage and explosion, possibly due to copper wire theft.

Comment #2 October 1st, 2013 at 1:57 pm

Quick question, based on skimming the first 10 pages and then realizing this wasn’t related to any of my upcoming deadlines.

You say that it is enough to show that |D_A – U| = \Omega(1) with high probability. Plugging in the definition of total variation distance, this means there is a subset X of the sample space (\Lambda_{n,m} in your case) so that D_A and U assign probabilities to X which differ by \Omega(1).
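For concreteness, the identity being plugged in here — that total variation distance equals the maximum advantage over all events X, achieved by the set of outcomes where one distribution exceeds the other — can be sketched in a few lines. The toy distributions below are illustrative stand-ins, not BosonSampling ones:

```python
def total_variation(p, q):
    """TV distance between two finite distributions (dicts outcome -> prob):
    half the L1 distance, which equals the max over events X of |p(X) - q(X)|."""
    support = set(p) | set(q)
    return 0.5 * sum(abs(p.get(x, 0.0) - q.get(x, 0.0)) for x in support)

def maximizing_event(p, q):
    """An event X achieving that maximum: all outcomes where p exceeds q."""
    return {x for x in set(p) | set(q) if p.get(x, 0.0) > q.get(x, 0.0)}

# Toy example: a skewed distribution vs. the uniform one on {0, 1}
p = {0: 0.8, 1: 0.2}
u = {0: 0.5, 1: 0.5}
X = maximizing_event(p, u)
gap = sum(p[x] for x in X) - sum(u.get(x, 0.0) for x in X)
assert abs(total_variation(p, u) - gap) < 1e-12  # X witnesses the full distance
```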

Don’t you need to show not only that such an X exists, but that it can be computed from A in a reasonable amount of time? Otherwise, your result shows that the distributions are widely separated, but doesn’t show that you can figure out what experiment will separate them.

Thanks!

Comment #3 October 1st, 2013 at 7:29 pm

David Speyer #2: Aha, that stronger statement is exactly what we do prove in Section 8 (which, alas, is not in the first 10 pages, although we do formally state the theorem on page 4)! There, we show that an X that separates D_A from U by Ω(1) not only exists, but can be computed in linear time given A and an experimental outcome.

Comment #4 October 1st, 2013 at 8:51 pm

Anyone know what this is about? Interesting, or more hype of something already known?

http://news.sciencemag.org/physics/2013/09/quantum-computers-check-each-other%E2%80%99s-work

Comment #5 October 2nd, 2013 at 11:57 am

Given the desirability of stronger theorems regarding the computational complexity of simulating realistic BosonSampling experiments, the following is interesting as a practical recipe for investigating that complexity both numerically and theoretically.

• Alice publishes the data record A(m,k) of an m-repeated measurement of a k-photon BosonSampling scattering process.

• Bob publishes a simulated data record B(m,k) of that same experiment, computed by algebraic/operator methods on a standard Hilbert space.

• Carla publishes a simulated data record C(m,k) of that same experiment, computed by Lindblad/Carmichael trajectory unravelling methods on a standard Hilbert space.

• David publishes a simulated data record D(m,k,r) of that same experiment, computed by the same Lindblad/Carmichael trajectory unravelling algorithm as Carla, but pulled back onto a rank-r Landsberg-style secant variety of Segre varieties.

• Judge Ellen receives the identity-blinded datasets {A(m,k), B(m,k), C(m,k), D(m,k,r)}, and she identifies as David’s simulation the dataset having smallest log-likelihood (as computed by the methods of Bob).

An open question: Supposing that David’s simulation rank r is polynomially bounded in photon number k, can David nonetheless fool Ellen with probability P(r(k),k) > 0 asymptotically for large k?

One practical merit of the above computation is that it is feasible with laptop-computer resources for photon numbers up to (roughly) 14 or so, and with supercomputer resources up to (perhaps) 20 photons. Certainly the research effort required (although considerable) is exponentially less than any serious attempt to produce 20-photon entangled states. Another merit is that realistic models of detector physics can be included in the trajectory integrations of both Carla and David, such that David’s simulations help Alice to improve her experiments. Yet another merit is that the trajectory integrations of both Carla and David are broadly applicable to real-world problems, and so it is advantageous to acquire experience with these methods in the simplified-yet-still-physical world of BosonSampling experiments.
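Judge Ellen’s identification step in the protocol above is just an argmin over log-likelihoods under a reference model. Here is a minimal sketch, with a toy i.i.d. model standing in for “the methods of Bob” (all names and the model are my own illustrative stand-ins, not the actual trajectory-unravelling machinery):

```python
import math

def identify_simulation(datasets, loglik):
    """Return the label of the dataset least likely under the reference model."""
    return min(datasets, key=lambda label: loglik(datasets[label]))

def toy_loglik(bits, p=0.6):
    """Toy stand-in for the reference model: i.i.d. bits with P(1) = p."""
    heads = sum(bits)
    return heads * math.log(p) + (len(bits) - heads) * math.log(1 - p)

datasets = {"A": [1, 1, 0, 1, 1, 0], "D": [0, 0, 1, 0, 0, 0]}
# "D"'s record is far less likely under the p = 0.6 model, so it is flagged.
assert identify_simulation(datasets, toy_loglik) == "D"
```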

Perhaps the most significant merit, though, is that Alice and Bob and Carla and Ellen are motivated to read the fine works of Howard Carmichael and Carlton Caves in regard to quantum optics and measurement processes, including in particular Caves’ on-line notes “Completely positive maps, positive maps, and the Lindblad form” (Google finds it), which instruct Carla and David in coding methods.

Comment #6 October 3rd, 2013 at 6:44 am

Have Eisert and co withdrawn their paper?

Comment #7 October 3rd, 2013 at 7:14 am

In regard to #5 (above), when we contemplate Judge Ellen’s mathematical challenge — namely, to optimally distinguish real-from-simulated BosonSampling datasets — we appreciate the vast span of the relevant STEM literature, and paradoxically, we appreciate too the deplorable paucity and discoordination of that literature.

We reflect as follows. Judge Ellen’s optimal decision procedure — namely a likelihood ratio test — requires that she calculate, for every dataset, a prior probability under two very different assumptions:

Case ABC: nature’s exact quantum dynamical laws (from which Bob’s and Carla’s simulated datasets are sampled, and nominally Alice’s experimental dataset too), versus

Case D: the pullback quantum dynamical laws (from which David’s simulated dataset is sampled, and conceivably Alice’s experimental dataset too).

The large and reasonably well-integrated quantum information literature that governs Case ABC in the absence of noise/inefficiency is well reviewed in the Aaronson/Arkhipov literature. In the presence of noise/inefficiency, the larger and less well-integrated quantum field-theory literature that governs Case ABC has yet to be systematically surveyed with a view toward BosonSampling implications; such a survey is a lot of hard work (obviously). And finally, the mathematical literature that governs Case D is almost inconceivably vast and diverse (as MathOverflow reading lists attest, here and especially here); the implications for quantum physics and engineering of this mathematical literature are just beginning to be appreciated. So Judge Ellen has a *lot* of work to do.

Conclusion: The various workshops that Scott has been attending can be appreciated as substantially about the assimilation of the Case D literature in regard to practical STEM enterprises that begin with BosonSampling experiments, and extend naturally to a vast span of strategically crucial enterprises that are encompassed by 21st-century quantum systems engineering.

Comment #8 October 3rd, 2013 at 8:09 am

Gasarch #6: The paper hasn’t been published, but it remains on the arXiv as a preprint (which is fine, in my opinion).

Comment #9 October 3rd, 2013 at 2:48 pm

I’m sorry you’re not coming to the Newton Institute programme — I was looking forward to meeting you. Another time perhaps.

Comment #10 October 3rd, 2013 at 5:14 pm

As a complexity novice, I keep reading often that solving P=NP would make it possible to find mathematical proofs efficiently/automatically:

“…it would transform mathematics by allowing a computer to find a formal proof of any theorem which has a proof of a reasonable length, since formal proofs can easily be recognized in polynomial time”

But solving NP-complete problems efficiently doesn’t mean that suddenly any problem involving an exponential solution space (with poly time to verify the solution) would be magically solvable, no?

Unless it depends on the type of method? I.e., one that takes advantage of the underlying structure of NP-complete problems, vs. a black-box method that doesn’t assume a relation between the potential solutions (like a QC examining the whole solution space in parallel and magically collapsing the end state onto the correct one)?

Comment #11 October 4th, 2013 at 1:46 am

Timothy #9: I was looking forward to meeting you as well! Yes, it’s really a shame. Another time.

Comment #12 October 4th, 2013 at 1:50 am

Fred #10:

But solving NP-complete problems efficiently doesn’t mean that suddenly any problem involving an exponential solution space (with poly time to verify the solution) would be magically solvable, no?

Actually, it kind of does mean that! (With the usual caveats about the degree of the polynomial needing to be small enough to make things practical, etc.)

Of course, here we need to interpret “poly time to verify the solution” to mean: “there’s a conventional classical computer program, with no oracle, that verifies solutions in polynomial time.” (In other words, the problem is in NP.) But crucially, that property is satisfied for checking proofs of mathematical theorems, in any standard formal system such as ZFC.

Comment #13 October 4th, 2013 at 6:29 am

An important pedagogic virtue of Gogolin, Kliesch, Aolita, and Eisert’s Boson-Sampling in the light of sample complexity (arXiv:1306.3995), a virtue that is evident too in Aaronson’s Are Quantum States Exponentially Long Vectors? (arXiv:quant-ph/0507242), is that the word “verify” appears in neither of these fine works.

In contrast, both Aaronson and Arkhipov’s The Computational Complexity of Linear Optics (arXiv:1011.3245) and its follow-on BosonSampling Is Far From Uniform (arXiv:1309.7460) refer to “verification” often. A pedagogic deficiency of this usage is that no credible null hypothesis is subsequently postulated, and so the reader is subjected to what Press et al. call (in a passage from Numerical Recipes) “the curse of statistics”.

Lifting the curse is simple:

A modest suggestion: The BosonSampling literature will be improved and clarified if the terms “separate” and/or “distinguish” are embraced henceforth, and the term “verify” deprecated.

This sharpened nomenclature focuses the reader’s attention properly upon the crucial question “Separate what? Distinguish what?” A wonderful (and much-quoted) passage in Are Quantum States Exponentially Long Vectors? asks this question concretely. The search for Sure/Shor separators nowadays extends to the search for Sure/BosonSampling separators, and so it is natural to aggregate such separators.

This sharpened nomenclature focuses our attention upon concrete postulates like “Sure = PTIME, which is separated from Hilbert by algebraic state-space rank.”

Hopefully in the 21st century we will learn whether postulates like these are true or false … and this fresh understanding will be good news either way.

Comment #14 October 4th, 2013 at 7:20 am

John Sidles #13: Dude. We talk clearly and explicitly, throughout our 41-page paper, about the various null hypotheses that we’re comparing against the “BosonSampling hypothesis.” One such null hypothesis (the one Gogolin et al. cared about) was that of uniform random sampling, and in that case, we prove (contrary to their claim) that that hypothesis *can* be distinguished from the BosonSampling hypothesis in classical polynomial time. We also went further, to consider other null hypotheses, like the “classical mockup distributions” M_A, F_A, and B_A (see Section 8.1). The sentence you quoted was purely for motivation, and was followed immediately by a more careful discussion that made clear what kind of “verification” we were talking about.

Maybe, instead of faulting us over the words we used, you could focus on whether our actual claims are *true* or *false*, and likewise Gogolin et al.’s?

Comment #15 October 4th, 2013 at 9:20 am

Scott, for me an especially interesting statement in the preprint BosonSampling Is Far From Uniform is in the section “6. The intractability of verification”, where the word “verification” appears three times with three different senses. Ouch!

Couldn’t the point be made more plainly and clearly without using the word “verification” in *any* of its various (and incompatible) senses? E.g.

The amended phrasing accommodates the empirical reality that PTIME quantum simulation capabilities — which the original passage calls “a large and interesting collection of ‘null hypotheses’ ” — have been growing more powerful year-by-year, at a Moore’s Law pace, and the limits to further improvements in PTIME quantum simulation capability are not yet foreseeable.

Keep in mind too that if it should turn out — as is conceivable and even (as it seems to me) entirely plausible — that for finite detector efficiency e < 1, BosonSampling datasets are inseparable from well-designed PTIME simulations of those datasets, then that finding would (arguably) be *better* news even than scalable quantum computing, for the practical reason that these same PTIME quantum simulation methods would find immediate applications in enterprises of greater practical utility and strategic consequence than BosonSampling.

That’s one more reason why (as it seems to me) BosonSampling is a *wonderful* field of quantum research, for which you and Alex Arkhipov amply deserve admiration and thanks (from me and everyone).

Comment #16 October 5th, 2013 at 7:06 am

You mention you can check boson sampling results with classical computation. How specific are these tests? Do they let someone prove to you that he can compute boson sampling, such that you can recognize any fake with high probability?

Comment #17 October 5th, 2013 at 12:34 pm

Hey, your former postdoc has his own blog, and it looks pretty good! I see it’s already in your blogroll: http://mycqstate.wordpress.com/

Comment #18 October 6th, 2013 at 9:57 am

[…] finishing part 2 of the discussion on the furlough, I wanted to share another interesting debate regarding a quantum computational problem called […]

Comment #19 October 6th, 2013 at 10:07 am

To this extent, I would also argue that the use of uniform distributions in combination with the word “classical” is something I have a problem with. So morally I have issues with the Gogolin paper.

Comment #20 October 9th, 2013 at 9:43 am

jonas #16:

You mention you can check boson sampling results with classical computation. How specific are these tests? Do they let someone prove to you that he can compute boson sampling, such that you can recognize any fake with high probability?

Read the paper — we discuss that question in great detail. The short answer is no: we don’t know how to distinguish correct BosonSampling from any *possible* fake in classical polynomial time, and depending on the order of the quantifiers, that’s either trivially impossible or an important open problem. We do, however, know how to distinguish BosonSampling from various *specific* fakes in classical polynomial time, including the fake that Gogolin et al. focused on.

Comment #21 October 9th, 2013 at 9:44 am

asdf #17: Yes, Tom Vidick’s blog is great!

Comment #22 October 9th, 2013 at 5:20 pm

Scott #21: Another Arkhipov (Vasili) was responsible for preventing the launch of a Soviet nuclear torpedo during the Cuban missile crisis. V. Arkhipov later died of radiation poisoning suffered during an earlier submarine reactor accident, presumably caused by a pressure gauge not having been installed. Nothing to do with the topic of this post, certainly.

Comment #23 October 10th, 2013 at 4:36 am

To help us appreciate the richness of the considerations associated to Scott’s “order of the quantifiers”, let us consider the simpler scenario in which Experimenter Alice submits to Judge Ellen a dataset consisting of (for example) 1024 flips of a *biased* coin having 60 percent probability of “heads”.

Similarly, Simulationist Bob submits to Judge Ellen a computationally simulated dataset, and Judge Ellen’s job is to decide, as reliably as feasible, which dataset is the simulation.

Before considering this scenario further, students especially should read this week’s Tim Gowers Weblog, with particular attention to Gowers’ discussion of ‘random-like’ functions.

Razborov and Rudich’s natural proofs argument

To brutally abbreviate a large number of subtle considerations, we imagine a dialog like this:

Judge Ellen: I’m going to apply a likelihood ratio test; the least likely dataset will be identified as the simulation.

Simulationist Bob: That judging procedure is too simple for our game to be fun: an all-heads dataset will fool you every time. Please make it harder for us simulationists to win!

Judge Ellen: OK, instead I’m going to identify the simulated dataset as the one with lower Kolmogorov complexity.

Simulationist Bob: Now you’re bluffing, Judge Ellen, ’cuz no one has that much computational power.

Judge Ellen: OK Bob, here’s a fair deal. You get access to a random number generator; however, your computational resources are restricted to PTIME, *and* you have to disclose your simulation algorithm to me. In return I will stipulate to you that my test is a log-likelihood ratio, and that I have access to computational resources up to and including EXP, as required to compute that ratio.

Simulationist Bob: That is a hard-but-fair test. It’s evident that under these rules I *can* indistinguishably simulate biased coin-flipping, and moreover it’s plausible (under standard complexity-theoretic assumptions) that I *can’t* simulate zero-noise BosonSampling (because PTIME simulation resources aren’t adequate to sample permanents). Obviously it’s an important open problem — a problem that’s fun, too! — whether noisy BosonSampling can be indistinguishably simulated, using for example variations on the clever computational tricks for which Martin Karplus, Michael Levitt and Arieh Warshel won the 2013 Nobel Prize in Chemistry.

Judge Ellen: Yes, this noisy BosonSampling contest will be fun!

Comment #24 October 10th, 2013 at 4:48 am

Later that day, Simulationist Bob texts Judge Ellen:

Simulationist Bob: By the way Ellen, am I allowed to cheat by lying to you about my simulation algorithm? Do there exist BosonSampling judging criteria that are robustly efficacious, even when blinded to the simulation algorithm?

Judge Ellen: That is an important open problem in BosonSampling too!

Comment #25 October 10th, 2013 at 9:17 am

“when a problem that naively seems like it should take exponential time turns out nontrivially to be in P or BPP, an “overarching reason” why the reduction happens can typically be identified”

I’ve been wondering about this when looking at Maximum Bipartite Matching (pairing up boys and girls) vs. 3-Dimensional Matching (forming triples of boys, girls, and pets).

A priori, Bipartite matching doesn’t seem any easier (or 3-Dim matching doesn’t seem that much harder).

Both involve exponentially big possible subsets, and their internal structure isn’t very different.
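One commonly cited “overarching reason” why bipartite matching lands in P, despite the exponentially large space of matchings, is its augmenting-path structure: any non-maximum matching admits an augmenting path, and such a path can be found in polynomial time. That structure has no analogue in three dimensions. A minimal augmenting-path sketch (my own illustration):

```python
def max_bipartite_matching(adj, n_left, n_right):
    """Maximum bipartite matching via augmenting paths.
    adj[u] lists the right-side vertices adjacent to left-side vertex u."""
    match_right = [-1] * n_right  # match_right[v] = left vertex matched to v, or -1

    def try_augment(u, seen):
        # Try to match u, recursively evicting current partners along an
        # augmenting path; `seen` prevents revisiting right-side vertices.
        for v in adj[u]:
            if v not in seen:
                seen.add(v)
                if match_right[v] == -1 or try_augment(match_right[v], seen):
                    match_right[v] = u
                    return True
        return False

    return sum(try_augment(u, set()) for u in range(n_left))

# 3 boys, 3 girls: boy 0 likes girls 0 and 1; boy 1 likes only girl 0;
# boy 2 likes only girl 2. A perfect matching exists and is found.
assert max_bipartite_matching([[0, 1], [0], [2]], 3, 3) == 3
```

The 3-dimensional version is NP-complete precisely because no such local-improvement step is known to work.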

It’s a bit like Fermat’s last theorem, where things work out for n=2, but not for n >= 3 (but in that case it’s not obvious either why that’s the case).

Comment #26 October 10th, 2013 at 1:36 pm

@Fred There is a very good heuristic for why FLT isn’t solvable for n \geq 4 and is solvable for n=2. Consider (x,y,z) between 0 and B. Then x^n+y^n takes on B^2 values between 0 and 2 B^n, while z^n takes on B values in this range. If we think of this as inserting B^2 values, and then B more, into a size-2B^n hash table, standard heuristics predict few collisions for B*B^2 << B^n and many for B*B^2 >> B^n.

It’s hard to give a good heuristic for n=3 because the answer depends on the exact nature of the equation: There are infinitely many relatively prime solutions to x^3+y^3=2 z^3.
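David’s counting heuristic is easy to probe numerically. This sketch (my own illustration) counts exact collisions x^n + y^n = z^n with all variables at most B, by precomputing the set of n-th powers:

```python
def count_fermat_solutions(n, B):
    """Count pairs 1 <= x <= y <= B with x**n + y**n an n-th power of
    some z in 1..B."""
    nth_powers = {z ** n for z in range(1, B + 1)}
    return sum(1 for x in range(1, B + 1)
                 for y in range(x, B + 1)
                 if x ** n + y ** n in nth_powers)

# Many collisions for n = 2 (the Pythagorean triples), none for n >= 3,
# just as the hash-table heuristic predicts.
assert count_fermat_solutions(2, 50) > 0
assert count_fermat_solutions(4, 50) == 0
```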

Comment #27 October 10th, 2013 at 2:31 pm

@David Speyer thanks!

Comment #28 October 10th, 2013 at 7:33 pm

Hi, Scott! I want to make a comment about your past lectures about free will.

Actually I want to point you to a result by Thomas Breuer, who showed that, due to self-reference, a system which includes the observer does not follow the same physical theories as any other system. In other words, there are no universally valid theories, neither random nor deterministic.

A system that contains the observer has totally indistinguishable states in the phase space. This phenomenon Breuer calls subjective decoherence.

I think this pretty much proves free will of THE observer.

The papers by Breuer are here:

https://homepages.fhv.at/tb/cms/?download=tbDISS.pdf

https://homepages.fhv.at/tb/cms/?download=tbPHILSC.pdf

https://homepages.fhv.at/tb/cms/?download=QuantumTuring-v2b.pdf

Here is a relevant question on Stackexchange:

http://physics.stackexchange.com/questions/4841/what-is-the-wavefunction-of-the-observer-himself

I’d also be interested in what you think about this question:

http://physics.stackexchange.com/questions/65696/is-it-possible-to-detect-subjective-decoherence-if-yes-how

Comment #29 October 10th, 2013 at 7:44 pm

I also want to know what you think about my opinion that the unpredictability of the observer effectively means that his brain is a hypercomputer that no machine will ever outsmart. And as such, a technological singularity as envisaged by Vinge is impossible (the prerequisite for such a singularity is the creation of machines smarter than humans), because there will be at least one man whom the machines will not be able to outsmart.

Comment #30 October 11th, 2013 at 11:59 am

Sorry, I claimed above infinitely many solutions for x^3+y^3=2 z^3, but that’s wrong. The point which I thought was infinite order on the elliptic curve is actually torsion. But the general point is right: There are D for which x^3+y^3=D z^3 has infinitely many solutions.

Comment #31 October 12th, 2013 at 4:53 am

David Speyer (#2, #26, #30, etc.), in regard to your hundreds of highly rated answers and comments on MathOverflow that relate to algebraic geometry [AG], please let me say that many folks (including us engineers) are greatly helped by them.

For AG novices especially, it’s disheartening to encounter comments like Miles Reid’s in the preface to Igor R. Shafarevich’s Basic Algebraic Geometry 1: Varieties in Projective Space. To paraphrase Mark Twain (as is fun): “if math students need a year or two, then systems engineers surely need a decade!”

David Speyer, sustained good-hearted mathematical sharing like yours helps many, and hopefully you will not mind this public appreciation of it and thanks for it.

Comment #32 October 12th, 2013 at 7:11 pm

Scott, it’s a bit late to be making this observation, but you should fix your first link to the FOCS website.

Comment #33 October 13th, 2013 at 11:54 am

David Speyer #30: technically, x^3+y^3=2*z^3 still has infinitely many solutions, specifically x=y=z works for any z.

Comment #34 October 13th, 2013 at 1:46 pm

Jonas (re #33), just as “57 is a Grothendieck Prime”, your observation establishes that “2 is a Fermat/Speyer coefficient.”

A little more seriously, the general paucity of concrete examples in the algebraic-geometry literature hinders engineering appreciation of its practical implications. Conversely, it is helpful to read the AG literature with practical engineering applications in mind, and with intuitions informed by concrete computations.

Comment #35 October 14th, 2013 at 5:39 pm

This is just a test post to see if MathJax now allows LaTeX in comments,

$\frac{x^2 + bx + c}{2}$

Comment #36 October 14th, 2013 at 5:41 pm

OK, let me try again: $\frac{x^2 + bx + c}{2}$

Comment #37 October 14th, 2013 at 6:49 pm

Alternative test

$$ \mathbf{V}_1 \times \mathbf{V}_2 = \begin{vmatrix}

\mathbf{i} & \mathbf{j} & \mathbf{k} \\

\frac{\partial X}{\partial u} & \frac{\partial Y}{\partial u} & 0 \\

\frac{\partial X}{\partial v} & \frac{\partial Y}{\partial v} & 0

\end{vmatrix} $$

Comment #38 October 14th, 2013 at 9:58 pm

Inline Test:

$ WE(\mathcal{C}) = \sum_{i=0}^{n}WE_{i}(\mathcal{C}) x^{i} $

Display Test:

$$ WE(\mathcal{C}) = \sum_{i=0}^{n}WE_{i}(\mathcal{C}) x^{i} $$

Scott: You should probably allow editing (or preview) of comments to some extent. I’m not sure how WordPress facilitates this; probably another plugin.

Comment #39 October 15th, 2013 at 12:15 am

All the classical knowledge of quantum computing, and Scott still cannot make his real $$ work here :)

I know this is a lame comment.

Comment #40 October 15th, 2013 at 6:22 am

OK, third try: $$\frac{x^2 + bx + c}{2}$$

Comment #41 October 15th, 2013 at 9:09 am

Inline vs Display Test:

Display:

This is a line of text $$ \frac{x^2 + bx + c}{2} $$ with math in it.

Inline:

This is another line of text \( \frac{x^2 + bx + c}{2} \) with math in it.

Comment #42 October 15th, 2013 at 9:10 am

Hey, it worked! But now let’s try some inline math: \( x^2 \)

Comment #43 October 15th, 2013 at 9:26 am

Commutative Diagram Test:

$$ \require{AMScd} \begin{equation}\begin{CD}

S^{{\mathcal{W}}_\Lambda}\otimes T @>j>> T\\

@VVV @VV{P}V\\

(S\otimes T)/I @= (Z\otimes T)/J

\end{CD} \end{equation} $$

This tests whether the AMScd package is installed/active or not. It can be used by including “\require{AMScd}” in your code.