⟨QIT/QSE⟩ ∩ ⟨*geometric dynamics*⟩

and have summarized a subset of these entries on Dave Bacon’s blog as a mathematical perspective on a series of on-line IIQC vignettes titled *Seth Lloyd Talks Shop*.

Seth’s quantum “Shop Talks” are terrific … and are illuminated by some terrific mathematical articles too.

Here are a few (fairly serious) comments relating to the article’s abstract … the overall idea is to make it easier to parse the abstract and thus broadly grasp the implications of the work.

Present abstract: “We give evidence that ⟨linear optics experiments⟩ would allow the solution of classically-intractable sampling and search problems.”

Hmmm … it’s mighty easy to parse the above statement as referring to *generic* sampling and search problems … which isn’t the case. Clearer might be:

“We give evidence that linear optics experiments can sample from photon-counting distributions whose classical simulation is formally intractable.”

Then the abstract should state plainly the key complexity-theoretic respect in which these distributions are intractable:

Specifically, these photon-counting distributions are proved to have the following novel property: verifying that a sample claimed to derive from these distributions does in fact derive from them is infeasible with classical resources in P, unless the polynomial hierarchy collapses.

Then the abstract should state plainly the key physics respect in which these experiments are novel:

The proposed experiments are novel in that (by design) they generate quantum trajectories that uniformly sample large volumes of quantum state-space. They thus provide a new and potentially uniquely sensitive experimental test of the hypothesis that quantum state-space has exponentially many dimensions and a globally linear (Hilbert) geometry.

Scott and Alex, we can be reasonably confident that concurrently with experimental efforts to realize these ideas, there will be theoretical efforts to simulate the experiments, not on a full Hilbert space, but on lower-dimension immersions (product-state manifolds mainly). Thus, a key information-theoretic question—which might as well be asked in the abstract—is going to be the following:

By what concrete verification test(s) can one most reliably and efficiently distinguish photon-counting datasets that originate from dynamics on a full Hilbert space, from datasets that arise from the same quantum dynamics pulled back onto immersed submanifolds?

Scott and Alex, it seems to me that your analysis has the great virtue of raising these key questions, which apply equally to experiments and to computational simulations of those experiments. And your analysis has the still *greater* virtue of providing powerful new theorems and complexity-theoretic insights that (partly) answer these questions.

So please accept this appreciation of, and thanks for, your wonderfully thought-provoking work!

That post expresses a somewhat broader & more engineering-oriented version of the ECT than the narrower (extralusionary?) version that sometimes is asserted (implicitly or explicitly) by quantum information theorists.

It is evident that there is at present no statement of the ECT that satisfies the three criteria of being widely accepted across disciplines, mathematically rigorous, and experimentally testable.

In this regard, I have just checked that MathSciNet lists 837 articles having in their title the phrase *“What is” …* and almost all of these titles take the form of questions. And yes … at least some of these articles bear directly on the question “What is the ECT?”

So perhaps someday Scott might post a weblog topic *“What is the ECT?”* And it seems that this question might even be worthy of a research article in itself … certainly there is ample precedent for it!

*“What is the ECT?”* would be a fine *Shtetl Optimized* topic … because upon closer examination, we appreciate that no-one (at present) has a clear idea of how best to pose this question … much less begin to answer the follow-on question *“Is the ECT true of Nature?”*.

Scott and anyone else who might be interested,

I wrote a blog post describing an implementation, using optical fibers, of the Aaronson-Arkhipov experiment. Here it is.

Suppose we want m=50 and 13 gates. Construct a “box” with 50 left (L) ports and 50 right (R) ports.

Inside the box:

Each L port splits into 2 optical fibers, call them the NI (NI stands for non-interacting) and the INT-L (INT stands for interacting). The NI fibers go from the L ports to the R ports in a 1-1 fashion. All the INT-L fibers converge to a single point O that acts as a beam splitter. From O, 50 fibers, call them INT-R, fan out, and connect to the R ports in a 1-1 fashion. Hence, inside the box, each L port fans out into two fibers (NI and INT-L) and each R port is the convergence of two fibers (NI and INT-R).

Outside the box:

Reuse the box 13 times.

Initially the photon sources are connected to the L ports of the box. The photons enter the box at the L ports and exit at the R ports. Each time, except the last, that the photons exit the box, they meet delays and mirrors that make them go through the box one more time. The last time the photons exit the box, they meet the detectors.

Each time the photons are outside the box, two of the 50 ports are chosen at random and made “active”, while all others remain “inactive”. The active ports direct the photons onto INT fibers, whereas the inactive ports direct the photons onto NI fibers. On each such pass, the reflection and transmission coefficients of the interaction region O are also changed at random.
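As a sanity check, the pass-by-pass mode transformation this describes can be sketched numerically. Below is a minimal NumPy model (my own sketch, not from the post), assuming ideal lossless fibers and a simple two-mode beam splitter at O: each of the 13 passes applies a random beam splitter to two randomly chosen active ports, while the other 48 ports pass through on their NI fibers unchanged.

```python
import numpy as np

rng = np.random.default_rng(0)

def pass_unitary(m, rng):
    """One pass through the box: two randomly chosen active ports mix
    at the beam splitter O; the remaining m-2 ports are untouched."""
    i, j = rng.choice(m, size=2, replace=False)
    theta = rng.uniform(0, 2 * np.pi)  # random reflection/transmission
    U = np.eye(m, dtype=complex)
    U[i, i] = U[j, j] = np.cos(theta)
    U[i, j] = U[j, i] = 1j * np.sin(theta)
    return U

m, passes = 50, 13
U_total = np.eye(m, dtype=complex)
for _ in range(passes):
    U_total = pass_unitary(m, rng) @ U_total

# The composed 50-mode transformation is still unitary, as it must be
assert np.allclose(U_total @ U_total.conj().T, np.eye(m))
```

Note that 13 two-mode mixings touch at most 26 of the 50 modes, so this particular composition is far from a Haar-random 50-mode unitary; the sketch only illustrates the bookkeeping of the passes, not the full generality the Aaronson-Arkhipov proposal calls for.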

Improvement:

You don’t really need 13 polygons. You can “reuse” a single polygon 13 times. Say your single 100-sided regular polygon has 50 sides in the x<0 half-plane, and 50 sides in the x>0 half-plane. Then connect the photon sources and detectors to the 50 ports with x<0 and terminate the 50 ports in the x>0 half-plane with mirrors and delays. Change the properties of the polygon 13 times, doing this while the photons are inside the delays.


The above won’t work. What you can do instead is to have 13 of these polygons, non-overlapping, all lying in a plane. The light also is confined to the same plane. 50 beams enter the 50 input ports of polygon 1. The 50 exit ports of polygon 1 are connected to the 50 input ports of polygon 2 in a 1-1 fashion. The 50 exit ports of polygon 2 are connected to the 50 input ports of polygon 3. And so on up to polygon 13. As before, the transmission and reflection coefficients of each side of each polygon can be controlled independently by electrical means.
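The 1-1 chaining of polygons amounts to a matrix product of 50-mode transformations. Here is a minimal sketch of that cascade (my own simplification, not from the post), where each fully programmable polygon is idealized as a Haar-random 50×50 unitary, drawn via the standard QR-of-a-complex-Gaussian construction:

```python
import numpy as np

rng = np.random.default_rng(1)

def haar_unitary(m, rng):
    """Haar-random m x m unitary via QR of a complex Gaussian matrix,
    standing in for one fully programmable polygon."""
    z = (rng.standard_normal((m, m)) + 1j * rng.standard_normal((m, m))) / np.sqrt(2)
    q, r = np.linalg.qr(z)
    d = np.diag(r)
    return q * (d / np.abs(d))  # fix column phases for Haar measure

m, n_polygons = 50, 13
cascade = np.eye(m, dtype=complex)
for _ in range(n_polygons):
    # exit ports of polygon k feed input ports of polygon k+1, 1-1
    cascade = haar_unitary(m, rng) @ cascade

# The end-to-end 50-mode transformation remains unitary
assert np.allclose(cascade @ cascade.conj().T, np.eye(m))
```

Whether 13 independently programmed polygons suffice to realize an arbitrary 50-mode unitary is a separate question; the sketch only shows how the 1-1 port wiring composes into a single end-to-end transformation.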