Five announcements

Update (Oct. 3): OK, a sixth announcement.  I just posted a question on CS Theory StackExchange, entitled Overarching reasons why problems are in P or BPP.  If you have suggested additions or improvements to my rough list of “overarching reasons,” please post them over there — thanks!


1. I’m in Oxford right now, for a Clay Institute workshop on New Insights into Computational Intractability.  The workshop is concurrent with three others, including one on Number Theory and Physics that includes an amplituhedron-related talk by Andrew Hodges.  (Speaking of which, see here for a small but non-parodic observation about expressing amplitudes as volumes of polytopes.)

2. I was hoping to stay in the UK one more week, to attend the Newton Institute’s special semester on Mathematical Challenges in Quantum Information over in Cambridge.  But alas I had to cancel, since my diaper-changing services are needed in the other Cambridge.  So, if anyone in Cambridge (or anywhere else in the United Kingdom) really wants to talk to me, come to Oxford this week!

3. Back in June, Jens Eisert and three others posted a preprint claiming that the output of a BosonSampling device would be “indistinguishable from the uniform distribution” in various senses.  Ever since then, people have been emailing me, leaving comments on this blog, and cornering me at conferences to ask whether Alex Arkhipov and I had any response to these claims.  OK, so just this weekend, we posted our own 41-page preprint, entitled “BosonSampling Is Far From Uniform.”  I hope it suffices by way of reply!  (Incidentally, this is also the paper I hinted at in a previous post: the one where π²/6 and the Euler-Mascheroni constant make cameo appearances.)  To clarify, if we just wanted to answer the claims of the Eisert group, then I think a couple of paragraphs would suffice for that (see, for example, these PowerPoint slides).  In our new paper, however, Alex and I take the opportunity to go further: we study lots of interesting questions about the statistical properties of Haar-random BosonSampling distributions, and about how one might test efficiently whether a claimed BosonSampling device worked, even with hundreds or thousands of photons.

4. Also on the arXiv last night, there was a phenomenal survey about the quantum PCP conjecture by Dorit Aharonov, Itai Arad, and my former postdoc Thomas Vidick (soon to be a professor at Caltech).  I recommend reading it in the strongest possible terms, if you’d like to see how far people have come with this problem (but also, how far they still have to go) since my “Quantum PCP Manifesto” seven years ago.

5. Christos Papadimitriou asked me to publicize that the deadline for early registration and hotel reservations for the upcoming FOCS in Berkeley is fast approaching!  Indeed, it’s October 4 (three days from now).  See here for details, and here for information about student travel support.  (The links were down when I just tried them, but hopefully the server will be back up soon.)

43 Responses to “Five announcements”

  1. Paul Beame Says:

    The FOCS 2013 site is back up along with the rest of UC Berkeley after a power outage and explosion, possibly due to copper wire theft.

  2. David Speyer Says:

    Quick question, based on skimming the first 10 pages and then realizing this wasn’t related to any of my upcoming deadlines.

    You say that it is enough to show that ‖D_A − U‖ = Ω(1) with high probability. Plugging in the definition of total variation distance, this means there is a subset X of the sample space (Λ_{n,m} in your case) such that D_A and U assign probabilities to X that differ by Ω(1).
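
    (For reference, the standard definition I’m plugging in: ‖D_A − U‖ = max_{X ⊆ Λ_{n,m}} |Pr_{D_A}[X] − Pr_U[X]|, which equals half the ℓ1 distance between the two distributions.)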

    Don’t you need to show not only that such an X exists, but that it can be computed from A in a reasonable amount of time? Otherwise, your result shows that the distributions are widely separated, but doesn’t show that you can figure out what experiment will separate them.

    Thanks!

  3. Scott Says:

    David Speyer #2: Aha, that stronger statement is exactly what we do prove in Section 8 (which, alas, is not in the first 10 pages, although we do formally state the theorem on page 4)! There, we show that an X that separates D_A from U by Ω(1) not only exists, but can be computed in linear time given A and an experimental outcome.
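
    To give the flavor (a hedged sketch only — the precise statistic, normalization, and threshold are what Section 8 is about): the test computes a simple linear-time function of the rows of A that the experimental outcome selects, for instance a normalized product of squared row norms, and compares it against a fixed threshold.

      import numpy as np

      def row_norm_statistic(A, outcome):
          """Sketch of a linear-time distinguishing statistic of this kind:
          a normalized product of squared norms of the rows of A that the
          outcome selects.  A is the m-by-n column submatrix of the
          interferometer unitary; `outcome` lists the n output modes that
          fired.  The paper's exact estimator and threshold are in Section 8."""
          m, n = A.shape
          selected = A[list(outcome), :]
          # Each squared row norm has mean n/m for Haar-random A, so each
          # factor below is normalized to have mean 1.
          return np.prod((m / n) * np.sum(np.abs(selected) ** 2, axis=1))

    Since D_A favors outcomes whose rows have larger norms, such a product tends to come out larger under D_A than under the uniform distribution — roughly speaking, that gap is the kind of thing the Ω(1) separation exploits.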

  4. asdf Says:

    Anyone know what this is about? Interesting, or more hype of something already known?

    http://news.sciencemag.org/physics/2013/09/quantum-computers-check-each-other%E2%80%99s-work

  5. John Sidles Says:

    Given the desirability of stronger theorems regarding the computational complexity of simulating realistic BosonSampling experiments, the following is interesting as a practical recipe for investigating that complexity both numerically and theoretically.

    • Alice publishes the data record A(m,k) of an m-repeated measurement of a k-photon BosonSampling scattering process.

    • Bob publishes a simulated data record B(m,k) of that same experiment, computed by algebraic/operator methods on a standard Hilbert space.

    • Carla publishes a simulated data record C(m,k) of that same experiment, computed by Lindblad/Carmichael trajectory unravelling methods on a standard Hilbert space.

    • David publishes a simulated data record D(m,k,r) of that same experiment, computed by the same Lindblad/Carmichael trajectory unravelling algorithm as Carla, but pulled-back onto a rank-r Landsberg-style secant variety of Segre varieties.

    • Judge Ellen receives the identity-blinded datasets {A(m,k),B(m,k),C(m,k),D(m,k,r)}, and she identifies as David’s simulation the dataset having the smallest log-likelihood (as computed by the methods of Bob).

    An open question  Supposing that David’s simulation rank r is polynomially bounded in photon-number k, can David nonetheless fool Ellen with probability P(r(k),k)>0 asymptotically for large k?

    One practical merit of the above computation is that it is feasible with laptop-computer resources for photon numbers up to (roughly) 14 or so, and with super-computer resources up to (perhaps) 20 photons. Certainly the research effort required (although considerable) is exponentially less than any serious attempt to produce 20-photon entangled states. Another merit is that realistic models of detector physics can be included in the trajectory integrations of both Carla and David, such that David’s simulations help Alice to improve her experiments. Yet another merit is that the trajectory integrations of both Carla and David are broadly applicable to real-world problems, and so it is advantageous to acquire experience with these methods in the simplified-yet-still-physical world of BosonSampling experiments.

    Perhaps the most significant merit, though, is that Alice and Bob and Carla and Ellen are motivated to read the fine works of Howard Carmichael and Carlton Caves in regard to quantum optics and measurement processes, including in particular Caves’ on-line notes “Completely positive maps, positive maps, and the Lindblad form” (Google finds it), which instruct Carla and David in coding methods.

  6. GASARCH Says:

    Have Eisert and co withdrawn their paper?

  7. John Sidles Says:

    In regard to #5 (above), when we contemplate Judge Ellen’s mathematical challenge — namely, to optimally distinguish real-from-simulated BosonSampling datasets — we appreciate the vast span of the relevant STEM literature, and paradoxically, we appreciate too the deplorable paucity and discoordination of that literature.

    We reflect as follows. Judge Ellen’s optimal decision procedure — namely a likelihood ratio test — requires that she calculate the likelihood of every dataset under two very different assumptions: Case ABC, nature’s exact quantum dynamical laws (from which Bob’s and Carla’s simulated datasets are sampled, and nominally Alice’s experimental dataset too), versus Case D, the pullback quantum dynamical laws (from which David’s simulated dataset is sampled, and conceivably Alice’s experimental dataset too).

    The large and reasonably well-integrated quantum information literature that governs Case ABC in the absence of noise/inefficiency is well-reviewed in the Aaronson/Arkhipov literature. In the presence of noise/inefficiency, the larger and less well-integrated quantum field-theory literature that governs Case ABC must now be systematically surveyed with a view toward BosonSampling implications … and this survey is a lot of hard work (obviously). And finally, the mathematical literature that governs Case D is almost inconceivably vast and diverse (as MathOverflow reading lists attest here and especially here); the implications for quantum physics and engineering of this mathematical literature are just beginning to be appreciated. So Judge Ellen has a *lot* of work to do.

    Conclusion  The various workshops that Scott has been attending can be appreciated as substantially about the assimilation of the Case D literature in regard to practical STEM enterprises that begin with BosonSampling experiments, and extend naturally to a vast span of strategically crucial enterprises that are encompassed by 21st century quantum systems engineering.

  8. Scott Says:

    Gasarch #6: The paper hasn’t been published, but it remains on the arXiv as a preprint (which is fine, in my opinion).

  9. Timothy Gowers Says:

    I’m sorry you’re not coming to the Newton Institute programme — I was looking forward to meeting you. Another time perhaps.

  10. Fred Says:

    As a complexity novice, I keep reading that solving P=NP would make it possible to find mathematical proofs efficiently/automatically:

    “…it would transform mathematics by allowing a computer to find a formal proof of any theorem which has a proof of a reasonable length, since formal proofs can easily be recognized in polynomial time”

    But solving NP-complete problems efficiently doesn’t mean that suddenly any problem involving an exponential solution space (with poly time to verify the solution) would be magically solvable, no?
    Unless it depends on the type of method? I.e., one that takes advantage of the underlying structure of NP-complete problems, vs. a black-box method that doesn’t assume any relation between the potential solutions (like a QC examining the whole solution space in parallel and magically collapsing the end state onto the correct one)?

  11. Scott Says:

    Timothy #9: I was looking forward to meeting you as well! Yes, it’s really a shame. Another time.

  12. Scott Says:

    Fred #10:

      But solving NP-complete problems efficiently doesn’t mean that suddenly any problem involving an exponential solution space (with poly time to verify the solution) would be magically solvable, no?

    Actually, it kind of does mean that! (With the usual caveats about the degree of the polynomial needing to be small enough to make things practical, etc.)

    Of course, here we need to interpret “poly time to verify the solution” to mean: “there’s a conventional classical computer program, with no oracle, that verifies solutions in polynomial time.” (In other words, the problem is in NP.) But crucially, that property is satisfied for checking proofs of mathematical theorems, in any standard formal system such as ZFC.
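
    In fact, the search-to-decision reduction is simple enough to sketch. Below, exists_proof_with_prefix stands in for the hypothetical polynomial-time decision procedure that P=NP would hand you — it answers the NP question “does some valid proof of the theorem, of length at most max_len, begin with this bit-string?” (Both names are mine, purely for illustration.)

      def find_proof(theorem, max_len, exists_proof_with_prefix):
          """Recover a formal proof bit by bit, via self-reducibility.
          Each call to exists_proof_with_prefix is one NP decision, so
          if P = NP the whole search runs in polynomial time."""
          if not exists_proof_with_prefix(theorem, max_len, ""):
              return None  # no proof of reasonable length exists
          prefix = ""
          while len(prefix) < max_len:
              if exists_proof_with_prefix(theorem, max_len, prefix + "0"):
                  prefix += "0"
              elif exists_proof_with_prefix(theorem, max_len, prefix + "1"):
                  prefix += "1"
              else:
                  break  # neither extension works: prefix is a complete proof
          return prefix

    Only polynomially many decision queries get made, which is why “proofs are efficiently checkable in polynomial time” really would become “reasonable-length proofs are efficiently findable.”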

  13. John Sidles Says:

    An important pedagogic virtue of Gogolin, Kliesch, Aolita, and Eisert’s Boson-Sampling in the light of sample complexity (arXiv:1306.3995) — a virtue that is evident too in Aaronson’s Are Quantum States Exponentially Long Vectors? (arXiv:quant-ph/0507242) — is that the word “verify” appears in neither of these fine works.

    In contrast, both Aaronson and Arkhipov’s The Computational Complexity of Linear Optics (arXiv:1011.3245) and its follow-on BosonSampling Is Far From Uniform (arXiv:1309.7460) refer to “verification” often, in passages like

    Suppose we want to verify that the output of a BosonSampling device matches the predictions of quantum mechanics.

    A pedagogic deficiency of this open statement is that no credible null hypothesis is subsequently postulated. And so the reader is subjected to what Press et al. (in a passage from Numerical Recipes) call “the curse of statistics”:

    That is the curse of statistics, that it can never prove things, only disprove them! At best, you can substantiate a hypothesis by ruling out, statistically, a whole long list of competing hypotheses, every one that has ever been proposed. After a while your adversaries and competitors will give up trying to think of alternative hypotheses, or else they will grow old and die, and then your hypothesis will become accepted. Sounds crazy, we know, but that’s how science works!

    Lifting the curse is simple:

    A modest suggestion  The BosonSampling literature will be improved and clarified if the terms “separate” and/or “distinguish” are embraced henceforth, and the term “verify” deprecated.

    This sharpened nomenclature focuses the reader’s attention properly upon the crucial question “Separate what? Distinguish what?” A wonderful (and much-quoted) passage in Are Quantum States Exponentially Long Vectors? asks this question concretely:

    Q  What criterion separates the quantum states we’re sure we can prepare, from the states that arise in Shor’s factoring algorithm? I call such a criterion a “Sure/Shor separator.”

    The search for Sure/Shor separators nowadays is extended to the search for Sure/BosonSampling separators, and so it is natural to aggregate these separators as follows:

    Q  What criterion separates the quantum datasets that we’re sure we can observe, from the datasets that we can simulate in PTIME, from the datasets that are associated to Hilbert’s state-space? We call such a criterion a three-way “Sure/PTIME/Hilbert separator.”

    This sharpened nomenclature focuses our attention upon concrete postulates like “Sure = PTIME, which is separated from Hilbert by algebraic state-space rank.”

    Hopefully in the 21st century we will learn whether postulates like these are true or false … and this fresh understanding will be good news either way.

  14. Scott Says:

    John Sidles #13: Dude. We talk clearly and explicitly, throughout our 41-page paper, about the various null hypotheses that we’re comparing against the “BosonSampling hypothesis.” One such null hypothesis (the one Gogolin et al. cared about) was that of uniform random sampling—and in that case, we prove (contrary to their claim) that that hypothesis can be distinguished from the BosonSampling hypothesis in classical polynomial time. We also went further to consider other null hypotheses, like the “classical mockup distributions” M_A, F_A, and B_A (see Section 8.1). The sentence you quoted was purely for motivation, and was followed immediately by a more careful discussion that made clear what kind of “verification” we were talking about.

    Maybe, instead of faulting us over the words we used, you could focus on whether our actual claims are true or false, and likewise Gogolin et al.’s?

  15. John Sidles Says:

    Scott, for me an especially interesting statement in the preprint BosonSampling is far from uniform is in the section “6. The intractability of verification”:

    The intractability of verification:  It is possible that one can ‘verify’ a BosonSampling distribution D_A (in the sense of distinguishing D_A from a large and interesting collection of ‘null hypotheses’), without having to verify the value of Per(X) for any particular matrix X.

    Here the word “verification” appears three times with three different senses. Ouch!

    Couldn’t the point be made more plainly and clearly without using the word “verification” in *any* of its various (and incompatible) senses? E.g.

    On the practical problem of separation:  In regard to the practical problem of reliably separating a BosonSampling dataset D_A from a simulated dataset D_B that has been generated by a PTIME algorithm, an important open problem is to find statistically strong separation measures that are general-purpose, in the sense of not requiring detailed knowledge of the simulation algorithm — preferably measures that can be computed with PTIME resources (in particular, without computing Per(X)).

    The amended phrasing accommodates the empirical reality that PTIME quantum simulation capabilities — which the original passage calls “a large and interesting collection of ‘null hypotheses’ ” — have been growing more powerful year-by-year, at a Moore’s Law pace, and the limits to further improvements in PTIME quantum simulation capability are not yet foreseeable.

    Keep in mind, too, that if it should turn out — as is conceivable and even (as it seems to me) entirely plausible — that for finite detector efficiency e<1, BosonSampling datasets are inseparable from well-designed PTIME simulations of those datasets, then that finding would arguably be better news even than scalable quantum computing, for the practical reason that these same PTIME quantum simulation methods would find immediate applications in enterprises of greater practical utility and strategic consequence than BosonSampling.

    That’s one more reason why (as it seems to me) BosonSampling is a *wonderful* field of quantum research, for which you and Alex Arkhipov amply deserve admiration and thanks (from me and everyone).

  16. jonas Says:

    You mention that you can check BosonSampling results with classical computation. How specific are these tests? Do they let someone prove to you that he can do BosonSampling, in such a way that you can recognize any fake with high probability?

  17. asdf Says:

    Hey, your former postdoc has his own blog, and it looks pretty good! I see it’s already in your blogroll: http://mycqstate.wordpress.com/

  18. Boson Sampling and Verification | The Furloff Says:

    […] finishing part 2 of the discussion on the furlough, I wanted to share another interesting debate regarding a quantum computational problem called […]

  19. Hal Swyers Says:

    Along these lines, I would also argue that the use of uniform distributions in combination with the word “classical” is something that I have a problem with. So, morally, I have issues with the Gogolin paper.

  20. Scott Says:

    jonas #16:

      You mention that you can check BosonSampling results with classical computation. How specific are these tests? Do they let someone prove to you that he can do BosonSampling, in such a way that you can recognize any fake with high probability?

    Read the paper — we discuss that question in great detail. The short answer is no: we don’t know how to distinguish correct BosonSampling from any possible fake in classical polynomial time, and that’s either trivially impossible or else it’s an important open problem whether or not you can do it, depending on the order of the quantifiers. (If the “fake” is allowed to be any distribution whatsoever — generated, say, with unlimited computation, so that it sits as close to D_A as it likes — then of course no test can catch it; the interesting question concerns fakes that have to be generated in classical polynomial time.) We do, however, know how to distinguish BosonSampling from various specific fakes in classical polynomial time, including the fake that Gogolin et al. focused on.

  21. Scott Says:

    asdf #17: Yes, Tom Vidick’s blog is great!

  22. no_one Says:

    Scott #21: Another Arkhipov (Vasili) was responsible for preventing the launch of a Soviet nuclear torpedo during the Cuban missile crisis. V. Arkhipov subsequently died of radiation poisoning suffered during a submarine reactor accident, presumably caused by a pressure gauge not having been installed. Nothing to do with the topic of this post, certainly.

  23. John Sidles Says:

    Scott reminds us (#20) “We don’t know how to distinguish correct BosonSampling from any possible fake in classical polynomial time, and that’s either trivially impossible or else it’s an important open problem whether or not you can do it, depending on the order of the quantifiers.”

    To help us in appreciating the richness of the considerations that are associated to Scott’s “order of the quantifiers”, let us consider the simpler scenario in which Experimenter Alice submits to Judge Ellen a dataset consisting of (for example) 1024 flips of a biased coin having 60 percent probability of “heads”.

    Similarly, Simulationist Bob submits to Judge Ellen a computationally simulated dataset, and Judge Ellen’s job is to decide, as reliably as feasible, which dataset is the simulation.

    Before considering this scenario further, students especially should read this week’s post on Gowers’s Weblog, “Razborov and Rudich’s natural proofs argument,” with particular attention to Gowers’s discussion of ‘random-like’ functions.

    To brutally abbreviate a large number of subtle considerations, we imagine a dialog like this:

    Judge Ellen  I’m going to apply a likelihood ratio test; the least likely dataset will be identified as the simulation.

    Simulationist Bob  That judging procedure is too simple for our game to be fun: an all-heads dataset will fool you every time. Please make it harder for us simulationists to win!

    Judge Ellen  OK, instead I’m going to identify the simulated dataset as the one with lower Kolmogorov complexity.

    Simulationist Bob  Now you’re bluffing, Judge Ellen, ’cuz no-one has that much computational power.

    Judge Ellen  OK Bob, here’s a fair deal. You get access to a random number generator; however your computational resources are restricted to PTIME; *and* you have to disclose your simulation algorithm to me. In return I will stipulate to you that my test is a log-likelihood ratio, and that I have access to computational resources up to and including EXP, as required to compute that ratio.

    Simulationist Bob  That is a hard-but-fair test. It’s evident that under these rules I *can* indistinguishably simulate biased coin-flipping, and moreover it’s plausible (under standard complexity-theoretic assumptions) that I *can’t* simulate zero-noise BosonSampling (because PTIME simulation resources aren’t adequate to sample permanents). Obviously it’s an important open problem — a problem that’s fun too! — whether noisy BosonSampling can be indistinguishably simulated, using for example variations on the clever computational tricks for which Martin Karplus, Michael Levitt and Arieh Warshel won the 2013 Nobel Prize in Chemistry.

    Judge Ellen  Yes, this noisy BosonSampling contest will be fun!
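
    To see Bob’s all-heads exploit in numbers, here is a minimal sketch of Ellen’s naive first procedure (the 1024 flips and the 60-percent bias are as stipulated above; everything else is purely illustrative):

      import numpy as np

      rng = np.random.default_rng(0)
      n, p = 1024, 0.6  # flips and heads-probability, as stipulated

      def log_likelihood(flips):
          """Log-probability of a specific heads/tails sequence (1 = heads)
          under the biased-coin model."""
          heads = int(np.sum(flips))
          return heads * np.log(p) + (n - heads) * np.log(1 - p)

      alice = rng.random(n) < p     # honest experimental dataset
      bob = np.ones(n, dtype=bool)  # Bob's cheeky all-heads "simulation"

      # Ellen's naive rule: the less likely dataset is the simulation.  But
      # the all-heads sequence MAXIMIZES the sequence likelihood (0.6 > 0.4),
      # so Ellen wrongly fingers Alice's honest data -- Bob's point exactly.
      print(log_likelihood(alice), log_likelihood(bob))

    Running this prints roughly −690 for Alice versus −523 for Bob, so the naive rule convicts the honest dataset — whence Ellen’s retreat to sterner criteria.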

  24. John Sidles Says:

    Later that day, Simulationist Bob texts Judge Ellen:

    Simulationist Bob  By the way Ellen, am I allowed to cheat by lying to you about my simulation algorithm? Do there exist BosonSampling judging criteria that are robustly efficacious, even when blinded to the simulation algorithm?

    Judge Ellen  That is an important open problem in BosonSampling too!

  25. Fred Says:

    “when a problem that naively seems like it should take exponential time turns out nontrivially to be in P or BPP, an “overarching reason” why the reduction happens can typically be identified”

    I’ve been wondering about this when looking at Maximum Bipartite Matching (pairing up boys with girls) vs. 3-Dimensional Matching (forming triples of boys, girls and pets).
    A priori, bipartite matching doesn’t seem any easier (or 3-dimensional matching doesn’t seem that much harder).
    Both involve exponentially large spaces of possible matchings, and their internal structure isn’t very different.
    It’s a bit like Fermat’s Last Theorem, where things work out for n=2 but not for n ≥ 3 (though in that case it’s not obvious either why that’s so).

  26. David Speyer Says:

    @Fred There is a very good heuristic for why FLT isn’t solvable for n ≥ 4 and is solvable for n=2. Consider (x,y,z) between 0 and B. Then x^n+y^n takes on B^2 values between 0 and 2B^n, while z^n takes on B values in this range. If we think of this as inserting B^2 values into a hash table of size 2B^n, and then B more, standard heuristics predict few collisions when B·B^2 ≪ B^n and many when B·B^2 ≫ B^n.

    It’s hard to give a good heuristic for n=3 because the answer depends on the exact nature of the equation: There are infinitely many relatively prime solutions to x^3+y^3=2 z^3.
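
    (If anyone wants to see the heuristic in action, here is a quick brute-force toy of my own — nothing deep:)

      def count_solutions(n, B):
          """Count solutions to x^n + y^n = z^n with 1 <= x <= y and z <= B."""
          nth_powers = {z**n for z in range(1, B + 1)}
          return sum(1 for x in range(1, B + 1)
                       for y in range(x, B + 1)
                       if x**n + y**n in nth_powers)

      # The hash-table heuristic predicts ~B^(3-n) collisions: growing with B
      # for n = 2, borderline for n = 3 (as noted above), vanishing for n >= 4.
      for n in (2, 3, 4):
          print(n, count_solutions(n, 200))  # many for n = 2, zero for n = 3, 4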

  27. Fred Says:

    @David Speyer thanks!

  28. Anixx Says:

    Hi, Scott! I want to make a comment about your past lectures about free will.

    Actually I want to point you to a result by Thomas Breuer, who showed that due to self-reference, a system which includes the observer does not follow the same physical theories as any other system. In other words, there are no universally valid theories, neither random nor deterministic.

    A system that contains the observer has totally indistinguishable states in the phase space. This phenomenon Breuer calls subjective decoherence.

    I think this pretty much proves the free will of THE observer.

    The papers by Breuer are here:
    https://homepages.fhv.at/tb/cms/?download=tbDISS.pdf

    https://homepages.fhv.at/tb/cms/?download=tbPHILSC.pdf

    https://homepages.fhv.at/tb/cms/?download=QuantumTuring-v2b.pdf

    Here is a relevant question on Stackexchange:
    http://physics.stackexchange.com/questions/4841/what-is-the-wavefunction-of-the-observer-himself

    I would also be interested in what you think about this question:

    http://physics.stackexchange.com/questions/65696/is-it-possible-to-detect-subjective-decoherence-if-yes-how

  29. Anixx Says:

    I also want to know what you think about my opinion that the unpredictability of the observer effectively means that his brain is a hypercomputer that no machine will ever outsmart. And as such, the technological singularity as envisaged by Vinge is impossible (the prerequisite for such a singularity is the creation of machines that are smarter than humans), because there will be at least one man whom the machines will not be able to outsmart.

  30. David Speyer Says:

    Sorry, I claimed above infinitely many solutions for x^3+y^3=2 z^3, but that’s wrong. The point which I thought had infinite order on the elliptic curve is actually torsion. But the general point is right: there are D for which x^3+y^3=D z^3 has infinitely many solutions.

  31. John Sidles Says:

    David Speyer (#2, #26, #29, etc.), in regard to your hundreds of highly-rated answers and comments on MathOverflow that relate to algebraic geometry [AG], please let me say that many folks (including us engineers) are greatly helped by them.

    For AG novices especially, it’s disheartening to encounter comments like Miles Reid’s in the preface to Igor R. Shafarevich’s Basic Algebraic Geometry 1: Varieties in Projective Space

    “My experience is that some graduate students (by no means all) can work hard for a year or two on Chapters 2–3 of Hartshorne, and still know more-or-less nothing at the end of it.”

    To paraphrase Mark Twain (as is fun): “if math students need a year or two, then systems engineers surely need a decade!”

    David Speyer, sustained good-hearted mathematical sharing like yours helps many, and hopefully you will not mind this public appreciation of it and thanks for it.

  32. razvan Says:

    Scott, it’s a bit late to be making this observation, but you should fix your first link to the FOCS website.

  33. jonas Says:

    David Speyer #30: technically, x^3+y^3=2*z^3 still has infinitely many solutions, specifically x=y=z works for any z.

  34. John Sidles Says:

    Jonas (re #34), just as 57 is a Grothendieck Prime, your observation establishes that “2 is a Fermat/Speyer coefficient.”

    A little more seriously, the general paucity of concrete examples in the algebraic geometry literature hinders engineering appreciation of its practical implications. Conversely, it is helpful to read the AG literature with practical engineering applications in mind, and with intuitions informed by concrete computations.

  35. Scott Says:

    This is just a test post to see if MathJax now allows LaTeX in comments:

    $\frac{x^2 + bx + c}{2}$

  36. Scott Says:

    OK, let me try again: $\frac{x^2 + bx + c}{2}$

  37. Michael Dixon Says:

    Alternative test

    $$ \mathbf{V}_1 \times \mathbf{V}_2 = \begin{vmatrix}
    \mathbf{i} & \mathbf{j} & \mathbf{k} \\
    \frac{\partial X}{\partial u} & \frac{\partial Y}{\partial u} & 0 \\
    \frac{\partial X}{\partial v} & \frac{\partial Y}{\partial v} & 0
    \end{vmatrix} $$

  38. Michael Dixon Says:

    Inline Test:

    $ WE(\mathcal{C}) = \sum_{i=0}^{n}WE_{i}(\mathcal{C}) x^{i} $

    Display Test:

    $$ WE(\mathcal{C}) = \sum_{i=0}^{n}WE_{i}(\mathcal{C}) x^{i} $$

    Scott: You should probably allow editing (or preview) of comments to some extent. I’m not sure how WordPress facilitates this. Probably another plugin.

  39. J Says:

    All the classical knowledge of quantum computing, and Scott still cannot make his real $$ work here :)

    I know this is a lame comment.

  40. Scott Says:

    OK, third try: $$\frac{x^2 + bx + c}{2}$$

  41. Michael Dixon Says:

    Inline vs Display Test:

    Display:

    This is a line of text $$ \frac{x^2 + bx + c}{2} $$ with math in it.

    Inline:

    This is another line of text \( \frac{x^2 + bx + c}{2} \) with math in it.

  42. Scott Says:

    Hey, it worked! But now let’s try some inline math: \( x^2 \)

  43. Michael Dixon Says:

    Commutative Diagram Test:

    $$ \require{AMScd} \begin{equation}\begin{CD}
    S^{{\mathcal{W}}_\Lambda}\otimes T @>j>> T\\
    @VVV @VV{P}V\\
    (S\otimes T)/I @= (Z\otimes T)/J
    \end{CD} \end{equation} $$

    This tests whether the AMScd package is installed/active or not. It can be used by including “\require{AMScd}” in your code.