Yet more mistakes in papers

Amazing Update (Aug. 19): My former PhD student Daniel Grier tells me that he, Sergey Bravyi, and David Gosset have an arXiv preprint, from February, where they give a corrected proof of my and Andris Ambainis’s claim that any k-query quantum algorithm can be simulated by an O(N^{1-1/2k})-query classical randomized algorithm (albeit, not of our stronger statement, about a randomized algorithm to estimate any bounded low-degree real polynomial). The reason I hadn’t known about this is that they don’t mention it in the abstract of their paper (!!). But it’s right there in Theorem 5.


In my last post, I came down pretty hard on the blankfaces: people who relish their power to persist in easily-correctable errors, to the detriment of those subject to their authority. The sad truth, though, is that I don’t obviously do better than your average blankface in my ability to resist falsehoods on early encounter with them. As one of many examples that readers of this blog might know, I didn’t think covid seemed like a big deal in early February 2020—although by mid-to-late February 2020, I’d repented of my doofosity. If I have any tool with which to unblank my face, then it’s only my extreme self-consciousness when confronted with evidence of my own stupidities—the way I’ve trained myself over decades in science to see error-correction as a or even the fundamental virtue.

Which brings me to today’s post. Continuing what’s become a Shtetl-Optimized tradition—see here from 2014, here from 2016, here from 2017—I’m going to fess up to two serious mistakes in research papers on which I was a coauthor.


In 2015, Andris Ambainis and I had a STOC paper entitled Forrelation: A Problem that Optimally Separates Quantum from Classical Computing. We gave two main results there:

  1. An Ω(√N/log N) lower bound on the randomized query complexity of my “Forrelation” problem, which was known to be solvable with only a single quantum query.
  2. A proposed way to take any k-query quantum algorithm that queries an N-bit string, and simulate it using only O(N^{1-1/2k}) classical randomized queries.

Later, Bansal and Sinha and independently Sherstov, Storozhenko, and Wu showed that a k-query generalization of Forrelation, which I’d also defined, requires Ω̃(N^{1-1/2k}) classical randomized queries, in line with my and Andris’s conjecture that k-fold Forrelation optimally separates quantum and classical query complexities.

A couple months ago, alas, my former grad school officemate Andrej Bogdanov, along with Tsun Ming Cheung and Krishnamoorthy Dinesh, emailed me and Andris to say that they’d discovered an error in result 2 of our paper (result 1, along with the Bansal-Sinha and Sherstov-Storozhenko-Wu extensions of it, remained fine). So, adding our own names, we’ve now posted a preprint on ECCC that explains the error, while also showing how to recover our result for the special case k=1: that is, any 1-query quantum algorithm really can be simulated using only O(√N) classical randomized queries.

Read the preprint if you really want to know the details of the error, but to summarize it in my words: Andris and I used a trick that we called “variable-splitting” to handle variables that have way more influence than average on the algorithm’s acceptance probability. Alas, variable-splitting fails to take care of a situation where there are a bunch of variables that are non-influential individually, but that on some unusual input string, can “conspire” in such a way that their signs all line up and their contribution overwhelms those from the other variables. A single mistaken inequality fooled us into thinking such cases were handled, but an explicit counterexample makes the issue obvious.

I still conjecture that my original guess was right: that is, I conjecture that any problem solvable with k quantum queries is solvable with O(N^{1-1/2k}) classical randomized queries, so that k-fold Forrelation is the extremal example, and so that no problem has constant quantum query complexity but linear randomized query complexity. More strongly, I reiterate the conjecture that any bounded degree-d real polynomial, p:{0,1}^N→[0,1], can be approximated by querying only O(N^{1-1/d}) input bits drawn from some suitable distribution. But proving these conjectures, if they’re true, will require a new algorithmic idea.


Now for the second mea culpa. Earlier this year, my student Sabee Grewal and I posted a short preprint on the arXiv entitled Efficient Learning of Non-Interacting Fermion Distributions. In it, we claimed to give a classical algorithm for reconstructing any “free fermionic state” |ψ⟩—that is, a state of n identical fermionic particles, like electrons, each occupying one of m>n possible modes, that can be produced using only “fermionic beamsplitters” and no interaction terms—and for doing so in polynomial time and using a polynomial number of samples (i.e., measurements of where all the fermions are, given a copy of |ψ⟩). Alas, after trying to reply to confused comments from readers and reviewers (albeit, none of them exactly putting their finger on the problem), Sabee and I were able to figure out that we’d done no such thing.

Let me explain the error, since it’s actually really interesting. In our underlying problem, we’re trying to find a collection of unit vectors, call them |v1⟩,…,|vm⟩, in C^n. Here, again, n is the number of fermions and m>n is the number of modes. By measuring the “2-mode correlations” (i.e., the probability of finding a fermion in both mode i and mode j), we can figure out the approximate value of |⟨vi|vj⟩|—i.e., the absolute value of the inner product—for any i≠j. From that information, we want to recover |v1⟩,…,|vm⟩ themselves—or rather, their relative configuration in n-dimensional space, isometries being irrelevant.

It seemed to me and Sabee that, if we knew ⟨vi|vj⟩ for all i≠j, then we’d get linear equations that iteratively constrained each |vj⟩ in terms of ⟨vi|vj⟩ for i<j, so all we’d need to do is solve those linear systems, and then (crucially, and this was the main work we did) show that the solution would be robust with respect to small errors in our estimates of each ⟨vi|vj⟩. It seemed further to us that, while it was true that the measurements only revealed |⟨vi|vj⟩| rather than ⟨vi|vj⟩ itself, the “phase information” in ⟨vi|vj⟩ was manifestly irrelevant, as it in any case depended on the irrelevant global phases of |vi⟩ and |vj⟩ themselves.

Alas, it turns out that the phase information does matter. As an example, suppose I told you only the following about three unit vectors |u⟩,|v⟩,|w⟩ in R3:

|⟨u|v⟩| = |⟨u|w⟩| = |⟨v|w⟩| = 1/2.

Have I thereby determined these vectors up to isometry? Nope! In one class of solution, all three vectors belong to the same plane, like so:

|u⟩=(1,0,0),
|v⟩=(1/2,(√3)/2,0),
|w⟩=(-1/2,(√3)/2,0).

In a completely different class of solution, the three vectors don’t belong to the same plane, and instead look like three edges of a tetrahedron meeting at a vertex:

|u⟩=(1,0,0),
|v⟩=(1/2,(√3)/2,0),
|w⟩=(1/2,1/(2√3),√(2/3)).

These solutions correspond to different sign choices for ⟨u|v⟩, ⟨u|w⟩, and ⟨v|w⟩—choices that collectively matter, even though each of them is individually irrelevant.

It follows that, even in the special case where the vectors are all real, the 2-mode correlations are not enough information to determine the vectors’ relative positions. (Well, it takes some more work to convert this to a counterexample that could actually arise in the fermion problem, but that work can be done.) And alas, the situation gets even gnarlier when, as for us, the vectors can be complex.
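The counterexample above is easy to verify numerically. Here’s a minimal NumPy sketch (written for this post, not from our paper) confirming that both triples consist of unit vectors with all pairwise absolute overlaps equal to 1/2, yet have different signed Gram matrices, so no isometry can map one configuration onto the other:

```python
import numpy as np

s3 = np.sqrt(3.0)
u = np.array([1.0, 0.0, 0.0])
v = np.array([0.5, s3 / 2, 0.0])
w_planar = np.array([-0.5, s3 / 2, 0.0])                      # all three in one plane
w_tetra = np.array([0.5, 1 / (2 * s3), np.sqrt(2.0 / 3.0)])   # tetrahedron edges

for w in (w_planar, w_tetra):
    # all pairwise absolute overlaps equal 1/2, and all three vectors are unit
    assert np.allclose([abs(u @ v), abs(u @ w), abs(v @ w)], 0.5)
    assert np.allclose([u @ u, v @ v, w @ w], 1.0)

# yet the signed Gram matrices disagree (e.g. <u|w> is -1/2 vs. +1/2),
# so the two configurations are genuinely different up to isometry
G_planar = np.array([u, v, w_planar]) @ np.array([u, v, w_planar]).T
G_tetra = np.array([u, v, w_tetra]) @ np.array([u, v, w_tetra]).T
assert not np.allclose(G_planar, G_tetra)
```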

Any possible algorithm for our problem will have to solve a system of nonlinear equations (albeit, a massively overconstrained system that’s guaranteed to have a solution), and it will have to use 3-mode correlations (i.e., statistics of triples of fermions), and quite possibly 4-mode correlations and above.

But now comes the good news! Googling revealed that, for reasons having nothing to do with fermions or quantum physics, problems extremely close to ours had already been studied in classical machine learning. The key term here is “Determinantal Point Processes” (DPPs). A DPP is a model where you specify an m×m matrix A (typically symmetric or Hermitian), and then the probabilities of various events are given by the determinants of various principal minors of A. Which is precisely what happens with fermions! In terms of the vectors |v1⟩,…,|vm⟩ that I was talking about before, to make this connection we simply let A be the m×m covariance matrix, whose (i,j) entry equals ⟨vi|vj⟩.
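To make the fermion/DPP correspondence concrete, here’s a small NumPy sketch (illustrative only; the choice of n=2 fermions in m=4 modes is arbitrary). It checks that the probability of finding the fermions in a set S of modes, given by the Slater determinant |det Φ_S|², equals the principal minor det(A_S) of the covariance matrix A = Φ†Φ, and that these probabilities sum to 1:

```python
import itertools
import numpy as np

rng = np.random.default_rng(0)
n, m = 2, 4   # n fermions in m modes

# n random orthonormal orbitals in C^m (rows of Phi), via QR of a random matrix
Z = rng.normal(size=(m, n)) + 1j * rng.normal(size=(m, n))
Q, _ = np.linalg.qr(Z)
Phi = Q.T.conj()            # n x m matrix with orthonormal rows

A = Phi.conj().T @ Phi      # m x m Hermitian kernel, A[i, j] = <v_i|v_j>

total = 0.0
for S in itertools.combinations(range(m), n):
    sub = Phi[:, S]                               # n x n submatrix of occupied modes
    p_slater = abs(np.linalg.det(sub)) ** 2       # fermionic outcome probability
    p_dpp = np.linalg.det(A[np.ix_(S, S)]).real   # DPP principal minor
    assert np.isclose(p_slater, p_dpp)
    total += p_dpp
assert np.isclose(total, 1.0)   # probabilities over all n-subsets sum to 1
```

The final assertion is just the Cauchy–Binet formula in disguise: the sum of |det Φ_S|² over all n-subsets S equals det(ΦΦ†) = det(I) = 1.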

I first learned of this remarkable correspondence between fermions and DPPs a decade ago, from a talk on DPPs that Ben Taskar gave at MIT. Immediately after the talk, I made a mental note that Taskar was a rising star in theoretical machine learning, and that his work would probably be relevant to me in the future. While researching this summer, I was devastated to learn that Taskar died of heart failure in 2013, in his mid-30s and only a couple of years after I’d heard him speak.

The most relevant paper for me and Sabee was called An Efficient Algorithm for the Symmetric Principal Minor Assignment Problem, by Rising, Kulesza, and Taskar. Using a combinatorial algorithm based on minimum spanning trees and chordless cycles, this paper nearly solves our problem, except for two minor details:

  1. It doesn’t do an error analysis, and
  2. It considers complex symmetric matrices, whereas our matrix A is Hermitian (i.e., it equals its conjugate transpose, not its transpose).

So I decided to email Alex Kulesza, one of Taskar’s surviving collaborators who’s now a research scientist at Google NYC, to ask his thoughts about the Hermitian case. Alex kindly replied that they’d been meaning to study that case—a reviewer had even asked about it!—but they’d run into difficulties and didn’t know what it was good for. I asked Alex whether he’d like to join forces with me and Sabee in tackling the Hermitian case, which (I told him) was enormously relevant in quantum physics. To my surprise and delight, Alex agreed.

So we’ve been working on the problem together, making progress, and I’m optimistic that we’ll have some nice result. By using the 3-mode correlations, at least “generically” we can recover the entries of the matrix A up to complex conjugation, but further ideas will be needed to resolve the complex conjugation ambiguity, to whatever extent it actually matters.

In short: on the negative side, there’s much more to the problem of learning a fermionic state than we’d realized. But on the positive side, there’s much more to the problem than we’d realized! As with the simulation of k-query quantum algorithms, my coauthors and I would welcome any ideas. And I apologize to anyone who was misled by our premature (and hereby retracted) claims.


Update (Aug. 11): Here’s a third bonus retraction, which I thank my colleague Mark Wilde for bringing to my attention. Way back in 2005, in my NP-complete Problems and Physical Reality survey article, I “left it as an exercise for the reader” to prove that BQP_CTC, or quantum polynomial time augmented with Deutschian closed timelike curves, is contained in a complexity class called SQG (Short Quantum Games). While it turns out to be true that BQP_CTC ⊆ SQG—as follows from my and Watrous’s 2008 result that BQP_CTC = PSPACE, combined with Gutoski and Wu’s 2010 result that SQG = PSPACE—it’s not something for which I could possibly have had a correct proof back in 2005. I.e., it was a harder exercise than I’d intended!

31 Responses to “Yet more mistakes in papers”

  1. Man To Research Says:

    ‘I first learned of this remarkable correspondence between fermions and DPPs a decade ago, from a talk on DPPs that Ben Taskar gave at MIT. Immediately after the talk, I made a mental note that Taskar was a rising star in theoretical machine learning..’. The biggest error is Scott fails to learn ML and assist humanity.

  2. Scott Says:

    Man To Research #1: Funnily enough, studying ML is precisely one of the things I’ve been doing this summer. I’m sorry if it’s not enough for your satisfaction, but, y’know, what have you been doing to “assist humanity”?

  3. Joshua Brule Says:

    The sad truth, though, is that I don’t obviously do better than your average blankface in my ability to resist falsehoods on early encounter with them.

    I think this is one of the most important problems facing our society today (really!): We don’t encourage people to actually make predictions! For the most part, we don’t even keep score. At least the scientific community aspires to, although with the replication crisis, I think it’s become, uh, more aspirational than an accurate statement of current affairs.

    Not that you need my approval or anything, but at least you’re still trying. I get the feeling that you’re conspicuously embarrassed to admit you were wrong, when a better prevailing attitude would be, “Yes, here is where I was wrong. (Where’s your record?)”

  4. Boaz Barak Says:

    As usual, you set a great example for scientific integrity, and while I would love to be wrong, I won’t be surprised if you get attacked for it.

    “It is not the critic who counts; not the man who points out how the strong man stumbles, or where the doer of deeds could have done them better. The credit belongs to the man who is actually in the arena, whose face is marred by dust and sweat and blood; who strives valiantly; who errs, who comes short again and again, because there is no effort without error and shortcoming; but who does actually strive to do the deeds; who knows great enthusiasms, the great devotions; who spends himself in a worthy cause; who at the best knows in the end the triumph of high achievement, and who at the worst, if he fails, at least fails while daring greatly, so that his place shall never be with those cold and timid souls who neither know victory nor defeat.”

  5. venky Says:

    Reminds me of the coin game where two fair coins are tossed independently and a third “coin” lands heads if the first two coins agree, and tails otherwise. The three coins are pairwise uncorrelated even though the third coin is completely determined by the first two.
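    venky’s construction can be checked by brute-force enumeration; a quick sketch:

```python
import itertools

# enumerate all four equally likely outcomes of two fair coins,
# with a third "coin" that lands heads (1) iff the first two agree
outcomes = [(a, b, int(a == b)) for a, b in itertools.product([0, 1], repeat=2)]

# every pair of coins is independent: each joint value occurs with probability 1/4
for i, j in [(0, 1), (0, 2), (1, 2)]:
    for x in (0, 1):
        for y in (0, 1):
            count = sum(1 for o in outcomes if o[i] == x and o[j] == y)
            assert count == 1   # 1/4 = (1/2)*(1/2)

# yet the third coin is a deterministic function of the first two
assert all(o[2] == int(o[0] == o[1]) for o in outcomes)
```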

  6. Man To Research Says:

    @Scott #2 Still looking for a chance to get to study ML so I can assist humanity. Lot of young guns out there.

  7. Scott Says:

    Thanks Boaz!!

  8. Lou Scheffer Says:

    I don’t think it’s any great embarrassment to publish an honest but wrong paper. To reach the boundaries of what is known, you are usually looking at subtle implications of a complex topic. If you, your co-authors, and the referees think it’s good, then it’s at least a plausible conjecture even if you mistakenly believed it a theorem. If a mistake is found, you publish a correction, so your original article won’t mislead others into the same mistake, and move on. Occasional mistakes are a natural consequence of pushing the boundaries.

    I’d even suggest there is some optimum level of errors in science papers, and it’s not zero. Getting new ideas out is more important to the advancement of the field as a whole than completely error-free but more conservative papers.

  9. Jelmer Renema Says:

    Your point #2 reminds me very much of the paper by Adrian Menssen, Alex Jones and others on a similar effect in bosonic interference https://arxiv.org/abs/1609.09804 . In that case, there also is a nontrivial three-particle phase term (‘triad phase’) that doesn’t manifest in the two-particle overlaps but does manifest in the three-particle interference terms and also affects the output statistics.

    For the bosonic case, fortunately, they were able to show that that’s all there is: all 4-particle phases can be written as linear combinations of 3-particle phases, and so on. And since three-particle interference effects manifest in third-order correlators and above, it should in principle be enough to measure those to reconstruct your basis vectors, although I don’t think anyone’s looked at whether this can be done robustly or efficiently. Also, I don’t know enough about fermions to say if any of this applies to that case.

  10. JimV Says:

    My favorite saying strikes yet again:

    “All mathematicians make mistakes; good mathematicians find them.”–Einstein.

    I would add, “or acknowledge them when others find them”, and I would generalize from “mathematicians” to “thinkers”, but that might be getting too cumbersome. I think those are implied in the original anyway.

  11. Scott Says:

    Jelmer Renema #9: I certainly did think about the analogue of our learning problem for bosons rather than fermions. Alas, there the difficulties are even worse!

    Using the notation from this post, it’s no longer true in the bosonic case that the probabilities only depend on inner products like ⟨vi|vj⟩—they could also depend on the 4-norms of the vectors and other higher-order polynomials in their entries. Which means, in particular, that isometries of the vectors are no longer irrelevant, and it’s no longer enough to solve for the covariance matrix that I called A. This, of course, is directly related to the permanent being a basis-dependent quantity, unlike the basis-independent determinant.

    Of course, there might be a polynomial-time reconstruction algorithm anyway, but this was enough to convince me that we’d better solve the fermionic version first! 🙂

  12. fred Says:

    I’m probably missing something, but that fermionic state thing sounds like it could have been tested running random test cases in an actual simulation.

  13. Scott Says:

    fred #12: Yep. 🙂

    Even when you don’t learn anything new from running a program, writing a program is often a superb way to catch mistakes in your thinking. I’ve used that to great effect in a few of my past papers, and of course I wish in retrospect that we’d coded up the damn algorithm here.

  14. DR Says:

    I think you’re being too hard on yourself for not being able to predict the future. The Blankfaces though, deserve blame for not executing on simple obvious things! When my son was 4 yo, I remember asking him what sort of a person he wanted to be. He said, “I want to be a person who knows everything and never makes a mistake.” 🙂

  15. fred Says:

    Scott #13

    And especially when probabilities are involved, I always find it super useful to generate thousands of random runs in excel and then just count the different outcomes to extract limit (frequency) probabilities to make sure it matches against the theoretical predictions, and validate that I’m not missing something. It’s easy to just change any parameter to see its effect.

    Like when I was trying to understand Newcomb’s Paradox (I can change the predictor probability of correctness and instantly see its effect on the final payout on hundreds of runs):
    https://i.imgur.com/TvrCVLk.png

  16. DR Says:

    My last lines got cut off.

    I think we cannot hold any individual to such standards, but it seems like the CDC and FDA cannot get their act together at all, at a crucial time, when this is why they exist. Let us consider just access to covid testing, as an example.

  17. John Stricker Says:

    Scott #2:

    “what have you been doing to “assist humanity”?”

    Does writing snarky blog post comments count? 😉

    “studying ML is precisely one of the things I’ve been doing this summer”

    Excellent! Looking forward to a (hopefully) upcoming blog post! 😀

  18. John Stricker Says:

    Oh yeah, and kudos on the correction; but that almost goes without saying.

    And never fear, “Gegen das Böcke-Schiessen hilft nur der Tod” (Albert Einstein to Max Born, https://einsteinpapers.press.princeton.edu/vol13-doc/401; translated from the German idiom: “We will only stop making dumb mistakes when we’re dead.”)

    So you see, some more stellar company you are in. 😉

  19. Aspect Says:

    Thanks for being public about this stuff. It means a lot 🙂

  20. Robin Blume-Kohout Says:

    It seems like the fermion-learning vector reconstruction problem kind of echoes the central difficulty in the SICPOVM existence problem. Iff a SICPOVM in \( \mathbb{C}^d \) exists, then it consists of \(d^2\) unit vectors \(|v_i\rangle\) whose pairwise inner products all have absolute value \( |\langle v_i | v_j \rangle| = 1/\sqrt{d+1}\) when \( i \neq j \). In the very early days (circa 2002), we figured that this had to be enough information to decide whether such vectors existed, and construct them if they did. Since their existence still remains an open conjecture (AFAIK), it’s clearly not.

    (But boy did I spend a lot of time trying to solve those nonlinear equations…)

  21. Stephan Says:

    For a more useful, deeper sense of what a ‘blankface’ is, read Eichmann in Jerusalem – it is precisely about the psychological type you’re driving at – deeper than Harry Potter/Umbridge – Arendt is the thinker of the Blankface – the unthinking cog of power – the banality of evil.

  22. Scott Says:

    Stephan #21: Of course I read Eichmann in Jerusalem; it affected me to the point of making me physically ill for days. But my understanding was that more recent scholarship has severely questioned Arendt’s analysis, by uncovering evidence that Eichmann was actually an enthusiastic antisemite, who merely found it convenient to present himself during the trial as a bureaucratic bumbler. In any case, as I said before, there were clearly millions of blankfaces who helped enable the Holocaust, and my knowledge of that fact from an early age has surely played some role in how emotional I tend to get when confronted with a real-life blankface (even a petty and inconsequential one), why I’m unable to just let it go.

  23. Laurence Cox Says:

    Scott #22: yes that is my understanding of the scholarship of Arendt’s analysis too, but it seems to take a long time (decades) for this to percolate from the scholarly literature into popular understanding. In this respect, the Humanities seem to be worse than the Sciences. Going back to the ‘blankface’ I have another explanation, which comes from economics.

    https://stumblingandmumbling.typepad.com/stumbling_and_mumbling/2021/08/ambition-in-capitalist-society.html

    The writer is an interesting person, a self-avowed Marxist whose day job is to write for Investors’ Chronicle (as capitalist a magazine as you will find in the UK). He makes the point that Alasdair MacIntyre distinguished between goods of excellence such as mastery of a craft and goods of effectiveness such as wealth and power.

    Ideally, one would want everyone to seek goods of excellence, that is to do what they do best and be rewarded for it, but in a capitalist society the rewards of the goods of effectiveness can be such as to mislead people into following them (I know this is true because it happened to me before I realised I was following the wrong path). I see your ‘blankfaces’ as people who have chosen the wrong path and have either not realised this or, worse still, have realised it and found themselves trapped. The author makes the good point that sloughing off ambition is a counsel of perfection: easy enough to say if you are debt-free and your mortgage paid off; far harder if you are still struggling to repay student debts.

  24. A. Karhukainen Says:

    (This would probably be better placed on some other thread here, but here it comes:)

    I wonder what do you think about Doron Zeilberger’s opinion #181 concerning this year’s Abel-prize:
    https://sites.math.rutgers.edu/~zeilberg/Opinion181.html
    ?

  25. Scott Says:

    A. Karhukainen #24: Yeah, sorry, I’m going to leave it to others here to bite that hook if they want. I know from experience that if I respond, Doron will see the response and launch into another rant against me, and I don’t see that as a good use of my time.

  26. asdf Says:

    Ok, we’ve got a prediction here, folks. This book (Gerard ‘t Hooft, The Cellular Automaton Interpretation of QM) has been mentioned here before, but I don’t remember the specific claim coming up. From here:

    Our theory comes with a firm prediction:

    Yes, by making good use of quantum features, it will be possible in principle, to build a computer vastly superior to conventional computers, but no, these will not be able to function better than a classical computer would do, if its memory sites would be scaled down to one per Planckian volume element (or perhaps, in view of the holographic principle, one memory site per Planckian surface element), and if its processing speed would increase accordingly, typically one operation per Planckian time unit of 10^-43 seconds.

    Such scaled classical computers can of course not be built, so that this quantum computer will still be allowed to perform computational miracles, but factoring a number with millions of digits into its prime factors will not be possible—unless fundamentally improved classical algorithms turn out to exist. If engineers ever succeed in making such quantum computers, it seems to me that the CAT is falsified; no classical theory can explain perfect quantum computers.

    The book is also on the arxiv (https://arxiv.org/abs/1405.1548v3) though I don’t know if the preprint version is exactly the same. The Springer and Arxiv links to the book are both from the author’s web site.

    Any idea how large a Google-style quantum supremacy experiment it would take to fulfill the criterion in the doubly quoted paragraph?

  27. Scott Says:

    asdf #26: Provided we had an efficient means of verification, 400 qubits would already be enough to test ‘t Hooft’s prediction, even assuming you allow the entire observable universe for his Planck-scale classical computer.

    (Incidentally, ‘t Hooft’s prediction doesn’t follow in any way from his own premises—once you have a superdeterministic conspiracy theory, it could “account for” scalable QC working in exactly the same way it could “account for” Bell/CHSH violations. But that’s a separate point.)

  28. Rand Says:

    “I first learned of this remarkable correspondence between fermions and DPPs a decade ago, from a talk on DPPs that Ben Taskar gave at MIT.”

    Dammit! DPPs were the first thing Ben had me work on when I was at Penn, and I’ve been kind of keeping them in my back pocket in case they ever proved useful for quantum computing. (This was exceedingly unlikely, but I’m always hopeful that directions that never panned out will someday prove useful.)

    Now Scott Aaronson’s gone ahead and told the whole quantum computing community about them. My back pocket is immeasurably reduced.

  29. Karen Morenz Korol Says:

    As a student, this was really great to read! It helps a lot to have someone explain *how* things went wrong, not just *that* they went wrong and how to fix them.

  30. arch1 Says:

    Aspects of #2 reminded me of a Quanta podcast I listened to over the weekend, about 3 physicists who stumbled across a new identity allowing eigenvectors of a Hermitian matrix to be determined from the determinants of its minors, while doing neutrino calculations.

    They couldn’t find this identity in the literature, so on a recommendation they emailed Terry Tao about it. Though initially skeptical because it seemed the kind of thing he would have heard about if true (plus his gut feel that at least some knowledge of the matrix values is necessary), Tao was persuaded by trying examples and responded a couple of hours later with 3 proofs.

    I think Tao’s blog has a post with details.
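    For reference, the identity in question (the “eigenvectors from eigenvalues” formula of Denton, Parke, Tao, and Zhang) says that for a Hermitian n×n matrix A with eigenvalues λ_1,…,λ_n and normalized eigenvectors v_1,…,v_n, and with M_j denoting A with its j-th row and column deleted, |v_i[j]|² ∏_{k≠i}(λ_i − λ_k) = ∏_k (λ_i − μ_k), where the μ_k are the eigenvalues of M_j. A quick numerical check on a random Hermitian matrix:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 4
B = rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))
A = (B + B.conj().T) / 2          # random Hermitian matrix

lam, V = np.linalg.eigh(A)        # eigenvalues lam; columns of V are eigenvectors

for i in range(n):
    for j in range(n):
        # M_j: A with row j and column j deleted
        keep = [k for k in range(n) if k != j]
        mu = np.linalg.eigvalsh(A[np.ix_(keep, keep)])
        lhs = abs(V[j, i]) ** 2 * np.prod([lam[i] - lam[k] for k in range(n) if k != i])
        rhs = np.prod(lam[i] - mu)
        assert np.isclose(lhs, rhs)
```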

  31. Michał Oszmaniec Says:

    I wanted to comment on the question of reconstructing relative geometrical arrangements of states \(\rho_i=|\psi_i\rangle \langle \psi_i | \) based on absolute values of their overlaps \(|\langle \psi_i| \psi_j\rangle|\). As you point out, more information is needed to solve this problem. It so happens that together with Ernesto Galvao and Daniel Brod we were recently looking at exactly this question (partially motivated by matters regarding partial distinguishability of photons that Jelmer pointed out). We put our results on arxiv today https://arxiv.org/abs/2109.10006.

    In general, the relative arrangement of states is determined by higher-order overlaps (also known as Bargmann invariants), i.e., traces of products of density matrices \(tr(\rho_1 \rho_2 … \rho_k)\). Specifically, it is possible to use these invariants to construct a “physically meaningful” covariance matrix of overlaps \(\langle \psi_i| \psi_j \rangle\) which encodes the relational information between quantum states. Exactly which Bargmann invariants one needs to consider depends on the orthogonality relations between the states in question—in general, the triad phase (i.e., the phase of \(tr(\rho_1 \rho_2 \rho_3)\)) does not suffice and higher-order invariants are needed.

    The same invariants \(tr(\rho_1 \rho_2 … \rho_k)\) are sufficient to characterize classes of unitary equivalence of tuples of mixed quantum states. This is in fact closely connected to the problem of characterizing orbits under simultaneous conjugation of tuples of matrices. Avi Wigderson discusses this problem in his expository book “Mathematics and Computation” (p. 151 of https://www.math.ias.edu/files/Book-online-Aug0619.pdf).

    Anyway, it’s interesting that this kind of question appeared in the fermionic context!
