## The Unitarihedron: The Jewel at the Heart of Quantum Computing

**Update (9/24):** This parody post was a little like a belch: I felt it build up in me as I read about the topic, I let it out, it was easy and amusing, I don’t feel any profound guilt over it—but on the other hand, not one of the crowning achievements of my career. As several commenters correctly pointed out, it may be *true* that, mostly because of the name and other superficialities, and because of ill-founded speculations about “the death of locality and unitarity,” the amplituhedron work is currently inspiring a flood of cringe-inducing misstatements on the web. But, even if true, still the much more interesting questions are *what’s actually going on*, and whether or not there are nontrivial connections to computational complexity.

Here I have good news: if nothing else, my “belch” of a post at least attracted some knowledgeable commenters to contribute excellent questions and insights, which have increased my own understanding of the subject from ε^{2} to ε. See especially this superb comment by David Speyer—which, among other things, pointed me to a phenomenal quasi-textbook on this subject by Elvang and Huang. My most immediate thoughts:

- The “amplituhedron” is only the latest in a long line of research over the last decade—Witten, Turing biographer Andrew Hodges, and many others have been important players—on how to compute scattering amplitudes more efficiently than by summing zillions of Feynman diagrams. One of the key ideas is to find combinatorial formulas that express complicated scattering amplitudes recursively in terms of simpler ones.
- This subject seems to be *begging* for a computational complexity perspective. When I read Elvang and Huang, I felt like they were working hard *not* to say anything about complexity: discussing the gains in efficiency from the various techniques they consider in informal language, or in terms of concrete numbers of terms that need to be summed for 1 loop, 2 loops, etc., but never in terms of asymptotics. So if it hasn’t been done already, it looks like it could be a wonderful project for someone *just to translate what’s already known* in this subject into complexity language.
- On reading about all these “modern” approaches to scattering amplitudes, one of my first reactions was to feel *slightly* less guilty about never having learned how to calculate Feynman diagrams! For, optimistically, it looks like some of that headache-inducing machinery (ghosts, off-shell particles, etc.) might be getting less relevant anyway—there being ways to calculate some of the same things that are not only more conceptually satisfying but also faster.

Many readers of this blog probably already saw Natalie Wolchover’s *Quanta* article “A Jewel at the Heart of Quantum Physics,” which discusses the “amplituhedron”: a mathematical structure that IAS physicist Nima Arkani-Hamed and his collaborators have recently been investigating. (See also here for Slashdot commentary, here for Lubos’s take, here for Peter Woit’s, here for a Physics StackExchange thread, here for Q&A with Pacific Standard, and here for an earlier but closely-related 154-page paper.)

At first glance, the amplituhedron appears to be a way to calculate scattering amplitudes, in the planar limit of a certain mathematically-interesting (but, so far, physically-unrealistic) supersymmetric quantum field theory, much more efficiently than by summing thousands of Feynman diagrams. In which case, you might say: “wow, this sounds like a genuinely-important advance for certain parts of mathematical physics! I’d love to understand it better. But, given the restricted class of theories it currently applies to, it *does* seem a bit premature to declare this to be a ‘jewel’ that unlocks all of physics, or a death-knell for spacetime, locality, and unitarity, etc. etc.”

Yet you’d be wrong: it isn’t premature at all. If anything, the popular articles have *understated* the revolutionary importance of the amplituhedron. And the reason I can tell you that with such certainty is that, for several years, my colleagues and I have been investigating a mathematical structure that contains the amplituhedron, yet is even richer and more remarkable. I call this structure the “unitarihedron.”

The unitarihedron encompasses, within a single abstract “jewel,” *all* the computations that can ever be feasibly performed by means of unitary transformations, the central operation in quantum mechanics (hence the name). Mathematically, the unitarihedron is an infinite discrete space: more precisely, it’s an infinite collection of infinite sets, which collection can be organized (as can every set that it contains!) in a recursive, fractal structure. Remarkably, each and every specific problem that quantum computers can solve—such as factoring large integers, discrete logarithms, and more—occurs as just a single element, or “facet” if you will, of this vast infinite jewel. By studying these facets, my colleagues and I have slowly pieced together a tentative picture of the elusive unitarihedron itself.

One of our greatest discoveries has been that the unitarihedron exhibits an astonishing degree of uniqueness. At first glance, different ways of building quantum computers—such as gate-based QC, adiabatic QC, topological QC, and measurement-based QC—might seem totally disconnected from each other. But today we know that all of those ways, and many others, are merely different “projections” of the same mysterious unitarihedron.

In fact, the longer I’ve spent studying the unitarihedron, the more awestruck I’ve been by its mathematical elegance and power. In some way that’s not yet fully understood, the unitarihedron “knows” so much that it’s even given us new insights about *classical* computing. For example, in 1991 Beigel, Reingold, and Spielman gave a 20-page proof of a certain property of unbounded-error probabilistic polynomial-time. Yet, by recasting things in terms of the unitarihedron, I was able to give a direct, half-page proof of the same theorem. If you have any experience with mathematics, then you’ll know that that sort of thing *never* happens: if it does, it’s a sure sign that cosmic or even divine forces are at work.

But I haven’t even told you the most spectacular part of the story yet. While, to my knowledge, this hasn’t yet been rigorously proved, many lines of evidence support the hypothesis that *the unitarihedron must encompass the amplituhedron as a special case*. If so, then the amplituhedron could be seen as just a single sparkle on an infinitely greater jewel.

Now, in the interest of full disclosure, I should tell you that the unitarihedron is what used to be known as the complexity class BQP (Bounded-Error Quantum Polynomial-Time). However, just like the Chinese gooseberry was successfully rebranded in the 1950s as the kiwifruit, and the Patagonian toothfish as the Chilean sea bass, so with this post, I’m hereby rebranding BQP as the unitarihedron. For I’ve realized that, when it comes to bowling over laypeople, inscrutable complexity class acronyms are death—but the suffix “-hedron” is golden.

So, journalists and funders: if you’re interested in the unitarihedron, awesome! But be sure to also ask about my other research on the bosonsamplinghedron and the quantum-money-hedron. (Though, in recent months, my research has focused even more on the *diaperhedron*: a multidimensional, topologically-nontrivial manifold rich enough to encompass all wastes that an 8-month-old human could possibly emit. Well, at least to first-order approximation.)

Comment #1 September 20th, 2013 at 4:09 pm

[…] a special case, just “a single sparkle on an infinitely greater jewel”. See his posting The Unitarihedron: The Jewel at the Heart of Quantum Computing, where he unveils this new […]

Comment #2 September 20th, 2013 at 4:33 pm

For once I thought Scott was pulling a fast one, but after reading I am thinking maybe this is a serious BShedron (BosonSamplinghedron). May the good work continue until things become tractable.

Comment #3 September 20th, 2013 at 5:09 pm

The unitarihedron is a part of 22nd-century physics that fell by chance into the 21st century.

Comment #4 September 20th, 2013 at 5:32 pm

Scott, it seems you have some interesting story in mind, and I’d love to hear it, but the way this blog post is written now, I’m completely missing the joke/point.

Or is it perhaps that this particular post is exclusively for those at least somewhat familiar with that 154p paper?

Comment #5 September 20th, 2013 at 5:53 pm

Lukasz: Tell you what, we’ll have a contest. Commenters, try your best to explain the “point” of this post unironically and unhumorously. One chance per commenter. I’ll choose a winner after 24 hours.

Comment #6 September 20th, 2013 at 5:55 pm

@Lukasz: The joke is that you can take any moderately interesting advance in any field, give it a stupid name, claim that it’s the first step in solving every problem ever and trick a substantial fraction of the press into publicizing your paper.

Comment #7 September 20th, 2013 at 6:11 pm

My not-entirely-serious (nor completely unserious) take on this is that computer science needs a better marketing department to name discoveries. I have two pieces of evidence for this, groundbreaking discoveries with self-defeating names: 1) complexity classes, and 2) Leslie Valiant’s probably approximately correct learning.

Comment #8 September 20th, 2013 at 6:12 pm

I remember Scott mentioning something about BQP and Feynman, or some Nobel prize, and Scott’s assertion that the scientist’s work, in TCS terms, actually proved something related to a complexity class. I am not well versed here.

Comment #9 September 20th, 2013 at 6:12 pm

42.

Comment #10 September 20th, 2013 at 6:15 pm

Oh, Scott. You should know that the first-order approximation to the diaperhedron has no real predictive power. Perturbations quickly grow beyond all orders and renormalization often takes two years or more of repetitive work, filling sheet after sheet.

Comment #11 September 20th, 2013 at 6:34 pm

[…] The Unitarihedron: The Jewel at the Heart of Quantum Computing Source: http://www.scottaaronson.com/blog/?p=1537 0 […]

Comment #12 September 20th, 2013 at 6:47 pm

>> explain the “point” of this post unironically

physics envy

Comment #13 September 20th, 2013 at 7:04 pm

“try your best to explain the “point” of this post unironically and unhumorously.”

There are 31 of them, not counting the colons (8), exclamation/interrogation marks (3) and semicolons (0).

After having gone thoroughly through this blog post, I don’t think I’ve missed any point.

Comment #14 September 20th, 2013 at 9:15 pm

“Remarkably, each and every specific problem that quantum computers can solve—such as factoring large integers, discrete logarithms, and more—” Would it work better to add “summing thousands of Feynman diagrams”? Or does that steal thunder from the end?

Comment #15 September 20th, 2013 at 9:59 pm

Scott just is getting at Quantum Computation research skeptics.

Comment #16 September 20th, 2013 at 10:47 pm

Are you making fun of the way the press release was phrased?

Comment #17 September 20th, 2013 at 11:20 pm

Dear Scott, I think you will be amazed at MY discovery: the Completeposihedron. It’s even shinier and bigger and, well yeah, you could look at it as just a chunk of a bigger unitarihedron, but that would be missing the trees for all this damn forest. Plus, unlike your sparkly unitarihedron, which is clearly only good in clean room situations, the completeposihedron is good for real life situations where you’ve got dirt on your hands. ***runs off and washes hands***

Of course there are rumors of an even more interesting bedazzler, something to do with initial correlations. But I don’t believe in that jewel. Or when I do, I also believe in God.

Comment #18 September 20th, 2013 at 11:46 pm

Diaperhedrons are manifolds and probably compactified. I should have known it! So that’s where all the pee goes.

Anyhow, after watching the SUSY 2013 talk, I must say that if he gets this to work in something else but Yangian symmetric plane N = 4 space without RG flows, then all the hype would be fully justified. Here are the reasons why I like the Amplituhedron besides the brilliant name, and Arkani-Hamed’s nice handwriting (in no particular order):

– It actually seems to be useful in drastically reducing the complexity of calculating scattering amplitudes.

– It is completely geometric at its core in a surprisingly simple fashion.

– Locality and unitarity automatically result from the same structure.

– It is not actually SUSY but stands on its own

– Following this program they can check their results against the conventional perturbation derived scattering amplitudes, keeping it much more grounded than most other contemporary theoretical physics.

Comment #19 September 21st, 2013 at 12:38 am

The problem with your BQPhedron is that, unlike the amplituhedron, it doesn’t have a picture worthy of encrusting into a wedding band. Consider hiring a graphic artist.

Comment #20 September 21st, 2013 at 12:49 am

I think that the goal of this funny text is to collect some money from a stupid enough billionaire who will think that you have achieved almost the same, if not more, than the players of the twistor minirevolution – so that you may buy more diapers.

Comment #21 September 21st, 2013 at 12:52 am

Incidentally, there’s another solution I know that was invented by a family I once met. They didn’t have enough money to buy the diapers for their baby, either. So instead, they bought a villa in Hollywood and wiped the baby’s buttocks by rubbing it against the grass on their garden.

Comment #22 September 21st, 2013 at 2:12 am

what was that?

many readers of your shitty ambivalent post are uncertain if your post was real or satire. Fuck you for being this condescending.

I still don’t know if the amplituhedron is a remarkable discovery or not.

You stole my time asshole.

Comment #23 September 21st, 2013 at 5:38 am

These *-hedrons are clearly related to the n-dimensional polytope schemes our work has been working on [1]. I think we could’ve come up with the quantum-money-hedron if we believed in quantum computing; alas, we prefer to keep it ‘real’ and not dabble in imaginary quantities. Please see our website for more information and exciting investment opportunities.

[1] http://www.oneweirdkerneltrick.com/polytope.pdf

Comment #24 September 21st, 2013 at 6:38 am

Scott,

I don’t think your re-branding effort will work.

While the physicists find the jewels at the heart of our existence, all you ever do is figure out how “hard” everything is (but every traveling salesman knows that already).

But all you have to offer are conjectures, while the physicists find beauty and simplicity (although it can be difficult for the laymen to see that).

Comment #25 September 21st, 2013 at 7:40 am

wolfgang #24: ROTFL! Have you looked recently at beyond-Standard-Model theoretical physics? It’s a teetering tower of conjectures (which is not to say, of course, that that’s inherently bad, or that I can do better). However, one obvious difference is that the physicists don’t *call* them conjectures, as mathematicians or computer scientists would. Instead they call them amazing discoveries, striking dualities, remarkable coincidences, tantalizing hints … once again, lots of good PR lessons for us!

As for beauty and simplicity, I dare say that those are the common currency of *all* mathematical fields. We all look for those, and are all thrilled when we occasionally find them.

Comment #26 September 21st, 2013 at 7:46 am

screwed over reader #22: Calm down, dude. If you carefully reread the second paragraph, you might uncover some hidden clues about what I actually think:

At first glance, the amplituhedron appears to be a way to calculate scattering amplitudes, in the planar limit of a certain mathematically-interesting (but, so far, physically-unrealistic) supersymmetric quantum field theory, much more efficiently than by summing thousands of Feynman diagrams. In which case, you might say: “wow, this sounds like a genuinely-important advance for certain parts of mathematical physics! I’d love to understand it better. But, given the restricted class of theories it currently applies to, it does seem a bit premature to declare this to be a ‘jewel’ that unlocks all of physics, or a death-knell for spacetime, locality, and unitarity, etc. etc.”

Though I have to say that I had a similar reaction as yours when I first encountered Shakespeare: “WTF is this stuff?? In *The Taming of the Shrew*, is Shakespeare literally saying that wives should ‘submit’ to their husbands? Or that the entire idea that wives should submit is a farce that should be ironically subverted? Or somehow both, at the same time? If he has a point, then why doesn’t he just come out and say it? Time-wasting asshole!”

(And now let the recriminations begin, about the audacity of comparing myself to the Bard…)

Comment #27 September 21st, 2013 at 8:04 am

@Scott #25

I am sure you recognized the irony in my comment (but I acknowledge that it can be difficult for a layman to see it 8-).

Comment #28 September 21st, 2013 at 8:07 am

wolfgang #27: LOL, touché!

Comment #29 September 21st, 2013 at 8:19 am

I don’t understand this, but I know that Eliezer Yudkowsky’s friendly AI will figure all this out a few nanoseconds after it has been launched. And if he fails then you are all dead anyway.

So why not stop all this quantum computing, string theory, etc., bullshit and send him all your money now? He’s much smarter than Scott Aaronson or Luboš Motl anyway.

Thanks for your time! And be rational!

Comment #30 September 21st, 2013 at 8:24 am

One theorem I needed some time ago for understanding the geometry of two-qubit gates was proved over 10+ pages in a handbook using Grassmannians and all that, and I failed to understand it for quite a long time … until I realized that it can likely be illustrated with a 4-qubit quantum circuit.

Comment #31 September 21st, 2013 at 8:27 am

could you PLEASE stop acting like Beigel-Reingold-Spielman is some impossibly challenging nightmare? you’ve been insinuating this for years now and it’s just not true.

it’s immediate from the definition of PP that to show it’s closed under intersection you need to construct a poly(n)-degree, exp(poly(n))-coefficient-size polynomial which is positive on [1,2^n]×[1,2^n] and negative on the other three quadrant-squares. BRS make the clever observation that if you have a rational function r(x) of this “size” which .1-approximates sign(x) on [-2^n,-1] ∪ [1,2^n], then you can get a rational function p(x,y)/q(x,y) doing the desired job via r(x)+r(y)-.5; then p(x,y)q(x,y) is a polynomial which has the same sign as r(x)+r(y)-.5 everywhere.

finally, the desired rational function was famously constructed by Newman in the ’60s (okay, he did |x| rather than sign(x), but it’s truly basically the same thing). for completeness, BRS repeat the easy proof: note that although the proof spans 2 pages, almost all of the content is *stating* the lemmas; the total number of “proof” lines is 16, and they’re basically all of the form “the claim is easily verified”.

even the actual Newman construction itself is not very clever; you’re basically like “gee, I need to get some polynomial under control for a wide range, [1,2^n]; let me put in some double-zeroes at 1, 2, 4, 8, …, 2^n”, etc.
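[For readers who want to see the construction concretely: the sketch below is my own illustration, not BRS’s exact polynomial—the function names, and the choice of m = 36 nodes, are mine. It uses Newman’s nodes ξ^k with ξ = e^{−1/√m}, so that r(t) = (p(t) − p(−t))/(p(t) + p(−t)) approximates sign(t) for ξ^{m−1} ≤ |t| ≤ 1; since sign is scale-invariant, the BRS range [1, 2^n] is handled by rescaling t by 2^{−n}. The r(x) + r(y) − 1/2 combination from the comment above then separates the “both positive” quadrant.]

```python
import math

def newman_sign(t, m=36):
    """Newman-style rational approximation to sign(t), valid for
    xi**(m-1) <= |t| <= 1, where xi = exp(-1/sqrt(m)).
    (For the BRS range [1, 2**n], rescale t by 2**(-n) first;
    sign is scale-invariant, so nothing is lost.)"""
    xi = math.exp(-1.0 / math.sqrt(m))
    p = lambda u: math.prod(u + xi**k for k in range(m))
    return (p(t) - p(-t)) / (p(t) + p(-t))

def both_positive(x, y):
    """The BRS-style combination r(x) + r(y) - 1/2: positive iff
    sign(x) = sign(y) = +1, up to the ~0.1 approximation error."""
    return newman_sign(x) + newman_sign(y) - 0.5

# r is close to +1 on positive inputs and -1 on negative ones:
for t in (0.05, 0.2, 1.0):
    assert abs(newman_sign(t) - 1) < 0.1
    assert abs(newman_sign(-t) + 1) < 0.1

# ...so the combination is positive exactly on the (+,+) quadrant:
assert both_positive(0.5, 0.9) > 0
assert both_positive(-0.5, 0.9) < 0
assert both_positive(-0.5, -0.9) < 0
```

The geometric spacing of the nodes is doing all the work: for any t in range, some node ξ^k sits close to |t|, making the numerator of p(−t)/p(t) tiny, which is exactly the “double-zeroes at geometrically spaced points” idea described above.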

the reason BRS is 24 pages and not 4 pages is: a) at the time (and still now?) PP was not a very familiar complexity class to most people, so they took some time to talk about it; b) after their basic result they went on and on about some technical complexity-theoretic consequences.

Comment #32 September 21st, 2013 at 8:40 am

Shurely a hedrony-laden blogpost.

And how does the unitarihedron perform its projections? Is it alive? Pretty scary stuff.

Comment #33 September 21st, 2013 at 10:10 am

I believe the point of the post is that the amplituhedron is merely a recasting of existing math into, shall we say, a geometric approximation.

I do believe, however, that such representations are useful when we are trying to visualize — as best as we can — what is really going on behind all of those N dimensions.

So do you have a nice colorful picture of the unitarihedron to provide?

Comment #34 September 21st, 2013 at 10:18 am

Scott, your reply to a comment by Wolfgang is a low-brow misunderstanding of what physics and natural science is, one that moreover paints you as an intellectually primitive person.

Science just isn’t about rigorous proofs of theorems; whatever is rigorous can’t be applied to the real world. It is about describing observable phenomena and collecting evidence supporting or disfavoring claims and theories about these phenomena. So if you want physics to be another piece of rigorous maths or complexity theory, you’re just missing its point completely.

More importantly, your suggestion that you’re not absolutely amazed by the beauty of the mathematical structures found underneath modern physics suggests that your intelligence is just way too low to do or at least follow physics.

None of the depth, beauty, precision, and universality of the mathematical structures is found anywhere in complexity theory or related portions of maths. Those portions of maths are just dirty applied maths, a form of engineering. Every person who has sort of mastered both fields knows it very well. I have absolutely nothing against the things you study. They’re totally legitimate science, you know it much better than I do, it’s useful, it has many rigorously established results etc. But it’s just not beautiful, fundamental, and the precise “shapes” of the mathematical structures in it depend on conventions, strategies to approach a problem, and they never have a universal validity or importance. In those respects, nothing in your field can remotely compare with the power underlying modern physics.

Many things in physics are called conjectures because of people’s modesty although the evidence is overwhelming. For a whole year, Maldacena’s AdS/CFT was called a “conjecture” although it was really established, at the standard physics level, within months. There are many examples like that.

Comment #35 September 21st, 2013 at 10:19 am

I was glad to see the hype that is so common in `Beyond the Standard Model’ physics taken to its logical extent, gradually, and with the usual lack of modesty, or honesty, that is frequently to be found in that area. I look forward to your next play, William: `Framing of the Stew?’

It would also be nice to know whether certain claims about `quantum annealing’ using really expensive classical computers attached to really expensive refrigerators, deserve the same sort of treatment, which is what I suspect. What is the innocent bystander to think about that?

John the somewhat suspicious

Comment #36 September 21st, 2013 at 10:43 am

I note Lubos Motl’s insulting comments in Comment 34. Lubos seems to be implying that he, Lubos, has mastered theoretical physics as well as complexity theory, (and no doubt many other things). My own experience with his website indicates that what he knows about BRST cohomology, consistent extensions of quantum field theory, politeness and thinking before he talks is very limited. No doubt theoretical physics is interesting. His comments are a perfect example of what you are lampooning.

John the more than slightly suspicious

Comment #37 September 21st, 2013 at 11:22 am

peeved #31: Dude, the entire point was to gently *lampoon* the string theorists’ tendency to argue that, if you can use your techniques to rederive something from an adjacent field in a different, arguably-simpler way, then that can only mean your techniques are imbued with the mysterious wisdom of God. I achieved that lampooning by giving an example, from my own work, where I used quantum computing to give what I think is a simpler proof for a classical theorem, but there was quite clearly nothing “magical” about it—quantum computing functions “merely” as a higher-level programming language for constructing the rational functions you want, which I agree are not that bad to construct directly either. If you want *real* magic, then you need to check out my boson-based proof that the permanent is #P-complete.

Anyway, just for you, I took the word “complicated” out of the post in reference to Beigel-Reingold-Spielman, though I still submit that the PostBQP proof is simpler.

Comment #38 September 21st, 2013 at 12:05 pm

I wrote something about the depth and/or revolutionary status of the amplituhedron here:

http://motls.blogspot.com/2013/09/diaperhedron-cant-match-amplituhedron.html?m=1

Comment #39 September 21st, 2013 at 12:24 pm

Luboš #34: I *have* been repeatedly amazed by the beauty of the mathematical structures found in physics—even the tiny fraction of those structures that I’ve understood so far. But I’ve also been amazed by the beauty of Ambainis’s adversary method, various proofs in number theory and real approximation theory based on complex analysis, Razborov’s proof of Bazzi’s theorem, the proofs of economic fixed-point theorems ultimately using Sperner’s Lemma from combinatorics, Christiano et al.’s max-flow algorithm based on electrical flows, and the way the Euler-Mascheroni constant, π^{2}/6, and even Apéry’s constant ζ(3) “magically” appeared when I recently had to calculate the moments of a certain random variable. Maybe I’m too promiscuous in my love of beauty.

Now, are the deepest discoveries in theoretical physics deeper than the deepest discoveries in theoretical computer science? While I’m not even entirely sure what one means by that, I’m ready to believe that, yes, however one did the comparison, theoretical physics would come out ahead. But theoretical physics has also existed maybe eight times longer! Personally, I’d say that TCS holds up remarkably well in depth given the “mere” ~50 years there’s been such a thing—and of course, its youth also means that the opportunities to *make* it deeper are immense.

That’s all well and good, someone might say—but how could I possibly be certain that, if I spent a decade or two studying theoretical physics, the scales wouldn’t be peeled from my eyes, and I wouldn’t realize how childish and “primitive” everything I know is compared to what the string theorists give freshmen as homework assignments? Well, of course I *can’t* be sure. But I’m encouraged by my many friends whose contributions to theoretical physics greatly exceed yours, Luboš, yet who don’t share your view that theoretical computer science is “just dirty applied maths, a form of engineering.” But we’ve already been through all this over on your blog; no need to rehash this rather boring debate any longer.

(UPDATE: Oops, just after writing this, I saw Luboš now has a massive new blog post rehashing the debate! I never cease to be amazed by his energy.)

Comment #40 September 21st, 2013 at 1:01 pm

Dear Scott, I would love to have more energy, but with the yeast annoyance and many things I don’t even want to discuss – and the broken chain I had to fix today LOL – it’s bad.

An interesting coincidence: the constants ζ(2) = π^{2}/6 and especially ζ(3) frequently appear in the expansion of the very same theory that is described by the amplituhedron. See these 2006 comments inspired by a Dixon talk on transcendentality in this theory:

http://motls.blogspot.com/2006/05/lance-dixon-transcendentality.html?m=1

So some of these things could actually store some mathematical content that is isomorphic, after all.

Otherwise, I would just never choose the adjective “beautiful” for most of these game-theory-style results. The word “clever” would be more appropriate, wouldn’t it? The word “beautiful” must have much more indispensability, inevitability. It’s like a fatal love – it’s not the right term, I forgot the right one. I mean the really beautiful results must be unique in their beauty.

Methods to achieve something in a game-theory-styled task are like another game that Kasparov won (whoever is good now, I don’t follow it). It’s clever and requires high IQ but it’s not a fatal love, it’s not really beautiful. With these results in complexity theory and game theory etc., if they’re used against me (or someone else), I have the tendency to say a “clever bastard”. That’s not so with the beautiful results. They’re not really protecting any egotist interests of anyone. They just look beautiful. Even if these beautiful results are used to build a new bomb.

I don’t think that the difference in the amount of beauty has anything to do with the age of the two disciplines. Game theory and complexity theory etc. are bound to be getting increasingly messy, complex, they’re building new floors of a skyscraper. Theoretical physics is approaching the center of the Earth or the Universe, so to say. It’s just a different direction of making progress. This difference will be growing larger as both disciplines are getting older.

Comment #41 September 21st, 2013 at 1:38 pm

Game theory and complexity theory etc. are bound to be getting increasingly messy, complex, they’re building new floors of a skyscraper. Theoretical physics is approaching the center of the Earth or the Universe, so to say … This difference will be growing larger as both disciplines are getting older.

Luboš: Good, now you’re making an empirical prediction! But I think it’s already falsified. When I compare the CS theory papers of the 60s and 70s to those of today, I don’t detect any trend toward increasing “messiness”—on the contrary, all sorts of messy details that people worried about then are now taken for granted. (Is there increasing *complexity*? Sure! But that just goes along with a large, unmistakable increase in the depth of mathematical techniques used, as well as the difficulty of what people are trying to do.)

It’s not like the original problems—P vs. NP, the complexity of matrix multiplication, and so forth—were all solved, so people responded by inventing increasingly-arcane variants of the problems. Rather, almost all of the original problems are still open! But now that we have PCPs, interactive proofs, nontrivial circuit lower bounds, hardness vs. randomness, etc. etc., our understanding of the problems is deeper: “closer to the center of the earth,” one might say.

In your blog post, you talk about how a field concerned with proving (and improving) non-tight inequalities can’t possibly be as deep as one that discovers exact identities. So let me ask you something: how do you think computer scientists *discover* those faster algorithms, thereby lowering the exponents or whatever? More often than not, it’s by discovering an unexpected *identity*: a rewriting of the problem that lets you calculate the same quantity with vastly less work, by taking advantage of “miraculous cancellations” and things like that. Which, of course, is precisely what Nima and his friends have apparently done for planar N=4 SYM. So, kudos to them! But should we say that it can’t possibly be that deep, since the process of discovering better ways to calculate scattering amplitudes might simply go on forever, with no one ever knowing the “ultimate” way?

Comment #42 September 21st, 2013 at 1:41 pm

To “screwed over reader”, #22:

Find a comfortable spot to sit down. Relax the muscles in your legs, then your back, then your arms, and then your neck and face. Close your eyes, and breathe slowly in and out. As you do so, repeat the following words:

“Scott Aaronson does not owe me jack shit. His writing is a gift to me, not an obligation.”

Make this a daily practice, substituting the names of other blog writers for “Scott Aaronson” as necessary. I think you’ll find your stress levels significantly diminished.

Comment #43 September 21st, 2013 at 1:45 pm

It appears Richard Dawkins has you all beat. According to the latest issue of Creationist Weekly, Dawkins claims to have unified all of Biology with his Hedonic-Heathen-Hedron.

Comment #44 September 21st, 2013 at 2:00 pm

As an engineer, I find Luboš Motl’s description of computational complexity theory as “dirty applied maths, a form of engineering” quite amusing. While fascinating from a philosophical perspective, from an engineering perspective it seems to me the theory is mostly useful as a guide to what problems you shouldn’t waste your time trying to solve.

Comment #45 September 21st, 2013 at 2:45 pm

Dear Scott, I think that even the question whether P=NP is more or less contrived. It sounds simple but it’s not fundamental. But the whole “progress” in the related field has been about adding random modifications, extra conditions etc. to the original task, so all these things are getting increasingly artificial. I don’t say that there’s anything wrong with that but it’s the “engineering” way of growing, not like the physics’ march towards the core of Nature or “the face of God”.

Kevin #44: it is a subjective question of moral values whether you “should” waste time with solving particular problems. Complexity theory obviously starts with the assumption that similar problems are “important enough” for people to waste time with them, or even with discussions whether they could be solved etc. The role of engineering isn’t to tell you which problems you should consider important. The purpose of engineering is that, given as input what you find important, it gives you the technical tools to achieve it, or to calculate lower bounds on the amount of space and time you need to solve the problems etc. All these things are “practical” in the sense that they are details you need to know if you decide to try to solve things or realize tasks. But engineering isn’t telling you it’s “worth your time” to be interested in one way or another.

Complexity theory is engineering in this sense – it is about man-made “inventions” and “constructions” – while theoretical physics is the continuation of the programs of philosophy or even theology etc. using the scientific method – “discovering” things that are already there.

Comment #46 September 21st, 2013 at 3:41 pm

Dear Scott,

I am pleased to say that I have received even more abuse from Lubos Motl than you have. It is a club of which I am happy to be a member.

Here is his abuse, sent to me privately, together with a notice that he has blacklisted me from his website for asking for some physically verifiable prediction of SUSY. My so-called rant was only a few lines long.

Incidentally I have been working on SUSY using BRST cohomology and spectral sequences for many years, with many publications, including some in Phys Rev Letters and Commun Math Phys, so I do not think his abuse is at all fair or informed.

Private letter from Lubos Motl to John Dixon:

Dear John, update: after you posted another comment here, a rant full of bizarrely used words “testable” etc. that completely ignored all the important information and corrections I kindly provided you with, I placed you on the black list to protect this blog comment section from junk from your keyboard and my future time from additional worthless attempts to explain something to a person who has no chance or no will or both to understand anything.

You effectively asked what’s the difference between maths and physics because you are totally confused about this elementary point. I was assuming that you wanted to know the answer so I gave you the answer. I didn’t expect you, a person who has really no clue about anything, to start to argue. If you think that you are my peer who has the credentials to argue with me, please go to one of the numerous crackpot forums on the Internet flooded with similar loons. You are not welcome here.

That is Lubos Motl’s comment on my question. I submit that it speaks for itself, as do his remarks on you in comment 34 above.

Best regards,

John Dixon

Comment #47 September 21st, 2013 at 3:46 pm

Scott,

I like that you are cautious about advances in theoretical physics and in building quantum computers, but shouldn’t you be as cautious about advances in theoretical computer sciences?

How different is calling some advances “amazing discoveries, striking dualities, remarkable coincidences, tantalizing hints” than “breakthroughs, spectacular results?” This is not to say that the results shouldn’t be celebrated (they should), but that’s only different ways of saying similar things, and there is not much reason to call the words of some people studying theoretical physics as “good PR lessons,” unless the same is true for some people studying theoretical computer science.

It is great that we now have nontrivial circuit lower bounds in theoretical computer science. But for those recent advances you blogged earlier, if we are as cautious, can we ask if “it can’t possibly be that deep, since the process of discovering better ways to extend that lower bound might simply go on forever, with no one ever knowing the “ultimate” way?”

M

Comment #48 September 21st, 2013 at 4:31 pm

Mero #47: You’ve put your finger *precisely* on one of the great ironies here. Namely, over the years I’ve been repeatedly attacked by commenters for, of all things, *blogging excitedly about recent discoveries in my own field*. (Which has led me to do less of that, being the thin-skinned creature that I am.) Yet the same people who get lathered up over my “hyping” advances in theoretical computer science don’t seem to mind at all when advances in other fields are trumpeted at a thousand times the decibel level.

So, what could possibly explain this apparent double standard? In investigating this mystery, I suspect that Lubos’s diatribes provide important clues—since, as usual, Lubos happily shouts from the rooftops what many other people probably think quietly.

Namely, it seems that, to people who were trained a certain way, anything whatsoever that physicists do—even, let’s say, a particular 2-loop calculation in an 8-dimensional superdupersymmetric toy model, which is motivated at most by a tenuous mathematical connection to *other* models that themselves remain extremely speculative—is a “discovery,” maybe even a “fundamental” discovery, because *physics, by definition, is about the real world*. By contrast, anything that computer scientists do—even something like the P vs. NP problem, which Americans and Russians discovered independently around the same time (and both recognized as extremely important), which shows up in dozens of guises in physics, mathematical logic, economics, biology, cryptography, operations research, etc., and which there can be little doubt that extraterrestrial civilizations would discover as well—is an “invention” and “non-fundamental,” because *computer science, by definition, is not about the real world*. So, when presented with examples that challenge this sharp dichotomy, people who think that way seem to get agitated, and respond by insisting on the dichotomy even more vehemently.

To me, it just seems obvious that there’s the potential to learn something “fundamental” *any place whatsoever* where there’s interesting and meaty mathematical structure—whether that place is a high-energy particle accelerator, or a cell, or a computer, or an economic market. For meaty mathematical structure, once it’s found, has a way of getting repurposed all over the explanatory hierarchy. And partly for that reason, it also seems to me that anyone who’s interested in *any* “fundamental” issue ought to be interested in *all* of them—as, indeed, Turing and von Neumann and Ulam and many other great mathematicians and scientists were. (Which brings me to another irony in a discussion rich with them: namely, however much time I spend remonstrating with physicists about the “fundamental” nature of the P vs. NP problem, I spend twice as much time remonstrating with my fellow computer scientists that they ignore modern physics at their peril!)

Anyway, I have no idea how to convince someone who doesn’t already think this way that it’s a good way to think. So let me throw out a challenge to the commenters: are there any of you who *used* to think in terms of a sharp dichotomy between “inventions” and “discoveries,” but have since moved to the “deep mathematical structure can be found almost anyplace you look for it” camp? If so, what is it that caused you to change your mind?

Comment #49 September 21st, 2013 at 5:22 pm

(The other) Scott #48,

I don’t agree that “computer science, by definition, is not about the real world.”

Computer science can be (and probably should be) about the computational aspect of the real world. There is research on applying computer science to study the computational aspect of evolution, bird flocking, etc. It may be argued whether the models studied there are good enough models of the real world (as in any branch of science), but it does not mean that computer science must be detached from reality.

Also, the techniques developed in theoretical computer science can have applications to the mathematical foundation of quantum physics.

M

Comment #50 September 21st, 2013 at 5:58 pm

You’re not funny!

Comment #51 September 21st, 2013 at 6:04 pm

[…] was in a comment to his recent blog post were he has some fun with Nima Arkani-Hamed's Amplituhedron. The latter is actually some of the […]

Comment #52 September 21st, 2013 at 6:06 pm

I have thought of mathematics as a mix of discovering things and inventing things since I first heard of Paul Erdős. What has been proven counts as a discovery for me; how it is proven, as a form of art. I wouldn’t devalue the invention part though; it makes mathematics attractive.

Finding deep fundamental truths is of course awesome, but it doesn’t mean it’s cool to brag about how this work is so much more important than the work of applied mathematicians, musicians, medical researchers curing cancer or whomever.

@Scott, as to the question of how fundamental CS is, I would love to know if you have ever heard of the integrated information theory of consciousness, and if you have any opinion on it. I think it deals with a pretty fundamental property, for a theory which looks like something coming out of computer science.

http://en.wikipedia.org/wiki/Integrated_information_theory

Comment #53 September 21st, 2013 at 6:10 pm

#49: Maybe this is clear, but it seems like it’s not: Scott is not asserting that, by definition, computer science is not about the real world. He is putting those words into the heads of “people who were trained a certain way.”

Comment #54 September 21st, 2013 at 6:47 pm

Jellyfish #50:

You’re not funny!

Actually, to an extremely specific audience—namely, people versed in quantum computing and complexity theory, but *not also* in quantum field theory or string theory—I’m *hilarious*.

Comment #55 September 21st, 2013 at 6:50 pm

A #53: Yes, thank you!

(Earlier today, Dana remarked to me that reading this blog could be good preparation for the reading-comprehension portions of standardized tests, or vice versa.)

Comment #56 September 21st, 2013 at 8:12 pm

Scott’s gracious embrace of all the pieces, from inventing and discovering to math and physics and computing to wit and to the many other things he has written about on the web and spoken about in his videos, v. some people’s territorial defense of the boundaries between their turf and others’ turf, and their degree of investment in the ensuing turf battles, very much reminds me of the recent case of Pinker’s generous invitation v. Wieseltier’s embarrassing response.

For better or worse, I think that a non-trivial part of the difference between these sorts of people is more a matter of their personal nature than of learned attitude. For the record, count me on the deeply naturally curious about everything side, a risk one of my grandfathers used to worry about when I was a little boy. I didn’t learn to be curious about everything; I always was that way. Or maybe I just never unlearned it like other post-baby people did ~ hmm, that could be it.

However, without people who straddle inter-turf fences, from logic to music, we wouldn’t have brilliant inter-disciplinary results like Tim Blais’s recent Bohemian Gravity. Congratulations on your new Master of Science in Physics degree from McGill, Tim. If you take your show on the road, Scott (I’d line up for a ticket), you could ask Tim if he would be your band.

Comment #57 September 21st, 2013 at 9:06 pm

Does anyone else here find it absolutely hilarious that Scott complains when someone else gets a little bit of attention for what seems to be a perfectly interesting (but perhaps exaggerated) advance? This is the same person who, basically, stands on a street corner screaming “Look at me, look at me” from morning to night.

Comment #58 September 21st, 2013 at 9:40 pm

I have almost no training in any of the fields mentioned here. Well, I am a long-time programmer, but that hardly endows me with any special knowledge of information theory, etc. But when I see the same constants—many of which can be derived straight from number theory itself—popping up in very different fields of study, well, even the lowly armchair scientist realizes that some common thread runs through these different fields, and at a very deep level at that.

One would think that the true scientist would be seeking the reason for this commonality rather than trumpeting the relative importance of this or that field. As my father once said, “Every academic believes that their own particular area of endeavor is going to save the world.” You have my respect for your particular contributions, but when you manage to explain the appearance of the same constants in completely disparate fields of study, that will be quite a feat.

In light of my ignorance, any examples of this that I am unaware of would be welcome.

Comment #59 September 21st, 2013 at 10:17 pm

anonymous #57: Look, I do feel for those poor underdog theoretical physicists, who never get any popular articles written about their work because the media-savvy computational complexity theorists keep hogging the limelight. But if you think I’m “complaining,” you should spend some time with my relatives. What I’m doing is called “having fun.”

Comment #60 September 21st, 2013 at 10:26 pm

Alton,

The same constants turning up in very different (or, maybe not?) places has been discussed for a long time. Steven Finch has condensed a vast amount of info on this topic in:

http://www.amazon.com/Mathematical-Constants-Encyclopedia-Mathematics-Applications/dp/0521818052/ref=sr_1_1?ie=UTF8&qid=1379820058&sr=8-1&keywords=mathematical+constants

This book is absolutely worth $50. It is hard to put down; you might find yourself reading it straight through.

Comment #61 September 22nd, 2013 at 1:38 am

Scott says: “However, one obvious difference is that the physicists don’t call them conjectures, as mathematicians or computer scientists would.”

Using “beyond-Standard-Model theoretical physics” to characterize all of physics is unfair. This is a particular area of physics especially prone to crackpottery, hype, and grand delusions, while being especially light on empirically testable predictions (at least practically testable).

If you must compare, use some other area: less hype-prone, yet still elegant, while being simultaneously testable and perhaps, even better, *useful*.

As someone who’s not from either of these fields, I find it ironic that “beyond-Standard-Model theoretical physicists” seem far closer to TCS or pure mathematicians than any other physicists. The unifying similarity is the appeal to elegance at the cost of messy empirical verification.

Comment #62 September 22nd, 2013 at 1:49 am

John Dixon, it wasn’t any private letter. It was a completely public, accessible explanation why you were banned. I find it important for such things to be public to clarify the policies on my blog by precedents – and also to inform the people who would be expecting your replies that they have to search for you elsewhere if they fell in love with you. You were banned because you are an obnoxious troll and spammer repeatedly posting the same worthless nonsense on my blog, moreover in an aggressive tone you have no conceivable justification for given your utter stupidity.

Dear Scott, yes, I happily shout from the rooftops what others think quietly. The silence of the people who actually know, inside the noise of the ignorant and aggressive folks, is something that simply drives me up the wall. They may be silent because they think that the noise by the morons isn’t hurting them (and in some cases, that’s right, as they grew up in environments where knowledge was respected and where they would be getting any millions of dollars etc. they would ever think about). But it has been hurting me for quite some time and I am totally convinced that it is hurting present and future theoretical physics and all that is good and just in our world.

Let me comment on your (Scott’s) sentences

“Namely, it seems that, to people who were trained a certain way, anything whatsoever that physicists do—even, let’s say, a particular 2-loop calculation in an 8-dimensional superdupersymmetric toy model, which is motivated at most by a tenuous mathematical connection to other models that themselves remain extremely speculative—is a “discovery,” maybe even a “fundamental” discovery, because physics, by definition, is about the real world.”

This is a sequence of distortions and each of the distortions is done in the same direction – to make your indefensible position look more defensible. I would personally never get “super-excited” about a particular 2-loop result. Also, there are no interesting theories in 8 dimensions to the extent that they would get any “hype” treatment, ever. M-theory is in 11D, superstring theories are in 10D, then there are simple compactifications, and the next “new beasts” are perhaps little string theory in 7D and especially little string theories and SCFTs in 6D that tend not to have Lagrangians. Nothing too interesting happens in 5D and a huge focus is on 4D theories, not only because this matches the apparent number of dimensions in our Universe. (Lots of important theories exist in 3D, 2D, 1D, 0D, too. These are low dimensions so that their spacetime can’t be interpreted as “normal spacetime” and if it can, it has some qualitatively different properties from ours, e.g. that gravitons have no polarizations.)

The unitarihedron, oops, I got brainwashed by you, I mean the amplituhedron is about the very same N=4 gauge theory in d=4 that is the most well-known example of AdS/CFT and that has been called the 21st century harmonic oscillator. It’s the same theory whose calculations were started by Witten in 2003, it’s the only maximally supersymmetric gauge theory in d=4.

It’s just a complete idiocy to suggest that it is “just another theory” which is what you did. There is no other theory like that in d=4. It’s the most important “uncontrived” interacting field theory in d=4. For the research of important insights in theoretical physics, it’s vastly more important than e.g. the Standard Model. Laymen like you may fail to understand this fact but it’s an important fact, anyway. Moreover, the amplituhedron didn’t find a particular 2-loop result. It has found a compact formula to calculate all on-shell amplitudes in this most important d=4 field theory at any order of perturbation theory. That’s why it’s important (although it simplified just some results that were already out there). There is no single theory or algorithm or “single structure” in all of computer science that would have this “great advantage in importance” over others, like the N=4 gauge theory, and there’s certainly no solution to this “super single most important problem/algorithm/game” that would be this more compact and unified.

And yes, physics is by definition about the real world which is why discoveries in physics are generally important. But it’s not enough to be about the real world for a discovery to be important. Some discoveries are important, change the way how “almost” everything is thought about or how everything is calculated; others don’t.

Comment #63 September 22nd, 2013 at 2:52 am

Luboš Motl says in Comment #62:

“Some discoveries are important, change the way how “almost” everything is thought about or how everything is calculated; others don’t.”

I will bet this amplituhedron isn’t one of those.

I will further wager that this is the flavor of the month (or year). Wait a year or two, and this will sound less revolutionary than people make it out to be right now.

Comment #64 September 22nd, 2013 at 3:00 am

If anyone thinks they have a hype-worthy physics-related result, let them first go publish it in PRL (or, even better, PNAS, Science or Nature). That includes computer science folks who work on “quantum” stuff.

Anyone who thinks they have a hype-worthy maths result can go publish it in Annals of Math.

Comment #65 September 22nd, 2013 at 3:25 am

Those who hype work that they are unable to publish in the aforementioned places (#64) are just charlatans, who are trying to compensate for their mediocrity by generating a cult of personality around themselves.

Comment #66 September 22nd, 2013 at 4:19 am

Rahul, I have written a text (linked to above) in which I argue that “revolution” isn’t the right word for it, either. It’s still an unusually important advance in a subfield.

Comment #67 September 22nd, 2013 at 7:02 am

amused #64, #65: Umm, have you *read* PRL, Science, or Nature lately? Plenty of cringe-worthy hype gets published in all three. (Which is not to say there’s necessarily a problem with their editorial processes: if they tried to reject more hype, they’d also risk rejecting more genuine breakthroughs.)

Annals of Math is admittedly a different story; not too much hype there.

Comment #68 September 22nd, 2013 at 7:06 am

Scott says in Comment #67: “Which is not to say there’s necessarily a problem with their editorial processes: if they tried to reject more hype, they’d also risk rejecting more genuine breakthroughs.”

I think you let the editors off too easy. There’s indeed a problem.

They don’t accidentally let through hype; they often shamelessly court hype because it gets them nice publicity & sensationalist headlines.

There is indeed the risk of rejecting genuine breakthroughs, but I don’t think the editorial screen is currently at the optimal tradeoff point.

Comment #69 September 22nd, 2013 at 7:32 am

Scott,

Sorry to have this battle on your blog. It should be on Lubos Motl’s blog, but he is unwilling to fight the battle there.

However, you seem to be willing to host a Shtetl, which includes such battles, as I understand it.

Thank you for your integrity and your interesting blog.

Above in 62, Lubos says that

John Dixon, it wasn’t any private letter. It was a completely public, accessible explanation why you were banned. I find it important for such things to be public to clarify the policies on my blog by precedents – and also to inform the people who would be expecting your replies that they have to search for you elsewhere if they fell in love with you. You were banned because you are an obnoxious troll and spammer repeatedly posting the same worthless nonsense on my blog, moreover in an aggressive tone you have no conceivable justification for given your utter stupidity.

I reiterate my question to Lubos here, since this is the only place I can reach him.

What evidence is there for SUSY in physics? I mean experimental evidence, not waffle about theory.

You must admit there is none. Also I request an apology for your rudeness and false statements about me personally.

In 2008 you attacked a paper of mine on SUSY breaking, which was on arXiv, stating that I was an amateur scientist who knew nothing about BRST. Then you went into a rant about amateur scientists.

I am not an amateur, as you would know if you looked at my publications. Also I know far more about BRST, and SUSY, than you do, as I can tell from your remarks at that time and since.

So how about an honest, simple, answer Lubos? Please cut out the personal attacks–they only prove that you do not have a better answer.

John

Comment #70 September 22nd, 2013 at 8:03 am

[…] Aaronson, Shtetl-Optimized, The Unitarihedron: The Jewel at the Heart of Quantum Computing, here. […]

Comment #71 September 22nd, 2013 at 11:16 am

I think the importance of the discovery is that there are definite structures governing the physics that are otherwise invisible. If one considers the kinds of structures that emerge when one graphs complex functions, the idea that there is an analogous representation of tree-level interactions is an important insight, even if not revolutionary. Presumably distorting or perturbing these structures is possible, and at any rate it allows one to begin building new measures to determine how far one has to deviate from the global structure to understand the local dynamics. In this sense it is rather scary, since we are dabbling with questions of equivalence that we haven’t been able to explore to date. It is OK to put things in perspective, but it is not a good strategy to mock these developments.

Comment #72 September 22nd, 2013 at 12:42 pm

Hal Swyers #71: In case this wasn’t clear from the post, let me reiterate that, even from the minuscule amount I understand about them, the amplituhedron / positive Grassmannian results sound genuinely exciting to me. As an interested layperson, I like that there’s a much more efficient way to calculate something (even just in a toy theory), which precisely matches what can also be calculated in the traditional way. I like that the new way of calculating gets rid of Feynman diagrams, and of the entire concept of off-shell particles (which I never particularly liked…). Most of all, I like that, according to this paper, questions about scattering amplitudes are essentially reduced to combinatorics—which is what I tend to hope *all* mathematics should eventually reduce to, and now maybe even Lubos ought to agree with me. I’ll be very curious, again as an outsider, to see how this develops in the next few years.

At the same time, I was getting emails from people asking me whether this was really the death-knell of locality, unitarity, etc. etc.—statements that *even Lubos* agrees seem incredibly premature. So I didn’t see the harm in having a little fun.

Comment #73 September 22nd, 2013 at 1:14 pm

“Most of all, I like that, according to this paper, questions about scattering amplitudes are essentially reduced to combinatorics—which is what I tend to hope all mathematics should eventually reduce to,….”

But that is a profound statement there.

Many times combinatorics is hard! Ngô won the Fields Medal for the fundamental lemma, but the essential insight of reducing the combinatorics to number theory and étale cohomology came from other researchers.

Counting is not easy.

Comment #74 September 22nd, 2013 at 1:39 pm

Yes, those are silly questions, in the practical sense that for purposes of large-scale quantum simulation, the ‘death-knell’ of strict quantum locality and strict unitarity (etc.) was sounded years ago. The work of Arkani-Hamed *et al.* is providing (wonderful!) new mathematical and physical answers to the crucial-to-engineers question: “Why are strict quantum locality and strict unitarity so little missed, when large-scale computation algorithms simulate these attributes only emergently and/or approximately and/or in thermodynamic limits?”

@Scott #72: To be sure, I am as much of an outsider as anyone, but I am interested only as a person interested in how our society develops. I can only claim to be ever vigilant and nothing more. At some essential level, if one assumes some sort of quantization, we always descend to combinatorics; but despite our best intentions, there are ultimately real consequences to these sorts of debates. I think that what is encountered here is a little more involved. Nima is certainly exceptional in his work; it is likely his mind is computing variables outside most people’s understanding. So when articles talk about nonlocality and nonunitary interpretations, one needs to spend some time thinking about how he came to that statement. Outside of the long-term implications of these thoughts, my interest is limited.

Comment #76 September 22nd, 2013 at 3:21 pm

Scott

The fact that Lubos is obnoxious and rigid doesn’t take away anything from advances in the computation of scattering amplitudes, although it seems to me that Freddy Cachazo should probably receive most of the credit for getting this line of research going.

The amplituhedron seems to me an incremental advance on what had been done before. Until it yields a new interpretation of deeper physical principles, it will remain that.

None of this takes away from the fact that quantum computing is potentially the most significant field of study anyone can undertake.

But it probably won’t be us who either enjoy or suffer its consequences.

Comment #77 September 22nd, 2013 at 3:47 pm

Scott, re. #67, PRL and the other glamor mags are certainly not immune to hype, but the point is that if you want to publish there you need to convince a couple of experts in related areas to buy into your hype, and basically convince them that your paper is of a comparable quality to the best of their own work. This is completely different from telling a journalist that you have discovered a shiny brilliant jewel of some sort.

When people don’t send their best work to these journals, the real reason, despite what they claim, is because they doubt that it will meet the standards. They choose to hype their work themselves, rather than let expert colleagues decide its value, because they know the judgment is unlikely to be favorable for them. Perhaps it’s a bit much to call them charlatans like I did in my earlier comment, but they should certainly be called out for trying to advance themselves in such a pathetic way.

Comment #78 September 22nd, 2013 at 3:50 pm

Amused #65, Scott #67 is right—charlatans publish in your prestigious journals too. For example, this dubious experiment: http://www.sciencemag.org/content/339/6121/794.full?sid=ae734ab5-d0e7-4d26-bbca-b6e7afb5c623

Comment #79 September 22nd, 2013 at 4:12 pm

So they are claiming to be able to compute something in a better way than before. Does this have any computational complexity implications?

Comment #80 September 22nd, 2013 at 4:16 pm

Luboš, what are you doing for a living these days?

Comment #81 September 22nd, 2013 at 5:07 pm

also amused #78: I guess you can’t win with everyone!

Back when I didn’t talk much to experimentalists, I was an ivory-tower theorist obsessed with abstract complexity classes that had nothing to do with the real world. Now that Arkhipov and I proposed an experiment that was actually done by some of the world’s leading quantum optics groups, actually gave the results we predicted (no big surprise there), and actually advanced the state-of-the-art by some tiny amount (before this, to our knowledge, no one had ever demonstrated the 3-photon analogue of the Hong-Ou-Mandel dip)—now I’m guilty of hype and charlatanry.

For the record, let me tell you what happened:

(1) After Alex and I published our theoretical paper, we saw it as our professional responsibility to offer advice to any experimentalists who contacted us because they were interested in maybe doing a small-scale demonstration. We even hired a phenomenal summer student, Justin Dove, to help us liaise with experimentalists. To a large extent, our conversations consisted of the experimentalists asking things like, “so, if we get this to work with 3 photons, can we say that we solved a #P-complete problem and exponentially outperformed classical computers?,” and us responding, “NO. (But do go ahead and try for 3 photons, if that’s the current frontier!)”

(2) At some point, Andrew White from UQ got in touch with me and Justin to say that his group had succeeded in demonstrating 3-photon BosonSampling, and that we were invited to join a paper about it that they planned to submit to Science. This came as a total surprise to us.

(3) I was hesitant to accept Andrew’s generous offer, mostly because I’d obviously contributed nothing directly to their experiment! (I hadn’t even been within 10,000 miles of it.) However, after seeing their draft, I realized that there was something I could contribute: namely, I could help them tone down the hype level, improve the accuracy of the complexity-theory discussion, and make more explicit the connection to the permanent. Those sorts of things were the entirety of my contribution.

(4) After the paper came out, I also “contributed” by insisting that UQ tone down its press release, and by telling MIT News not to bother putting out its own press release (they’d already done a story about the theoretical side of BosonSampling, and I thought a second story would be overkill).

So, those are the facts. You tell me what I should have done differently, and what I should feel guilty about.
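(For readers wondering about “the connection to the permanent” mentioned above: the n-photon BosonSampling amplitudes are given by permanents of n-by-n submatrices of the interferometer’s unitary, and the permanent is #P-hard to compute exactly, which is precisely what makes the experiment interesting. A minimal sketch of Ryser’s formula, which evaluates an n-by-n permanent in O(2^n·n²) time rather than the naive n! terms; the function name is mine, plain Python with no dependencies:)

```python
from itertools import combinations

def permanent(A):
    """Permanent of an n-by-n matrix via Ryser's formula:
    per(A) = (-1)^n * sum over column subsets S of
             (-1)^|S| * prod_i sum_{j in S} A[i][j].
    O(2^n * n^2) time, versus n! terms for the naive expansion."""
    n = len(A)
    total = 0
    for r in range(n + 1):
        for S in combinations(range(n), r):
            prod = 1
            for row in A:
                prod *= sum(row[j] for j in S)
            total += (-1) ** r * prod
    return (-1) ** n * total

print(permanent([[1, 2], [3, 4]]))  # 1*4 + 2*3 = 10
```

(Even with Ryser’s speedup the cost is exponential in n, consistent with Valiant’s #P-hardness result.)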

Comment #82 September 22nd, 2013 at 5:21 pm

Mathematician #79:

Does this have any computational complexity implications?

Excellent question! I wondered the same thing. Hopefully someone who understands it better can comment.

(My guess would be that, sure, it shows that the computational complexity of calculating planar N=4 SYM scattering amplitudes is much lower than one might have thought. But I don’t know what, if any, are the natural parameters here to let go to infinity—i.e., what’s the “input size”? And I also don’t know whether there are any other computational implications.)

Comment #83 September 22nd, 2013 at 5:26 pm

Scott #81,

That’s an interesting story, and I’m glad I got to read it, but I hope you don’t feel obligated to defend yourself from trolls like also amused #78. And trolls are all they are; why else would they be wasting their time reading what they consider to be a charlatan’s blog?

Comment #84 September 22nd, 2013 at 5:35 pm

@84

Professor, from a classical complexity perspective the results are probably trivial and probably not worth your time. However, for your field they may have something.

Comment #85 September 22nd, 2013 at 5:51 pm

Here are some of the points I took from this post (and the links) and my own take on them.

1)

The news. There is a remarkable new geometric object, “the amplituhedron,” which seems to shed new light on certain models from quantum field theory, with a lot of exciting combinatorics.

My take: This was completely new to me, although the paper is from 2012 and I know well the two mathematicians among the authors – Alex Goncharov and Alex Postnikov. The paper has a lot of cool mathematics/combinatorics related to many deep areas, and to recent advances in algebraic combinatorics. Congratulations Alex and Alex (and the others)! The paper is “Scattering Amplitudes and the Positive Grassmannian” by Nima Arkani-Hamed, Jacob L. Bourjaily, Freddy Cachazo, Alexander B. Goncharov, Alexander Postnikov, and Jaroslav Trnka. (Side remark: I know it may look different if your last name starts with “Aa,” but why not mention all authors by name?)

2)

Hopes for physics. There are interesting claims/hopes that, conceptually, this object will shed new light on spacetime and locality, which will emerge from, rather than being imposed on, the model.

My take: It will be interesting to hear more about it. Time will tell how important it is. I suppose that, as always with great excitement, the involved physicists should also apply some self-criticism/skepticism.

The fact that fresh, newly-discovered mathematics enters so quickly into high-energy physics has always seemed a little suspicious to me (like movies where the hero repeatedly saves the world at the last minute). But the great effectiveness of physics for mathematical exploration is well established.

3)

The critique. Scott makes the point that largely (from the physics point of view) we have just a “reorganization” (and a new brand name) of known things about some theories, and that, moreover, these theories are unrealistic and have limited scope. (Then Lubos makes the claim that these theories are actually of great and central importance, even well beyond the “standard model.”)

My take: In mathematics, “reorganization” can at times be quite crucial, and I suppose the same holds in physics. Scott’s (off-hand) description seems somewhat uninformed (there is a nice relevant quote of Diaconis: “This type of careless commentary can be accepted in praise but is unforgiveable in criticism”), while Lubos’ evaluation seems over-enthusiastic.

4)

The hype. There is a further critique here (as in several previous posts), and in some of the links, of the amount of “hype,” especially in the popular coverage.

My take: This is something I have thought about in various contexts. Sometimes I feel uncomfortable with excessive hype, but the anti-hype crusade is rather silly. (But perhaps not as silly as the forced open-access-mathematics endeavor.) Anti-hype writings often simply express dislike of what people in other fields do, or general anti-science sentiments. A colleague of mine, Bob Aumann, often says that in science, like in other areas of life, salesmanship is more than 50% of the job.

5)

The case against science journalists. It is suggested that science journalists frequently do a terrible job.

My take: This is a repeated theme here on Scott’s blog. I largely disagree. Science journalists usually do a fairly good job, and often the complaints here are unjustified.

6)

The unitaryhedron. Scott asserts that quantum information and BQP have much to offer, perhaps much more than this new gadget. (Perhaps this is Scott’s main point.)

My take: Sure! As important as the amplituhedron is, we have here a single paper from mid-2012, while quantum information is a stable, fruitful theory of great importance for the last 2-3 decades.

Comment #86 September 22nd, 2013 at 5:52 pm

(continued..)

7)

The “quantum method.” In particular, Scott offers that thinking quantumly can give insights on classical questions, like Scott’s half-page quantum proof of the result by Beigel, Reingold, and Spielman.

My take: This is a very nice idea, that “the quantum method” will be as fruitful for mathematics (unrelated to quantum physics) as, say, the “probabilistic method.” At present we have only a handful of examples coming from quantum information. This is certainly something to look for. (There are some glorious examples of things of this type coming from high-energy physics.)

8)

Polyhedrons are sexy. It is sexy to consider polyhedrons and polyhedron-type objects.

My take: This is quite surprising. We, the professional polyhedra people, are sometimes amazed by the fetish power that polyhedron-like objects sometimes have. But it is rather flattering. Are polyhedrons, permutahedrons, associahedrons, cyclohedrons, amplituhedrons, etc. the “high-heel shoes” of mathematics and physics? How exciting! (But what about biology, economics, and history?)

9)

Computations, speed-up, and computational complexity. The popular article gives a nice drawing with the following explanation: “A sketch of the amplituhedron representing an 8-gluon particle interaction. Using Feynman diagrams, the same calculation would take roughly 500 pages of algebra.”

The geometric object seen in the figure is a 7-vertex “stacked” triangulation of a 2-sphere, drawn as a non-convex polytope. The ability to use this guy to dramatically shorten 500 pages of algebra stretches the imagination. Is it so?

Of course, there is a serious issue here about computation: what is the actual speed-up we witness in this case, and what is behind this speed-up? Certainly, this question has a computational-complexity flavor.

10)

Depth and beauty tug-of-war. Is the mathematics of theoretical computer science (TOC) as deep as the mathematics related to high-energy physics (HEP)? (An ongoing debate between Scott and Lubos.)

My take: I suppose that everybody realizes that depth is a vague and subjective matter. I cannot make a call about mathematics in general. Personally, I suspect that HEP mathematics is somewhat deeper than TOC mathematics, simply because I have a better understanding of the TOC mathematics, and my personal inclination is to regard things that I don’t know as somewhat deeper than things that I know. (But only somewhat.)

Another personal approach that some people have, which is well-represented here, is to regard things that they know as much deeper and more important than things that they don’t know, and I think that this attitude is perfectly OK and that, suitably restrained, it can be quite instrumental.

When it comes to combinatorics, which is my research area, I am quite familiar both with the (extremal, probabilistic, and analytic) combinatorics involved in TOC and with the (algebraic and geometric) combinatorics related to HEP and the ABCGPT paper. The highlights are quite deep and involved in both cases, and I do not regard one of these branches of combinatorics as deeper than the other.

11)

Is NP≠P fundamental? Scott enjoys arguing about it with Lubos (who enjoys it too, I suppose).

My take: I do think (like many mathematicians) that NP≠P is a fundamental open problem in mathematics of central importance. I do not agree with Scott when he claims that NP≠P is the most important (and by far) open problem in mathematics, and I tend to think that this view represents not just the (welcome) enthusiasm Scott has for his own area, but also a misunderstanding of the nature of mathematics.

12)

Double standards and universal commentators. Some of Scott’s complaints about double standards and examples of great irony are based on a “trivial flaw.” The flaw is assuming a universal commentator. The commentators here are special-purpose commentators, and each one has his own views and standards.

13)

The positive Grassmannian: For readers who wish to know what the positive Grassmannian is, here is a short description. The Grassmannian G(n,k) is the space of k-subspaces of an n-dimensional vector space. You can think about the Grassmannian as the set of k-by-n matrices under a simple equivalence relation. The paper deals with stratifications of the Grassmannian, which are simply partitions of the Grassmannian into building blocks, where often the partition has a combinatorial nature. (My own early work on “algebraic shifting” was about stratifications of the Grassmannian when the n-dimensional vector space itself had the structure of an exterior r-power.) One possible stratification would be to organize the matrices according to the sets of columns which are linearly dependent. This is a quite important notion, but it leads to a terrible mess. The important and fruitful notion used here is based on linear dependencies among consecutive columns (w.r.t. a certain order) in a cyclic sense. One part of this stratification has a very simple description over the reals. It corresponds to all k-by-n matrices where all the k-by-k minors are positive. (This depends on a fixed ordering.) This is the positive Grassmannian.

Comment #87 September 22nd, 2013 at 7:31 pm

“8) Polyhedrons are sexy. It is sexy to consider polyhedrons and polyhedron-type objects.”

Well, now that almost nobody disputes that God plays dice, it remains to find out what shape they are.

I’d go with turtle-shaped, personally — if only for the beautiful underlying mathematical structure.

Comment #88 September 22nd, 2013 at 7:45 pm

I seem to recall that finding the volume of a polyhedron can be hard. (Of course, this depends on how it is represented.)
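(The representation-dependence here is real: computing the exact volume of a polytope given by facet inequalities is #P-hard, by a result of Dyer and Frieze, while for a simplex given by its vertices the volume is a single determinant, |det(v₁−v₀, …, vₙ−v₀)|/n!. A minimal sketch of the easy case, plain Python with helper names of my own choosing:)

```python
from math import factorial

def det(M):
    """Determinant by cofactor expansion (fine for the small cases here)."""
    if len(M) == 1:
        return M[0][0]
    return sum((-1) ** j * M[0][j] * det([row[:j] + row[j + 1:] for row in M[1:]])
               for j in range(len(M)))

def simplex_volume(vertices):
    """Volume of an n-simplex from its n+1 vertices: |det of edge vectors| / n!."""
    v0 = vertices[0]
    edges = [[x - y for x, y in zip(v, v0)] for v in vertices[1:]]
    return abs(det(edges)) / factorial(len(edges))

print(simplex_volume([(0, 0), (1, 0), (0, 1)]))  # 0.5 (a right triangle)
```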

Comment #89 September 22nd, 2013 at 8:59 pm

Sorry, I just realized that I never picked a winner for my contest to explain the “point” of this post! OK, I guess Wolfgang #12 wins with “physics envy,” simply because of succinctness.

Look, I hope my comment #72 above makes sufficiently clear that I don’t actually have a “critique” of amplituhedron research, any more than Weird Al Yankovic had a “critique” of Michael Jackson with his “Fat” parody. For that reason, I found Gil Kalai’s (#85-86) long attempt to distill my “critique” and respond to it to be unintentionally funny in parts.

Comment #90 September 22nd, 2013 at 10:18 pm

While these interesting topics are outside my areas of expertise, I found Gil’s remarks #86 and #87 to be informative and entertaining. I was unaware of the connection to partially ordered faces.

Comment #91 September 22nd, 2013 at 11:13 pm

#60, Raoul Ohio, I just bought the book. I thank you for the reference to it.

Now I am off to discover something profound. Maybe.

Comment #92 September 23rd, 2013 at 2:47 am

Scott, re. #81, I hope that theoretical paper with Arkhipov that you mentioned was published in PRL or some other supposedly high-quality journal, so that it obtained a stamp of supposed high quality from anonymous supposed experts in the field, before issuing the press release… Assuming it was, then your actions in that case all sound fine to me FWIW. I encourage you to hype the Science paper as much as you want!

On the topic of hype in general, going back to your #67: I’ve no experience with Science or Nature but quite a bit with PRL, and based on that it seems, at least in my own field, that hype is actually frowned on. My own pathetic little attempts at hype in my PRL papers usually don’t survive in the published version, removed at the insistence of referees. Maybe it’s different in other fields though. (I confess to being too lazy to do much reading outside my own field.)

On the other hand, these days the PRL acceptance letter contains an exhortation to tell your university to issue a press release to publicize your glorious PRL publication and hype it to the heavens. So it seems the journal editors and publishers do want lots of hype. But the real power and control over this is held by the referees, and their approach will typically reflect the standards and culture of the field they work in. The culture in my own field is boringly low key and conservative, which I suspect is because there are lots of Continental Europeans in this field, so we aren’t able to get away with much hyping in our PRL publications or elsewhere.

Comment #93 September 23rd, 2013 at 5:39 am

Scott #89: I’m just surprised, that’s all. In any case, I would be extremely excited, since physics is now intersecting with linear equations in a real and definable way. Nima’s lecture now tells us that there is a link between convex polytopes and locality and unitarity. http://susy2013.ictp.it/video/05_Friday/2013_08_30_Arkani-Hamed_4-3.html#.

I understand the parody here, but now we have the ability to connect most of computer science to physics. Absolutely mind numbing if you consider the early goals of trying to make the universe computable.

Comment #94 September 23rd, 2013 at 7:40 am

Hal #93:

now we have the ability to connect most of computer science to physics. Absolutely mind numbing…

OK, but see, that sort of rushing to judge the significance of a theoretical contribution, before anyone really knows where it’s going to lead, was precisely the sort of thing that inspired my parody. (Incidentally, I don’t think such rushing does the authors any service, nor does keeping it in check do them a disservice.)

Lots of people have been trying to understand the connections between physics and computation for decades. Yes, it’s possible that this line of work will deepen the connections even further … or not. Either way, the normal process of science will sort it out, if it’s just given sufficient time (meaning years, not days).

Comment #95 September 23rd, 2013 at 7:54 am

amused #92: I certainly didn’t mean to suggest that the peer review process is useless. It’s not perfect, but it does prevent lots of really bad papers from appearing in journals, and it does typically improve the papers that do appear, especially by forcing authors to remove overhyped claims.

Having said that, we now live in extremely interesting times—where countless overhyped claims can make it to Slashdot, New Scientist, etc. etc., and thereby permeate nerd consciousness, before the peer review process “even has time to get its pants on” (i.e., when the paper is still an arXiv preprint, or sometimes even before that). I’ll be curious to see whether the norms of science can evolve to deal with this new situation.

As for my paper with Arkhipov, it appeared in STOC’2011, and the journal version appeared in Theory of Computing (a great volunteer-run open-access journal that I try to support whenever possible). It was actually suggested to us that we submit to Science or Nature, but we decided against: we figured that, first and foremost, this was a theoretical computer science paper. In the past, I’ve sometimes been slightly annoyed when people have tried to market TCS results directly to the wider scientific world, bypassing peer review by their TCS colleagues. Well, in this case we chose not to do the same … and with that, I guess “the defense rests its case.”

Comment #96 September 23rd, 2013 at 8:24 am

I took a look at it yesterday night. I am not an expert in AG. However, it looks like elementary enumerative geometry here. Someone with motivation should be able to grind through the pure math part. By comparison, I bought Matilde Marcolli’s Feynman Motives and I got nowhere. Of course there is a difference in flow and style: one is by mathematicians and looks into the mathematics, and the other is for an application. The paper is well written, and pages 48-49 are interesting. I am guessing that, even if it is not of fundamental use in TCS, there is plenty of use for other fields. Associating Grassmannians to graphs is new to me. It could help reorganize results in TCS.
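(Gil Kalai’s description of the positive Grassmannian in his comment #86 is concrete enough to compute with: a point of G(n,k), given as a k-by-n matrix, lies in the positive part exactly when all of its maximal k-by-k minors, taken over increasing column subsets, are strictly positive. A minimal sketch, plain Python with a cofactor-expansion determinant that is adequate for small k; the function names are my own:)

```python
from itertools import combinations

def det(M):
    """Cofactor-expansion determinant (adequate for small minors)."""
    if len(M) == 1:
        return M[0][0]
    return sum((-1) ** j * M[0][j] * det([row[:j] + row[j + 1:] for row in M[1:]])
               for j in range(len(M)))

def in_positive_grassmannian(M):
    """True iff every maximal (k-by-k) minor of the k-by-n matrix M,
    over increasing column subsets, is strictly positive."""
    k, n = len(M), len(M[0])
    return all(
        det([[M[i][j] for j in cols] for i in range(k)]) > 0
        for cols in combinations(range(n), k)
    )

print(in_positive_grassmannian([[1, 0, -1], [0, 1, 1]]))  # True
print(in_positive_grassmannian([[1, 0, 1], [0, 1, 1]]))   # False
```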

Comment #97 September 23rd, 2013 at 9:38 am

“In the past, I’ve sometimes been slightly annoyed when people have tried to market TCS results directly to the wider scientific world, bypassing peer review by their TCS colleagues.”

Even if you submit to Nature / Science, a TCS paper will get TCS reviewers. No bypassing. I think.

Comment #98 September 23rd, 2013 at 10:41 am

Scott #82. A natural computational complexity question is whether we can compute the amplitude for 2 –> N gluons in time subexponential in N using the amplituhedron. I would be quite surprised if we can, but I don’t know enough to say. Maybe experts can chime in.

Aside from the issue of hype, the amplituhedron seems to be a genuinely exciting conceptual development. We’ll see where it leads.

Comment #99 September 23rd, 2013 at 10:43 am

[…] In fact, nothing special, just my comment deleted from there: […]

Comment #100 September 23rd, 2013 at 1:40 pm

Not sure about those distinctions between physics and logic (comp science)…

I think that eventually, for us, all those disciplines are going to hit the same rock bottom, defined as fundamental limitations to our knowledge.

Comment #101 September 23rd, 2013 at 2:00 pm

I also object to the observation that somehow physics is more beautiful because supposedly independent of the human mind, while complexity theory is a very human/engineering messy endeavor.

If you take the point of view that the most fundamental and irreducible “real” object in the whole universe is our own consciousness (I think therefore I am), and that the nature of consciousness is intimately related with the nature of computation, then complexity theory is a very profound and fundamental endeavor.

Comment #102 September 23rd, 2013 at 3:32 pm

Comment #98

@John Preskill

Think it may give something like Kasteleyn’s planar permanent?

Comment #103 September 23rd, 2013 at 4:06 pm

[…] trolls in the comments on Aaronson’s latest post (in which he mocked the hype I was talking about a few days ago), acting like a prima donna in […]

Comment #104 September 23rd, 2013 at 5:23 pm

Scott #72: “Most of all, I like that, according to this paper, questions about scattering amplitudes are essentially reduced to combinatorics—which is what I tend to hope all mathematics should eventually reduce to,”

Hmm, this is very flattering, but being a combinatorialist I don’t have any expectations, hopes, or desire that all mathematics will reduce to combinatorics (or vice versa). I wonder what could be the source of this hope.

BTW, Scott, my comments #85,#86 were mainly meant to express my own views, and to raise points that I regard as important, and not to distill or guess what you are thinking.

Certainly your comment #72 is close to my own view on the new paper. I was glad to read this comment (but read it only after writing my own), and, of course, I was glad to read your enthusiasm about combinatorics.

You may find it useful to learn why your post and remarks can be read as being critical towards the new endeavor. (Your implied (off-hand) critique of ABCGPT was just one item in my comments.) Indeed, until #72 it looked as if you didn’t care much about the new paper, except as a vehicle to express the idea of re-branding BQP as the study of the unitaryhedron or something like that. Not saying anything informative or positive about the paper, the parody itself about branding old stuff with new terminology and hyping it, and your comment #48 do express a critique of the importance of this line of research, even if you did not intend it, which I am glad to learn is the case.

What is nice for me about your post is that it was a place to learn about the new paper and, through it, to learn or be reminded of some very beautiful combinatorics, e.g., the older papers by Postnikov [37] and Lauren Williams [101].

The issue of what is “fundamental” in mathematics and physics, which was raised in the discussion, is also of interest. (And I did not relate to it, besides regarding the NP≠P problem.) It seems that for many physicists the term “fundamental” has a concrete, almost formal sense, and it will be interesting to learn more about it. So we (or others) may come back to it at some other time or in some other place.

Comment #105 September 23rd, 2013 at 6:58 pm

It was Scott Aaronson who gave to me (and many students/colleagues) the worthwhile advice “Consider posting your math/physics questions on forums like MathOverflow, TCS StackExchange, etc.”

It was Gil Kalai, by his many thoughtful observations and questions sustained over many years, who supplied the concrete inspiration for the just-posted MathOverflow question What is the “tangle” at the heart of quantum simulation?

Most importantly of all, it was Nima Arkani-Hamed, Jacob L. Bourjaily, Freddy Cachazo, Alexander B. Goncharov, Alexander Postnikov, and Jaroslav Trnka (naming all the Amplituhedron authors, per Gil Kalai’s excellent suggestion) who have helped the diverse community of quantum researchers to appreciate that geometric emergence phenomena in quantum dynamics can be plenty of fun *and* have great practical utility.

Thank you, Gil and Nima and Jacob and Freddy and Alexander^2 and Jaroslav and Scott! Oh yeah, and John (Preskill) too!

Comment #106 September 24th, 2013 at 12:00 am

John Preskill #98 clarified what, in my mind, the connection to computational complexity is. My understanding was that a quantum computer can be expected to simulate QFT efficiently, but the way the amplituhedron has been presented makes it look as if this should also be possible with pedestrian classical resources.

Comment #107 September 24th, 2013 at 12:23 am

Dear John #98,

The number of gluons N isn’t the only parameter. The amplitude contribution depends on the number of loops L and, more specifically for this enterprise, on the number of “helicity flips” K.

The N^K MHV amplitudes – next-to-next-to-next-to (and so on, K times) maximally-helicity-violating amplitudes – are much easier to compute for low K, where the space in which the amplituhedron is effectively embedded is low-dimensional.

When K becomes of order N/2 and large, things are hard, as the amplituhedron lives in a high-dimensional space and is bounded by planes given by inequalities involving minors – subdeterminants. However, subdeterminants can be computed in polynomial time.

I still tend to guess that it doesn’t help with the complexity for generic large N, K, because one generically may need something like a triangulation, and the triangulation of a high-dimensional polytope of this kind may need an exponentially large number of pieces. But maybe not. And there may be non-triangulation-based calculations.

Best wishes

Lubos
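(Lubos’s parenthetical about minors is worth making concrete: any k-by-k subdeterminant can be evaluated by Gaussian elimination in O(k³) arithmetic operations, so testing a single bounding inequality is cheap; it is the potentially exponential number of triangulation pieces that threatens the overall complexity. A minimal sketch with exact rational arithmetic; the function name is mine:)

```python
from fractions import Fraction

def det_gauss(M):
    """Determinant in O(n^3) arithmetic operations via Gaussian elimination,
    using exact rationals to avoid floating-point trouble."""
    A = [[Fraction(x) for x in row] for row in M]
    n = len(A)
    sign = 1
    for c in range(n):
        pivot = next((r for r in range(c, n) if A[r][c] != 0), None)
        if pivot is None:
            return Fraction(0)        # singular: zero determinant
        if pivot != c:
            A[c], A[pivot] = A[pivot], A[c]
            sign = -sign              # a row swap flips the sign
        for r in range(c + 1, n):
            f = A[r][c] / A[c][c]
            for j in range(c, n):
                A[r][j] -= f * A[c][j]
    result = Fraction(sign)
    for i in range(n):
        result *= A[i][i]             # product of pivots
    return result

print(det_gauss([[2, 1], [1, 3]]))  # 5
```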

Comment #108 September 24th, 2013 at 5:49 am

Am I right in thinking that because unitarity is thrown away here (or not necessary at a fundamental level), virtual particles aren’t needed to make everything sum to 1?

Apologies for my dumb question. I’m not a physicist; just interested in science.

Comment #109 September 24th, 2013 at 7:09 am

Scott #94 fair enough, but I think this sort of approach is more analogous to the situation of the discovery of FFT. Here each boundary is viewed as the discrete log of some mass, so the canonical form appears to give a unique identity to each interaction mode. The reduction of the Feynman expansion is not apparently some sort of voodoo, but a simple reduction based on group theory. I might be wrong and I am still reading this, but the speed up should be applicable outside of the specific example used.

Comment #110 September 24th, 2013 at 7:42 am

Gil Kalai #104: I understand, and I’m sorry if the way I wrote my original post annoyed or offended people. (I should probably just include that as a standard disclaimer on every post I write…) I think my position is pretty clear:

(1) Not only do I have no problem with this line of research itself, but it seems quite interesting and exciting to me … even though (like everyone else) I don’t really know where it’s going to lead.

(2) On the other hand, I do have a problem with the process by which new results in theoretical physics tend to get “exponentially overhyped” these days, via a sort of Chinese-whispers game. First, the authors might overstate the importance/generality of what they did (but that’s forgivable, given how excited they are about their brand-new discovery). Then the first round of popular articles, though trying to be semi-responsible, overstates things even further. And then, by the time it makes it to Slashdot, New Scientist, etc. (i.e., 2 days later), it’s become the biggest thing since Einstein.

I hope you agree that this “exponential overhyping” is not how the critical evaluation of scientific ideas is supposed to work, and that (as I said in #94) it doesn’t do the authors themselves any favors. I wouldn’t want it to happen to me, and have tried several times to prevent it. (For example, I told science magazines NOT to write about my Ghost in the Quantum Turing Machine essay. The essay itself was already meant to be accessible to anyone who cares about these issues.)

Comment #111 September 24th, 2013 at 7:50 am

I think the mexahexaflexagon is the culmination of human endeavor:

http://www.youtube.com/watch?v=GTwrVAbV56o

Comment #112 September 24th, 2013 at 7:53 am

” (For example, I told science magazines NOT to write about my Ghost in the Quantum Turing Machine essay. The essay itself was already meant to be accessible to anyone who cares about these issues.)”

There may be a point you are missing about science magazines writing such things. They aren’t writing just to distill things down for laypeople (or people in other fields). They are also writing so such people can know that there’s something interesting to look at. If there’s a field that I don’t know much about and there’s something that looks sufficiently interesting in a popular description, then I’ll go and look at it (if I have time). There’s a lot of stuff out there, and in that context, having magazines write about what you’ve done helps make more people actually go and read the original.

Comment #113 September 24th, 2013 at 8:16 am

Scott #110: “I do have a problem with the process by which new results in theoretical physics tend to get ‘exponentially overhyped’ these days…”

I agree with Scott that theoretical physics results are often over-hyped, but then again, a quantum computing background is hardly a good credential to complain from.

In terms of hype, QC is one of the undisputed leaders in the whole game.

Comment #114 September 24th, 2013 at 8:46 am

As a layperson, what about “digital physics”?

Is that overhyped rubbish or a sign of a true convergence of physics and CS?

Comment #115 September 24th, 2013 at 8:50 am

Physics hype goes hand in hand with government budgets and the need for institutions to compete for funding, obviously.

Comment #116 September 24th, 2013 at 9:29 am

I think what Scott is trying to imply is: though this is important work, it is not academically very relevant, because it is only applicable to a restrictive class of theories… I think he is not satisfied with the work. Personally, I am happy we are doing away with locality, as entanglement makes more sense now. And if we get rid of unitarity, what will happen to our unitary transformations? Do they still hold good?

Comment #117 September 24th, 2013 at 9:40 am

#115

In the end what’s needed are professionals who can capture the imagination of the public, so kids get inspired to go into science/engineering/math.

Feynman was so great because he could clearly show the beauty of the scientific method beyond the current theories/dogmas of the day.

Comment #118 September 24th, 2013 at 10:51 am

Rahul #113:

then again a Quantum Computing background is hardly a good credential to complain from.

In terms of hype, QC is one of the undisputed leaders in whole game.

Dude, I’ve spent much of my time for the last decade trying to fight QC hype—just look at the tagline of this blog! I’m on the phone with yet another journalist, pouring cold water on something, pretty much every week. So please retract your implied accusation of hypocrisy.

Admittedly, it’s questionable how much difference I’ve made—my adviser, Umesh Vazirani, once compared trying to stop the “quantum computing = unlimited exponential parallelism = what D-Wave has already built” popular-science articles to throwing oneself in front of a truck—but I see it as my professional obligation to try. Likewise, if I were a string theorist, I’d see it as my professional obligation to try to fight popular articles, university press releases, etc. etc. that misrepresent the current state of string theory.

Comment #119 September 24th, 2013 at 11:06 am

Scott #118:

The accusation was against the field, not the person. Your caution and restraint is hardly typical for the field.

Your admission that “it’s questionable how much difference I’ve made” bolsters my point.

Perhaps we are on the same page: over-hype is bad. Among QC and theoretical physics (or at least parts of them), it is akin to the pot calling the kettle black.

Comment #120 September 24th, 2013 at 11:18 am

Fred #114:

As a layperson, what about “digital physics”?

Is that overhyped rubbish or a sign of a true convergence of physics and CS?

Different people mean very different things by “digital physics.”

(1) Many people mean: trying to model the actual physical universe as a classical cellular automaton, either pretending that quantum mechanics doesn’t exist, or imagining that QM can be derived from underlying classical laws. In my opinion, all such ideas can be thrown immediately to the trash. They don’t work, we know exactly why they don’t work (Bell, etc.), and at some point, explaining the un-get-riddability of quantum mechanics over and over to people who don’t want to accept it becomes as interesting as arguing with flat-earthers.

(2) Other people mean: studying classical, digital “laws of physics” (e.g., Conway’s Game of Life and other CAs) because of their intrinsic mathematical interest, or because of the broader insights we can gain that way, while admitting from the outset that we’re leaving out the quantum aspect of the real world. I have zero objection to this, and in fact I often find it quite interesting.
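(The CAs in sense (2) are also trivially easy to experiment with, which is part of their charm. A minimal sketch of one Game of Life step, with the live cells stored as a sparse set of coordinates; the function name is mine:)

```python
from collections import Counter

def life_step(live):
    """One step of Conway's Game of Life; `live` is a set of (x, y) cells."""
    # Count, for every cell, how many live neighbors it has.
    neighbor_counts = Counter(
        (x + dx, y + dy)
        for (x, y) in live
        for dx in (-1, 0, 1) for dy in (-1, 0, 1)
        if (dx, dy) != (0, 0)
    )
    # Birth on exactly 3 neighbors; survival on 2 or 3.
    return {cell for cell, n in neighbor_counts.items()
            if n == 3 or (n == 2 and cell in live)}

# A "blinker" oscillates with period 2:
blinker = {(0, 0), (1, 0), (2, 0)}
print(sorted(life_step(blinker)))  # [(1, -1), (1, 0), (1, 1)]
```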

(3) Other people mean: asserting that “the universe is literally a computer (maybe a quantum computer, maybe some other kind), which computes its own time evolution.” In my view, that perspective is not

wrong; it’s “merely” completely vacuous and devoid of scientific content! To see that, ask yourself what the universe could possibly do, such that itwouldn’tbe understandable as a computational process of some kind. I don’t think you’ll find an answer.(4) Still other people mean: using a computational perspective to gain insight into

particular aspectsof the physical world. For example, if the universe is a computer, how many bits can it store? How many operations can it perform? Which problems can it solve, and which problems can it not solve? What computational resources are necessary and sufficient to simulate the universe? I (and other quantum information researchers) have in some sense devoted our entire careers to these sorts of questions. So, yes, I think they’re pretty interesting.Comment #121 September 24th, 2013 at 11:46 am

There are some interesting aspects of this discussion related to computation, computational speed-up, and computational complexity that I hope we come back to. Regarding critique and hype: I did not regard Scott's implied critique as offensive or insulting, but just as off-hand, and not very detailed or informed. Certainly the concerns regarding the huge hopes from the new developments are legitimate, reasonable, and should be discussed.

My own comments contained plenty of friendly/gentle criticisms of some of Scott's positions. (Had I read #72 beforehand I'd probably have delayed some of them to another occasion.) In particular, as I mentioned in the hype item, often when people claim that they fight hype they actually fight directions/opinions that they disagree with and dislike. This also refers to Scott's excessive fighting with D-Wave, or with any claim/hope that quantum computers may eventually solve NP-complete problems.

I largely regard the hype as a side issue. If you really think that BosonSampling is a promising path to demonstrating computational speed-up via quantum mechanics, avoiding quantum fault-tolerance, then this may lead to one of the most spectacular experimental achievements of quantum physics in a long time, and it may be your duty to promote this idea as much as you can. The question itself is not a hype question but a scientific question. For several years I regarded BosonSampling as quite exciting theoretically but did not think that the experimental speed-up direction was interesting. Recently (partially triggered by useful "local hype" over this blog), I thought about it again and changed my mind.

Again, the crucial questions regarding D-Wave, and BosonSampling, and quantum speed-up in general, and having unitarity and locality (and possibly QM itself) emerge from a grand larger theory, etc., are the scientific questions, not the hype issue.

Comment #122 September 24th, 2013 at 11:49 am

Rahul #119: Sorry, but I've found most of my colleagues to have outstanding scientific integrity. In that case, you might ask, how do so many crummy popular articles about QC get published?

In #110, I tried to sketch a possible dynamic: in any given field, some people are a little more "hypey" than the rest. Sometimes, as you'd expect, those people are marginal researchers, but other times they happen to be superb researchers. Either way, all their colleagues know about their "hypey" tendencies and automatically correct for them ("oh, there he goes again…"), so everything seems fine.

The press, however, seeks out the hypiest people, and amplifies their pet speculations, simplifications, and exaggerations even further, while leaving out all the context that their colleagues would have. (Much of this is not even deliberate; it's purely the result of inadequate homework.) Then, by the time you get to Slashdot, etc., the hype and misunderstanding has been launched into outer space.

As for the researchers themselves, most of them don't even pay any attention to the end result. If they do, they figure that trying to stop it would be like throwing themselves in front of a truck, and it's pointless to even try. Besides, there are all those science bloggers, whose entire reason for existence is to throw themselves in front of such trucks!

I've seen this dynamic play out over and over in QC, and I'd guess that a similar dynamic is at work in string theory. If so, then one inference we could draw is that the often-cringeworthy popular coverage is 100% compatible with most string theorists being perfectly intellectually honest—something that's also consistent with my observations.

Comment #123 September 24th, 2013 at 12:09 pm

Scott: I am confused by this development. Suppose unitarity no longer holds good: what are the implications? It would mean a superposed state can have probability amplitudes summing to less than one, or maybe greater than one (I assume that is not the case; it would be very weird to start with one electron and end up with two…). So if the sum is less than one, will our unitary transformations produce the desired results? Say, will our quantum algorithms work as expected?

Comment #124 September 24th, 2013 at 12:13 pm

I’ve been trying for the last 2 years or so to catch up on the scattering amplitudes material, and in particular spent last summer working through ABCGPT with a group of mathematicians and physicists. I haven’t caught up to the specific ideas in the amplituhedron work yet, and there is a lot I don’t understand about the earlier paper, but I am willing to speculate on complexity issues on the understanding that this is educated speculation by an outsider. If nothing else, writing this out will certainly benefit me!

I strongly recommend Elvang and Huang http://arxiv.org/abs/1308.1697 for anyone who is trying to seriously get into this field; it covers material that is logically between a first course in QFT and the ABCGPT paper.

As Lubos says, there are three parameters here, $latex n$, $latex k$ and $latex \ell$. Here $latex n$ is the number of particles; $latex k$, which is between $latex 0$ and $latex n$, is the helicity, and $latex \ell$ is the loop level. Physicists also use the jargon $latex N^{\hat{k}} MHV$; here $latex \hat{k}$ means $latex k-2$.

What we are trying to compute is a certain function of $latex 4n$ real variables; the variables encode the momenta of our particles. These functions are $latex 4 \ell$-fold integrals of algebraic functions of $latex 4n+4 \ell$ real variables. When $latex \ell=0$, ABCGPT claim that all the Galois conjugates of the algebraic function occur symmetrically, so we are actually dealing with rational functions; in general, they say, this is not so. See Section 16.4, where they discuss a $latex (k,n,\ell) = (5,10,2)$ case that gives rise to an elliptic integral. I must admit that I do not understand the origin of this Galois symmetry, nor why it can disappear for positive $latex \ell$.

As all of this suggests, the case $latex \ell=0$ is particularly nice, and is called the tree-level case. I understand this case much better than the general case.

Note that ABCGPT is dedicated to the question of describing the integrand; there is almost no discussion in that paper of finding the correct contour, let alone evaluating the integral. (Sometimes they seem to be saying that the right contour is the positive real points of the Grassmannian, but then the integral is divergent and they don’t discuss how to regularize it.) It is my understanding that the amplituhedron work does address that issue.

Note that the Feynman diagram approach also involves a sum of integrals of algebraic functions. The difference between the methods lies in how many integrals they lead to, and how complicated those integrals are. So, indeed, this seems like a subject where complexity theory would help to make precise statements.

So, the first question for complexity mavens is: what language has been developed for working with integrals of algebraic functions? I know that polynomials are usually talked about in terms of $latex VP$ and $latex VNP$, and it seems like it wouldn’t be hard to add division to those theories. Algebraic functions seem like they would add a bit of subtlety; integrals of algebraic functions seem like they would add a lot of subtlety.

As Lubos says, the most obvious improvements come for fixed $latex k$ and $latex \ell$, with $latex n$ growing. The Feynman description here is exponential in $latex n$. Here are some small cases at tree level:

When $latex k=0$ or $latex 1$, the answer is $latex 0$. I think, but could easily be wrong, that the individual Feynman diagrams are not zero for $latex k=1$, so this is an exponential-to-constant improvement in that case. (Of course, the vanishing at $latex k=1$ was known before this; Wikipedia suggests that it might be due to Parke and Taylor.)

When $latex k=2$, the answer is given by the Parke-Taylor formula http://en.wikipedia.org/wiki/MHV_Amplitudes . (Remember that $latex N^{\hat{k}} MHV$ means $latex k = \hat{k}+2$, so $latex MHV$ means $latex k=2$.) This is linear in $latex n$, and in a very simple way: the denominator is $latex \prod_i \langle i \ i+1 \rangle$, where each bracket $latex \langle i \ i+1 \rangle$ is computable in constant time, and the numerator is also computable in constant time. So, this problem can be computed in parallel constant time, plus a cyclically symmetric “putting it together” step.
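To make the “parallel constant time” claim concrete, here is a minimal sketch (my own illustration, not code from this thread; the function names are my own) of the Parke-Taylor denominator as a cyclic product of constant-time spinor brackets:

```python
# Sketch (illustrative only): the Parke-Taylor denominator is a cyclic
# product of n angle brackets <i i+1>, each computable in O(1) from
# two-component spinors -- hence the whole denominator is linear in n.
from typing import List, Tuple

Spinor = Tuple[complex, complex]

def angle(a: Spinor, b: Spinor) -> complex:
    """Antisymmetric spinor bracket <a b> = a1*b2 - a2*b1."""
    return a[0] * b[1] - a[1] * b[0]

def parke_taylor_denominator(spinors: List[Spinor]) -> complex:
    """Cyclic product <1 2><2 3>...<n 1>: n constant-time factors."""
    n = len(spinors)
    prod = 1 + 0j
    for i in range(n):
        prod *= angle(spinors[i], spinors[(i + 1) % n])
    return prod
```

Each factor depends on only two of the inputs, so the factors can be computed in parallel; only the final product is a global “putting it together” step.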

Question two for complexity mavens: Is there a good theory of things that are parallel constant time plus a very simple “putting it together” step?

Something similar happens at $latex k=3$: the formula looks like $latex \sum_{1 \leq i,j \leq n} A(1,i,i+1,j,j+1)$. I’m not sure who gets credit for this; Hodges http://arxiv.org/abs/0905.1473 states an equivalent formula as equation (33) without citing it, but he doesn’t explicitly say it is his either.

Sections 11 and 12 seem to be saying the following (WARNING, my paraphrase). Fix $latex k$ and take $latex \ell=0$. There should be finitely many rational functions $latex \phi_a(x_1, \ldots, x_r)$, each involving $latex \leq 5k$ variables, such that the amplitude can be written as

$latex \sum_a \sum_{1 \leq i_1, \ldots, i_r \leq n} c(i_1, \ldots, i_r) \phi_a(x_{i_1}, \ldots, x_{i_r})$

times a very simple Parke-Taylor-like product. The $latex c(i_1, \ldots, i_r)$ should be purely combinatorial integers.

In the $latex k=3$ example above, the coefficients $latex c$ are really simple: they are $latex 1$ if $latex (i_1, i_2, i_3, i_4, i_5)$ is of the form $latex (i,i+1,j,j+1,n)$ and zero otherwise. It doesn’t seem obvious to me that the answer will be so simple for larger $latex k$, but my guess is that it will be, and I am thinking about how to prove it.

Question for complexity mavens: How should I even state something like “$latex c$ is extremely simple, just testing whether $latex (i_1, i_2, i_3, i_4, i_5)$ is of the form $latex (1,i,i+1,j,j+1)$”? Saying that it is polynomial in the bit length of the indices $latex i_j$ is true but seems to fall dramatically short of the truth.

Question for people who know physics history: Before the Grassmannian theory, was it clear that there would be formulas like $latex \sum_{1 \leq i,j \leq n} A(1,i,i+1,j,j+1)$ at tree level for $latex k>3$? I want to make sure that, if I ever wind up giving a talk about this, I know where the credit goes.
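For what it’s worth, the “extremely simple” coefficient test can be written as a constant-time predicate. This is my own sketch (not from the papers) of checking the pattern (1, i, i+1, j, j+1), with successors taken cyclically mod n:

```python
# Sketch (illustrative only): a constant-time test of whether a 5-tuple
# of indices has the form (1, i, i+1, j, j+1), with i+1 and j+1 taken
# cyclically mod n. This is far simpler than "polynomial in the bit
# length of the indices" suggests.
def c(indices: tuple, n: int) -> int:
    i1, i2, i3, i4, i5 = indices
    ok = (i1 == 1
          and i3 == i2 % n + 1   # i3 is the cyclic successor of i2
          and i5 == i4 % n + 1)  # i5 is the cyclic successor of i4
    return 1 if ok else 0
```

The predicate touches each index once and does O(1) arithmetic, which is one concrete way to say the coefficients are “purely combinatorial.”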

Hope this helps!

Comment #125 September 24th, 2013 at 12:20 pm

Scott #122:

So also, most theoretical physicists probably have outstanding scientific integrity too (as you concede towards the end of your comment). That’s one of my points: the hype situation in physics that you are annoyed by is really not that different from that in QC.

As to your point (i.e., “how do so many crummy popular articles about QC get published”), I think you are partly right but not entirely.

I’ve spent some time in the academic enterprise too (though not in QC) and I feel you are being either too naive or too charitable if you attribute all (or most) of the hype to mere inadequate homework by journalists and intellectually honest scientists who just do not care. In reality a portion (I feel a big portion) of the hype is not incidental or accidental but a carefully orchestrated strategy by various people (damn, this is starting to sound too much like a wacky conspiracy theory!)

The journalists hype to sensationalize their articles: Everyone wants to read about a new mega-computer that will crack encrypted files & cure cancer. No one cares to read about some incremental development in an esoteric, “useless” theoretical field.

The academics: Everyone needs more funds, and not everyone is as wary of front-page publicity as Scott might be. And beyond a point you get so used to hearing your own crap (ok, hype) that you actually start believing it. Perhaps deluding yourself to believe in your own hype is essential in being convincing to others about it.

Again, I’m not saying this accurately portrays everyone, or even a majority. But this sad picture is (IMHO) a non-trivial part of academic reality and needs to be acknowledged.

Comment #126 September 24th, 2013 at 12:27 pm

Yes. And that is why a natural question is simply “Why has progress in probing high-order macroscopic photon interference been so glacially slow in the fifty-six years since the seminal Hanbury Brown and Twiss observations (of 1957)?” What are the obstruction(s) that have, for so many decades, prevented quantum experimentalists from scaling their experiments to probe 10-, 20-, 50-photon coherences? Why have 50+ years of high-effort quantum research advanced us only from two photons to a handful (at most)?

The 2002 and 2004 QuIST Roadmaps were a serious scientific attempt to answer these questions concretely. Nowadays no one puts much credence in the QuIST roadmaps, yet neither have any concrete roadmaps replaced them … save perhaps for the nascent quantum roadmap of Arkani-Hamed et al.?

That is why it is both striking and regrettable that a researcher whose self-proclaimed (and worthy!) scientific objective is to “fight QC hype” would coauthor a high-visibility QC study that conspicuously declines to address these urgent open questions; instead it is content to begin with the hypeful (but unexamined) assertion of quantum faith that “Scaling this [experiment] to large numbers of photons should be a much simpler task than building a universal quantum computer” and to close with a similarly unexamined assertion that “Our experimental results are highly promising in regard to the robustness of boson sampling.”

Given the evident difficulty of advancing these experiments from two photons to three, it is entirely reasonable to ask: when the “hype” is removed from statements like these, what substantive quantum-related math-and-physics-and-technology conclusions are left to us? By what scalable path can we foreseeably advance to the promised land of 10-, 20-, 50-photon macroscopic quantum coherences? If there are insurmountable obstructions to the practical demonstration of macroscopic quantum-coherent technologies, what is the nature of those obstructions?

Obviously no single article is obligated to tackle these tough questions, but when every article declines to tackle them in depth, that’s a train-wreck for quantum science overall. That is why (for all its flaws) yet another QuIST-style roadmap would (as it seems to me) be beneficial to the QC/QIT community.

Conclusion: One main justification for the principle that “quantum hype is harmful” is that the elucidation of insurmountable obstructions to scalable quantum computing, BosonSampling (etc.) might plausibly entail advances in math-and-physics-and-technology that are of comparable (or greater) long-term fundamental and practical value to any quantum computation technology that has ever been envisioned. That’s why the ubiquity of scientific hype should not dissuade us from asking the tough yet time-honored scientific question “What if the hype’s been wrong? Maybe that’s good news?”

Comment #127 September 24th, 2013 at 12:58 pm

John Sidles #126: I didn’t write either of the sentences you quote. I’m one of seven authors on a paper that contains them, which only means that I endorse the sentences as “not sufficiently far from my views as to have caused me to remove my name.” (Fuller story in comment #81.)

I.e., yes, I think it’s entirely reasonable to hope that BosonSampling can be scaled to larger numbers of photons, especially because the problem might well remain classically intractable even in the presence of a large number of photon losses (we don’t know yet), and because optical multiplexing seems to hold promise for improving the reliability of single-photon generation. But I also think that these questions have to be regarded as open. As usual, if we want to find out, then the obvious thing to do is to push forward on the main theoretical and experimental open problems that we know about, and see what happens.

More broadly, if you want to know my views, the thing to do would be to read the many, many words about BosonSampling that I’ve actually written, rather than just agreed to!

Comment #128 September 24th, 2013 at 1:02 pm

John: Though your points are valid, it may only be a question of time and luck before we overcome these practical difficulties. But if the rules of quantum physics change, then it implies we need to revisit all our previous work.

Comment #129 September 24th, 2013 at 1:04 pm

upen #123: Personally, I wouldn’t even know how to begin to make sense of a theory where the probabilities don’t add up to 1. And it doesn’t seem to me that the existence of a representation where unitarity isn’t manifest (but is nevertheless there) should give us the slightest reason to think that unitarity is only approximate or can be done away with. And I’m reassured that even Lubos agrees with me here. If so, that would reinforce the view that the really interesting/novel aspects of the amplituhedron work have to do with simplifying calculations (and maybe even computational complexity), not with calling into question the unitarity of QM.

Comment #130 September 24th, 2013 at 1:19 pm

David Speyer #124: Thanks so much for your detailed, extremely interesting comment! I think I’ll call it out in the original post, if you don’t mind. Comments like yours make me sorry that my blog doesn’t have LaTeX support.

You ask many excellent questions that I don’t know the answers to (maybe I should read the Elvang-Huang paper).

So let me restrict myself for now to one thing I do know. In complexity theory, we have lots of ways to say that something is “even easier than P” or “super-easy to compute in parallel.” For example, one can say that a problem is in NC, or TC^{0}, or AC^{0}. On the other hand, if you get much further down even than that, it’s probably best just to drop the language of complexity theory, and state the mathematical form that you’ve reduced things to! (E.g., “it’s just a sum of n terms,” or “it’s just the constant 0 function.”)

Scott: So we need to translate all our quantum algorithms to an amplituhedron-based representation where unitarity is not manifest, and develop a new theoretical framework?

Comment #132 September 24th, 2013 at 2:01 pm

Scott, as you know, I’m a huge fan of your work with Alex Arkhipov. Please let me state plainly that (as it seems to me) *any* quantum experiment that yields in PTIME a data-record that cannot be indistinguishably simulated in PTIME will rank among the most significant experiments of this or any century.

And that is why (as it seems to me and many folks) the Aaronson/Arkhipov concept of BosonSampling already is abundantly deserving of mathematical and scientific respect, entirely independent of whether this class of experiments proves to be scalably feasible/infeasible. So much so, that business-as-usual scientific hype in regard to future feasibility serves mainly to diminish the esteem that the (wonderful!) idea of BosonSampling experiments already enjoys.

Perhaps the cure is as simple as a universal editorial practice of requiring that the keyword “hope” appear in every expression of quantum “hype” as follows:

The result of this modest editorial reform: hype dissipated and hope sustained!

Comment #133 September 24th, 2013 at 2:20 pm

upen #131:

so we need to translate all our quantum algorithms to an amplituhedron-based representation where unitarity is not manifest and develop a new theoretical framework

Just when I was feeling bad about my parody post, your comment reminds me exactly why I wrote it! There’s a strange tendency to jump to conclusions in this subject.

The amplituhedron is a way to calculate certain scattering amplitudes, in certain special QFTs (e.g., N=4 SYM). A quantum circuit acting on qubits could, in general, look extremely different and indeed much more complicated than such a scattering amplitude.

In fact, there are subtle issues in both directions: how to estimate scattering amplitudes using a qubit-based quantum computer, and whether arbitrary quantum computations can be encoded as scattering amplitudes. Jordan, Lee, and Preskill explained how to do the former, but the details are complicated, and very interestingly, they can’t yet handle massless particles or chiral fermions. In the other direction, I don’t know of any results saying that estimating a scattering amplitude is a BQP-hard problem: it would be extremely interesting either to have such a result or to know an obstruction to one.

Now, suppose (super-hypothetically) that we did figure out how to encode arbitrary quantum computations into scattering amplitudes. Even then, before I’d recognize a “need” to reformulate all of quantum computing theory in those terms, it would be nice to know what we’d stand to gain by doing that. For example, could we then simulate all of BQP in classical polynomial time? That seems wildly implausible and way too much to hope for. So then, could we simulate some of BQP in classical polynomial time? If not, would we at least get some new insight into existing quantum algorithms?

Such questions might conceivably have positive answers, and it would be great if they did! My point is just that you don’t get to assume positive answers without an argument.

Comment #134 September 24th, 2013 at 2:27 pm

David Speyer #124: I started reading the Elvang-Huang notes, and they’re phenomenal! The clarity of those notes, divided by the clarity of most things I’ve tried to read about QFT, diverges and needs to be renormalized.

In particular, it seems obvious to me that there’s a wonderful research opportunity for someone to—if nothing else—just go through the results covered by Elvang and Huang (BCFW recursion relations, etc.), and translate them into computational complexity language. (Unless someone has already done it.) Indeed, the one thing I dislike about those notes is that they seem to go to great pains not to say anything about, e.g., the running times of the algorithms that you get from the various recursion relations, even though the subject matter is practically begging for it.

Scott, your response is very reassuring and very practical.

Comment #136 September 24th, 2013 at 8:14 pm

> I don’t know of any results saying that estimating a scattering amplitude is a BQP-hard problem: it would be extremely interesting either to have such a result or to know an obstruction to one.

Do you mean that it could be outside BQP, or that it could be hard to prove that it’s in BQP?

Comment #137 September 24th, 2013 at 8:23 pm

Kuru #136: Neither. In that sentence I was talking about whether the problem is BQP-hard, i.e., whether all of BQP reduces to estimating scattering amplitudes. That’s a different issue from whether estimating scattering amplitudes is in BQP.

Comment #138 September 24th, 2013 at 10:35 pm

Scott #137

Do you agree that “in BQP” means “BQP complete or easier”; “BQP hard” means “BQP complete or harder”; “BQP complete” means “both in BQP and BQP hard”?

You seem to be saying “I’m talking about whether it’s BQP hard, which means whether it’s BQP complete, which is not the same as whether it’s in BQP.” I just don’t understand.

Comment #139 September 25th, 2013 at 2:42 am

David #124: As an ABCGPT maven (just learned this word from your comment), can you give us just the definition of the amplituhedron? (I suppose it is a polyhedron-like object, and there are claims that it is accessible to junior high school students.)

Regarding computational complexity, it seems that many of the issues we are talking about are computational complexity questions with tiny input (perhaps unary input is the appropriate technical notion), so while there are interesting computational complexity issues, it is not clear how computational complexity theory can be made relevant. Even in cases where insights from physics lead to massive simplifications of certain computations, it is not clear if this expresses a computational-complexity speed-up. E.g., mirror symmetry leads to vast simplification of certain computations in enumerative algebraic geometry, but I am not sure if this has a manifestation via computational complexity theory.

Consider, for example, the following computational questions:

1) Compute the Ramsey number R(n,n) [the minimum number N such that every graph on N vertices contains either a clique of size n or an independent set of size n].

2) Compute the number of alternating sign matrices of order n (those are matrices with entries 0, 1, and −1 such that in every row and column the nonzero entries alternate in sign and sum to one).

3) Compute the number of isomorphism types of graphs on n vertices

1) is notoriously hard (Erdős famously claimed that we will never compute R(7,7)), and computational complexity insights certainly support this, but I don’t see how the hardness of computing R(n,n) is related to computational complexity theory. E.g., I don’t see how such a statement would follow from assuming all the standard conjectures of CC.

2) requires exponential computation if you do it by exhaustive search, and a formula conjectured by William Mills, David Robbins, and Howard Rumsey allows an “exponential speed-up.” The formula was proved by Zeilberger, and a simpler proof was given later by Kuperberg. Kuperberg’s proof is related to the computational speed-ups achieved via certain computations in physics for solvable models. So there may be some conceptual understanding of this speed-up in a more general framework, but it is not clear if there is a connection to computational complexity theory.
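To illustrate the kind of speed-up at stake in item 2, here is a short sketch (my own illustration; the function name is mine) of the Mills-Robbins-Rumsey product formula, which counts n×n alternating sign matrices using polynomially many arithmetic operations, versus exhaustive search over 3^(n²) candidate matrices:

```python
# Sketch (illustrative only): A(n) = prod_{k=0}^{n-1} (3k+1)!/(n+k)!,
# the Mills-Robbins-Rumsey formula for the number of n x n alternating
# sign matrices (proved by Zeilberger; shorter proof by Kuperberg).
from math import factorial

def asm_count(n: int) -> int:
    # Accumulate numerator and denominator separately, since the
    # individual factors (3k+1)!/(n+k)! need not be integers.
    num, den = 1, 1
    for k in range(n):
        num *= factorial(3 * k + 1)
        den *= factorial(n + k)
    return num // den
```

The loop performs O(n) big-integer multiplications, so the count is obtained in time polynomial in n (up to the bit length of the factorials), versus the 3^(n²) candidates an exhaustive search would examine.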

3) Polya theory gives a formula that requires exp(sqrt n) computation. So there is some speed-up over the most direct algorithm, but not an exponential speed-up. It is not even clear how to usefully ask whether this is best possible.
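For item 3, the underlying computation is Burnside’s lemma: average 2^(number of edge-orbits) over all n! permutations; Polya theory then improves this by grouping permutations into cycle types (partitions of n, of which there are only exp(O(sqrt n))). A toy sketch of the naive n!-term version (my own illustration, with an assumed function name):

```python
# Sketch (illustrative only): count isomorphism types of graphs on n
# vertices by Burnside's lemma -- average, over all n! permutations,
# 2^(number of orbits of the induced action on unordered vertex pairs).
from itertools import combinations, permutations
from math import factorial

def num_graph_iso_classes(n: int) -> int:
    edges = list(combinations(range(n), 2))
    total = 0
    for perm in permutations(range(n)):
        # How the permutation maps each unordered pair of vertices.
        image = {(a, b): tuple(sorted((perm[a], perm[b]))) for a, b in edges}
        seen, orbits = set(), 0
        for e in edges:
            if e in seen:
                continue
            orbits += 1
            while e not in seen:  # walk out the whole orbit of e
                seen.add(e)
                e = image[e]
        total += 2 ** orbits
    return total // factorial(n)
```

Since 2^(edge-orbits) depends only on the cycle type of the permutation, Polya’s cycle-index formula replaces the sum over n! permutations with a sum over the partitions of n, which is the exp(sqrt n) algorithm mentioned above.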

The type of questions and the type of computational speed-up in our case seem (on the face of it) related to questions of the type above, where the computational task is a function of a very tiny input.

So in a sense many combinatorial identities of exact counting give some sort of computational-complexity speed-up, but it is not clear to me how to relate those to computational complexity theory (though it is worth the effort), and such speed-ups are related to various mathematical theories and techniques.

I can think of two cases (one in hindsight) where computational speed-ups with some physics origin have led to speed-ups in the full sense of computational complexity theory. The first is the polynomial-time computation of determinants, and the second is the proof by Kuperberg that telling that a knot is knotted is in NP (complementing the result of Hass, Lagarias and Pippenger that telling that it is unknotted is in NP).

Comment #140 September 25th, 2013 at 4:53 am

Gil #139: Actually, I think complexity theory is perfectly capable of dealing with situations with “tiny inputs” (i.e., where the input is just n and maybe a few other parameters), and often giving insightful answers in that case. See, for example, this paper by Gottesman and Irani, which successfully adapts the theory of QMA-completeness to the case of translationally-invariant Hamiltonians (where now the only input is the number of qubits n written in binary, and possibly a Hamiltonian description involving a constant number of particles). It’s true that the reductions in the tiny-input case can look even more contrived than in the ordinary case, but I think the questions themselves are perfectly meaningful. Indeed, that’s true for the interesting examples you gave: for example, what is the computational complexity of counting the number of non-isomorphic graphs on n vertices, in terms of n? Can you beat exp(√n), or can’t you? Good question!

Incidentally, Gil #139:

As a ABCGPT maven (just learned this word from your comment)…

One of the amusing things about being married to an Israeli has been needing to teach her so many English words derived from Yiddish.

Comment #142 September 25th, 2013 at 7:43 am

Kuru #138: Look, a problem can be in BQP. It can also be BQP-hard, which means that everything in BQP reduces to it. A problem can also be both in BQP and BQP-hard, in which case it precisely characterizes BQP, and we call it “BQP-complete.” Or it can be neither in BQP nor BQP-hard—as seems to be the case for, say, the NP-complete problems. (Please remember that computational problems are not arranged in a strict linear order from “easier” to “harder”: reducibility only gives rise to a directed acyclic graph!)

Or, of course, a problem can be in BQP without being BQP-hard (as is believed for factoring), or it can be BQP-hard without being in BQP (as is believed for PSPACE-complete problems).

Thus, we see that the two questions, “is this problem in BQP?” and “is this problem BQP-hard?,” are logically independent of each other: all four combinations of answers are possible.

Comment #143 September 25th, 2013 at 9:24 am

@GilKalai

“….I suppose it is a polyhedron-like object and there are claims that it is accessible to junior high school students”

I do not have evidence of junior high students exhibiting such mathematical maturity. It is possible that seniors with mathematical maturity, who have stared at such things for a while, can pick up the ‘blind’ math part. After all, some Intel prize winners (for school kids) have submitted papers on more abstract parts of algebraic geometry. One has even chugged through Deligne’s notion of complex rank: http://arxiv.org/pdf/1006.1381.pdf

Comment #144 September 25th, 2013 at 10:39 am

Scott #143

Ok, so your question was “is this problem BQP-hard [or easier/harder]?,” thus neither “is this problem [in/outside] BQP?” nor “is it hard to prove?”. Thanks for the clarification.

Which leads to why I asked in the first place: if you’re not questioning “is this problem outside BQP?”, is it because it’s proven, or on firm grounds although unproven?

Comment #145 September 25th, 2013 at 11:13 am

Yes. The claim is concretely asserted in Exercise 10.1 (on page 156) of the Elvang-Huang notes (arXiv:1308.1697 [hep-th]) and is echoed in Nima Arkani-Hamed’s on-line SUSY 2013 video lecture The Amplituhedron.

With intended comedy or not, the following paragraph of the Elvang-Huang notes includes a phrase that would be pretty tough for most junior high school students: “the 3-bracket is the contraction of a 3-index Levi-Civita tensor with the three W-vectors.”

A hype-to-hope adjustment would perhaps yield “It is feasible for junior high school students to appreciate and begin learning the algebraic and geometric ideas that provide the foundation of these new ways of appreciating quantum dynamics, and here is a starting example.”

The broader point is perhaps that quantum mechanics was conceived by persons (Dirac especially) who had a well-grounded appreciation of geometric dynamics, and yet this understanding is not communicated by contemporary quantum mechanics texts. This present-day pedagogic reality motivates Arkani-Hamed et al. to comment (arXiv:1212.5605).

These new “burning arrow” ideas (as Dick Lipton and Ken Regan call them) are plenty of fun!

Comment #146 September 25th, 2013 at 11:19 am

Kuru #144: No, I’d definitely *also* like to see a proof that the problem is in BQP! As we learned from Jordan-Lee-Preskill, even that sort of statement can be tricky to prove.

But yes, for this question I have an extremely strong expectation about the answer. Namely, I expect that *either* the problem is in BQP, or else if it isn’t, then that’s simply telling us that our mathematical formalization of what we wanted to calculate didn’t fully capture the physics of the QFT! (For example, because we ignored the nonperturbative part.) On conceptual grounds, it would be absolutely astonishing if there were anything about realistic QFTs that made them hard to simulate using ordinary, qubit-based quantum computers. (But, of course, our belief here ought to be proved rigorously to whatever extent it can—and I liked the JLP paper precisely because it took a first step in that direction.)

Anyway, the question of whether estimating scattering amplitudes is *BQP-hard* has a different character, in that I don’t know any strong grounds to expect one answer or the other! I.e., maybe you can simulate an arbitrary quantum circuit “just” by encoding the circuit into the momenta, etc. of a large collection of localized particles, then scattering the particles off of each other and seeing what happens. Or maybe you can’t.

Presumably the answer depends on exactly what we mean by a “scattering experiment.” I suppose some physicists would regard the entire universe as “just a gigantic scattering experiment”—and if you take that view, then surely the problem is BQP-hard, since the universe (at least according to the currently-known laws of physics) can contain universal quantum computers, among other things.

Note that, even there, proving rigorously that the problem is BQP-hard could be horrendously messy—since basically, you’d need to give a rigorous proof that arbitrary qubit-based quantum circuits can be constructed as solutions to the Standard Model Lagrangian! Now, with apologies to John Sidles and Gil Kalai, no one has ever given me a clear, comprehensible reason why that *shouldn’t* be the case. And it might actually be easier to prove such a theorem than to build a QC in the lab—since after all, you get to assume a perfect vacuum and whatever else is convenient. On the other hand, it’s also possible that *actually building a QC* will be an easier problem than proving, with mathematical rigor, that whatever you just did in the lab was fully consistent with the Standard Model!

Anyway, where I was going was that, if you define “scattering experiment” more narrowly (so that, for example, the particles can’t have arbitrary initial positions and momenta, but all need to be converging on the same point), then I can’t think of an overarching principle that would lead us to expect BQP-hardness. Maybe such things are just BQP-intermediate (i.e., you need a quantum computer to efficiently simulate *them*, but you can’t use them to efficiently simulate a quantum computer). Or maybe such problems are even in P, for amplituhedron-related reasons. So, that’s why I’m so interested in the question of whether scattering amplitudes are BQP-hard—because I’m not even really sure how to think about it yet.

Comment #147 September 25th, 2013 at 12:31 pm

This remark is to the point, and Gil Kalai has to a considerable extent anticipated Scott’s remark with Gil’s PhysicsStackExchange question *Onsager’s regression hypothesis, explained and demonstrated*. The point is that scattering theory, considered in its greatest mathematical, physical, informatic, thermodynamical, and practical generality, effectively becomes transport theory.

Many of the greatest mathematicians, physicists, and engineers (arguably a majority of them) have contributed to the vast literature of transport theory, and yet this mash of literature distills to a liquor of fundamental understanding that is none-too-palatable. One symptom is that the paucity of good field theory textbooks (that Scott deplores) is associated with the paucity of good transport theory textbooks, the former being regarded as a subset of the latter.

Comment #148 September 25th, 2013 at 1:42 pm

Did some reading yesterday: these mathematical structures are an evolutionary step in integrating QFT with scenarios where locality and unitarity break down. The fear of the unknown is gone.

Comment #149 September 25th, 2013 at 2:14 pm

Thank you Scott!

Comment #150 September 25th, 2013 at 6:23 pm

Tip of the hat to John Sidles for the reference to Nima Arkani-Hamed’s on-line SUSY 2013 video lecture The Amplituhedron. That was helpful.

Comment #151 September 25th, 2013 at 10:23 pm

Scott (#146), it looks to me like a reasonable possibility that the class of problems that Jordan-Lee-Preskill proved to be in BQP is actually BQP-complete. Certainly this is a very interesting question, and I don’t have an opinion about it. As you said, a BQP-completeness proof may well even be feasible.

Comment #152 September 26th, 2013 at 5:04 am

Vitruvius (#150), perhaps you and other *Shtetl Optimized* scattering-theory aficionados would enjoy the following free-as-in-freedom Carlton Caves interview:

@article{
  Title   = {Realizing squeezing: An interview with Carlton Caves},
  Note    = {Theme issue: “Squeezed light: from inspiration to application”},
  Journal = {LIGO Magazine},
  Author  = {Michael Landry},
  Year    = {2013},
  Month   = {September},
  Number  = {3},
  Pages   = {16--19},
  Volume  = {1}}

Caves vividly describes the thirty-plus-year march from theoretical conception to practical application, and his interview includes an eminently apropos, eminently quotable remark:

And let me say also that most of the stories in *LIGO Magazine* touch upon issues in quantum scattering theory that are both fundamental and practical; moreover, these articles are outstandingly well-written, and thus worth a look.

Comment #153 September 26th, 2013 at 4:03 pm

The inscription “Publicity is a whore” should be above the entrance door to any academic/research place. These words were used by Abraham Pais to admonish Feynman, who was excited after talking to a journalist on the phone; this was back in the 1950s, when the two of them were sharing a hotel room, while at a conference in Japan.

Now, although hype is sometimes perpetrated intentionally by researchers, it’s also done independently by the popular media. It seems that even tiny editorial details become crucially important in a story being gobbled up by the multitudes; e.g., the colorful rendition (done by a hired artist) which accompanies Natalie Wolchover’s piece; and of course, the word “jewel” in the title. I think it’s fair to say that the multitudes are the victims of Natalie’s good writing skills. Such popularizers, maybe with the help from their editors, always turn the excitement knob to the max.

Comment #154 September 26th, 2013 at 6:30 pm

I fully agree that “Publicity is a whore” … this is a common issue across the board. I know many professional field hockey players who do not get as much sponsorship as a soccer player does, though there is no way their achievement is less important than the other’s.

Comment #155 September 27th, 2013 at 8:55 am

“professional field hockey players not getting as much sponsorship as a soccer player”

And Justin Bieber is more popular on YouTube than Plácido Domingo!

Comment #156 September 27th, 2013 at 10:56 am

Thanks. I still prefer [BRS]. However, I agree that your proof of the #P-completeness of the permanent is significantly less nightmarish than all other known proofs.

Comment #157 September 28th, 2013 at 10:45 pm

There are two MathOverflow questions regarding the amplituhedron: I asked “what is the mathematical definition of the amplituhedron?” and Joe O’Rourke asked about geometric/polytopal properties of the amplituhedron. There is an answer by Carlo Beenakker giving a rather simple mathematical description based on slides of a talk by Jaroslav Trnka. (Regarding computational complexity, my uninformed guess is that in spite of the vast simplification, which may even demonstrate a complexity-theoretic speed-up with respect to some parameter, the new algorithm for computing the amplitudes will still be exponential.) There are several quite interesting issues regarding the interpretation of the new concept and results, and the hopes for physics.

Comment #158 September 29th, 2013 at 8:14 pm

FWIW (which might not be much) there seem to be oriented matroids lurking around in this.

Comment #159 September 29th, 2013 at 9:35 pm

Hi Gil (#157)! I have offered a bounty on a third MathOverflow question, *What is the “Tangle” at the Heart of Quantum Simulation?* Answers & advice from algebraic geometers are especially welcomed … distinguishing trivial-and/or-known questions in algebraic geometry from deep-and/or-open questions is by no means easy (for me and many).

Comment #160 September 30th, 2013 at 9:39 am

One remark regarding the emergence of unitarity and locality. Sometimes physicists like to flirt with the idea that QM is actually an “effective theory” emerging from a more fundamental theory, where an especially bold version of this thought is that this “more fundamental theory” will actually falsify some QM predictions (in a similar way to how predictions of Newtonian mechanics are falsified by quantum mechanics and relativity). Of course, there is nothing wrong with such flirting. In the SUSY lecture Arkani-Hamed (gently) speculated something like that regarding the new geometric object (and this is echoed in Scott’s parody).

I don’t think this is a plausible interpretation of the ABCGPT work. We have here an approximation/computation method to describe a quantum system restricted to a small number of “degrees of freedom.” The simplest and least dramatic explanation of the emergence of unitarity and locality is that it simply reflects and identifies situations where the approximation method is successful. It may correspond to unitarity “emerging from decoherence” (namely, the effect of the ignored degrees of freedom), and not from a “more fundamental” theory which, in some regimes, falsifies QM.

Two math remarks. Regarding the miraculous simplifications of the computations we witness here, one possibility (also mentioned in the lecture) is that they manifest a property of “integrable systems” or “completely solvable systems,” which are systems with wonderful but very special properties. (Scott, if you wish, they also reflect a special “complexity” class in some sense.) If you are curious to know what integrable systems are, so was I, and here are the answers I got.

(Re: #158) Indeed, oriented matroids are relevant. One natural way to stratify the Grassmannian (regarded as the k by n matrices M under the equivalence relation of GL(k)) is according to the oriented matroid described by the columns of the matrix, namely according to the sign patterns of the k by k minors of M, with respect to a fixed order. (Think of a “stratification” of a space X simply as a partitioning of X into “simple” building blocks.) The notion of oriented matroids is a very beautiful concept which extends the notion of “order” of a set of points on the line to higher dimensions. There is a much coarser stratification which goes back to Bruhat (or Schubert) and depends on an ordering of the coordinate basis, and the oriented-matroid stratification is the intersection of these coarser stratifications over all orderings. But oriented matroids are much too fine a structure, and the cells in the oriented-matroid stratification are extremely complicated (in fact, they can represent arbitrarily complicated algebraic varieties). There is an in-between stratification (closer to the coarser one) where you do not intersect with respect to all orderings but only with respect to cyclic shifts of the ordering you started with. For this gadget the cells are miraculously simple.
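To make the sign-pattern description concrete, here is a minimal Python sketch (my own illustration; the function name and the example matrix are made up, not from the thread). It computes the oriented-matroid data of a k × n matrix as the signs of its k × k maximal minors, and checks that a totally positive example lands in the top cell of the positive Grassmannian:

```python
# Illustration: the oriented-matroid data of a k x n matrix M is the sign
# pattern of its k x k maximal minors, i.e. the signs of the Pluecker
# coordinates of the corresponding point of the Grassmannian.
import itertools

import numpy as np

def minor_signs(M):
    """Map each k-subset of columns to the sign of the corresponding minor."""
    k, n = M.shape
    return {cols: int(np.sign(round(np.linalg.det(M[:, list(cols)]), 9)))
            for cols in itertools.combinations(range(n), k)}

# A totally positive example: the rows (1,1,1,1) and (0,1,2,3) form a
# Vandermonde-type matrix, so every maximal minor is positive and the point
# lies in the single big cell of the positive Grassmannian.
M = np.array([[1.0, 1.0, 1.0, 1.0],
              [0.0, 1.0, 2.0, 3.0]])
print(sorted(minor_signs(M).values()))  # [1, 1, 1, 1, 1, 1]
```

Perturbing the entries so that some 2 × 2 minor changes sign moves the point into a different cell of the stratification.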

Comment #161 September 30th, 2013 at 11:46 pm

Everyone: I just made a crazy connection, and I’d appreciate thoughts about whether or not it’s purely coincidental.

It’s well-known that the “scattering amplitudes” for n identical, non-interacting fermions can be computed classically in time polynomial in n. Why? Because those amplitudes are determinants of n×n matrices—and the determinant is the volume of a polytope, namely the n-dimensional parallelepiped defined by the matrix’s n rows. Furthermore, this “volume” interpretation of the determinant is actually important in algorithms for FermionSampling: see Section 13 in my and Alex Arkhipov’s just-released paper, *BosonSampling Is Far From Uniform*.

On the other hand, this is clearly very special to non-interacting *fermion* amplitudes. Indeed, we don’t expect it to generalize even to non-interacting *boson* amplitudes, since such amplitudes are given by permanents and the permanent is #P-complete.

In summary, there’s at least one “model situation” that I completely understand where:

(1) Certain scattering amplitudes can be elegantly written as the volumes of polytopes.

(2) For reasons directly related to (1), those amplitudes are computable in classical polynomial time. But

(3) This reflects extremely special properties of the amplitudes in question, and we know excellent reasons why it *shouldn’t* generalize to other scattering amplitudes.

So, is it possible that there are any lessons here for the much more complicated case of supersymmetric QFT amplitudes?
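The fermion/boson contrast above can be seen directly in code. Here is a small Python sketch (my own illustration, not from the comment): the determinant of an n×n matrix is computable in polynomial time, while the best known exact algorithm for the permanent, Ryser’s inclusion-exclusion formula, still takes time exponential in n.

```python
# Illustration: free-fermion amplitudes are determinants (polynomial time),
# free-boson amplitudes are permanents; Ryser's formula computes the
# permanent exactly in O(2^n * n^2) time, essentially the best known.
import itertools

import numpy as np

def permanent_ryser(A):
    """Permanent of a square matrix via Ryser's inclusion-exclusion formula."""
    n = A.shape[0]
    total = 0.0
    for r in range(1, n + 1):
        for cols in itertools.combinations(range(n), r):
            # (-1)^(n-r) times the product over rows of the row-sum over cols
            total += (-1) ** (n - r) * np.prod(A[:, list(cols)].sum(axis=1))
    return float(total)

A = np.array([[1.0, 2.0],
              [3.0, 4.0]])
print(round(float(np.linalg.det(A)), 6))   # -2.0  (fast: O(n^3))
print(permanent_ryser(A))                  # 10.0  (exponential in general)
```

For this 2×2 example the two quantities differ only in the sign of the cross term (1·4 − 2·3 versus 1·4 + 2·3), yet that sign change is exactly what separates P from #P-completeness.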

Comment #162 October 1st, 2013 at 2:12 am

Perhaps the two cases, amplitudes for non-interacting fermions and amplitudes for non-interacting bosons, represent paradigm cases for the complexity-theoretic analysis of geometric representations of amplitudes in general. The fact that amplitude calculations in N=4 Yang-Mills have this representation which is far more efficient than the traditional Feynman expansion, may mean that this theory is in some sense “in the same class” as the “theory” of non-interacting fermions.

Comment #163 October 1st, 2013 at 9:08 am

Scott, your observation is very interesting. But often in physics there is a free theory where things are easy to compute and then we add perturbations to that, and pay a price that rapidly grows with the accuracy with which we treat perturbations. Perhaps the way this theory will be useful is if it’s a better base about which to start a perturbative expansion.

Comment #164 October 1st, 2013 at 9:42 am

Professor on your comment “Certain scattering amplitudes can be elegantly written as the volumes of polytopes”

There are #P complete problems in mixed volume as well.

Comment #165 October 1st, 2013 at 1:16 pm

#161 Amplitudes are expressed in Terhal and DiVincenzo et al. via the Pfaffian, which is almost the same as the determinant. But here is another question: why is there a “fast” expression at all? IMHO, because of the Spin-group representation of fermionic circuits. I have met the complex Grassmannian in this area, but I may not see the relation with the real positive Grassmannians used for the construction of polytopes.

Comment #166 October 1st, 2013 at 7:19 pm

J #164: Yes, thanks for the observation. You’re absolutely right that “being expressible as the volume of a polytope” doesn’t imply polynomial-time simulability. Indeed, if we put no restrictions on the means of expression, then I could express *anything* as the volume of *some* polytope! And more interestingly, even with polytopes specified in restricted ways you can get #P-complete problems.

Now, in the special case of noninteracting fermions, there *are* polynomial-time algorithms to compute the amplitudes, and even to sample the associated output distribution. And the latter algorithm really *is* based on the interpretation of |Det(X)| as the volume of the parallelepiped defined by X’s rows.

So it’s not hard to imagine that, in the amplituhedron case, expressing an amplitude as a volume in a sufficiently nice way really *could* lead to faster algorithms for evaluating it. But:

(1) There’s no a-priori guarantee that the resulting algorithms will be polynomial-time—for all we know, they could still be solving a #P-complete problem, just with slower exponential growth (as several previous commenters pointed out).

(2) Even if we did get a polynomial-time algorithm in some case (e.g., N=4 SYM, with a particular definition of the “input” to the problem?), there’s no guarantee that it would generalize to other amplitude-evaluation problems. If we’ve learned anything in complexity theory, it’s that a problem can jump from being in P to being NP-hard or even #P-hard with just the tiniest-looking change to its definition.

Comment #167 October 2nd, 2013 at 7:04 am

” they could still be solving a #P-complete problem, just with a slower exponential growth”

Your comment sounds like Russell Impagliazzo will have to bite his socks on ETH. How slow?

Comment #168 October 2nd, 2013 at 7:06 am

oops sorry I mistook your comment.

Comment #169 October 2nd, 2013 at 12:04 pm

J #167: Maybe you already figured this out, but even the ETH “merely” asserts that 3SAT requires 2^{cn} time, for some c>0. It doesn’t try to pin down the value of c—which, in fact, we know to be less than 1. Many other NP-complete and #P-complete problems likewise have exponential-time algorithms that are somewhat better than brute force, and those algorithms can matter in practice.

But there’s also an additional issue: namely, the ETH applies only to 3SAT (and anything 3SAT reduces to via linear-size reductions). Even if the volume computations relevant to scattering amplitudes were #P-complete, it’s entirely possible that the reductions from 3SAT would involve a polynomial blowup—in which case, even a 2^{n^α} algorithm for some α<1 wouldn’t be ruled out by the ETH.

Comment #170 October 2nd, 2013 at 8:53 pm

[…] http://www.scottaaronson.com/blog/?p=1537 […]

Comment #171 October 3rd, 2013 at 12:59 am

[…] and Physics that includes an amplituhedron-related talk by Andrew Hodges. (Speaking of which, see here for a small but non-parodic observation about expressing amplitudes as volumes of […]

Comment #172 October 5th, 2013 at 9:42 am

Scott#171

I have never seen a natural problem whose running time is 2^{n^c}. Do you have any examples?

Comment #173 October 5th, 2013 at 9:47 am

Thinking again: crypto breaking has such forms. However, are they ‘natural’?

Comment #174 October 5th, 2013 at 9:48 am

….and they are not counting functions.

Comment #175 October 5th, 2013 at 6:36 pm

J, in this very thread, Gil Kalai at #139 mentioned Pólya counting isomorphism classes of graphs in exp(n^{1/2}) time.

Comment #176 October 5th, 2013 at 7:04 pm

@Douglas Knight: Thank you for the information.

Comment #177 October 21st, 2013 at 10:56 pm

In some of my comments above I described a few things in a more complicated way than needed. The stratification of the positive Grassmannian is precisely the oriented-matroid one. While this stratification for the entire Grassmannian can have very exotic behavior, it behaves well for the positive Grassmannian (and coincides there with the coarser, good-mannered stratification of the whole Grassmannian).

Comment #178 October 25th, 2013 at 10:04 pm

@Scott#161

>On the other hand, this is clearly very special to non-interacting _fermion_ amplitudes. Indeed, we don’t expect it to generalize even to non-interacting _boson_ amplitudes

Err, am I missing something? If we’re talking about unbroken SUSY (like the N=4 Yang-Mills theory in the amplituhedron work), can’t you just rotate the bosonic fields into fermionic fields in linear time and then solve the corresponding problem in polynomial time for the fermions?

Comment #179 October 26th, 2013 at 11:05 am

Anish #178: In my comment, I was talking about ordinary free fermion and free boson fields, with no supersymmetry.

Whether unbroken susy makes “fermionic” and “bosonic” problems equivalent in computational complexity is an interesting question. A priori, it seems plausible that it would—but even assuming it does, I wouldn’t know enough to say whether it makes them “equally easy” or “equally hard.” (I.e., with unbroken susy, can you even simulate a free fermion field by calculating determinants, without worrying about whether your fermions are constantly going to rotate into bosons and give you a permanent to calculate?)

What I know is that this paper by Andrew Hodges gives a striking formula for tree-level n-graviton scattering amplitudes in N=7 susy, in terms of the determinant of an nxn matrix. What makes that surprising, of course, is that gravitons are bosons, so one might have expected a permanent instead. From talking to Hodges, I gather that the susy does indeed play an important role in transforming the bosonic problem to a fermionic one. But so far, this seems quite special to tree-level graviton amplitudes: if it were easy to generalize to other situations, then presumably someone would already have done so!

Comment #180 December 8th, 2013 at 7:18 am

Another Parody, a bit older, perhaps along the same line.

“Your Basic Crackpot Theory of Everything”

https://7777777s.com

“The Akashic Records, How To”

http://kraziemonkie.com

enjoy

Comment #181 December 14th, 2013 at 1:06 pm

[…] […]

Comment #182 December 23rd, 2013 at 9:03 am


Comment #183 January 28th, 2014 at 1:38 pm

Mazal tov on your new c-baby and your new q-baby!

Comment #184 February 12th, 2014 at 2:36 am

There is a jewel at the heart of financial markets – the stockMarkethedron – related to amplituhedron.

http://arxiv.org/abs/1402.1281

Comment #185 January 27th, 2015 at 2:41 pm

Following a lecture by Postnikov, I think I can explain things briefly as follows:

A) The stratification of the positive Grassmannian:

Indeed there are two ways to look at it: a) as the matroidal stratification restricted to this part of the Grassmannian; b) start with the common refinement of Schubert cells with respect to a cyclic family of permutations (for the whole Grassmannian this lies between the Schubert stratification and the matroid one), then restrict it to the positive Grassmannian.

B) The amplituhedron:

Recall that every polytope is a projection of a simplex, and projections with respect to totally positive matrices give precisely the cyclic polytopes.

Now replace the simplex by the positive Grassmannian: the amplituhedron is a projection of the positive Grassmannian by a totally positive matrix. (So it is a common generalization of the positive Grassmannian and the cyclic polytope.)
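The “cyclic polytope as a totally positive projection of a simplex” statement in B) can be checked numerically. Here is a minimal Python sketch (my own illustration; the function name and the choice of sample points t = 1, …, n are mine): the vertices of the cyclic polytope C(n, d) are points on the moment curve, and the matrix with those points as columns (a Vandermonde-type, totally positive matrix) maps the vertices e_i of the standard (n−1)-simplex exactly onto them.

```python
# Illustration: the cyclic polytope C(n, d) is the convex hull of n points
# on the moment curve t -> (t, t^2, ..., t^d).  Applying the d x n matrix
# whose columns are these points to the unit vectors e_i (the vertices of
# the standard (n-1)-simplex) recovers exactly those points, exhibiting
# C(n, d) as a projection of a simplex by a totally positive matrix.
import numpy as np

def moment_curve_points(n, d):
    """n points on the moment curve in R^d, sampled at t = 1, ..., n."""
    ts = np.arange(1, n + 1, dtype=float)
    return np.array([[t ** k for k in range(1, d + 1)] for t in ts])

P = moment_curve_points(5, 3)        # the 5 vertices of C(5, 3) in R^3
A = P.T                              # the 3 x 5 projection matrix
simplex_vertices = np.eye(5)         # vertices e_1, ..., e_5 of the 4-simplex
images = simplex_vertices @ A.T      # project each e_i
print(np.allclose(images, P))        # True
```

In the amplituhedron construction, the simplex (whose vertices are the unit vectors) is replaced by the positive Grassmannian, while the totally positive projection matrix plays the same role.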