## Recent papers by Susskind and Tao illustrate the long reach of computation

Most of the time, I’m a crabby, cantankerous ogre, whose only real passion in life is using this blog to shoot down the wrong ideas of others. But alas, try as I might to maintain my reputation as a pure bundle of seething negativity, sometimes events transpire that pierce my crusty exterior. Maybe it’s because I’m in Berkeley now, visiting the new Simons Institute for Theory of Computing during its special semester on Hamiltonian complexity. And it’s tough to keep up my acerbic East Coast skepticism of everything new in the face of all this friggin’ sunshine. (Speaking of which, if you’re in the Bay Area and want to meet me, this week’s the week! Email me.) Or maybe it’s watching Lily running around, her eyes wide with wonder. If she’s so excited by her discovery of (say) a toilet plunger or some lint on the floor, what right do I have *not* to be excited by actual scientific progress?

Which brings me to the third reason for my relatively-sunny disposition: two long and fascinating recent papers on the arXiv. What these papers have in common is that they use concepts from theoretical computer science in unexpected ways, while trying to address open problems at the heart of “traditional, continuous” physics and math. One paper uses quantum circuit complexity to help understand black holes; the other uses fault-tolerant universal computation to help understand the Navier-Stokes equations.

Recently, our always-pleasant string-theorist friend Luboš Motl described computational complexity theorists as “extraordinarily naïve” (earlier, he also called us “deluded” and “bigoted”). Why? Because we’re obsessed with “arbitrary, manmade” concepts like the set of problems solvable in polynomial time, and especially because we assume things we haven’t yet proved such as P≠NP. (Jokes about throwing stones from a glass house—or a stringy house—are left as exercises for the reader.) The two papers that I want to discuss today reflect a different perspective: one that regards computation as no more “arbitrary” than other central concepts of mathematics, and indeed, as something that shows up even in contexts that seem incredibly remote from it, from the AdS/CFT correspondence to turbulent fluid flow.

Our first paper is Computational Complexity and Black Hole Horizons, by Lenny Susskind. As readers of this blog might recall, last year Daniel Harlow and Patrick Hayden made a striking connection between computational complexity and the black-hole “firewall” question, by giving complexity-theoretic evidence that performing the measurement of Hawking radiation required for the AMPS experiment would require an exponentially-long quantum computation. In his new work, Susskind makes a different, and in some ways even stranger, connection between complexity and firewalls. Specifically, given an n-qubit pure state |ψ〉, recall that the *quantum circuit complexity* of |ψ〉 is the minimum number of 2-qubit gates needed to prepare |ψ〉 starting from the all-|0〉 state. Then for reasons related to black holes and firewalls, Susskind wants to use the quantum circuit complexity of |ψ〉 as an *intrinsic clock*, to measure how long |ψ〉 has been evolving for. Last week, I had the pleasure of visiting Stanford, where Lenny spent several hours explaining this stuff to me. I still don’t fully understand it, but since it’s arguable that *no one* (including Lenny himself) does, let me give it a shot.

My approach will be to divide the discussion into two questions. The first question is: why, *in general* (i.e., forgetting about black holes), might one want to use quantum circuit complexity as a clock? Here the answer is: because unlike most other clocks, this one should continue to tick for an exponentially long time!

Consider some standard, classical thermodynamic system, like a box filled with gas, with the gas all initially concentrated in one corner. Over time, the gas will diffuse across the box, in accord with the Second Law, until it completely equilibrates. Furthermore, if we know the laws of physics, then we can calculate exactly how fast this diffusion will happen. But this implies that we can use the box as a clock! To do so, we’d simply have to measure how diffused the gas was, then work backwards to determine how much time had elapsed since the gas started diffusing.
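The gas-clock idea can be sketched numerically. Here a one-dimensional random walk stands in for diffusion, and all the sizes are made up for illustration: for an unbiased walk the variance of a particle’s displacement grows linearly with time, so measuring how spread out the "gas" is lets you read off how long it has been diffusing.

```python
import random

random.seed(0)

# Toy "gas clock": N walkers start at the origin (the corner of the box) and
# take T unbiased +/-1 steps.  For a simple random walk, Var[x] = T, so
# measuring how spread out the walkers are lets us read off the elapsed time.
N, T_true = 5000, 400
positions = [sum(random.choice((-1, 1)) for _ in range(T_true)) for _ in range(N)]

spread = sum(x * x for x in positions) / N   # empirical variance (mean is 0)
T_estimated = spread                          # invert Var[x] = T

print(f"true elapsed time: {T_true}, estimate from spread: {T_estimated:.0f}")
```

Of course, once the walkers have filled a finite box the spread saturates and the clock stops ticking, which is exactly the limitation discussed next.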

But notice that this “clock” only works until the gas reaches equilibrium—i.e., is equally spread across the box. Once the gas gets to equilibrium, which it does in a reasonably short time, it *just stays there* (at least until the next Poincaré recurrence time). So, if you see the box in equilibrium, there’s no measurement you could make—or certainly no “practical” measurement—that would tell you how long it’s been there. Indeed, if we model the collisions between gas particles (and between gas particles and the walls of the box) as random events, then something even stronger is true. Namely, the probability distribution over all *possible* configurations of the gas particles will quickly converge to an equilibrium distribution. And if all you knew was that the particles were in the equilibrium distribution, then there’s no *property* of their distribution that you could point to—not even an abstract, unmeasurable property—such that knowing that property would tell you how long the gas had been in equilibrium.

Interestingly, something very different happens if we consider a quantum pure state, in complete isolation from its environment. If you have some quantum particles in a perfectly-isolating box, and you start them out in a “simple” state (say, with all particles unentangled and in a corner), then they too will appear to diffuse, with their wavefunctions spreading out and getting entangled with each other, until the system reaches “equilibrium.” After that, there will once again be no “simple” measurement you can make—say, of the density of particles in some particular location—that will give you any idea of how long the box has been in equilibrium. On the other hand, the laws of unitary evolution assure us that *the quantum state is still evolving*, rotating serenely through Hilbert space, just like it was before equilibration! Indeed, in principle you could even measure that the evolution was still happening, but to do so, you’d need to perform an absurdly precise and complicated measurement—one that basically inverted the entire unitary transformation that had been applied since the particles started diffusing.
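This point can be illustrated with a small numerical sketch, where a random Hamiltonian on a handful of qubits stands in for the isolated box (the dimension and evolution time here are arbitrary choices): a “simple” measurement looks fully equilibrated, yet inverting the entire unitary evolution recovers the initial state exactly.

```python
import numpy as np

rng = np.random.default_rng(1)
dim = 64  # a 6-qubit toy stand-in for the perfectly-isolated box of particles

# Random Hermitian "Hamiltonian", diagonalized once so that exp(-iHt) and
# its inverse can be applied exactly.
A = rng.normal(size=(dim, dim)) + 1j * rng.normal(size=(dim, dim))
H = (A + A.conj().T) / 2
evals, V = np.linalg.eigh(H)

psi0 = np.zeros(dim, dtype=complex)
psi0[0] = 1.0                     # "simple" initial state

t = 50.0                          # evolve well past equilibration
U = V @ np.diag(np.exp(-1j * evals * t)) @ V.conj().T
psi_t = U @ psi0

# A "simple" measurement: overlap with the initial state.  After
# equilibration it is tiny -- the state looks nothing like where it started.
overlap = abs(np.vdot(psi0, psi_t)) ** 2

# But the evolution is still perfectly unitary: applying U^dagger (i.e.
# inverting the entire evolution, an absurdly fine-grained operation)
# recovers the initial state exactly.
recovery_fidelity = abs(np.vdot(psi0, U.conj().T @ psi_t)) ** 2

print(f"overlap with initial state after evolving:  {overlap:.4f}")
print(f"fidelity after inverting the full unitary:  {recovery_fidelity:.6f}")
```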

Lenny now asks the question: if the quantum state of the particles continues to evolve even after “equilibration,” then what physical *quantity* can we point to as continuing to increase? By the argument above, it can’t be anything simple that physicists are used to talking about, like coarse-grained entropy. Indeed, the most obvious candidate that springs to mind, for a quantity that should keep increasing even after equilibration, is just the quantum circuit complexity of the state! *If* there’s no “magic shortcut” to simulating this system—that is, *if* the fastest way to learn the quantum state at time T is just to run the evolution equations forward for T time steps—then the quantum circuit complexity will continue to increase linearly with T, long after equilibration. *Eventually*, the complexity will “max out” at ~c^{n}, where n is the number of particles, simply because (neglecting small multiplicative terms) the dimension of the Hilbert space is always an upper bound on the circuit complexity. After even longer amounts of time—like ~c^{c^n}—the circuit complexity will dip back down (sometimes even to 0), as the quantum state undergoes recurrences. But both of those effects only occur on timescales ridiculously longer than anything normally relevant to physics or everyday life.

Admittedly, given the current status of complexity theory, there’s little hope of *proving unconditionally* that the quantum circuit complexity continues to rise until it becomes exponential, when some time-independent Hamiltonian is continuously applied to the all-|0〉 state. (If we could prove such a statement, then presumably we could also prove PSPACE⊄BQP/poly.) But maybe we could prove such a statement modulo a reasonable conjecture. And we do have suggestive weaker results. In particular (and as I just learned this Friday), in 2012 Brandão, Harrow, and Horodecki, building on earlier work due to Low, showed that, if you apply S>>n *random* two-qubit gates to n qubits initially in the all-|0〉 state, then with high probability, not only do you get a state with large circuit complexity, you get a state that *can’t even be distinguished from the maximally mixed state* by any quantum circuit with at most ~S^{1/6} gates.
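Here is a toy, brute-force version of the random-circuit scrambling phenomenon (six qubits, a dense state-vector simulation, and made-up gate counts; this illustrates the qualitative effect, not the Brandão–Harrow–Horodecki bound itself): after many random two-qubit gates, each individual qubit looks maximally mixed.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 6                       # qubits (tiny, just for illustration)
dim = 2 ** n

def haar_2q_gate():
    """Haar-random 2-qubit unitary via QR of a complex Gaussian matrix."""
    z = rng.normal(size=(4, 4)) + 1j * rng.normal(size=(4, 4))
    q, r = np.linalg.qr(z)
    return q * (np.diag(r) / np.abs(np.diag(r)))   # fix phases for Haar measure

def apply_gate(state, gate, i, j):
    """Apply a 4x4 gate to qubits i and j of an n-qubit state vector."""
    psi = np.moveaxis(state.reshape([2] * n), (i, j), (0, 1))
    psi = (gate @ psi.reshape(4, -1)).reshape([2, 2] + [2] * (n - 2))
    return np.moveaxis(psi, (0, 1), (i, j)).reshape(-1)

state = np.zeros(dim, dtype=complex)
state[0] = 1.0                                     # all-|0> initial state

for _ in range(200):                               # S >> n random gates
    i, j = rng.choice(n, size=2, replace=False)
    state = apply_gate(state, haar_2q_gate(), i, j)

# Reduced state of qubit 0: for a scrambled state it is close to I/2,
# i.e., locally indistinguishable from the maximally mixed state.
rho = np.outer(state, state.conj()).reshape(2, dim // 2, 2, dim // 2)
rho_0 = np.trace(rho, axis1=1, axis2=3)
trace_dist = 0.5 * np.abs(np.linalg.eigvalsh(rho_0 - np.eye(2) / 2)).sum()
print(f"trace distance of qubit 0 from maximally mixed: {trace_dist:.3f}")
```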

OK, now on to the second question: what does any of this have to do with black holes? The connection Lenny wants to make involves the AdS/CFT correspondence, the “duality” between two completely different-looking theories that’s been all the rage in string theory since the late 1990s. On one side of the ring is AdS (Anti de Sitter), a quantum-gravitational theory in D spacetime dimensions—one where black holes can form and evaporate, etc., but on the other hand, the entire universe is surrounded by a reflecting boundary a finite distance away, to help keep everything nice and unitary. On the other side is CFT (Conformal Field Theory): an “ordinary” quantum field theory, with no gravity, that lives only on the (D-1)-dimensional “boundary” of the AdS space, and not in its interior “bulk.” The claim of AdS/CFT is that despite how different they look, these two theories are “equivalent,” in the sense that any calculation in one theory can be transformed to a calculation in the other theory that yields the same answer. Moreover, we get *mileage* this way, since a calculation that’s hard on the AdS side is often easy on the CFT side and vice versa.

As an example, suppose we’re interested in what happens inside a black hole—say, because we want to investigate the AMPS firewall paradox. Now, figuring out what happens inside a black hole (or even on or near the event horizon) is a notoriously hard problem in quantum gravity; that’s why people have been arguing about firewalls for the past two years, and about the black hole information problem for the past forty! But what if we could put our black hole in an AdS box? Then using AdS/CFT, couldn’t we translate questions about the black-hole interior to questions about the CFT on the boundary, which *don’t* involve gravity and which would therefore hopefully be easier to answer?

In fact people *have* tried to do that—but frustratingly, they haven’t been able to use the CFT calculations to answer even the grossest, most basic questions about what someone falling into the black hole would actually experience. (For example, would that person hit a “firewall” and die immediately at the horizon, or would she continue smoothly through, only dying close to the singularity?) Lenny’s paper explores a possible reason for this failure. It turns out that the way AdS/CFT works, *the closer to the black hole’s event horizon you want to know what happens, the longer you need to time-evolve the quantum state of the CFT to find out*. In particular, if you want to know what’s going on at distance ε from the event horizon, then you need to run the CFT for an amount of time that grows like log(1/ε). And what if you want to know what’s going on *inside* the black hole? In line with the holographic principle, it turns out that you can express an observable *inside* the horizon by an integral over the entire AdS space *outside* the horizon. Now, that integral will include a part where the distance ε from the event horizon goes to 0—so log(1/ε), and hence the complexity of the CFT calculation that you have to do, diverges to infinity. For some kinds of calculations, the ε→0 part of the integral isn’t very important, and can be neglected at the cost of only a small error. For other kinds of calculations, unfortunately—and in particular, for the kind that would tell you whether or not there’s a firewall—the ε→0 part is *extremely* important, and it makes the CFT calculation hopelessly intractable.

Note that yes, we even need to continue the integration for ε much smaller than the Planck length—i.e., for so-called “transplanckian” distances! As Lenny puts it, however:

> For most of this vast sub-planckian range of scales we don’t expect that the operational meaning has anything to do with meter sticks … It has more to do with large times than small distances.

One could give this transplanckian blowup in computational complexity a pessimistic spin: darn, so it’s probably hopeless to use AdS/CFT to prove once and for all that there are no firewalls! But there’s also a more positive interpretation: the interior of a black hole is “protected from meddling” by a thick armor of computational complexity. To explain this requires a digression about firewalls.

The original firewall paradox of AMPS could be phrased as follows: *if* you performed a certain weird, complicated measurement on the Hawking radiation emitted from a “sufficiently old” black hole, then the expected results of that measurement would be incompatible with *also* seeing a smooth, Einsteinian spacetime if you later jumped into the black hole to see what was there. (Technically, because you’d violate the monogamy of entanglement.) If what awaited you behind the event horizon wasn’t a “classical” black hole interior with a singularity in the middle, but an immediate breakdown of spacetime, then one says you would’ve “hit a firewall.”

Yes, it seems preposterous that “firewalls” would exist: at the least, it would fly in the face of everything people thought they understood for decades about general relativity and quantum field theory. But crucially—and here I have to disagree with Stephen Hawking—one can’t “solve” this problem simply by reiterating that firewalls are physically absurd, or by constructing scenarios where firewalls “self-evidently” don’t arise. Instead, as I see it, *solving the problem means giving an account of what actually happens when you do the AMPS experiment, or of what goes wrong when you try to do it.*

On this last question, it seems to me that Susskind and Juan Maldacena achieved a major advance in their much-discussed ER=EPR paper last year. Namely, they presented a picture where, sure, a firewall arises (at least temporarily) if you do the AMPS experiment—but no firewall arises if you *don’t* do the experiment! In other words, doing the experiment sends a *nonlocal signal* to the interior of the black hole (though you do have to jump into the black hole to receive the signal, so causality outside the black hole is still preserved). Now, how is it possible for your measurement of the Hawking radiation to send an instantaneous signal to the black hole interior, which might be light-years away from you when you measure? On Susskind and Maldacena’s account, it’s possible because the entanglement between the Hawking radiation and the degrees of freedom still in the black hole can be interpreted as creating *wormholes* between the two. Under ordinary conditions, these wormholes (like most wormholes in general relativity) are “non-traversable”: they “pinch off” if you try to send signals through them, so they can’t be used for faster-than-light communication. However, if you did the AMPS experiment, then the wormholes would *become* traversable, and could carry a firewall (or an innocuous happy-birthday message, or whatever) from the Hawking radiation to the black hole interior. (Incidentally, ER stands for Einstein and Rosen, who wrote a famous paper on wormholes, while EPR stands for Einstein, Podolsky, and Rosen, who wrote a famous paper on entanglement. “ER=EPR” is Susskind and Maldacena’s shorthand for their proposed connection between wormholes and entanglement.)

Anyway, these heady ideas raise an obvious question: *how hard would it be* to do the AMPS experiment? Is sending a nonlocal signal to the interior of a black hole via that experiment actually a realistic possibility? In their work a year ago on computational complexity and firewalls, Harlow and Hayden already addressed that question, though from a different perspective than Susskind. In particular, Harlow and Hayden gave strong evidence that carrying out the AMPS experiment would require solving a problem believed to be exponentially hard even for a quantum computer: specifically, a complete problem for QSZK (Quantum Statistical Zero Knowledge). In followup work (not yet written up, though see my talk at KITP and my PowerPoint slides), I showed that the Harlow-Hayden problem is actually at least as hard as inverting one-way functions, which is even stronger evidence for hardness.

All of this suggests that, even *supposing* we could surround an astrophysical black hole with a giant array of perfect photodetectors, wait ~10^{69} years for the black hole to (mostly) evaporate, then route the Hawking photons into a super-powerful, fault-tolerant quantum computer, doing the AMPS experiment (and hence, creating traversable wormholes to the black hole interior) *still* wouldn’t be a realistic prospect, even if the equations formally allow it! There’s no way to sugarcoat this: **computational complexity limitations seem to be the only thing protecting the geometry of spacetime from nefarious experimenters.**

Anyway, Susskind takes that amazing observation of Harlow and Hayden as a starting point, but then goes off on a different tack. For one thing, he isn’t focused on the AMPS experiment (the one involving monogamy of entanglement) *specifically*: he just wants to know how hard it is to do *any* experiment (possibly a different one) that would send nonlocal signals to the black hole interior. For another, unlike Harlow and Hayden, Susskind isn’t trying to show that such an experiment would be *exponentially* hard. Instead, he’s content if the experiment is “merely” polynomially hard—but in the same sense that (say) unscrambling an egg, or recovering a burned book from the smoke and ash, are polynomially hard. In other words, Susskind only wants to argue that creating a traversable wormhole would be “thermodynamics-complete.” A third, related, difference is that Susskind considers an extremely special model scenario: namely, the AdS/CFT description of something called the “thermofield double state.” (This state involves *two* entangled black holes in otherwise-separated spacetimes; according to ER=EPR, we can think of those black holes as being connected by a wormhole.) While I don’t yet understand this point, apparently the thermofield double state is much more favorable for firewall production than a “realistic” spacetime—and in particular, the Harlow-Hayden argument doesn’t apply to it. Susskind wants to show that *even so* (i.e., despite how “easy” we’ve made it), sending a signal through the wormhole connecting the two black holes of the thermofield double state would still require solving a thermodynamics-complete problem.

So that’s the setup. What new insights does Lenny get? This, finally, is where we circle back to the view of quantum circuit complexity as a clock. Briefly, Lenny finds that **the quantum state getting more and more complicated in the CFT description—i.e., its quantum circuit complexity going up and up—directly corresponds to the wormhole getting longer and longer in the AdS description.** (Indeed, the length of the wormhole increases linearly with time, growing like the circuit complexity divided by the total number of qubits.) And the wormhole getting longer and longer is what makes it non-traversable—i.e., what makes it impossible to send a signal through.

Why has quantum circuit complexity made a sudden appearance here? Because in the CFT description, the circuit complexity continuing to increase is the only thing that’s obviously “happening”! From a conventional physics standpoint, the quantum state of the CFT very quickly reaches equilibrium and then *just stays there*. If you measured some “conventional” physical observable—say, the energy density at a particular point—then it wouldn’t look like the CFT state was continuing to evolve at all. And yet we know that the CFT state *is* evolving, for two extremely different reasons. Firstly, because (as we discussed early on in this post) unitary evolution is still happening, so presumably the state’s quantum circuit complexity is continuing to increase. And secondly, because in the dual AdS description, the wormhole is continuing to get longer!

From this connection, at least three striking conclusions follow:

- That even when nothing else seems to be happening in a physical system (i.e., it seems to have equilibrated), *the fact that the system’s quantum circuit complexity keeps increasing can be “physically relevant” all by itself*. We know that it’s physically relevant, because in the AdS dual description, it corresponds to the wormhole getting longer!

- That even in the special case of the thermofield double state, *the geometry of spacetime continues to be protected by an “armor” of computational complexity*. Suppose that Alice, in one half of the thermofield double state, wants to send a message to Bob in the other half (which Bob can retrieve by jumping into his black hole). In order to get her message through, Alice needs to prevent the wormhole connecting her black hole to Bob’s from stretching uncontrollably—since as long as it stretches, the wormhole remains non-traversable. But in the CFT picture, stopping the wormhole from stretching corresponds to stopping the quantum circuit complexity from increasing! And that, in turn, suggests that Alice would need to act on the radiation outside her black hole in an incredibly complicated and finely-tuned way. For “generically,” the circuit complexity of an n-qubit state should just continue to increase, the longer you run unitary evolution for, until it hits its exp(n) maximum. To prevent that from happening would essentially require “freezing” or “inverting” the unitary evolution applied by nature—but that’s the sort of thing that we expect to be thermodynamics-complete. (How exactly do Alice’s actions in the “bulk” affect the evolution of the CFT state? That’s an excellent question that I don’t understand AdS/CFT well enough to answer. All I know is that the answer involves something that Lenny calls “precursor operators.”)

- That there can be a physically-relevant difference between pseudorandom n-qubit pure states and “truly” random states—even though, by the definition of pseudorandom, such a difference can’t be detected by any small quantum circuit! Once again, the way to see the difference is using AdS/CFT. It’s easy to show, by a counting argument, that almost all n-qubit pure states have *nearly-maximal* quantum circuit complexity. But if the circuit complexity is already maximal, that means in particular that it’s not increasing! Lenny argues that this corresponds to the wormhole between the two black holes no longer stretching. But if the wormhole is no longer stretching, then it’s “vulnerable to firewalls” (i.e., to messages going through!). It had previously been argued that *random* CFT states almost always correspond to black holes with firewalls—and since the CFT states formed by realistic physical processes will look “indistinguishable from random states,” black holes that form under realistic conditions should generically have firewalls as well. But Lenny rejects this argument, on the ground that the CFT states that arise in realistic situations are *not* random pure states. And what distinguishes them from random states? Simply that they have non-maximal (and increasing) quantum circuit complexity!
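For what it’s worth, the counting argument mentioned above can be sketched as follows (constants suppressed, and a fixed finite universal gate set assumed):

```latex
% Circuits with at most S gates, each chosen from a gate set of size g and
% acting on one of at most n^2 qubit pairs, number at most
\[
    N_{\mathrm{circuits}} \;\le\; (g\,n^2)^S .
\]
% Meanwhile, an \epsilon-net for n-qubit pure states (unit vectors in
% \mathbb{C}^{2^n}) requires roughly
\[
    N_{\mathrm{states}} \;\approx\; (1/\epsilon)^{2^{n+1}}
\]
% points.  For circuits to \epsilon-approximate every state we would need
% N_{circuits} \ge N_{states}, i.e.
\[
    S \;\gtrsim\; \frac{2^{n+1}\,\log(1/\epsilon)}{\log(g\,n^2)}
      \;=\; 2^{\Omega(n)} ,
\]
% so all but a vanishing fraction of n-qubit states have circuit complexity
% exponential in n.
```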

I’ll leave you with a question of my own about this complexity / black hole connection: one that I’m unsure how to think about, but that perhaps interests me more than any other here. My question is: **could you ever learn the answer to an otherwise-intractable computational problem by jumping into a black hole?** Of course, you’d have to *really* want the answer—so much so that you wouldn’t mind dying moments after learning it, or not being able to share it with anyone else! But never mind that. What I have in mind is first applying some polynomial-size quantum circuit to the Hawking radiation, then jumping into the black hole to see what nonlocal effect (if any) the circuit had on the interior. The fact that the mapping between interior and exterior states is so complicated suggests that there might be complexity-theoretic mileage to be had this way, but I don’t know what. (It’s also possible that you *can* get a computational speedup in special cases like the thermofield double state, but that a Harlow-Hayden-like obstruction prevents you from getting one with real astrophysical black holes. I.e., that for real black holes, you’ll just see a smooth, boring, Einsteinian black hole interior no matter *what* polynomial-size quantum circuit you applied to the Hawking radiation.)

If you’re still here, the second paper I want to discuss today is Finite-time blowup for an averaged three-dimensional Navier-Stokes equation by Terry Tao. (See also the excellent *Quanta* article by Erica Klarreich.) I’ll have much, much less to say about this paper than I did about Susskind’s, but that’s not because it’s less interesting: it’s only because I understand the issues even less well.

Navier-Stokes existence and smoothness is one of the seven Clay Millennium Problems (alongside P vs. NP, the Riemann Hypothesis, etc.). The problem asks whether the standard, classical differential equations for three-dimensional fluid flow are well-behaved, in the sense of not “blowing up” (e.g., concentrating infinite energy on a single point) after a finite amount of time.
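For reference, the equations in question are the incompressible Navier-Stokes equations for a velocity field u(x,t) and pressure p(x,t), with viscosity ν > 0:

```latex
\[
    \frac{\partial u}{\partial t} + (u \cdot \nabla)\,u
        \;=\; -\nabla p + \nu\,\Delta u ,
    \qquad
    \nabla \cdot u = 0 .
\]
```

The existence-and-smoothness problem asks whether every smooth, finite-energy initial velocity field gives rise to a solution that remains smooth for all times t > 0.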

Expanding on ideas from his earlier blog posts and papers about Navier-Stokes (see here for the gentlest of them), Tao argues that the Navier-Stokes problem is closely related to the question of whether or not it’s possible to “build a fault-tolerant universal computer out of water.” Why? Well, it’s not the computational universality *per se* that matters, but if you could use fluid flow to construct general enough computing elements—resistors, capacitors, transistors, etc.—then you could use those elements to recursively shift the energy in a given region into a region half the size, and from there to a region a quarter the size, and so on, faster and faster, until you got infinite energy density after a finite amount of time.
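A toy back-of-the-envelope version of this blowup mechanism (just the geometric-series arithmetic behind it, not Tao’s actual construction): if a fixed energy is repeatedly pushed into a region of half the linear size, with each transfer taking half as long as the previous one, the total elapsed time converges while the energy density diverges.

```python
# Toy arithmetic behind the "fluid computer" blowup scenario (not Tao's
# actual construction): a fixed energy E is repeatedly shifted into a region
# of half the linear size, each stage taking half as long as the last.
E = 1.0      # total energy (conserved by the transfers)
L = 1.0      # initial linear size of the region
T = 1.0      # duration of the first transfer

elapsed = 0.0
densities = []
for k in range(40):
    volume = (L / 2 ** k) ** 3        # region volume shrinks by 8x per stage
    densities.append(E / volume)      # so energy density grows by 8x per stage
    elapsed += T / 2 ** k             # but stage durations form a geometric series

print(f"elapsed time after 40 stages: {elapsed:.6f}  (converges to {2 * T})")
print(f"energy density at stage 40:   {densities[-1]:.3e}  (diverges)")
```

The punchline is that infinitely many stages fit into a finite total time, which is exactly what “finite-time blowup” means.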

Strikingly, building on an earlier construction by Katz and Pavlovic, Tao shows that **this is actually possible** for an “averaged” version of the Navier-Stokes equations! So at the least, any proof of existence and smoothness for the real Navier-Stokes equations will need to “notice” the difference between the real and averaged versions. In his paper, though, Tao hints at the possibility (or dare one say likelihood?) that the truth might go the other way. That is, maybe the “universal computer” construction can be ported from the averaged Navier-Stokes equations to the real ones. In that case, we’d have blowup in finite time for the real equations, and a negative solution to the Navier-Stokes existence and smoothness problem. Of course, such a result wouldn’t imply that real, physical water was in any danger of “blowing up”! It would simply mean that the discrete nature of water (i.e., the fact that it’s made of H_{2}O molecules, rather than being infinitely divisible) was essential to understanding its stability given arbitrary initial conditions.

So, what are the prospects for such a blowup result? Let me quote from Tao’s paper:

One minor point that I’d love to understand is, *what happens in two dimensions?* Existence and smoothness are known to hold for the 2-dimensional analogues of the Navier-Stokes equations. If they also held for the *averaged* 2-dimensional equations, then it would follow that Tao’s “universal computer” must be making essential use of the third dimension. How? If I knew the answer to that, then I’d feel for the first time like I had some visual crutch for understanding why 3-dimensional fluid flow is so complicated, even though 2-dimensional fluid flow isn’t.

I see that, in blog comments here and here, Tao says that the crucial difference between the 2- and 3-dimensional Navier-Stokes equations arises from the different scaling behavior of the dissipation term: basically, you can ignore it in 3 or more dimensions, but you can’t ignore it in 2. But maybe there’s a more doofus-friendly explanation, which would start with some 3-dimensional fluid logic gate, and then explain why the gate has no natural 2-dimensional analogue, or why dissipation causes its analogue to fail.

Obviously, there’s much more to say about both papers (especially the second…) than I said in this post, and many people more knowledgeable than I am to say those things. But that’s what the comments section is for. Right now I’m going outside to enjoy the California sunshine.

Comment #1 March 2nd, 2014 at 7:45 pm

My friend Steve Shenker, who is very wise, advocates doing physics by slogan. He doesn’t mean that slogans should replace thinking; he means that if you understand them, the core of important ideas should be simple enough to express in a sentence or two. The slogan for ER=EPR (that’s not a slogan, it’s an acronym) is, “Entanglement builds bridges.”

Now Scott has provided me with another slogan: “The geometry of spacetime is protected by an armor of computational complexity.” Thank you, Scott.

Comment #2 March 3rd, 2014 at 2:27 am

It just turns out that Prof. Richard J. Lipton, complexity theorist from Georgia Tech – who has about 10 times more intensely cited papers than Scott Aaronson – agrees with me, Luboš Motl, and not Scott Aaronson, on the implications of the recent result on the Erdős discrepancy conjecture for the P vs. NP issues:

http://rjlipton.wordpress.com/2014/02/28/practically-pnp/

Relevant excerpt:

> P=NP?
>
> Their paper can also be read at another level, below the surface, that reflects on complexity theory. Their paper shows that there are SAT solvers capable of solving hard natural problems in reasonable time bounds. What does this say about the strong belief of most that not only is P != NP but that the lower bounds are exponential?
>
> I wonder sometimes if we in complexity theory are just plain wrong. Often SAT solver success is pushed off as uninteresting because the examples are “random,” or very sparse, or very dense, or special in some way. The key here, the key to my excitement, is that the SAT problem for the Erdős problem is not a carefully selected one. It arises from a longstanding open problem, a problem that many have tried to dent. This seems to me to make the success by Konev and Lisitsa special.
>
> John Lubbock once said:
>
> > What we see depends mainly on what we look for.
>
> Do we in complexity theory suffer from this? (LM: That’s exactly the accusation I have been making for some years now.) As they say across the Pond, this is bang-on. According to our friends at Wikipedia, Lubbock was:
>
> > The Right Honourable John Lubbock, 1st Baron Avebury MP FRS DCL LLD, known as Sir John Lubbock, 4th Baronet from 1865 until 1900, [a] banker, Liberal politician, philanthropist, scientist and polymath.
>
> I know he lived many years ago, but note he was a polymath: hmmm… Meanwhile I remember when another Liverpool product came across the Pond 50 years ago, which some say “gave birth to a new world.”

Comment #3 March 3rd, 2014 at 3:13 am

And it turns out that Prof. Leonard Susskind, string theorist from Stanford – who has about 10 times more intensely cited papers than Luboš Motl – agrees with me, Scott Aaronson, and not Luboš Motl, about the relevance of computational complexity to physics.

Really, Luboš? This is the kindergarten level you want to argue at?

Now regarding Lipton, two points:

(1) Even though he’s coincidentally *also* writing about the recent Erdős discrepancy progress and the possibility that P=NP, the argument he’s making turns out to be completely different from the argument you were making. You were talking about the *result* itself, and how it involves a large constant, while Dick was talking about the *tool* used to obtain the result (an optimized SAT-solver). As a certain Czech blogger might put it, that sort of superficial resemblance might impress the laypeople, but those with IQ’s above 80 know better…

(2) Even though I like and admire Dick Lipton, I personally think he’s dead wrong about the probability that P=NP, just as you are. Dick has a strong contrarian streak, but I often feel like he “wastes his contrarianism” on unproductive things. For example, he had a blog post a few years ago about how maybe Shor’s algorithm *doesn’t* prove that factoring is in quantum polynomial time after all. After long discussion, it turned out that … surprise, surprise … Shor’s algorithm *does* prove that. Now, to my mind, speculating that maybe P=NP is only *marginally* more productive than speculating that maybe Shor’s algorithm doesn’t do what Shor proved it does. The funny thing is that, if we changed the subject to people who say things like

“what if quantum mechanics is wrong?”

“what if string theory diverges at the three-loop level?”

“what if there’s no continuum anywhere in physics?”

… you’d be a thousand times more vociferous in shouting those people down than I’ve ever been to the what-if-P=NP-ists. And not on stronger grounds.

Anyway, you’re welcome to take this “discussion” over to your blog, but I’d like to restrict further discussion on *this* thread to the actual ideas of Susskind and Tao that I was writing about.

Comment #4 March 3rd, 2014 at 3:58 am

Dear Scott, I have only mentioned these sociological arguments concerning Richard Lipton because you did. The *actual* reasons why your reasoning is quite certainly wrong and why your conclusions are quite possibly wrong are mentioned in my technical posts. They’re technical reasons.

But in the comment section of my blog, you were presenting my assertion that the Erdös discrepancy result had an impact on the question whether P=NP as some conspiracy theory of an outsider (recall your silly comparison with Fox News – yes, you’re wrong about Fox News, too, but this wrongness has absolutely no relationship with P=NP). This was a sociological point you made and I mentioned Lipton’s take because I wanted to point out that this sociological point you made was wrong, too.

You say things rather clearly and directly which I kind of appreciate – in this sense, you’re very different e.g. from Matt Strassler – so it’s easier to sharply articulate the reasons why you’re wrong. Lenny and others (physicists who often talk about computation in the papers) are more careful and perhaps more secretive about the assertions on the relevance of computational complexity for physics but yes, I do think that he or they want to make similar claims in between the lines, and they are wrong, too. I am planning to write a more detailed blog post on the reasons why.

Just a quick review of the main point: the existence or non-existence of an algorithm just *can’t* have an impact on the rational conclusions about the character of the laws of Nature simply because computation is *not* fundamental in Nature. The existence or non-existence of an algorithm is a derived fact. Computation is just another process – a process intelligently designed with a purpose that someone finds useful – but the class of processes that may be called “computation” is just a small subset of all processes in physics, and the laws of physics must have the answers to (in quantum mechanics, must predict probabilities of outcomes of) any scenario with any process, not just computation (a subclass of processes). So if there were a paradox in physics that could be seen with some computation, there would also be a paradox that may be seen in other physical processes; the first paradox would be composed of the latter kind of paradoxes. So complexity theory may decide whether something may be quickly calculated or not, but this never has an impact on the basic physical processes because computation is *made of* basic physical processes, not the other way around. If something may be computed quickly, it may also be computed quickly in the real world. If it may not, it may not. But physics predicts some outcomes for the processes whether they’re useful (“calculation”) or not.

Just to be sure, I think that many comments about the “long evolution” for “long wormhole” etc. are right but your attempts to spin all this stuff as evidence that computer science is fundamentally important only shows your preconceptions, bias, and hype, especially because the point you are trying to advocate is not only self-serving but self-evidently wrong. I would never do the same with physics. At the end, all science is unified and physics is the fundamental layer in the reductionist explanations in science but most actual classes of problems in science or human activity are *not* more easily solved with the training that physicists (proper) are getting. I would never try to pretend the opposite because I like physics but my love for physics is a *consequence* of my love for the truth, and the truth also says that most other human activities are so composite that (fundamental) physics just isn’t too useful. It’s childish for you to play these pissing contests attempting to overstate the importance of your field.

1) It’s completely silly that there was this difference between me and Lipton. The SAT (or another) solver that is compact and fast enough to be used is clearly the result we want to obtain, an obviously necessary intermediate step needed to settle the question whether the conjecture (for C=2) is right or wrong. There can’t be any other interpretation here. There can’t be any other way to (controllably) settle the validity of a similar conjecture that wouldn’t contain the intermediate steps!

Using your demagogic musical analogies, Mozart was clearly not composing the music “out of thin air”. What he was special at were some “circuits” (and programs inside them) inside his brain. They’re fully analogous to the SAT solver. Unless you believe in miracles that were behind Mozart’s acts of the composition, such circuits had to exist. But maybe the whole point of your frantic belief that P must differ from NP *is* some belief in miracles – a belief that there can’t exist a rational explanation why some people or computers manage to solve tasks that seem difficult to other people. But there is *always* a rational explanation.

2) Give me a break with a “contrarian streak”. You sound like a climate activist (maybe you are one?). Again, he has 10 times more citations than you do so whether his streak is labeled by one emotional adjective or another, it has clearly not prevented him from having done top science in your field.

I just don’t like these attempts to demonize people by similar unflattering slogans as your “contrarian” terminology. When I return to your quad-people analogy, Lenny Susskind would probably deserve the label “contrarian” much more than Richard Lipton. But I would never use this as the first one for an argument why Lenny is wrong about something. Good physics – and, I would say, good science – always demands some portion of being a contrarian – Lenny has a lot of it and he has used it totally wonderfully – so it’s pathetic for you to pretend that someone’s having a “contrarian streak” is evidence for his being wrong about the widely spread yet so far irrational belief that P must differ from NP.

Cheers

LM

Comment #5 March 3rd, 2014 at 4:13 am

Otherwise I forgot to reply to your claim that “thinking on P=NP is as unproductive as assuming that Shor’s algorithm doesn’t work”. The only problem with your assertion is that the two situations are not correlated in any controllable way – and the vague way that connects them correlates the two answers in the opposite way than you suggest!

Note that P=NP doesn’t contain any “negation” but “Shor’s algorithm doesn’t work” does contain a “negation”. P=NP is actually *the* natural assumption that is analogous to physics principles such as “a symmetry is obeyed” or “a conservation law holds” or “quantum mechanical equations may be trusted”. “P isn’t NP” is analogous to “the charge conservation law [that has shown no clear signs of breaking down] must fail”.

The very fact that you view “P is NOT equal to NP” as an analogy to the assumption that the laws of physics hold – even for classical or quantum computers – shows how perversely illogical and contrived your reasoning and demagogy has to become.

Comment #6 March 3rd, 2014 at 4:43 am

There’s a fair number of legitimate experts who do believe P=NP, right? Say 10-20% of the research community?

Or is believing P=NP like a total crank position, *à la* “QM is all wrong,” or “the lunar landing was a hoax,” or something bizarre like that?

Just checking. Or am I misunderstanding what Scott’s trying to say here? Shor proved what he proved; but P vs. NP is truly an open problem, right, with no overwhelming consensus?

Comment #7 March 3rd, 2014 at 5:45 am

It’s not surprising that, to some physicists, the set of polytime problems seems arbitrary and manmade, since the underlying classification is infeasible in practice. Whenever a polynomial solution is found, it’s impossible to tell if that was because the problem was easy or because the mathematician was clever. This relativistic analogy seems to be what P?=NP is all about.

Comment #8 March 3rd, 2014 at 5:56 am

What I would really like to understand in Tao’s program is the connection between building computer-like circuits out of fluid and the ability to make a self-replicating entity that keeps copying itself at smaller space and time scales, which, if I understand it, is the key to demonstrating blowup. It’s not clear to me why being able to build a “fluidic computer” entails that second bit. Can someone explain the connection?

Comment #9 March 3rd, 2014 at 6:47 am

The way I understand what Scott wrote, the resolution of the firewall paradox is: “It would take an impossibly long time to set up the AMPS experiment”. Scott, are you saying that the only way to argue this goes through computational complexity? Isn’t it possible that complexity can be eliminated from the final version of the argument?

Comment #10 March 3rd, 2014 at 8:15 am

This is lovely stuff. Thanks for sharing Scott.

Comment #11 March 3rd, 2014 at 8:53 am

“But maybe there’s a more doofus-friendly explanation, which would start with some 3-dimensional fluid logic gate, and then explain why the gate has no natural 2-dimensional analogue, or why dissipation causes its analogue to fail.”

I can’t give you a doofus-friendly explanation, but here’s a parable with political subtext, which is pitched at approximately the same level of intelligence:

So, we have our water world, and it has grown to have not just a computer, but people and a civilization. This society has all the complexities our own society has and is steadily growing more complex. In the center of it all is a giant vortex which provides everybody with a power source. At around the same time, two scientists make two different observations.

The first scientist examines the giant vortex, and notices something disturbing: the vortex is rotating slightly more slowly than it did according to observations last century. He posits that all the industrial activities that everyone is engaging in are slowly siphoning off energy from the vortex. Eventually, he believes, the vortex will die off completely, and civilization will die with it.

This causes a mass panic. Some deny that anything is happening, while others are doing everything they can to conserve energy and delay the disaster.

The second scientist looks at trends in recent history. She notices that technology is improving at a rapid pace. Everywhere she looks she sees exponential curves. With rising miniaturization everything is becoming faster and more efficient. Moreover, she believes that the pace of technological development is itself speeding up. Extrapolating this, she notices not an exponential but rather a hyperbolic curve, and so she believes that at some point in the future there will be a ‘singularity’, after which nothing can be predicted with certainty.

This also elicits a huge controversy. Some people inspired by her words are devoting their whole livelihood to making sure that they survive until the singularity, or to make sure the singularity is beneficial rather than harmful. Others decry the whole business as a piece of science fiction.

Obviously these two groups of followers have conflicting opinions. Which one of them is right?

In two dimensions the first group is right. Although they may be capable of performing complicated computations, energy constraints will always get to them in the end.

However, in three or higher dimensions, the second group may be right. Energy constraints still apply, but by making things at a smaller and smaller scale there is no limit to how efficient their devices can be, which makes the energy constraints irrelevant.

Ultimately, the problem is not in making any particular logic gate, but in the resource constraints all objects are bound by.
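The second scientist’s extrapolation in the parable above rests on the textbook difference between exponential and hyperbolic growth: exponential growth stays finite at every time, while hyperbolic growth reaches infinity at a finite time. A toy numerical comparison (my illustration, not part of the comment):

```python
import math

# Two toy growth laws with initial value x0 = 1:
#   x' = x     has solution x(t) = x0 * e^t          (finite for every t)
#   x' = x**2  has solution x(t) = x0 / (1 - x0*t)   (diverges at t = 1/x0)
x0 = 1.0
for t in [0.0, 0.5, 0.9, 0.99]:
    exponential = x0 * math.exp(t)
    hyperbolic = x0 / (1 - x0 * t)
    print(f"t={t}:  exponential {exponential:.2f}   hyperbolic {hyperbolic:.2f}")
# As t approaches 1/x0 = 1, the hyperbolic solution blows up: the 'singularity'.
```

The same distinction is what separates ordinary turbulent cascades from the finite-time blowup scenario discussed in the post.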

Comment #12 March 3rd, 2014 at 9:42 am

Thank you Scott, for today I shall not be bored!

Comment #13 March 3rd, 2014 at 11:36 am

I think the contrarian argument for P=NP could go something like this: if P!=NP, this would be a general, conceptual result, so you’d expect the proof to be explanatory and in particular human-scaled. If P=NP, it could be “an accident” and the reason is simply that there is some astronomic but polynomial time algorithm for SAT, an algorithm which humans might never grasp. The lack of progress on P vs. NP, and also the many negative results, are then some kind of weak evidence in favor of the second scenario.
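Worth noting about Sam’s scenario: even an astronomical-but-polynomial SAT *decider* would, by standard self-reducibility, also let you *find* satisfying assignments, by fixing variables one at a time and asking the decider whether a solution still exists. A sketch, with a brute-force stand-in for the hypothetical polynomial-time decider (the stand-in is exponential, of course; only the reduction around it is the point):

```python
from itertools import product

def sat_decider(clauses, n, fixed):
    """Stand-in for a hypothetical poly-time SAT decider: True iff some assignment
    extending `fixed` (dict var -> bool) satisfies every clause. Clauses are lists
    of nonzero ints: +v means variable v, -v its negation."""
    free = [v for v in range(1, n + 1) if v not in fixed]
    for bits in product([False, True], repeat=len(free)):
        a = {**fixed, **dict(zip(free, bits))}
        if all(any(a[abs(lit)] == (lit > 0) for lit in c) for c in clauses):
            return True
    return False

def construct(clauses, n):
    """Self-reduction: n+1 decider calls turn decision into search."""
    fixed = {}
    if not sat_decider(clauses, n, fixed):
        return None  # unsatisfiable
    for v in range(1, n + 1):
        fixed[v] = True
        if not sat_decider(clauses, n, fixed):
            fixed[v] = False  # True doesn't extend to a solution; False must
    return fixed

# (x1 or x2) and (not x1 or x3) and (not x2 or not x3)
print(construct([[1, 2], [-1, 3], [-2, -3]], 3))  # → {1: True, 2: False, 3: True}
```

So in the P=NP world Sam imagines, the algorithm need not be graspable by humans for its consequences (search, optimization, proof-finding) to follow mechanically.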

Comment #14 March 3rd, 2014 at 12:45 pm

Sam #13: Couldn’t your interesting “contrarian argument” have also been applied to Fermat’s Last Theorem, for most of the 350 years until it was proved? We haven’t found an explanatory, human-scaled proof, so that’s weak evidence that it’s more likely that there’s just some accidental counterexample to it.

In both cases, I’d say, a key fallacy lies in your adjective “human-scaled”! It’s the nature of humans that they can keep increasing their scale by building on the work of their predecessors. So, yes, there was a Wiles-scaled proof of FLT, but not a Fermat-scaled proof—and someone making your argument in Fermat’s time simply wouldn’t have been thinking about the three centuries of advances that would be needed, to bring the “conceptual explanation” for why there’s no integer solution *within* human scale (at least, the scale of some humans!).

In the same way, I believe complexity theory *is* making progress on its fundamental questions—derandomization results, Ryan Williams’ circuit lower bound, and Mulmuley’s GCT program are just a few examples. The trouble is just that the progress looks unimpressive, *if* you incorrectly assume that P≠NP is an “us-scaled problem”! Once you adopt a different perspective—that there probably *is* a “human-scaled,” conceptual explanation for why P≠NP, just not one that the *current* generation of humans is high enough up yet to see—the progress seems fine. 🙂

Comment #15 March 3rd, 2014 at 12:48 pm

Serge #7:

Whenever a polynomial solution is found, it’s impossible to tell if that was because the problem was easy or because the mathematician was clever.

I don’t even understand the distinction you’re trying to make. I would say: the problem was found to be easy *because* the mathematician was clever.

Comment #16 March 3rd, 2014 at 12:49 pm

When are you people going to wake up and realize that Aaronson is lying to you?

…

I’m in the Simons Institute right now, and it is NOT sunny here.

Comment #17 March 3rd, 2014 at 1:05 pm

D #16: It’s true that it’s been raining on and off for a while, but the sun was out *when I put the post up* yesterday afternoon. And even with intermittent rain, it’s still much more pleasant to be outside than it is in Boston—believe me.

Comment #18 March 3rd, 2014 at 1:06 pm

Is there an analogous way to frame “thermodynamics-complete” problems in terms of the growth rate of the boolean circuit complexity of their corresponding classical problem?

Comment #19 March 3rd, 2014 at 1:26 pm

…this notion of “thermodynamics-complete” seems to be useful. Has it been made formal by anyone?

Comment #20 March 3rd, 2014 at 1:57 pm

Dear Scott,

the difference between your “future-people-scaled” promise in #14 and Sam’s argument(s) in #13 is that your #14 actually doesn’t contain any material argument. You only “derive” as much about P=NP as you “assume”.

A simple way to see it is to realize that your rationalization may be *exactly* used for the opposite result P=NP, too. The astronomically huge yet polynomial SAT algorithm may exist and there may even be a (compressed) human-scaled proof of its existence – an existential proof, not necessarily a constructive one – that we just haven’t found yet but some future generations surely may.

These comments are great but they don’t change the probability that P=NP at all. They are just saying “what other things we have to believe if we want to believe that P=NP or P!=NP”. And these extra corollaries may be deduced from both assumptions symmetrically.

But Sam’s comments #13 actually use some “empirical data” to which the probabilities of P=NP and P!=NP react differently, so they are arguments that help us discriminate, albeit not rigorously, and these arguments weakly point to P=NP.

What really supports Sam’s view is that many complicated enough algorithms do exist. These algorithms must always be about some “speedup” relative to the naive exponential/factorial brute-force algorithms. Maybe the speedup for SAT is organized at several levels: the right algorithm has output that isn’t really the final solution, but the output is another, much longer algorithm, which runs to create an even longer algorithm, and this very long algorithm, when it runs, spits out the result for the SAT problem. The number of levels may be arbitrary and the final algorithm in the sequence may still be in principle polynomial.

There’s no proof that such scenarios are impossible for SAT – and on the contrary, there are “similar” structures in other problems.

Cheers

LM

Comment #21 March 3rd, 2014 at 2:09 pm

California Scott is so much cooler than Massachusetts Scott. You should move to California.

This wonderfully informative post may inspire me to finally pick up that complexity theory text that’s been lying on my desk (specifically, The Nature of Computation by Moore & Mertens).

Comment #22 March 3rd, 2014 at 2:19 pm

Great post, but I’m a bit confused by a couple of points related to the Susskind paper:

1) Why is it the case that quantum circuit complexity of a physical system should not increase to the maximum in any arbitrarily small time? Say for concreteness we take a collection of electrons trapped in lattice defects interacting only via their spins. It seems like (if we take the electrons to be scattered randomly to avoid symmetries) this state should become incredibly complex under any duration of time evolution (although that complexity will be carried with very small amplitude). Is time then, for this discussion assumed discrete?

2) How much of what is written can be ported over to deterministic reversible classical systems? You contrast the quantum system with a thermodynamic one, which is indeed very different. However, if you instead compare it to deterministic reversible systems, much of what is said seems to still apply. Again for concreteness, let’s consider the Q2R cellular automaton discussed in Statistical Mechanics of Surjective Cellular Automata by Kari and Taati (essentially a reversible deterministic version of the 2d Ising model which still shows some degree of “mixing to equilibrium,” despite such a notion not really making sense due to recurrence). When restricted to a finite lattice, such an automaton acts essentially as a giant permutation matrix, and it will keep stepping along a cycle of this matrix well after apparent thermalization. Would one then also expect an increase in some suitable circuit complexity (say based on classical reversible logic gates)? If not, what is the key difference between repeatedly applying a large permutation matrix versus a large generic unitary one?
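Brent’s picture of a finite reversible system as “a giant permutation matrix” can be made concrete: iterating any permutation just walks around its cycles, and the whole state space recurs after the least common multiple of the cycle lengths, no matter how “thermalized” intermediate configurations may look. A toy illustration (my example permutation on 8 states; nothing specific to Q2R):

```python
from functools import reduce
from math import lcm  # Python >= 3.9

def cycle_lengths(perm):
    """Cycle lengths of a permutation given as a list mapping i -> perm[i]."""
    seen, lengths = set(), []
    for start in range(len(perm)):
        if start in seen:
            continue
        length, i = 0, start
        while i not in seen:
            seen.add(i)
            i = perm[i]
            length += 1
        lengths.append(length)
    return lengths

# A reversible deterministic update rule on 8 states is just a permutation of {0..7}.
perm = [3, 0, 4, 1, 6, 2, 7, 5]
lens = cycle_lengths(perm)
recurrence = reduce(lcm, lens, 1)
print(lens, recurrence)  # → [3, 5] 15
```

The recurrence time can be exponential in the system size, but (as Scott’s reply below the fold explains for the classical case) the *description* of the state never needs more than ~n bits, which is what caps the classical “complexity” so far below the quantum ~2^n.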

Comment #23 March 3rd, 2014 at 2:29 pm

There is a gap in your logic. Even if one had blow-up for special initial conditions, one would not expect blow-up for *arbitrary* initial conditions. I believe that the expectation is that any counterexample would be very special and that’s why it is not observed in nature. If one had a specific blow-up example in hand, one could compute how much it would blow up as a function of the scale. In particular, I think Tao’s blow-up should be physically noticeable at the scale we live at.

Comment #24 March 3rd, 2014 at 2:43 pm

It’s not clear to me why being able to build a “fluidic computer” entails [the ability to (…) keeps copying itself at smaller space and time scales]. Can someone explain the connection?

+1

Comment #25 March 3rd, 2014 at 2:54 pm

“Of course, such a result wouldn’t imply that real, physical water was in any danger of “blowing up”! It would simply mean that the discrete nature of water (i.e., the fact that it’s made of H2O molecules, rather than being infinitely divisible) was essential to understanding its stability given arbitrary initial conditions.”

I certainly agree that even if the Euler/NS equations allow for universal computation, we cannot expect to see memory and computation emerge in real-life fluids in nature. There could be several possible explanations for that, and when you look at it carefully it does not look simple at all!

One explanation would refer to the stochastic (or noisy, or even quantum) nature of the real-life implementation. However, it seems likely that a “finite time blow-up” result as well as robust computation will extend to various stochastic (and quantum) versions of the equation. This was actually discussed in Tao’s 2007 post.

Scott proposed the explanation that real-life fluids are discrete. Again there is no simple reason to think that computation and memory, and even finite-time blow-up (in a restricted finite sense), will not extend to the discrete case.

So it looks like none of the above explanations holds (all the) water 🙂, and this is not a simple matter.

My favorite approach would be to regard “no memory,” “no computation” or “no fault-tolerance” as representing a physical principle (perhaps of thermodynamical nature) which is external to the equations themselves. It is an interesting challenge to formalize mathematically such a principle.

Comment #26 March 3rd, 2014 at 3:12 pm

Scott #15

Thanks Scott for replying. Here’s the distinction I was making.

If P=NP, then there’s no absolute computable complexity – just a hierarchy of problems based on their relative complexity. Whenever a problem gets an efficient solution, that’s just an estimation of the relative intelligence that was used to solve it.

If P!=NP, then each problem has its absolute computable complexity. Whenever one gets an efficient solution, that represents as much an estimation of its intrinsic complexity as an indication of the intelligence that was used to solve it.

But, since we can’t tell in which model we live, P=NP is (probably) undecidable.

Comment #27 March 3rd, 2014 at 3:14 pm

Dániel #9:

Scott, are you saying that the only way to argue this goes through computational complexity? Isn’t it possible that complexity can be eliminated from the final version of the argument?

I’m of two minds about this question. On the one hand, yes, it would be great if we had an ultimate quantum theory of gravity (an UQToG) that would let us throw away the complexity-theoretic arguments like so much scaffolding, and say: aha, *this* is the true structure of the Hilbert space, which makes it manifest why the experiment can’t be done, in the same way that understanding the structure of QM makes it manifest why not being able to measure both the position and the momentum of the particle is not just a practical limitation (or, understanding special relativity makes it manifest why no-superluminal-signalling has to hold).

On the other hand, as Lenny stresses in his paper, one *can* construct “black hole” states that contain firewalls (or more generally, completely messed-up spacetime geometries)! And for that reason, any argument that says black-hole interiors can *never* contain firewalls must be wrong: it proves too much. Thus, at some level, the question *has* to boil down to: how hard is it to apply the unitary transformations that would create these bizarre spacetime geometries? Is it something that we’d ever expect to happen by natural physical processes? No? Then is there some simple operation *we* could perform that would create them? Still no? OK then, is the problem “thermodynamics-hard”? Is it even harder than that? Would it require solving an exponentially-hard computational problem? So, from this perspective, it’s really not surprising at all that complexity considerations would enter the story.

Comment #28 March 3rd, 2014 at 3:19 pm

Douglas Knight #23:

Even if one had blow-up for special initial conditions, one would not expect blow-up for arbitrary initial conditions.

My apologies; this is just a confusion over the word “arbitrary.” *Of course* blowup might only happen for extremely special initial conditions. What I meant was: if one wanted to understand why blowup can *never* happen, regardless of the initial conditions, then (if the NS existence and smoothness conjecture were false) one would need to use deeper facts about the physical nature of water than those modeled by the NS equations.

Comment #29 March 3rd, 2014 at 3:23 pm

Serge #26: What you write still makes zero sense to me. I don’t see that either P=NP *or* P≠NP has anything like the implications that you say. *Regardless* of which one is true, every computational problem *has* an “intrinsic” complexity (i.e., either a fastest possible algorithm or an infinite sequence of ever-faster algorithms), but how much we *know* about its intrinsic complexity is obviously a function of human cleverness. And none of this constitutes anything remotely like an argument that P vs. NP should be independent of set theory.

Comment #30 March 3rd, 2014 at 3:32 pm

Brent #22:

Thanks for asking questions that I can actually answer, unlike some of the others here! 😉

1) Why is it the case that quantum circuit complexity of a physical system should not increase to the maximum in any arbitrarily small time?

Because if the system has been evolving for at most T time steps (starting from some “simple” initial state, like the all-0 state), then its quantum circuit complexity can be at most ~T*n, where n is the number of qubits. (I.e., a circuit to produce the current state can be obtained by simply simulating the physical evolution that led to that state.) On the other hand, we know from a counting argument that the *maximum* quantum circuit complexity of an n-qubit pure state is ~2^n. Combining these facts, it follows that the quantum circuit complexity can’t possibly achieve its maximum until T ~ 2^n / n.

2) How much of what is written can be ported over to deterministic reversible classical systems?

With an n-bit deterministic classical system, the central difference is that the maximum possible complexity is only ~n, rather than ~2^n. And therefore, the “complexity maximum” should generally be reached after only ~n time steps (i.e., the same timescale as when the entropy maximum is reached), rather than ~2^n time steps. (This is just another reflection of the fact that n-qubit Hilbert space is much, *much* bigger than n-bit classical configuration space: the former is doubly-exponential in size, while the latter is “only” singly-exponential.)

Comment #31 March 3rd, 2014 at 3:41 pm

Gil #25:

Scott proposed the explanation that real-life fluids being discrete. Again there is no simple reason to think that computation and memory and even finite time blow up (in a restricted finite sense) will not extend to the discrete case.

Actually, I think there is. Tao’s construction involves exponentially shrinking the relevant length-scale as the computation proceeds. But the discrete nature of water imposes a natural “cutoff” (and moreover, a severe one!) on how long any such exponential shrinking process can continue. If there are 3.3×10^25 H2O molecules in a liter of water, and each computation step halves the number of relevant molecules, then you can do at most 84 computation steps until you’ve reached an individual water molecule and your differential equation is meaningless.

(Having said that, if you simply meant: it should be possible to concentrate a *lot* of energy in a *relatively small* region, then I think that’s probably right! Isn’t that what turbulence does?)

Comment #32 March 3rd, 2014 at 3:43 pm

Dániel #19: No, I’m not aware of anyone having formalized “thermodynamics-completeness” (any more than they’ve formalized “AI-completeness,” or my own notion of “sanity-completeness”). 🙂 It would be interesting to try.

Comment #33 March 3rd, 2014 at 3:57 pm

Rahul #6:

There’s a fair bit of legitimate experts who do believe P=NP, right? Say 10-20% of the research community?

Based on personal experience, my estimate would be that maybe 80-90% of experts express belief that P≠NP, while the rest express various shades of agnosticism, or deflect the question with a joke. I can’t remember meeting *any* expert who ever expressed serious belief that P=NP.

Now, regarding the “agnostics”, there are two important points to understand. The first is that many of these people enjoy taking contrarian positions on countless other topics as well—thereby calling into question whether they’re really trying to act as Bayesian truth-seekers, or pursuing other objectives like getting a rise out of people, or patting themselves on the back for their open-mindedness. The second, more important point is that *when doing actual research—as opposed to speculating on blogs or over beer—absolutely everyone proceeds on the assumption that P≠NP.* I.e., no one *ever* says “sure, I proved this problem NP-hard to approximate, but there’s no point in writing it up, since probably P=NP anyway.”

If you like, then, the belief that P=NP—much like the belief that quantum mechanics is false—has been found over the decades to be sterile and unproductive as the basis of a research program, *even supposing it were true* (and there are no indications that it *is* true).

Comment #34 March 3rd, 2014 at 4:08 pm

Scott

If there is no proof, that means there is no reason a priori to prefer your arguments over those of Lubos. Expertise is not enough.

And the fact that Lubos is difficult to deal with doesn’t change that.

The quote attributed to you on Wikipedia concerning Mozart is completely unconvincing. I am sure you were trying to communicate to the lay public (such as myself), but it is just as compelling to imagine that there is nothing fundamentally special about Mozart as a phenomenon of nature as to believe that there is. The quote only has literary value.

I hope we can agree on that.

Comment #35 March 3rd, 2014 at 4:30 pm

Ignacio #34: The idea that “if there’s no proof, then there’s no reason a-priori to prefer any argument over any other one” is laughable. And what gets me the most is the double standard: in other parts of life, you don’t believe that, Lubos *certainly* doesn’t believe it, no sane person believes it.

You remind me of the guy interviewed on Jon Stewart who earnestly explained that, since there’s no proof that turning on the LHC will destroy the world, but also no proof that it *won’t* destroy the world, the only rational inference is that there’s a 50% chance it will destroy the world. John Oliver’s deadpan response at 3:30 is classic: “I’m … not *sure* that’s how probability works …”

And regarding Mozart, you completely misunderstood my point, in the same weird way that Lubos did. I wasn’t *claiming* there was anything “fundamentally special” about Mozart. Mozart could’ve been replaced by any other composer in that example. The point was simply that, if P=NP (and the algorithm were efficient in practice, yadda yadda), then the ability to program your computer to efficiently *recognize* some genre of art, music, etc. would imply the ability to program your computer to efficiently *create* art or music in the same genre. Which is a straightforwardly true statement.

Comment #36 March 3rd, 2014 at 4:41 pm

Scott: the comparison with FLT is interesting. Before Wiles, what reason did we have to think FLT was true? Okay, certainly before Wiles the connection with elliptic curves/modularity/etc. had been noted. But it seems that what was primarily driving the belief in FLT was just “aesthetic concerns”; i.e., math would be simpler if it were true. Maybe aesthetics are also what support P != NP. But in some sense the situation is even worse than FLT before Wiles, because, as I just mentioned, there was at least a program for proving FLT at the time, whereas with P vs. NP we have only knowledge of which techniques cannot work.

My view is that aesthetics, and not “empirical evidence”, ought to be the basis of our science (at least, of the very abstract parts of science). This explains why the P != NP consensus is likely correct: it leads to the more mathematically beautiful and fruitful theory.

Comment #37 March 3rd, 2014 at 4:43 pm

Scott.

I said expertise is not enough. The arguments themselves may or may not be.

I just read the Mozart quote in Wikipedia. The implication is that the process of creating a piece of music is a fundamentally different kind of algorithm from a complexity-theory point of view. I have no idea what would make you think so.

From a naive point of view, these kinds of arguments sound vaguely like ones that Penrose used to make (I don’t know if he still does) concerning consciousness.

I am sufficiently intrigued to look into this more, but one has to consider the possibility Lipton seems to be alluding to: that, functionally, there is no reason to suppose a clever algorithm cannot solve any problem that a clever mathematician can solve.

Again, there is nothing fundamentally special about the process taking place inside the brain of a mathematician. If he can do it in polynomial time, so can a computer, even a classical one for all we know.

If your Wikipedia quote doesn’t imply otherwise, then I’d say it is not apt at conveying its intended message, as far as I can discern from what I have read so far.

Comment #38 March 3rd, 2014 at 4:55 pm

Having left physics to pursue a career in medicine, reading wonderful articles like this, about topics in the field (strings, cosmology) that I started to work in, brings me an amount of joy that, if not infinite, is certainly exponential. Thanks Scott 🙂

Comment #39 March 3rd, 2014 at 5:20 pm

Ignacio #37

Scott is not claiming that composition or artistic expression is some magical process that could never be implemented by a computer. The point is that if P=NP, then creating good music is no harder than appreciating good music. Factoring is no harder than multiplying. Verifying a mathematical proof is no harder than producing a proof.

Comment #40 March 3rd, 2014 at 5:25 pm

Lubos #4

“So complexity theory may decide whether something may be quickly calculated or not but this has never an impact on the basic physical processes because computation is *made of* basic physical processes, not the other way around.[…] If something may be computed quickly, it may also be computed quickly in the real world. If it may not, it may not. But physics predicts some outcomes for the processes whether they’re useful (“calculation”) or not.”

“What he [Mozart] was special at were some “circuits” (and programs inside them) inside his brain. They’re fully analogous to the SAT solver.”

So, even though computations can’t possibly simulate efficiently all physical processes, the understanding of all physical processes (the truth of Nature), can ultimately be discovered by standard algorithms (i.e. our brains)?

Comment #41 March 3rd, 2014 at 5:26 pm

Peter

That is a human statement about human brains, which do find some tasks harder than others, and has nothing to do with P = NP. Indeed different brain constructions and/or software by all appearances can make some difficult tasks seem trivial.

So it is not helpful in any way with the problem at hand.

Again, expertise is not enough to decide this question. Experts have been wrong many times before and will be wrong many times in the future.

Better arguments are needed from what I can see.

Comment #42 March 3rd, 2014 at 5:29 pm

Rahul #6:

A sort of poll about P!=NP

http://www.cs.umd.edu/~gasarch/papers/poll.pdf

Comment #43 March 3rd, 2014 at 5:34 pm

Even if P improbably turned out to equal NP, wouldn’t appreciating music actually have to be shown to be in P before composing could be shown to be in NP? What would a program to appreciate music have to look like to make it analogous to verification in the NP sense? Would recognizing music as belonging to a particular genre be enough? I can imagine what the world would look like if finding mathematical proofs could be completely automated, but I don’t understand how something as loosely defined as music appreciation could map to a computable process even if P=NP.

Comment #44 March 3rd, 2014 at 5:40 pm

Ignacio #41

Perhaps I’m misunderstanding, but you appear to be claiming that (non)equivalence between efficient verification of solutions and efficient finding of solutions is merely “a human statement about human brains” and “has nothing to do with P=NP”. Reflect on the definitions of P and NP and consider why that’s funny.

Now, of course it may turn out to be that certain problems (e.g., composition) are just as “easy” as their corresponding verifications. For instance, checking whether a list is sorted and sorting a list are both “easy” (in the sense of being in P).

P=NP would be claiming something much, much stronger. Literally *all* problems that had easy verification would also have easy solutions. Any problem that you could check an answer for “easily” would be solvable from scratch, just as easily.
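Peter’s finding-vs-verifying asymmetry can be made concrete with a toy example of my own (SUBSET-SUM; nothing here is from the thread): checking a certificate is fast, while the only general-purpose search we know is brute force.

```python
# SUBSET-SUM: verifying a certificate is polynomial in its length, while the
# only general search method we know is brute force over all 2^n subsets.
from itertools import combinations

def verify(numbers, target, certificate):
    """Polynomial-time check that `certificate` is a sub-multiset of
    `numbers` summing to `target`."""
    pool = list(numbers)
    for x in certificate:
        if x not in pool:
            return False
        pool.remove(x)
    return sum(certificate) == target

def search(numbers, target):
    """Exponential-time search: try every one of the 2^n subsets."""
    for r in range(len(numbers) + 1):
        for subset in combinations(numbers, r):
            if sum(subset) == target:
                return list(subset)
    return None

nums = [3, 34, 4, 12, 5, 2]
cert = search(nums, 9)              # the hard direction: finding a subset
print(cert, verify(nums, 9, cert))  # the easy direction: checking it
```

If P=NP, `search` would be essentially as cheap as `verify` for every such problem at once; that, and nothing about brains or Mozart, is the content of the claim.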

Comment #45 March 3rd, 2014 at 5:48 pm

Lubos Motl#4

Regarding your point that computation is not fundamental:

If a system acts according to a law, it seems that the system can be said to follow an algorithm; i.e., the system, or the behavior of the system, can be simulated or automated.

With this view, can physical laws also be viewed as computation in some sense? If that’s the case, does any tool used to study computation also apply to physical laws?

Comment #46 March 3rd, 2014 at 6:10 pm

ks1 #45

I guess we could imagine situations where some law of nature turns out to be described by an uncomputable function?

Comment #47 March 3rd, 2014 at 6:22 pm

Scott, I don’t see any difference between your clarification and your original statement. It sounds like you’re trying to explain something that isn’t observed: why water doesn’t blow up under any initial conditions. This can’t be observed because there are too many initial conditions to test, and we can’t control them well enough anyhow.

There are two questions: does Navier-Stokes explain fluid flow (up to a cut-off scale), and does NS blow up. I believe that turbulence is a smooth but chaotic solution to NS, not directly related to either question. A separate belief is that physicists are very happy with NS and thus conclude from observing water not blowing up that the equation does not have generic blow-up. If it were proved to have generic blow-up, that would prove them wrong, but if it is merely proved to have special blow-up, they aren’t going to change their minds about anything, while you seem to be conditionally rejecting the goodness of the approximation.

Of course the equation cannot be accurate down to arbitrarily small scales. Equations like NS usually come with a claim about how error grows with the scale. I believe that Tao’s construction concentrates all the energy at a smaller scale at each step. Thus it may only be able to run for 84 steps, but it winds up with all the energy concentrated in an area 2^84 times smaller—a blow-up by any standard, certainly ruled out by flowing water. I imagine that the limiting factor is not the discrete nature of water, but that concentrated energy would break apart molecules. One can imagine a construction that doesn’t concentrate all the energy, but whose peak grows slowly, perhaps merely linearly in the number of steps, and thus the blow-up is hard to observe.

Comment #48 March 3rd, 2014 at 6:38 pm

Scott,

I haven’t yet read all of this (long!) post, but I think that near the beginning you overstate the difference between classical and quantum systems where equilibration is concerned. You describe a gas of classical particles as coming quickly to equilibrium (diffusing over the whole box), and say that with the laws of physics one can calculate how fast.

In fact, if you use the laws of physics, that is, of classical mechanics for the many-particle system, it’s not so simple, and this is often poorly handled in textbooks and courses. (Often a probabilistic treatment called Boltzmann’s H-theorem is used, which rather begs the question.) If we represent the initial state of the system with a probability distribution in phase space (the space of the possible simultaneous positions and momenta of all the particles), then one can show using Liouville’s theorem that the evolution of the system does not lead to an increase in entropy, which means the probability density (if initially it is concentrated near one point) does not become more uniformly spread out in phase space.

These things are studied in ergodic theory, but unfortunately ergodicity is not enough for equilibration. If the dynamics (i.e. the time evolution under classical mechanics, if Greg K is listening) has the property called “mixing”, then the probability distribution will approach the equilibrium one (which e.g. is uniform over the box). Informally, mixing is like what happens when you drop a dot of ink into a clear fluid in which we assume there is no diffusion, and then stir the fluid. Eventually it looks like the fluid is a uniform grey. (The idea and the analogy are due to Gibbs.) Because of Liouville the convergence is only “weak convergence”, meaning that the average of any function under the probability distribution approaches the average of the same function in the limit distribution. If instead you look at the probability distribution itself, at any finite elapsed time, very close inspection would show that it is high in some places and low in others. It is just that these regions are increasingly mixed up by the “chaotic” dynamics, so that averaging any reasonable (“measurable”) function gives a result that approaches the average in the equilibrium distribution. Passage to the mathematical limit is irreversible (you cannot recover the route by which you got there), and that is how the increase of entropy comes about—by weak convergence. Incidentally, none of this is refuted by the existence of Poincare recurrence.
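This mixing picture (coarse-grained averages approaching equilibrium while the fine-grained density stays structured) can be illustrated numerically. Here is a toy sketch of my own, using Arnold’s cat map as the area-preserving “chaotic” dynamics rather than a real gas:

```python
# Toy demonstration: Arnold's cat map (x, y) -> (x+y mod 1, x+2y mod 1) is
# area-preserving (a discrete analogue of Liouville's theorem), yet a cloud
# of points started in a tiny cell spreads so that coarse-grained averages
# approach their uniform-distribution values: "weak convergence" in action.
import random

def cat_map(x, y):
    """One step of Arnold's cat map on the unit torus (area-preserving)."""
    return (x + y) % 1.0, (x + 2.0 * y) % 1.0

def left_half_fraction(points):
    """A coarse observable: fraction of points with x < 1/2."""
    return sum(1 for x, _ in points if x < 0.5) / len(points)

random.seed(0)
# All 20000 points start inside a 0.01 x 0.01 corner cell.
pts = [(random.uniform(0, 0.01), random.uniform(0, 0.01)) for _ in range(20000)]
print("step 0 :", left_half_fraction(pts))   # 1.0: everything is on the left
for _ in range(15):
    pts = [cat_map(x, y) for x, y in pts]
print("step 15:", round(left_half_fraction(pts), 2))  # near the uniform value 0.5
```

The point-cloud’s total “volume” never grows, but any reasonable coarse observable converges to its equilibrium average, just as the comment describes.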

The quantum case is similar, if one uses the density matrix in place of the prob density in phase space. There is a quantum version of Liouville’s theorem. In *both* cases it never “really” reaches equilibrium, if you look closely enough at the distribution, but keeps evolving (as you said for the quantum case). Of course there is a difference, in that classical phase space is continuous and the distribution can be divided into thinner and thinner regions of high and low, which is not possible in the quantum theory (where there are only finitely many states with energy eigenvalues in a given interval). I believe one should have approach to equilibrium in the quantum case by a version of weak convergence . . . but here my knowledge of the subject gives out.

It is very hard to prove rigorously that a classical system actually is mixing, or even that it is ergodic. Both were proved for a gas of hard spheres, by Sinai.

All these things are fairly well known in the fundamentals of statistical mechanics (and relevant to quantum measurement theory also). (I’m sure you know all this Scott, but hope you won’t mind my lengthy comment, which perhaps may be of interest to some readers.) Yes, they mean that if after a long time you want to reconstruct the initial state/probability distribution, it will be computationally hard. It is all to do with exponentially-growing sensitivity to the initial conditions, in the classical language. All of this seems pretty obvious. On the other hand, the argument as you present it seems to rest on trying to view the density matrix both ways—as both the equilibrium one, and the non-equilibrated, evolving one—and saying that ordinary entropy doesn’t help. I suspect it does, if you take a consistent view. (The looking-at-it-both-ways also occurs when you say it is in equilibrium, “until the next Poincare recurrence time.” The equilibrium distribution is time independent, recurrence is then irrelevant.)

I’m not yet convinced that bringing in computational complexity adds much insight to these standard ideas, but I suppose I’ll have to read on.

Again, sorry for the long comment on your interesting post.

Comment #49 March 3rd, 2014 at 7:40 pm

Nick #48

Very interesting post – I had just read about the (related?) mixing paradox and how it relates to indistinguishable particles

http://en.wikipedia.org/wiki/Gibbs_paradox

Comment #50 March 3rd, 2014 at 8:05 pm

Lenny #1, Ben Bevan #10, John Stricker #12, Sid #21, William von Witt #38: Thanks so much for the kind remarks! I can’t please everyone, but it makes my day to know that at least some readers appreciate what I do here.

Comment #51 March 3rd, 2014 at 8:10 pm

Fred #49,

Thanks, but that is a different kind of mixing.

Comment #52 March 3rd, 2014 at 8:18 pm

Nick Read #48: I completely agree that there’s no great difference in kind between the mixing behavior of a classical probabilistic system and that of an open quantum system. But the key point is that, with AdS/CFT, we’re specifically interested in a closed quantum system—one that evolves in a strictly unitary way and stays in a pure state for all time. And as I said in the post, we’re not only interested in the sorts of observables that would normally be of physical interest, like the average density or pressure at a given point. Instead, we’re interested in all observables, including ones that might require enormous resources and complete control over all the particles in the system to measure. (Why are we interested in those observables? Precisely because, despite how “complex” they look, in the AdS dual description they might get mapped to something extremely simple!) Since this is speculative physics, 🙂 we’re also interested in much, much longer timescales than would normally be of physical interest.

Once you’ve specified that this is what you’re interested in, it’s very easy to see why the long-time quantum behavior is different from the long-time behavior of either a deterministic or a probabilistic classical reversible system. It’s different from deterministic classical behavior, simply because Hilbert space is so much bigger than classical phase space. So for example, while the classical system will get into recurrences after “merely” exponential time, the quantum system will take doubly exponential time before a Poincare recurrence. The quantum behavior is also different from probabilistic classical behavior, because the latter is irreversible even in principle, and indeed loses all information about the initial state on a very short timescale (compared to the humongous timescales we’re talking about).

Comment #53 March 3rd, 2014 at 8:35 pm

Douglas Knight #47: OK, it’s a fair point. The statement “real, physical water doesn’t blow up regardless of its initial conditions” is not something that we have experimental evidence for the truth of (simply because there are too many possible initial conditions), and one that indeed could well be false, depending on exactly what one means by “initial conditions” and “blow up.” (E.g., what if we imploded a sphere of water at 99.9999999% of the speed of light? Could we create a nuclear waterweapon? 🙂 )

So let me try again with a more careful phrasing. We have excellent reason to believe that singularities should never occur in real physical systems, insofar as they represent a breakdown of the laws of physics themselves. (Thus, even the singularities in black holes should get resolved by quantum gravity, etc.) So if it’s indeed the case that the NS equations can produce a singularity in finite time, even for well-posed initial data, then the most physically-relevant thing to say about that is that we know that the NS equations don’t capture all the relevant physics anyway. In particular, they break down at microscopic scales, since they don’t even “know” that water is made of molecules. Thus, absent further evidence, any singularity in the NS equations should be assumed to be telling us much more about the NS equations than about the actual behavior of water.

Comment #54 March 3rd, 2014 at 8:47 pm

Sure, but we already know that NS doesn’t apply at microscopic scales, simply because it is a continuum model. One interpretation of the conjecture is that if the initial conditions are in the regime to which NS applies, then the future evolution stays in that regime.

Comment #55 March 3rd, 2014 at 8:51 pm

Vadim #43:

Even if P improbably turned out to equal NP, wouldn’t appreciating music actually have to be shown to be in P before composing could be shown to be in NP?

You switched “P” and “NP” in the second part of your question, but other than that, yes. 🙂

You might enjoy reading Lance Fortnow’s The Golden Ticket, where he indulges in some sci-fi scenario-building about the actual steps by which a P=NP proof could get converted into the far-reaching practical consequences people talk about. In building such scenarios, a crucial observation is that you get to use the universal fast search algorithm more than once—and in particular, you get to use it to find other algorithms for you!

So for example, you could do a rapid search for a low-complexity neural-network model that recognizes the known works of Beethoven (let’s give Mozart a break 🙂 ), while rejecting other works. And then you could do another search for new musical works that were recognized by your neural net. What, you say you don’t know how to build such a neural net model, for which you’d be confident that the results would generalize in the right way (and wouldn’t fall prey to overfitting)? Well, your ability to solve all search problems efficiently might again be able to help with that, by letting you search for neural net models with the best generalization properties. And so on. Any specific scenario of this sort is bound to be speculative, but I think it does give a fair sense of how much the world might be expected to change, if all verifiable search problems were suddenly to become solvable efficiently.

Comment #56 March 3rd, 2014 at 8:58 pm

Wow, that would indeed be a strange world. Thanks for the explanation.
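The “use the oracle more than once” scenario from #55 can be caricatured in code. Everything below is a made-up sketch: `np_search` is an imaginary stand-in for the hypothetical universal fast search algorithm (here just brute force on tiny instances), and the “recognizer” is a toy pattern-matcher, not a real neural net:

```python
# Hypothetical sketch only: `np_search` stands in for the universal fast
# search algorithm a practical P=NP proof would yield. Here it is plain
# brute force over bitstrings, fine for 4-bit toy instances.

def np_search(verifier, size):
    """Return a bitstring of length `size` accepted by `verifier`, or None."""
    for i in range(2 ** size):
        candidate = format(i, "0{}b".format(size))
        if verifier(candidate):
            return candidate
    return None

training_examples = ["0101", "0111"]  # the "known works" of our toy composer

def fits_training_data(pattern):
    """Toy model-finding verifier: does the pattern's 2-bit prefix match
    every known work? (Stand-in for fitting a small recognizer circuit.)"""
    return all(work.startswith(pattern[:2]) for work in training_examples)

# First use of the oracle: find a "model" that fits the known works.
model = np_search(fits_training_data, 4)

def recognized_new_work(x):
    """Second verifier: a string the learned model accepts that is NOT one
    of the known works (stand-in for composing a new piece)."""
    return x.startswith(model[:2]) and x not in training_examples

# Second use of the oracle: find a brand-new "work" the model recognizes.
new_work = np_search(recognized_new_work, 4)
print(model, new_work)
```

The design point is the bootstrapping: the output of the first search (`model`) becomes part of the verifier for the second search, which is exactly why a single fast search routine would compound so dramatically.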

Comment #57 March 3rd, 2014 at 9:00 pm

Just one comment about P=NP: by the time hierarchy theorem, if P=NP then either NP!=PSPACE or PSPACE!=EXP has to hold (since P ⊆ NP ⊆ PSPACE ⊆ EXP, and the time hierarchy theorem gives P != EXP). Those who argue that the equality P=NP is the “natural assumption”, more “natural” than the inequality P!=NP, should also explain why one of the inequalities NP!=PSPACE, PSPACE!=EXP is more “natural” than the corresponding equality in that case.

Comment #58 March 3rd, 2014 at 9:04 pm

Peter #44

I will check your statements. The question seems interesting. My statement is a reaction to the quote in Wikipedia by Scott, not a mathematical statement. In fact I do not believe there is any correspondence whatsoever between the two. The Mozart quote is based on a bias in thinking that what Mozart did must be inherently difficult. But it is my observation that it came easily to him.

I am self-consistent in admitting that my reaction to Scott’s statement is also simply literary. But then again I am not a complexity theory worker.

The question of whether the universe is a computer does not appear to me to be fundamental either. I am also thinking about that some more.

Comment #59 March 3rd, 2014 at 9:17 pm

Sam Hopkins #36: I don’t think there’s even a question that aesthetics plays a huge role, both in math and in theoretical parts of science. If the truth weren’t so often simpler—and yes, more beautiful—than it could’ve been a priori, then math and science couldn’t have made anything remotely like the progress that they have.

Now, it’s interesting to ponder (and plenty have) why truth and beauty go together so often. Is it just selection bias—i.e., that we’re drawn to the sorts of questions that tend to have simple, beautiful answers, or that we learn in our scientific training to focus on such questions? Or is it also something “objective” about physical or mathematical reality?

In any case, math and science don’t sit in limbo waiting for the answers to these chin-stroking mysteries. In some sense, the entire game we play, going back to Archimedes, is to accept the truth/beauty/simplicity connection as an axiom and then exploit it. And even if we still can’t completely explain why the axiom holds, by now it’s justified itself so many trillions of times over by what’s come out of it that one need not have the slightest hesitation or defensiveness about adopting it.

Comment #60 March 3rd, 2014 at 9:52 pm

@Scott #28 et al.

>> blowup might only happen for extremely special initial conditions

I think “blowup” is a bit misleading, so just to clarify what it really means: the question is whether one tiny drop (or several) of the fluid with mass epsilon could be accelerated to a velocity 1/sqrt(epsilon) due to turbulent energy transfer to (arbitrarily small) regions. If the case epsilon = 0 were actually encountered in finite time, then we would have the “blowup”.

The whole fascination of turbulence is that this may very well be possible for ‘reasonable’ boundary conditions, with a laminar stream developing turbulence (e.g. empirically the velocity distributions of turbulent fluids have fat tails).

Of course, molecules have a finite mass and therefore epsilon = 0 cannot actually be observed in real fluids.

I understand that T.T. tries to find an argument that in some special cases of initial conditions (this is where the ‘fluid computer’ shows up) the “blowup” has to happen, which would be enough to settle the Clay prize, I think; but it does not follow that a “blowup” requires special initial conditions, imho.
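The epsilon-scaling above is just kinetic-energy bookkeeping, and can be checked in a few lines (my own illustration; the numbers are arbitrary):

```python
# If a parcel of mass eps carries a fixed kinetic energy E = (1/2) m v^2,
# then v = sqrt(2E/m); with E = 1/2 this is exactly the v = 1/sqrt(eps)
# scaling in the comment, and v diverges as eps -> 0.
import math

E = 0.5  # fixed kinetic energy handed to the parcel (arbitrary units)

def velocity(mass, energy=E):
    """Speed of a parcel of the given mass carrying the given kinetic energy."""
    return math.sqrt(2.0 * energy / mass)

for eps in (1.0, 1e-2, 1e-4, 1e-6):
    print("mass %g -> velocity %g" % (eps, velocity(eps)))
```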

Comment #61 March 3rd, 2014 at 10:03 pm

Ignacio and others: I have an announcement.

Given how many confusions and weird misinterpretations it’s spawned, I, Scott Aaronson, hereby formally disown my statement about how “everyone who could appreciate a symphony would be Mozart” if P=NP. In fact, I just edited the Wikipedia page on P vs. NP to remove that statement (though if someone reverts it, I don’t have time to engage in an edit war).

For the record, I wasn’t trying to say anything about the specialness of Mozart, the transcendent nature of human creativity, or the impossibility of AI. Even if P≠NP, I still think it’s possible that all human creative activities could someday be taken over by computers—since after all, the computers wouldn’t need to be foolproof at solving NP-complete problems; they’d just need to be better than us! (It’s like the joke where one hiker says to the other: “Yes, I know I can’t outrun the bear. I’m not trying to outrun the bear; I’m just trying to outrun you.”) P=NP might make it easier to automate human creativity, but it’s far from clear that it’s either necessary or sufficient: at any rate, that’s a separate discussion.

Peter Nelson #39 hit the nail on the head: I wasn’t talking about the difference between humans and machines; I was talking about the difference between finding and verifying. Alas, I’ve learned my lesson: people are so fixated on questions like “whether Mozart’s genius can be replicated by a machine,” that if you use such an example to try to motivate P vs. NP, they’ll inevitably assume you’re talking about issues that you aren’t. Then later, when they realize that P vs. NP isn’t about those issues at all, they’ll blame you for having misled them.

Well, I don’t want to mislead anyone, not even by accident, and that’s why I’m retiring the statement. If Lubos or others continue to flog this dead horse, then I’ll continue to explain the reasonable, true thing that I meant—but otherwise, I’m happy to adhere to a “no-first-Mozart” policy.

Comment #62 March 3rd, 2014 at 10:37 pm

Everyone: I’ve decided to write a new post soon about the scientific case for P≠NP. For now, please refrain from further discussion of that topic on this thread: any further comments about it will be summarily deleted.

I’m really excited about Lenny’s and Terry Tao’s new ideas, and it breaks my heart to see the discussion of them get hijacked by a much more tired debate.

Comment #63 March 3rd, 2014 at 11:13 pm

“If we could prove such a statement, then presumably we could also prove PSPACE⊄BQP/poly”

Does one really need something that strong to prove what Susskind is trying to do? This seems to require a stronger statement than necessary. If I’m reading this correctly, he doesn’t really need that the quantum circuits get more complicated over time as arbitrary quantum circuits; a similar clock idea should also work even if one insists on uniform characterizations. This would still be stronger than something like showing that PSPACE⊄BQP, which is of course also far outside current results.

Regarding Tao’s work (I don’t know if this comparison has been made), the fluid computer idea seems at least superficially similar to constructions in cellular automata, like those in Conway’s Game of Life. In that context, it went very quickly from the construction of the “Glider Gun”, which allowed configurations to grow in size without bound, to the result that the game was in fact Turing complete. However, in this context, I’m not sure that anyone has any idea what specific problem should be described as potentially Turing complete. One almost imagines that one strong possible anti-NS conjecture might be that finite-time blowup can occur, and that whether a given set of initial conditions, described in some natural way, leads to finite-time blowup is equivalent to the Halting Problem.
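For readers who haven’t seen the Game of Life constructions mentioned above, the glider (the pattern whose guns and collisions power the universality proofs) can be simulated in a few lines. This sketch is my own, not from Tao’s paper:

```python
# Minimal Game of Life, enough to watch the glider translate itself: after
# four generations the pattern reappears shifted one cell diagonally.
from collections import Counter

def step(cells):
    """One generation on an unbounded grid; `cells` is a set of live (x, y)."""
    counts = Counter((x + dx, y + dy)
                     for x, y in cells
                     for dx in (-1, 0, 1)
                     for dy in (-1, 0, 1)
                     if (dx, dy) != (0, 0))
    # Birth on exactly 3 neighbors; survival on 2 or 3.
    return {c for c, n in counts.items() if n == 3 or (n == 2 and c in cells)}

glider = {(1, 0), (2, 1), (0, 2), (1, 2), (2, 2)}
cells = glider
for _ in range(4):
    cells = step(cells)
print(cells == {(x + 1, y + 1) for x, y in glider})  # True: moved diagonally
```

Streams of such self-propagating patterns are what the Life universality constructions use as signals, which is the analogy to signals propagating in Tao’s hypothetical fluid computer.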

Comment #64 March 3rd, 2014 at 11:49 pm

Joshua #63: Well, circuit complexity is useful if you want to first fix the number of particles (or qubits, etc.), then talk about the “complexity” of a specific state with that number of particles, rather than talking only about the “complexities” of infinite families of states with increasing particle numbers. But as you correctly point out, the distinction between PSPACE⊄BQP/poly and PSPACE⊄BQP probably isn’t even relevant anyway at this discussion’s level of granularity.

As for the analogy between what Tao is doing and universality constructions in Conway’s Game of Life, Tao makes that analogy explicitly somewhere in his paper!

Comment #65 March 4th, 2014 at 12:50 am

I guess the lesson there is I should actually read papers before I comment on them!

Comment #66 March 4th, 2014 at 12:55 am

Coming back to Lenny’s and Terry Tao’s new ideas: Can someone comment on the possibilities for empirical validation of any part of these?

i.e., say, the blowup of NSE (even for some special initial condition): would it be observable at all? With techniques like laser Doppler anemometry, or something even more advanced, can these increasing velocity fields of turbulence at small scales be observed?

Similarly for Lenny’s ideas. Of course, a “proof” would be impossible at this stage, but are there even small experiments that could strengthen these elegant hypotheses?

If one wanted to falsify Lenny’s hypothesis, is it even possible, and if so what are some things one could empirically show to do so?

Comment #67 March 4th, 2014 at 1:06 am

Douglas Knight #23:

If this is a length scale we are talking about, what sort is it? Tens of nanometers? Microns? Just curious.

Or am I totally misunderstanding what “scale” means in this sort of discussion?

Comment #68 March 4th, 2014 at 1:32 am

Tao’s approach looks really inspired; it makes me wonder if this could also be used to better classify the Maxwell-Dirac equation. I.e., if we could see a similar situation there, that would give a strong theoretical hint at the necessity of a discretized spacetime background.

Comment #69 March 4th, 2014 at 1:44 am

Yes, length scale. I don’t know at what scale NS is supposed to break down, but, yes, probably between 10nm and 1um. The important point is the number of orders of magnitude over which NS applies. You can increase this by increasing the bulk scale. So a cubic millimeter is only 3 orders of magnitude up from the micron scale, but it is quite reasonable to talk about a cubic kilometer of water, so 9 orders of magnitude. At that point, the 2 orders of magnitude ambiguity between 10nm and 1um is not so important.

Comment #70 March 4th, 2014 at 1:58 am

Rahul #66: Both papers are best seen as deep and fascinating contributions to mathematical physics (or maybe ‘conceptual physics’ in Lenny’s case). For both of them, it’s extremely hard to imagine an experiment that would bear directly on the relevant issues, though for different reasons.

In Tao’s case, he simply isn’t talking about the behavior of real fluids: he’s talking about the Navier-Stokes equations. And as I said, we already know that those equations cease to be a good model for real fluids, even when the equations are pushed MUCH less hard than Tao’s type of investigation pushes them! Because his interest is in the differential equations, rather than in the fluids that originally inspired the equations, an experimental demonstration of an all-water universal computer wouldn’t be necessary or sufficient for his project even if it were technically feasible. (And FWIW, my guess is that building a universal quantum computer would be trivial by comparison. 🙂 )

In Lenny’s case, a big part of the problem is that, in order to confirm or rule out any of the experimental predictions, you’d need to jump into a black hole, meaning that you’d die moments after learning the answer and be unable to share it with anyone else. Even if you don’t mind that, other problems are that you’d first need to travel to a black hole, surround it with photodetectors capable of collecting all the Hawking photons (and a quantum computer to analyze the results), and wait ~10^69 years for the black hole to evaporate enough. Oh, and even then, the “predictions” might only work in the hypothetical AdS/CFT thermofield double state universe, not in our real universe.

Comment #71 March 4th, 2014 at 2:33 am

Scott #70:

Interesting. Thanks!

Cannot speak as to Tao’s motivations, but at least the Clay Mathematical Institute’s Millennium problem statement speaks often of

“physically reasonable solutions”Is there a specific way to interpret this phrase? I was assuming it meant solutions that at least potentially (if not right away ) relevant to how fluids work in a real world sense.

It may not be in our everyday experience, sure, but it’s still “real.” E.g., light bending because of a heavy star wasn’t something people regularly observed. But when Einstein came up with his theory, there was motivation to look, and sure enough it was observed and the theory was validated. Similarly for the Higgs too?

Now, if I understand what Scott’s saying correctly and both these theories (Susskind’s & Tao’s) are totally beyond any such empirical attack, my naive, non-expert mind is a bit disappointed. Ergo, I’m wishfully hoping Scott’s wrong, for once. 🙂

Comment #72 March 4th, 2014 at 8:05 am

> it’s extremely hard to imagine an experiment that would bear directly on the relevant issues (…) In Tao’s case, he simply isn’t talking about the behavior of real fluids: he’s talking about the Navier-Stokes equations.

That seems a good reason why we won’t see a singularity or even a “nuclear water weapon.” But what do you think would prevent the energy concentration from holding for, say, half a dozen orders of magnitude? Don’t you think this would be noticeable enough?

Comment #73 March 4th, 2014 at 8:34 am

I think that we are in agreement that fluids in nature (say in our oceans) do not manifest finite-time blow up, robust memory, or universal computation even if the 3D Euler/NS equations allow it. (Of course, it could be wonderful to fantasize about water demonstrating intelligence, where somewhere deep in the ocean the water managed to prove deep mathematical theorems or perhaps write its own version of “Finding Nemo.”)

What we discussed (#25, #31) is the reason that finite-time blow up and robust memory and computation are not manifested in real-life fluids. Scott regards the discrete nature of water as a simple reason, and notes that it allows for only a few dozen steps in Tao’s mechanism. Scott also notes that turbulence is a familiar phenomenon that can account for behavior similar to the one described by Tao when we truncate the description at the molecular size. My opinion is that “no robust computation” should be regarded as a separate physical principle that should be formalized and studied.

There are various reasons why matters are not as simple, and Scott’s point of view seems overly naïve and likely to be incorrect. First, Scott’s explanation does not apply to robust computation and robust memory, which also seem to follow from Tao’s thesis. Second, even 10-20 steps in Tao’s proposal look utterly unrealistic. (OK, even just 4-8.) Third, Scott’s argument seems to apply to other systems that do allow computation and actually manifest computation. (They are also discrete at the microscopic scale.) Fourth, there are various reasons not to consider real-life turbulence as relevant to von Neumann machines. The effect for turbulence seems much weaker, and it scales differently. Fifth, Scott’s explanations in general, and the reference to turbulence in particular, do not distinguish 2D from 3D. Also, it is not clear whether the exponential nature of Tao’s argument, which Scott relies on in his calculation, is crucial and cannot be improved. So I suppose we will have to agree to disagree on this matter.

Let me mention that a few of the topics discussed here were also reviewed in a recent blogpost “Navier-Stokes fluid computers” on my blog. Some connections between the NS equations, regularity, and computation were discussed in my debate with Aram Harrow (the comment thread to the seventh post, entitled “Quantum repetition”; links can be found in my recent post). In particular, I was also quite curious about the distinction between 2D and 3D, and a specific question is: “Can the NS equations in two dimensions be approximated at any scale by bounded-depth circuits?” Incidentally, the debate’s sixth post, entitled “Can you hear the shape of a quantum computer,” was very related to the issue of computational complexity and space/time geometry. One thing noted there was that quantum fault-tolerance makes it possible to break the connection between quantum evolutions and geometry. My slogan would thus be: “the geometry of spacetime is enabled by the failure of quantum fault-tolerance.”

Comment #74 March 4th, 2014 at 8:35 am

About Tao, I’m not quite clear whether he’s building analog (linear mode) or digital circuits (non-linear mode):

“By combining together these gates with carefully chosen coupling constants (a sort of “quadratic engineering” task somewhat analogous to the more linear circuit design tasks in electrical engineering)”

But then he also mentions that all his gates are nonlinear.

It would seem really tricky to me to build something stable out of this (even if it’s all a thought experiment in an ideal fluid) – as soon as you have some sort of feedback you risk running into highly non-linear resonances, etc. Especially considering that his circuits also self-replicate somehow!

Also I couldn’t figure out whether his circuits are reversible, e.g. what happens to the inherent heat/garbage generated by any computation? (He claims that when a circuit duplicates itself, there’s “almost” no trace left of the previous circuit.)

The answers are probably in the paper, but it’s all way over my head.

Comment #75 March 4th, 2014 at 8:55 am

Scott #70:

I understand that Tao’s approach isn’t meant to be practical.

But at the very least you’d think that one ought to be able to actually simulate the theoretical circuits/gates he’s talking about using finite element analysis in an ideal fluid, and watch the evolution of the system, like the self-replication he’s claiming to achieve.

Especially when he teases on page 15 with

“Note that such replicators, as well as the related concept of a universal constructor, have been built within cellular automata such as the “Game of Life””

Going from the “Game of Life” to what he proposes is quite a jump!

Comment #76 March 4th, 2014 at 11:14 am

Rahul, where did you see the phrase “physically reasonable solutions”? The only place I see it is in the problem statement, where it is immediately defined as smooth and finite energy.

Comment #77 March 4th, 2014 at 11:28 am

>What we discussed (#25,#31) is what is the reason that finite-time blow up and robust memory and computation are not manifested in real-life fluids.

Actually Tao already answered this question: because it would work only for a very small set of initial data. Then, we wouldn’t see naturally occurring blow up for the same reason we don’t see piles of sand spontaneously forming chips: because it has to be man-made.

What Scott answers (if I may ;-)) is a slightly different question: why we wouldn’t see singularities even with a man-made all-fluid computer: because the equations should stop applying at some scale.

What you are arguing is a much, much stronger statement: that it would be unrealistic for this to work even for 4-8 steps. I don’t see how your argument could work, or what strong motivation you see for it.

Comment #78 March 4th, 2014 at 11:35 am

Gil #73, to say that we’ll have to “agree to disagree” is an understatement. If we were debating the question “do bears crap in the woods?,” I suspect you’d somehow turn it into a complicated discussion about whether there’s a separate physical principle of “no-robust-crapping,” over and above the ordinary rules of ursine biology. 🙂

One obvious difference between real computers and Tao’s sort of construction is that the former maintain their size and speed of operation from one step to the next, rather than speeding up and shrinking exponentially. I would guess that has some connection to why the former can exist even if the latter can’t…

Also, I was just trying to explain why, even if (amazingly) universal computation turns out to be possible in the 3D NS equations, that fact has little or no bearing on the behavior of real fluids. As I said, the difference between 2D and 3D—why the latter allows turbulence and possibly even universal computation while the former doesn’t—is a fascinating issue that I want to understand better, but I don’t see any relevance to the argument I was making.

Getting down to brass tacks, perhaps I should comment on the question: why do I think scalable QCs are feasible, if I think scalable water computers are not feasible?

As a first remark, that question doesn’t even get off the ground until and unless there’s a proposal for a universal water computer that doesn’t have the exponential shrinkage behavior that Tao’s kind of construction has (and that he wants it to have, because of his interest in finite-time blowup).

Furthermore, while I don’t know the relevant physics nearly as well as I should, my guess is that such a non-shrinking water computer would be a very tall order, even supposing a shrinking computer to be possible. The main problem that one worries about is energy dissipation. Both real digital computers, and proposed fault-tolerant QCs, are enabled in their operation by a constant pumping out of high-entropy garbage (the heat you feel on your laptop, or the dirty qubits), and a constant pumping in of fresh low-entropy states (your laptop’s battery or power cable, or the all-0 qubits). A non-shrinking (and hence, potentially “realistic”) water computer would only work if something similar could be arranged, to counteract the dissipation term that’s explicitly in the NS equations.

Of course, even if such a thing were possible, we’d next have to ask: can it be made fault-tolerant? And also, is there a procedure for setting up the relevant 3-dimensional initial state, without needing to go into the fourth dimension to arrange all the eddies and currents in our water tank just so?

If the answers to all these questions were “yes,” then my inclination would be to say fine: we now have a physically-realistic proposal for the world’s least useful and efficient classical computer.

Comment #79 March 4th, 2014 at 12:05 pm

Dear Scott #79, I actually mostly agree with your observations on fluid computation.

Real fluids can’t display any of the exponential speedups and so on because there is an atomic scale. Atoms are small but if distances are exponentially shrinking, 20 e-foldings (“20 natural units of time”) are enough to get to the atoms.

We may use the thinking of the renormalization group and “integrate out” short-distance degrees of freedom. This will tell us how all the atomic phenomena influence the behavior of fluids at a 1-meter length scale, for example. The calculation will depend on the atom size but this dependence only imprints itself into the value of the macroscopic parameters such as viscosity.

The models which de facto assume that the atoms are infinitely small and this self-similarity may be repeatedly used for some mechanisms are qualitatively different from any dynamics that actually appears in the real world. It’s somewhat analogous e.g. to the ultraviolet catastrophe. For a vanishing Planck constant, a microwave oven at a given temperature (black body is what I mean) carries an infinite amount of energy because all frequencies (arbitrarily high ones) contribute to the energy. A model of fluids that allows a turbulent structure at arbitrarily short distance scales is similarly pathological and the conclusions about “possible computers” whose transistors’ characteristic sizes and timescales have no problem to exponentially shrink and hide in smaller patterns etc. are unphysical.

Even though the self-similarity is really there in real-world turbulent fluids for a couple of “levels” in the self-similar hierarchy, many of the extreme self-similar properties of the Navier-Stokes equations are therefore unphysical, inapplicable to the real world. Just to remind you, however, if you want to win $1 million, you have to deal with a problem involving the mathematical equations that deny the existence of atoms rather than with the real physically relevant models! 😉

Cheers, LM

Comment #80 March 4th, 2014 at 12:06 pm

Scott, it looks like the “speeding up and shrinking exponentially” aspect of Terry’s suggestion is something required to move from universal fault-tolerant computation to finite-time blow up. I don’t think this is a crucial aspect of the universal computing part. (But I am not sure.)

Comment #81 March 4th, 2014 at 12:24 pm

Lubos #79: Thanks! Glad we agree about something.

Meanwhile, your latest screed about P=NP and P≠NP being equally likely, because all of discrete mathematics (unlike continuous mathematics) is so random and haphazard that you can never generalize from your experience of one thing to make a conjecture about anything else, is in my moderation queue. If you want, I’ll transfer it over to my post about what I see as the scientific case for P≠NP, once I finish writing that post. (See comment #62 for the new ground rules for this thread.)

Comment #82 March 4th, 2014 at 12:34 pm

If one considers the “blowup” of the NS equation, then sonoluminescence should be mentioned, where the tiny bubbles can reach temperatures up to 5000K and perhaps even higher …

en.wikipedia.org/wiki/Sonoluminescence

Comment #83 March 4th, 2014 at 12:55 pm

Well, Navier-Stokes does provide for nice “classical” analogies to QM 😛

http://www.youtube.com/watch?v=nmC0ygr08tE

Comment #84 March 4th, 2014 at 1:36 pm

Douglas Knight #76:

I saw it in the problem description by Fefferman.

http://www.claymath.org/sites/default/files/navierstokes.pdf

Comment #85 March 4th, 2014 at 1:49 pm

Scott (#78), actually we are not really in a debate here. We both agree that real-life fluids do not demonstrate universal computing. Your simple explanation for why (the breakdown at the molecular level) is appealing, and Lubos’ analogy with the ultraviolet catastrophe is nice. OK, so I don’t regard it as that simple, and perhaps I don’t see the full argument, but this is not much of a debate.

My main point is entirely different. If, as we agree, real-life fluids do not manifest universal computing, then this very fact can give us additional modeling power in describing the behavior of fluids in nature.

It is possible that on the macroscopic level “no memory,” “no computation,” or “no fault-tolerance” represent a physical principle (perhaps of a thermodynamical nature) external to the equations themselves, which also accounts for no realistic finite-time blow up. (OK, OK, I concede that this principle may be derived from the microscopic behavior.)

There is much interest in conserved quantities for the Euler and NS equations (see the two follow-up posts on Tao’s blog), and if we impose (and mathematically formalize) a principle of “no computation,” this may also lead to additional conserved physical quantities which come not from the equation alone but from the equation with an added principle of “no computation.” As I mentioned in my comment above, perhaps a nice way to express “no computation” is by saying that at every scale we can approximate the system by bounded-depth computation. Equivalence classes of states up to bounded-depth computation can describe a large number of additional conserved quantities for the system. (And, yes, this idea is inspired by recent works in Hamiltonian complexity.)

Comment #86 March 4th, 2014 at 2:01 pm

Gil #85: Actually, as I said in my last comment, I don’t regard it as obvious that one couldn’t build a universal computer “entirely out of water,” even in real life (probably with a complicated external control system, a huge array of jets to set up the initial conditions, and an enormous continuing source of energy, e.g. from an onrushing stream, to fight against dissipation—and all just to execute the first 5 or 10 steps!). I merely say that such a computer, if possible, would be “the world’s least useful and efficient.” Indeed, the electronic components that you’d inevitably use to set up and precisely time the initial water jets would doubtless have millions of times more computing power than the aquatic computer itself.

Comment #87 March 4th, 2014 at 2:04 pm

Fred #85, these are childish games but there’s one related topic that hasn’t been mentioned, AdS/hydrodynamics.

The scale invariance may always be interpreted as an extra dimension in a gravitational theory in the anti de Sitter space. So trying to hide some transistors in short-distance patterns of a liquid is just like trying to locate some transistors closer to the boundary of the AdS space where there’s “more room.” In fact, there’s an infinite amount of volume (and time) over there, which is very different from any truncated theory, i.e. hydrodynamics with atoms at some scale.

Scale-invariant theories aren’t really good enough for creating common transistors; the latter have a preferred scale. The preferred scale is translated, in the AdS analogy, to a preferred radial holographic coordinate. If there’s none, the computer isn’t really “held at one place.”

Comment #88 March 4th, 2014 at 2:11 pm

“Maybe their whole technology is based on that: controlling water…”

http://tinyurl.com/lw4g8vc

Comment #89 March 4th, 2014 at 2:17 pm

Rahul #84

The concept is explained immediately after it is introduced. The formal problem statements A-D make no mention of it. The mathematical conditions stated in them are intended to model physically reasonable solutions.

Comment #90 March 4th, 2014 at 2:17 pm

Scott #86:

Sometimes I wonder: what would a complexity theorist ever say “couldn’t be built”?

If one thinks a universal computer built entirely out of water is practically feasible in real life, almost everything else is! 🙂

Comment #91 March 4th, 2014 at 2:23 pm

Scott, when I said “real-life” I meant natural systems. Like you, I also don’t have an opinion on whether some sort of fluid computer could be built, and depending on the computational power of the controls it could even be difficult to tell whether “fluidiness” is manifested.

But, again, this was not my point. There is a vast interest in natural systems of fluids and also in human-created systems (without universal computing ambitions). My point is that for all these systems, modeling based on an additional mathematical principle of “no computation” can be a useful idea and can lead to interesting additional conserved quantities.

Comment #92 March 4th, 2014 at 2:45 pm

For that first one, is it the case that removing a small fraction of the atoms will remove all traces of the clock feature, or is it in principle measurable across small numbers of them? It would be fascinating if there might be a way of coming up with a physical experiment which could measure the age that way.

For Navier-Stokes, I *want* there to be a way of making those discontinuities. Furthermore, I want there to be a way of making a practical machine which hits that edge case, and to see what happens when you build and run it in practice!

As a (fuzzy and unlikely) but possible answer to your question about what’s different between 2d and 3d, in 2d rotations are commutative, while in 3d they aren’t, and commutative operators aren’t as useful for making general purpose gates.

Comment #93 March 4th, 2014 at 2:56 pm

Bram, that’s a good point. And rotations in 3-space are not just noncommutative; they have other complicated behavior as well. In particular, rotations with torsion (i.e., of finite order) around perpendicular axes can give rise to a rotation without torsion. One simple example: a rotation of π/6 about the x-axis and another rotation of π/6 about the y-axis. Each of these rotations has order 12, but their composition never becomes the identity, no matter what power you raise it to (this is easy to see if you look at the corresponding matrix).
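The claim above is easy to check numerically. Here’s a quick sketch (the helper names are mine, not from the comment) that verifies each π/6 rotation has order 12 while no small power of their composition comes close to the identity:

```python
import math

def rot_x(a):
    """3x3 rotation matrix by angle a about the x-axis."""
    c, s = math.cos(a), math.sin(a)
    return [[1, 0, 0], [0, c, -s], [0, s, c]]

def rot_y(a):
    """3x3 rotation matrix by angle a about the y-axis."""
    c, s = math.cos(a), math.sin(a)
    return [[c, 0, s], [0, 1, 0], [-s, 0, c]]

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(3)) for j in range(3)]
            for i in range(3)]

def is_identity(M, tol=1e-9):
    return all(abs(M[i][j] - (1 if i == j else 0)) < tol
               for i in range(3) for j in range(3))

def order_up_to(M, n_max):
    """Smallest k <= n_max with M^k = I (numerically), else None."""
    P = M
    for k in range(1, n_max + 1):
        if is_identity(P):
            return k
        P = matmul(P, M)
    return None

a = math.pi / 6
print(order_up_to(rot_x(a), 100))                 # 12: each generator has order 12
print(order_up_to(matmul(rot_x(a), rot_y(a)), 1000))  # None: no small power is the identity
```

The composed rotation’s angle θ satisfies cos θ = (trace − 1)/2, which works out to an irrational number that (one can show) is not the cosine of any rational multiple of π, so the composition genuinely has infinite order.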

Comment #94 March 4th, 2014 at 6:57 pm

Rahul #90:

Sometimes I wonder: What would a complexity theorist ever say “couldn’t be built”.

That’s easy to answer: a device to solve NP-complete problems in polynomial time in the worst case!

(I would add: a superluminal signaler, a time machine to the past, a perpetual-motion machine … really, anything for which I can see a good argument that the fundamental laws of physics should rule it out.)

If one thinks a universal computer built entirely out of water as practically feasible in real life, almost everything else is!

Your problem, Rahul, is that you think only about the difficulty of the stated task: “haha, those crazy complexity theorists, they think they could build a computer out of water!”

You don’t feel at all the immense weight on the other side: what would it actually take to prove the task impossible, according to the known laws of physics?

For example, how would you show that no one, not even 5000 years in the future, could set up a giant water tank with thin, extremely-fast-moving jets of water inside that collided with each other in a manner that simulated AND, OR, and NOT gates? Ponder that enough, and you might come to some conclusion like: “actually, I have no idea how to prove that’s impossible … all I can say with any confidence is that it seems absurdly difficult and useless.”

Comment #95 March 4th, 2014 at 7:04 pm

>> 5000 years in the future

You don’t have to go that far into the future …

http://en.wikipedia.org/wiki/Fluidics

Comment #96 March 4th, 2014 at 7:09 pm

Bram #92:

For that first one, is it the case that removing a small fraction of the atoms will remove all traces of the clock feature, or is it in principle measurable across small numbers of them?

In general, you expect the evolution to be chaotic—so that if you wanted to perform a measurement on the clock state to verify that exactly t time steps had elapsed, you would need to measure all of the particles, and in a very complicated way. Removing even a single particle would almost certainly make the measurement impossible.

I should also add that, if you don’t already know how many time steps have elapsed, then there’s no measurement you can make on the clock, not even in principle, that will tell you the answer! I.e., the elapsed time / circuit complexity, though encoded nonlinearly into the state, is not a quantum-mechanical observable. (In his post, Lubos harps on this as if it’s something I didn’t understand; in fact it’s an easy observation.)

So in summary, this is really not a very useful clock! Its only compensating advantage is that ordinary, non-fine-tuned physical evolution causes it to “tick” for an exponential (if we’re talking about the circuit complexity) or doubly-exponential (if we’re talking about the state more generally) number of steps.

Incidentally, thanks for your observation about commutativity of rotations! That’s precisely the sort of thing I was looking for. If I could see how e.g. the noncommutativity played an essential role in Tao’s gadget constructions and/or the formulation of turbulence, I would have what I wanted.

Comment #97 March 4th, 2014 at 7:15 pm

wolfgang #95: Thanks for the link! I guess the difference is that, for a “true Navier-Stokes computer,” we shouldn’t allow hydraulic elements like pipes: instead we should restrict ourselves to a “simple” boundary condition, like a rectangular tank, with the entire logic architecture built out of jets of water themselves. Otherwise, what’s the fun? 🙂

Comment #98 March 4th, 2014 at 7:45 pm

Bram #92: Even were there a way to read the clock measuring only a fraction of the particles (which already seems doubtful), it would have a much more restricted range, since each particle you remove halves the timescale.

Comment #99 March 4th, 2014 at 8:19 pm

What if the fluid is blood pumping in the circulatory system of someone actually trying to prove the blowup of an NS equation?

Comment #100 March 4th, 2014 at 8:35 pm

Apologies in advance for my lay-person questions, but this is another one of those subjects that fascinate me, but of which I can only glimpse small pieces.

Two questions:

1. I’m having difficulty understanding how complexity theory as applied to computing is relevant to the physical theory underlying quantum mechanics. Computer science is in the realm of the deterministic and the discrete, whereas quantum physics (to my limited understanding) is in the realm of the probabilistic and continuous. To me it seems trivial to say that the roots of physics are not computable; how could computing have anything deterministic to say about that which is not deterministic? I’m missing it, aren’t I? 🙂

2. I’m also having difficulty understanding how the P != NP question plays into this. Complex problems may resist precise solutions, but they usually succumb to heuristic or statistical approaches. Isn’t it possible that nature is fundamentally not deterministic at the smallest scales? I thought that was one of the key lessons of quantum mechanics?

Again, sorry if these are obvious questions …

Comment #101 March 4th, 2014 at 9:11 pm

Darrell #100: The short answer to both of your questions is that the notion of computation is much, much broader than your comment suggests. Since the beginning of the computer age, people have been using probabilistic algorithms (for Monte Carlo simulation, cryptography, etc.) alongside deterministic ones, and if you look at any modern papers on theoretical computer science, you’ll see that the whole field has become thoroughly probabilistic these days. And that’s not even mentioning quantum computing, which “bakes in” all the phenomena of quantum mechanics from the outset, and which many of us have been studying full-time since the 1990s! There are even interesting and well-studied models of computation where you’re allowed to manipulate continuous quantities—though one can question the physical meaningfulness of those models, since quantum gravity makes it doubtful that Nature itself has any measurable continuous quantities at the smallest scale. (E.g., there seems to be no physically-measurable unit of length smaller than the Planck length of 10^{-33} cm, and no measurable interval of time smaller than the Planck time of 10^{-43} seconds.)

Anyway, theoretical computer scientists study all of these models of computation—in each case, asking what kinds of resources (how many operations, how much memory, etc.) are necessary and sufficient to solve problems of interest. And, to make a long story short, you encounter P-versus-NP-like difficulties regardless of which of these models you look at!

As one example, it’s not just that no one knows how to solve NP-complete problems in deterministic polynomial time—no one knows how to solve them in probabilistic or quantum polynomial time either! (See the tagline of this blog. 🙂 ) Indeed, it’s even known (by the famous PCP Theorem) that for many NP-complete problems, getting a pretty good approximate solution is just as hard as getting an exact solution.

You say that “complex problems … usually succumb to heuristic or statistical approaches,” but that simply isn’t true for all complex problems. If it were, you wouldn’t be able to buy anything online! The fact that you’re able to send your credit card number over the Internet relies on the fact that factoring huge numbers (and hence, breaking the RSA cryptosystem) is a complex problem that no one is yet able to solve efficiently, even using heuristic or statistical approaches. (As you might have heard, a quantum-mechanical approach would break RSA, but even that probably wouldn’t work for other cryptographic codes.) Nor does anyone know how to use statistical approaches to find proofs of theorems like the Riemann Hypothesis, or to solve many machine learning problems (both of which provide other examples of NP search problems).

And no matter which kind of computation (probabilistic, quantum, continuous, etc.) you care about, if you want to understand what its ultimate limitations are and why, then you’re inevitably led to questions that have the same basic character as P vs. NP. Thus, while it’s true that P vs. NP (by itself) is a slightly arbitrary choice of question, the right way to think about it is that it’s the “flagship” of an entire fleet of closely-related questions that we also study (the limits of probabilistic algorithms, quantum algorithms, etc.), and there’s nothing arbitrary about the fleet.

In summary, “computation” doesn’t have to be deterministic, it doesn’t have to be serial, and it doesn’t even have to be discrete. It’s a way of thinking about any rule-governed process whatsoever. I hope that helps clarify things.
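To make the “probabilistic algorithms” point concrete, here’s a minimal sketch (my own illustration, not from the comment; the name `estimate_pi` is made up) of a Monte Carlo computation: a randomized procedure whose output is only approximately correct, yet is a perfectly respectable computation:

```python
import random

def estimate_pi(n, seed=0):
    """Monte Carlo estimate of pi: the fraction of uniformly random
    points in the unit square that fall inside the quarter circle
    of radius 1 approximates pi/4."""
    rng = random.Random(seed)
    hits = sum(1 for _ in range(n)
               if rng.random() ** 2 + rng.random() ** 2 <= 1.0)
    return 4.0 * hits / n

print(estimate_pi(100_000))  # close to 3.14, with statistical error ~1/sqrt(n)
```

The point is that randomness is a computational resource like time or memory: more samples buy a better answer, and complexity theory asks exactly how much such resources help.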

Comment #102 March 4th, 2014 at 11:10 pm

Scott #100 – thanks again for helping this beginner understand. 🙂 I guess I need to expand my perspective on what computing is. I have a bias towards the digital world, in which of course everything is deterministic. Even a (pseudo)random number generator produces the same sequence of numbers if you give it precisely the same set of seed values. I guess I have trouble wrapping my head around the notion of a computer that is not discrete, or a program that does not always produce the same outputs for the same inputs.
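(The seeded-PRNG determinism mentioned above is easy to demonstrate with Python’s stdlib `random` module; this sketch is an editorial illustration, not part of the original comment:)

```python
import random

random.seed(42)
first = [random.random() for _ in range(5)]

random.seed(42)  # re-seeding resets the generator's internal state
second = [random.random() for _ in range(5)]

print(first == second)  # True: same seed, same sequence every time
```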

Comment #103 March 4th, 2014 at 11:21 pm

Scott #94:

Actually, I agree with what you wrote. It might be mighty difficult to *prove* it impossible.

OTOH, so long as one admits that it is “absurdly difficult and useless,” my intuition is happy.

Perhaps here’s one difference between our thinking: When I conclude something is “absurdly difficult and useless,” I give up. But I suspect a complexity theorist will see it as an opportunity to come up with a new impossibility theorem?

Comment #104 March 4th, 2014 at 11:27 pm

Scott #94:

No, actually I do feel the weight. I just don’t see the utility. Even if I had tremendous resources or intellectual ability, would I really want to devote them to proving such a task “impossible, according to the known laws of physics”?

I think we agree on the difficulty but disagree on the utility. There’s a huge (infinite?) number of potential propositions that one might try to prove / disprove, but one must choose judiciously.

Comment #105 March 5th, 2014 at 1:01 am

Darrell #102: Here’s an excellent IEEE Spectrum piece about the various ways computers have long “harvested random numbers from the surrounding world” in practice (for cryptographic and other applications), ending with the hardware random-number generators that are now included in Intel chips.

Comment #106 March 5th, 2014 at 1:23 am

Rahul #103:

Perhaps here’s one difference between our thinking: When I conclude something is “absurdly difficult and useless”, I give up. But I suspect a complexity theorist will see it as an opportunity to come up with a new impossibility theorem?

Maybe, but the more relevant point is that in this line of work, you have to sharply distinguish between “impossible / intractable” and “merely absurdly difficult and useless.”

“Impossible” and “intractable” are the sorts of things you can hope to prove under clearly-stated assumptions (e.g., Turing’s proof of the unsolvability of the halting problem), or at least formulate precisely enough that you can imagine someone in the future proving them (e.g., P≠NP).

“Absurdly difficult and useless,” by contrast, is just a statement about human economic interests and engineering limitations at the present time, which can’t claim any wider validity.

Here’s an easy way to remember the difference: in the 19th century, perpetual-motion machines were correctly understood to be impossible (because of the Second Law of Thermodynamics), whereas Babbage’s analytical engine was “merely” dismissed as “absurdly difficult and probably not very useful.” Today, in the 21st century, the impossible thing remains impossible, whereas the “merely absurdly difficult” one has changed category (as sometimes happens) into “absurdly ubiquitous.”

Comment #107 March 5th, 2014 at 1:39 am

Gil #91:

There is a vast interest in natural systems of fluids and also in human-created systems (without universal computing ambitions). My point is that for all these systems, modeling based on an additional mathematical principle of “no computation” can be a useful idea and can lead to interesting additional conserved quantities.

Well, my specific difficulty is that I don’t see any way to formalize a “mathematical principle of no computation.” At most, I see ways to formalize other principles that sometimes have “no universal computation” as side effects. But those principles tend to have the problem that they overshoot the target, by radically constraining the set of systems we’re allowed to consider.

And yes, I understand that you’ve been trying hard to formalize such a no-computation principle for quantum systems—and to my mind, the fact that your program has achieved so little is a worthwhile data point that only adds to my doubts! (So thanks! 🙂 ) But turning our attention to water: can you suggest any additional regularity constraint on the initial data to the Navier-Stokes equations, which you conjecture would prohibit universal computation (or for which such a statement follows from known results)?

Comment #108 March 5th, 2014 at 4:38 am

“No computation principle” #91, 107

Although I am not very happy with Gil’s way of applying this principle to universal quantum computation (it looks like an attempt to criticize a hypothetical device by suggesting even more hypothetical obstacles), the principle itself may be familiar for more realistic reasons.

After proofs of Turing completeness for more and more systems, the property became less convenient for classification.

There is rather the opposite problem: the set of states and transitions that meet universality may be small (or even of measure zero) within the set of all states of the system under consideration. In such a case a “no computation principle” is not so much a constraint as an extension of the view.

Comment #109 March 5th, 2014 at 5:18 am

“Reversible cellular automata” Brent #22.2, Scott #30.2

Due to some numerical simulations, I myself was interested in porting BH information paradoxes into RCAs, but was afraid to ask (c).

Regarding \(n\) vs \(2^n\): what about the theorems of Holevo et al.? The “internal” complexity of a quantum system can indeed be described by the necessity of making an exponentially large number of steps, but we may only observe a classical picture. Does that picture differ in principle from the stochastic picture often used to describe a deterministic model when data or precision is lacking? BTW, a reversible classical system (such as a CA) may also need an exponential number of steps to reach some configuration.
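The reversibility point is easy to make concrete. Below is a minimal sketch of my own (not a construction from the thread) of a second-order reversible CA: the next row is a local rule applied to the current row, XORed with the previous row. Whatever local rule you pick, the dynamics are invertible, since the previous row equals the rule applied to the current row XORed with the next row.

```python
import random

def step(cur, prev):
    """One step of a second-order reversible CA on a ring:
    next[i] = (left XOR center XOR right of cur) XOR prev[i]."""
    n = len(cur)
    return [(cur[(i - 1) % n] ^ cur[i] ^ cur[(i + 1) % n]) ^ prev[i]
            for i in range(n)]

random.seed(0)
a = [random.randint(0, 1) for _ in range(16)]
b = [random.randint(0, 1) for _ in range(16)]

# Evolve forward 10 steps from the initial pair (a, b).
history = [a, b]
for _ in range(10):
    history.append(step(history[-1], history[-2]))

# Run the same update backward from the final pair and recover the start.
newer, cur = history[-1], history[-2]
for _ in range(10):
    newer, cur = cur, step(cur, newer)

assert (cur, newer) == (a, b)  # the initial configurations come back exactly
```

The XOR-with-previous trick (due to Fredkin) is what makes any local rule reversible, which is why such CAs are a natural toy model for information-preserving dynamics.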

Comment #110 March 5th, 2014 at 8:32 am

Scott #101:

With the Lenny Susskind kind of paper, are we transitioning to a model where computation is not just a convenient way of thinking about a rule-governed process, but where computation theory itself provides the rules? The rules and constraints no longer seem like an external set, but seem derived from deep within computational complexity itself?

Or am I confusing things?

Comment #111 March 5th, 2014 at 9:34 am

Thank you Scott for sharing this awesomeness!

Regarding the practical implications of T.T.’s work: I wonder, is there any theoretical obstacle preventing us from building an exponentially expanding liquid computer instead of a shrinking one? And is there a lower limit on the exponent?

I mean, we could design the shrinking computer to first store some information at high energy/low scale and then to build an expanding computer so we could retrieve that information. It would be a great tool to explore distances and energy scales inaccessible by other methods. Water is probably too boring to be bothered with, but there may be other more interesting NS liquids (googling “navier stokes black hole” gives this, other non-atomic NS liquids are much appreciated!).

I wonder what limits relativity theory would impose on NS liquids if atomic discreteness were somehow bypassed? Would the NS equations still hold?

Comment #112 March 5th, 2014 at 11:45 am

Rahul #110: Well, if we’re talking about physics, then the ultimate rules that determine whether a given model of computation is or isn’t “reasonable” will always come from the laws of physics. On the other hand, a theme that interests me a lot personally is that, once we develop a deep enough understanding of computation, we can often leverage that as a shortcut in situations where we don’t know all the relevant physics. One common example is the use of the heuristic: if your model of a physical system would imply the ability to solve NP-complete problems in polynomial time, then your model almost certainly fails to capture all of the relevant physics.

And yes, a second example is the sort of work that Lenny, John Preskill, Patrick Hayden, and others have been doing on black holes. There, since you don’t know all the relevant mixing dynamics on the event horizon (and the exact details of the dynamics would probably be complicated and not-terribly-relevant even if you did know them), you decide to study the questions of interest to you by modeling the event horizon as (say) a collection of qubits acted on by a random quantum circuit. Philosophically, I don’t think this is really different from what physicists have been doing for centuries: i.e., building “effective models” that capture some of the qualitative phenomena they care about, using whatever mathematical tools are available. It’s just that quantum information and theoretical computer science have expanded the toolkit.

Comment #113 March 5th, 2014 at 12:08 pm

Oleg #111

I wonder, is there any theoretical obstacle preventing us from building exponentially expanding liquid computer instead of shrinking one?

Well, there’s conservation of energy (and of the total mass of liquid)! But yes, the Navier-Stokes equations are apparently capable of exponential amplification (i.e., chaos), so it’s an interesting question whether one could “program” them to use exponential shrinkage and speedup to solve an arbitrarily hard computational problem within a fixed time bound, then use exponential expansion to make the answer readable again. On the other hand, I don’t think there are any phenomena in nature for which the Navier-Stokes equations are a good approximation at anything smaller than the molecular (or conceivably nuclear?) scale. So this would really be of mathematical interest; practical applications are doubtful. 🙂

Comment #114 March 5th, 2014 at 12:52 pm

Scott: “my specific difficulty is that I don’t see any way to formalize a ‘mathematical principle of no computation.’”

Thanks much, Scott, for turning to the specifics. You are correct that this idea is related to my similar attempts to define “no computation” (or, more precisely, “no quantum fault-tolerance”) in the context of quantum evolutions. As I said, Aram and I even discussed this issue in the classical case in our debate, and I expect the classical case to be even harder, because of the repetition mechanism which allows robust classical memory. (This was also one of the problems I raised in the concluding post.) Here, I really hoped for a discussion of water computers and not another round of quantum debate, but you are correct that the two issues are related. Your piece on “why do I think scalable QCs are feasible, if I think scalable water computers are not feasible?” is very cute. Also, generally speaking, I don’t see why an “I don’t see”-type argument should carry such immense weight.

“I see ways to formalize other principles that sometimes have ‘no universal computation’ as side effects.”

It will certainly be interesting to know what you refer to.

In any case, if the 3D NS equations allow for finite-time blowup and universal computation, it will be an important question why natural systems do not exhibit this behavior (or perhaps what properties they sometimes have that manifest it). In fact, this is an important question even now; certainly the question of why, if mathematics allows 3D NS to misbehave, it behaves well in nature is something people have already thought a lot about, and I even mentioned one popular answer above.

“Can you suggest any additional regularity constraint on the initial data to the Navier-Stokes equations, which you conjecture would prohibit universal computation (or for which such a statement follows from known results)?”

Ohh, I think I can see why you cannot see: First (and mainly), why do you think that the “no computation principle” should be expressed in terms of initial data? Or, even more strictly, in terms of regularity of the initial data? Second, while I am perfectly fine with sticking with NS on the nose, we may need to move to a perturbation of the equation, e.g., to express the molecular truncation that you suggested, or some stochastic version as many people suggested.

A closely related analogy is the connection between thermodynamics and classical mechanics and later quantum mechanics. In principle, the laws of thermodynamics can be described in terms of initial data (or intermediate data) for the evolution describing nature. (Or perhaps they can even be derived just from the equations themselves.) But we do not wait for such a description to study thermodynamics.

So if you do not insist that the principle of “no computation” be described in terms of initial data, there could be some interesting directions. Here are four ideas.

1) We can consider a system to be computation-free if it can be approximated at any scale by a bounded-depth circuit. (You can see this mentioned above in the context of the 2D NS equation.)

2) An interesting notion of “no computation” (or “no deep computation”) in the quantum case is described by “gapped evolutions with local terms.” We both attended a lecture in Jerusalem by Frank Verstraete where, for certain systems, gapped evolutions with local terms were connected (not in full mathematical rigor) to “no-deep computation.” We can seek analogous notions for fluids. Even on the technical levels of the analytical tools and inequalities there could be some connection.

3) Indeed, when we move to stochastic (noisy) versions (or derived systems) of NS we can try to adopt some of the ideas from my work on quantum fault-tolerance. (Correlation behavior, time-smoothing, etc.) A basic difficulty would be that in general these ideas are not in conflict with a repetition mechanism that allows robust classical information.

4) In a still different (and vague) direction, one may relate a “no computation” principle to properties of Hamiltonian systems in symplectic geometry. In such a context, we could vaguely hope for a principle that computation occurs only “globally” and is not manifested in “small” subsystems.

Comment #115 March 5th, 2014 at 1:31 pm

Scott #110:

Interesting. I’m curious: what are examples of physical systems where this heuristic was practically used and led to our naive models being corrected?

Comment #116 March 5th, 2014 at 2:23 pm

Rahul #115: You can use it to explain why the dynamics are important for protein folding, soap films, and spin glasses. Also, once you know that quantum computers can only ever give a quadratic speedup for search, and no better, you can use that to upper-bound the spectral gaps of certain condensed-matter systems. Finally, you can use the heuristic to critique speculative physical theories involving postselection or hypothetical nonlinearities in QM—such as Yakir Aharonov’s “postselected final state,” the Horowitz-Maldacena black hole final state proposal, and Anthony Valentini’s non-equilibrium matter.

In each of these cases, there are also other ways of reaching the same conclusions, without invoking the concepts of NP-completeness or the hardness of search. But to me, that’s simply like saying that, if someone rejects a physical proposal because it would violate no-superluminal signalling or the Second Law, there’s probably a more specific explanation of what’s wrong with the proposal that doesn’t invoke those concepts. I’d say that, by this point, “generic search problems are hard” has proven itself as a fairly useful heuristic in a variety of contexts.

Comment #117 March 5th, 2014 at 3:14 pm

Scott #96, there is turbulence in 2D NS as well as 3D, I think. #97 – the Clay NS problem postulates that the fluid fills all of Euclidean space, so no rectangular boundary.

Comment #118 March 6th, 2014 at 12:29 am

Is this Terry Tao paper a contender for potentially bagging the Clay prize open for Navier Stokes? Or does the averaging disqualify it?

Demonstrating finite-time blowup for the 3D version even for one specific initial condition is sufficient to win the prize, correct? Are we just waiting for the proof to be fully vetted, or are there other problems that would prevent it from winning that prize?

Comment #119 March 6th, 2014 at 1:10 am

I believe also that non-commutativity of rotation is critical in the Banach-Tarski paradox and why that one applies to 3d but not 2d.

Comment #120 March 6th, 2014 at 11:26 am

asdf #117: So then, you can’t claim to have built a Navier-Stokes computer unless you’ve filled the entire universe with water? I guess that would answer Rahul’s question about “practicality.”

Comment #121 March 6th, 2014 at 11:27 am

Rahul #118: The averaging disqualifies it.

Comment #122 March 6th, 2014 at 11:30 am

“Is this Terry Tao paper a contender for potentially bagging the Clay prize open for Navier Stokes? Or does the averaging disqualify it?”

The latter. The equations in question are not the Navier-Stokes equations; they’re an averaged version.

“Demonstrating finite-time blowup for the 3D version even for one specific initial condition is sufficient to win the prize, correct?”

That it would be!

Comment #123 March 6th, 2014 at 11:44 am

In fact, a toroidal boundary is also okay for Millennium Prize-winning solutions to NS.

Comment #124 March 6th, 2014 at 2:58 pm

Is there any hope of using the Navier-Stokes equations to attack the Extended Church-Turing thesis?

A classical simulation surely seems already much slower than letting a physical system evolve under given initial conditions?

À la the BosonSampling experiment. What stops us here? Is the computational complexity of solving the NSE known? Are lower bounds on that complexity known?

Comment #125 March 6th, 2014 at 3:38 pm

Rahul, suppose you have a tank of water and the NSE allow blowup. If you give it a try, the blowup will stop at some point because of friction, ebullition, water dissociation, or whatever phenomenon first shows that water does not obey NS all the way down to the Planck scale. That seems no threat to the ECT.

Now suppose you can do the same using a much stronger material (quark soup?), and blowup is possible, and the Planck scale turns out to limit nothing (to the surprise of many, but not all, physicists). Yes, that could disprove the ECT. But don’t hold your breath. Quark soup is difficult to cook.
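The reason a water tank poses no threat to the ECT can be made concrete: once you fix a resolution, simulating the fluid classically costs only polynomial time. Here is a toy sketch of my own (not a real CFD code), one explicit finite-difference step for 2D diffusion, i.e., the viscous term of the Navier-Stokes equations; the cost per step is O(n²) in the grid side n, polynomial rather than exponential in the resolution.

```python
def diffuse(u, nu=0.1, dt=0.01, dx=1.0):
    """One explicit finite-difference step of du/dt = nu * laplacian(u),
    with the boundary held fixed at zero."""
    n = len(u)
    new = [row[:] for row in u]
    for i in range(1, n - 1):
        for j in range(1, n - 1):
            lap = (u[i+1][j] + u[i-1][j] + u[i][j+1] + u[i][j-1]
                   - 4 * u[i][j]) / dx**2
            new[i][j] = u[i][j] + nu * dt * lap
    return new

# A unit spike in the middle of a 32x32 grid, smoothed over 100 steps.
n = 32
u = [[0.0] * n for _ in range(n)]
u[n // 2][n // 2] = 1.0
for _ in range(100):
    u = diffuse(u)
```

With the chosen parameters the scheme is stable (nu·dt/dx² ≤ 1/4), so the spike just spreads out: the peak decays and its neighbors pick up positive values, all in time polynomial in the grid size.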

Comment #126 March 6th, 2014 at 6:38 pm

Rahul #124: No, as Jay says, the Navier-Stokes equations are not a threat to the Extended Church-Turing Thesis. Even assuming the equations theoretically allowed hypercomputation (which hasn’t been shown yet), we know that they break down at the molecular scale and even above that. And if you’re only interested in their behavior above some distance scale, then you can use e.g. finite-element methods to simulate them in polynomial time using a classical computer. It’s true that the simulation might be less efficient in practice than just watching a water tank evolve (though even in practice, computer simulations of fluid flow have gotten better and better…). But there’s nothing there that could make the simulation exponentially inefficient as a function of the number of water molecules—and that’s the crucial difference from BosonSampling and related quantum proposals.

Comment #127 March 6th, 2014 at 10:54 pm

Lubos and Scott. the truth is that the universe is also emergently creative, and not fundamentally one-directional. which resides in the paradox. it is sometimes about piecing information together, such as what scott mentioned earlier, quantum geometrical-lattice computers are alot-more-secure, now taken this in reverse, a computer that works at a much fundamental level (string theory) (it will happen given enough time), can manipulate the physical continuum of fermions and bosons to create hypercomputation which “creates” the framework to manipulate fermions and bosons in the heading of “free-space” into “different” objects and configurations. for now we can shoot individual atoms and such to create new materials, ultimately what this is pointing to is that “most” practically engineering problems will be possible to solve which deals alot with the computational complexity directing the patterns. this ties p=np, navier-stokes, and blackhole waveforms all together. sry if you cant understand me im hungover.

Comment #128 March 6th, 2014 at 11:06 pm

spectro #127: I was about to delete your comment, but then that last sentence imbued it with a certain guttered eloquence. One pictures you summing up your argument by puking into a trash can.

Comment #129 March 6th, 2014 at 11:35 pm

spectro, I can’t parse most of what you said, but the general gist seems to be the idea that one can use quantum states to do hypercomputation. This runs into a very fundamental problem: we can simulate quantum interactions on classical computers with exponential slowdown. Formally speaking, BQP, the set of problems solvable with high probability on a quantum computer in polynomial time, is a subset of EXP, the set of problems solvable in exponential time on a classical computer. In fact, one can do even better than that, since it turns out that BQP is a subset of PSPACE, the set of problems solvable in polynomial space.
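That exponential-slowdown simulation is easy to sketch directly (this is a generic toy statevector simulator of my own; the little-endian qubit indexing is just a convention I picked): an n-qubit state is a vector of 2^n amplitudes, and each gate is a pass over that vector, classical but exponential in n.

```python
import math

def apply_h(state, q, n):
    """Hadamard on qubit q of an n-qubit statevector (little-endian)."""
    s = 1 / math.sqrt(2)
    out = state[:]
    for i in range(1 << n):
        if not (i >> q) & 1:          # visit each amplitude pair once
            j = i | (1 << q)
            out[i] = s * (state[i] + state[j])
            out[j] = s * (state[i] - state[j])
    return out

def apply_cnot(state, ctrl, tgt, n):
    """CNOT: flip qubit tgt on the amplitudes where qubit ctrl is 1."""
    out = state[:]
    for i in range(1 << n):
        if (i >> ctrl) & 1:
            out[i] = state[i ^ (1 << tgt)]
    return out

n = 2
state = [0.0] * (1 << n)
state[0] = 1.0                        # start in |00>
state = apply_h(state, 0, n)          # put qubit 0 into superposition
state = apply_cnot(state, 0, 1, n)    # entangle: (|00> + |11>)/sqrt(2),
                                      # i.e., amplitudes at indices 0 and 3
```

Every gate costs O(2^n) here, which is exactly the “exponential slowdown” in BQP ⊆ EXP; the PSPACE bound needs a cleverer (sum-over-paths) simulation than this one.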

There has been some speculation that the actual laws of physics might allow hypercomputation. Penrose is one of the more vocal proponents of this view, arguing that whatever the actual laws of quantum gravity are, they will allow consciousness and high levels of hypercomputation. Penrose is a highly respected physicist but he really doesn’t have much going for this argument.

All of this, and more is discussed in Scott’s book. It might help to read that before commenting. Or, if you want a more comprehensive introduction to the theory of computing, there is an excellent book by Moore and Mertens, “The Nature of Computation.”

Comment #130 March 7th, 2014 at 12:23 am

Although Penrose seems to have a tinge of crackpottery or flights of fancy sometimes? I’ve heard him talk, and he sometimes seems more into metaphysics than physics.

Comment #131 March 7th, 2014 at 4:51 am

Standard Navier-Stokes equations describe incompressible fluids, right? So besides problems at the scale where real fluids are no longer continuous, shouldn’t one also expect that when compressibility becomes important (when velocities approach the sonic speed), the results are no longer physical? Or is (in)compressibility not important for blowup?

Comment #132 March 7th, 2014 at 8:50 am

Rahul @130, yes, that’s certainly the case, and one can argue that this particular view of Penrose’s is one example of it, although it does have the virtue that he’s made it precise enough that there are in principle experiments one could do which would falsify the claim.

Comment #133 March 7th, 2014 at 3:48 pm

Pablous #99

I am not sure what you meant, but as it happens, the behavior of blood isn’t well approximated by the NS equations.

Comment #134 March 7th, 2014 at 9:09 pm

So, I have a really basic question about black holes, which I’m going to ask here in the hopes that someone here might be able to say something on the matter. (While hoping that A Certain Physicist doesn’t shout at me for asking.) Hoping this is not too off topic.

Namely… how certain are we that black holes — by which I mean the things with event horizons — really exist, can really form in finite time?

Like, there was that whole recent hubbub when Stephen Hawking said there are no black holes with event horizons, and some people were like “Holy crap, he said there are no black holes!” and other people were like “No he didn’t, he just said there are no black holes with event horizons, jeez.” But, like, as originally conceived, having an event horizon, a point beyond which nothing can escape, is kind of a defining property of a black hole, no?

I mean, to me it seems pretty clear that there were these hypothetical objects that relativists were studying, and they called them black holes, and then people went looking for black holes to see if they might actually exist, and they found hey, here are these things that sure seem a lot like black holes! And these things that they found certainly seem to exist, and they certainly seem a lot like black holes (in the original sense), and so they’ve been designated “black holes”, but that doesn’t necessarily mean that they’re actually black holes in the original sense, just like them. So to say “black holes (sense 2) don’t exist” is pretty absurd, but to say “black holes (sense 1) don’t exist” is… maybe not so absurd but I don’t know?

Now everyone talks as if black holes certainly do exist, and I’m certainly not an expert. But I once explicitly asked a relativist (who I won’t name, just in case it prompts Shouty Physicist to go shout at them) whether or not black holes can form in finite time, and to what extent we can be sure things that seem like black holes are in fact black holes, and was told — well, I don’t have the quote, because it’s been lost to webpage rot, but the gist of it was (if I’m remembering correctly and if I understood correctly) that no, black holes cannot form in finite time in general relativity, and that what astronomers call black holes, and what relativists call black holes, are not really the same thing, and one day they should really get around to writing something about this.

So I’m confused here. Can black holes (in the relativist’s sense, not the astronomer’s sense) form in finite time? Do event horizons exist? If the answer to these is no… what was the big deal about Hawking’s statement, and why do people spend all this time on things like the black hole information paradox and firewalls and other things that are then surely unphysical? If the answer to these is yes… why was the person I spoke to so certain the answer was no? (I realize, that last one is probably unanswerable.) Or is there something I’m missing here, some false background assumption I’m making, that will cause this to all make sense?

A thank you to whoever can help answer this!

Comment #135 March 8th, 2014 at 11:26 am

Jr 133

What if you take statins and make the blood less viscous? Anyway, it’s a bad joke; I’m no expert, but I do very much enjoy the serious discussions on this blog.

Comment #136 March 8th, 2014 at 3:22 pm

Sniffnoy #135

Maybe I understand the second part of what your physicist friend said, namely that we have a model (actually, several) of a black hole in GR, and we know of objects so dense that they are most likely described by this model, but these two things must be considered conceptually different. So, when he reads Wikipedia stating that black holes are defined by having an event horizon, he’s pissed, because he thinks black holes are the physical objects, and they will remain intact the day our model is proved partially or entirely wrong. This also underlies the position of those who said Hawking did not say there are no black holes, as opposed to the ones who said that’s exactly what he said. I say he watches and laughs out loud.

On the other hand, I don’t understand the first part, where your friend said that (conceptually) black holes cannot form in finite time in relativity. This sounds like Oppenheimer et al. interpreting black holes as “frozen stars,” but I thought this view was now considered more of a simple mistake than a valid interpretation.

Comment #137 March 11th, 2014 at 4:58 pm

Sniffnoy #135, Jay #136; from what I can recall without checking in a text, it was Hawking (and Penrose?) who proved (in the framework of GR, no quantum) that there is a class of initial conditions under which a black hole—with an event horizon—inevitably forms within a finite time.

In quantum gravity you would expect some fluctuations/uncertainty, but for large systems it should still be true, at least in an appropriate approximate sense.

Comment #138 March 11th, 2014 at 9:27 pm

Actually, what Hawking and Penrose proved is that GR as a theory inevitably includes singularities (except if, say, hell freezes in summer). No idea if that was also another nail in the coffin of the frozen-star interpretation, but you’re right that the dates make this a possibility. I’d be curious to hear from someone who was active at that time.

PS: this one, right?

http://rspa.royalsocietypublishing.org/content/314/1519/529

Comment #139 March 12th, 2014 at 12:38 pm

Huh, thanks; I didn’t know about those. I’ll update back towards “black holes exist” for now.

Comment #140 March 29th, 2014 at 2:54 pm

Here is a speculation regarding the first topic of the post, related to the black hole information paradox. There is something problematic about the attempts (as beautiful as they are) to resolve the AMPS firewall paradox using computational complexity. The fact that something is not efficiently computable does not mean that we do not have the information needed for the computation. (Harlow and Hayden mention various explanations around this in their paper, but those are not so convincing.)

Since the issue is essentially about information and not about computation, we may try to assume that computational power (for everybody: Alice, Bob, Charlie…) is unlimited, and only information is limited.

When we take this approach we need to pay a price: we cannot talk about closed quantum systems. For those, everybody can compute everything from the initial conditions, and if we allow this type of computation the paradox (along with many other basic physical insights) goes away. What we can do is to talk about open quantum systems, namely to assume, to start with, that we consider unitary evolution on a large Hilbert space where we have access only to a smaller subspace. (So this is the setting of noisy quantum evolutions.) And, yes, the small Hilbert space will represent a good chunk of the physical situation outside and inside the black hole.

I see reasons to hope based on standard assumptions on quantum noise that the black hole firewall paradox remains equally powerful (even with stronger informational foundations) for open quantum systems as well. (For example, we can send many Alices rather than one to overcome the noise, and we can take quantum computation for granted.) This may be known.

But perhaps, conditioned on a no quantum fault-tolerance principle, we have a chance to resolve the paradox. In particular, the hypothesis of positive correlations for information leaks for entangled subsystems seems relevant. If so, then in turn, we may gain insight on the quantitative aspects of this hypothesis. (Note that my no-quantum-fault-tolerance conjectures are of an information-theoretic nature and not of a computational-complexity nature.)

Comment #141 April 7th, 2014 at 5:49 pm

Say, Motl: you confirm any practical physically observable events with that string theory of yours yet? I get sick of seeing tv science shows featuring that overly-featured pop physicist Michio Kaku dumbin’ it down for the audiences, saying the same things over & over about “what if the universe were made of tiny vibrating strings”. And, so what if it were? Never ever venturing any significant and original prediction about the physical universe. All because tv producers are too lazy to search out lesser known physicists doing the hard theoretical and physical experimental work that’s less glamorous than mere string theory speculation. Because, contrary to their almighty holy propaganda that “learning is fun”, in fact, learning IS hard and learning IS boring, and you learn by DOING – whether homework problems or physical experiments – not just by reading & listening to lectures. (Although always better to read more than less.)

Frankly, ANY human discipline, not just string theory, can be corrupted. It’s not obvious or even observable whether the disciple is goofing off, doing mental masturbation, or buckling down and producing real hard work.

Anyway, bottom line: there are far too many string theorists (like Motl) just goofing off with mindless speculation, because they want to avoid the real hard work of either doing proofs or doing hard physical experimental work, like climate scientists do. Now THERE’S a REAL scientist: courageous heroes putting their lives on the line for trivial pay, trekking the poles, warriors against the stupidity of an ungrateful public, fighting to protect future generations that breeders force on us.

Comment #142 April 8th, 2014 at 9:36 am

Lubos Motl clearly does not understand how probability works. Given no information about whether A or not-A is true (say, where A is the statement that P=NP), the only thing one can assert is p(A) + p(not A) = 1, where p(A) is the probability of A.

There is no logical reason to make the assumption that p(A)=p(not A)=0.5

Actually, more importantly, the concept of “probability” is not even well-defined in this situation. The only meaning the word “probability” could have is to run an experiment where the universe runs and mathematicians eventually prove P=NP. Then restart the universe and mathematicians prove P!=NP. Then restart the universe, and so on…

Then the only logically consistent well-defined meaningful definition of probability, p(A), would be the limit of the number of times A occurs divided by N, the number of trials.
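That frequency-limit definition is easy to illustrate numerically. Here is a toy sketch of my own, with an ordinary repeatable random event standing in for the restarted universes: as the number of trials N grows, the observed frequency converges toward the underlying probability.

```python
import random

random.seed(1)

def frequency(p, trials):
    """Estimate p as (#occurrences of the event) / (#trials)."""
    hits = sum(1 for _ in range(trials) if random.random() < p)
    return hits / trials

# The frequency estimate tightens around p = 0.3 as N increases.
estimates = [frequency(0.3, n) for n in (100, 10_000, 1_000_000)]
```

Of course, the commenter's point stands: for a one-shot statement like P=NP there is no ensemble of trials to average over, which is exactly why the frequentist reading gives the "probability" no obvious meaning there.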

That’s it. Clearly, Lubos Motl has never earned a degree in a hard science such as physics or math, and he just changed grades to get into school.

Comment #143 November 17th, 2014 at 8:44 pm

http://www.youtube.com/watch?v=94xgbhB_B_Q

I accidentally found this, a pretty fresh upload. Leonard discusses the same topic, and the funny thing is (well, at least as it seems to me) that the confused physicists in the Q&A section are totally not getting how the generic state / exponential complexity phase fits into the discussion. I also think it has to do with unfamiliarity with complexity thinking, basically:

1) not being afraid of exponentials enough to really appreciate the sense in which Leonard constantly points out that theoretically firewalls are possible and common;

2) unfamiliarity with the pseudorandom vs. truly random point, which is what Scott would immediately realize, and does in this post in (3).

Comment #144 April 9th, 2015 at 9:38 pm

“Instead, he’s content if the experiment is “merely” polynomially hard—but in the same sense that (say) unscrambling an egg, or recovering a burned book from the smoke and ash, are polynomially hard.”

Relevant:

http://news.sciencemag.org/sifter/2015/01/scientists-unscramble-egg-proteins

Comment #145 December 15th, 2015 at 9:14 pm

Very late to the party, but just wanted to make an extremely naive and trivial observation: various “probabilistic counting” algorithms for estimating the number of distinct values in a multiset (e.g., https://en.wikipedia.org/wiki/HyperLogLog) rely on hashing each value as it arrives, and then taking the maximum number of leading zeroes over all hashes. Since each consecutive leading zero represents a “halving” of the currently-probed hash space, you can use this maximum number of leading zeroes as an estimate of the base-2 logarithm of the number of distinct values (after averaging results from several hash functions and correcting for bias). This just seemed cutely analogous with the notion of using diffusion as a “clock” to retrodict the time taken to reach the current state. (The analogy of an equilibrium state here is the maximum value of the number of leading zeroes becoming “saturated”, i.e., equal to the number of bits in the hash function’s output.)
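The leading-zero trick described above fits in a few lines. This is a bare-bones sketch (function names are mine, and a real HyperLogLog adds bucketing into many registers, averaging, and bias correction, which this deliberately omits):

```python
import hashlib

def leading_zeros(h, bits=32):
    """Number of leading zero bits in a `bits`-wide hash value h."""
    for k in range(bits):
        if (h >> (bits - 1 - k)) & 1:
            return k
    return bits

def estimate_distinct(items, bits=32):
    """Crude distinct-count estimate: 2 ** (max leading zeros seen)."""
    max_lz = 0
    for x in items:
        # Take 32 bits of a cryptographic hash as the "random" fingerprint.
        h = int.from_bytes(hashlib.sha256(str(x).encode()).digest()[:4], "big")
        max_lz = max(max_lz, leading_zeros(h, bits))
    return 2 ** max_lz

# Duplicates cannot change the estimate, since they hash to the same value.
est = estimate_distinct(list(range(1000)) + list(range(1000)))
```

The single-maximum estimator has huge variance (it can be off by a large factor either way), which is exactly why HyperLogLog averages many such "clocks"; the analogy to retrodicting elapsed time from diffusion, saturation and all, carries over.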