- I suppose the objection now is that it’s impossible to construct ideal fluids in the lab, so we could never demonstrate “bucket supremacy” on this counting problem.

Yes, that’s *precisely* the objection. And it’s a completely devastating objection, not just for fluids but for *any* analog system where you depend for your computational power on the exponential precision of observable parameters like position, momentum, or energy. If it weren’t a devastating objection, then it would actually be possible to build classical analog computers that violated the Extended Church-Turing Thesis, and probably even solved NP-complete or PSPACE-complete problems in polynomial time, by encoding exponentially many bits into continuous parameters. But it’s not just that we have 60+ years of experience showing that it doesn’t work (see, e.g., my NP-complete Problems and Physical Reality survey or my memrefuting post). The deeper problem is that we *already know* that continuous idealizations of nature break down if you push them as hard as would be needed to get computational speedups. So for example, there are no ideal fluids satisfying the Navier-Stokes equations, because real fluids are made of atoms. And the Planck scale (i.e., your analog computer collapsing to a black hole), if not stuff long before that, places a fundamental limit on the precision with which you could ever hope to resolve distances, times, energies, and momenta.

- If the teapot problem can be described as a math problem with some very large but finite number of parameters, why is the resulting computation always polynomial, i.e., why doesn’t it ever become exponential? Scott talked about Avogadro’s number vs. Hilbert space; why doesn’t Hilbert space enter into the teapot math problem, as long as the number of inputs is finite?

Once again, because of the Extended Church-Turing Thesis. To whatever extent you can model the atoms classically, a classical computer could simulate the teapot by just keeping track of the position, momentum, and so forth of every last atom. And to whatever extent you need to model the atoms quantum-mechanically … well, to that extent the teapot *literally is* a quantum computer, albeit an extremely special-purpose one! 🙂
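As a cartoon of what that classical simulation looks like, here’s a sketch (the toy system, a 1-D chain of unit-mass particles with harmonic neighbor forces, is my own stand-in for “the atoms of a teapot”): a velocity-Verlet integrator that tracks every particle’s position and momentum, at cost linear in the number of atoms per step.

```python
# Minimal classical molecular-dynamics sketch: n unit-mass particles on a
# line, coupled to their neighbors by harmonic springs (illustrative only).

def step(pos, vel, dt=1e-3, k=1.0):
    """Advance all positions and velocities by one velocity-Verlet step."""
    n = len(pos)

    def forces(p):
        f = [0.0] * n
        for i in range(n - 1):            # spring between neighbors i, i+1
            s = k * (p[i + 1] - p[i])
            f[i] += s
            f[i + 1] -= s
        return f

    f0 = forces(pos)
    half = [v + 0.5 * dt * a for v, a in zip(vel, f0)]   # half-kick
    pos = [x + dt * v for x, v in zip(pos, half)]        # drift
    f1 = forces(pos)
    vel = [v + 0.5 * dt * a for v, a in zip(half, f1)]   # half-kick
    return pos, vel
```

The point is only that the bookkeeping scales polynomially with the number of atoms, which is exactly the Extended Church-Turing claim for the classical regime.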

- How about if you change the teapot problem so that the output space is exponentially large. For example, outputting the sizes of the shards (in terms of their mass)?

Good question! I thought about that as well. In that case, one would no longer have the knock-down argument that the whole distribution can be estimated by just explicitly estimating all the probabilities. But one could, of course, fall back on the more general argument from this post: that

(1) if you define the problem as “output the sizes of all the shards, given this atom-by-atom mathematical model of a teapot,” then not even the physical teapot itself will be able to solve it, due to microscopic differences, whereas

(2) if you instead define it as “sample from the probability distribution over lists of shard sizes, given this coarse-grained description of a teapot,” then a classical computer *can* efficiently solve the problem, to whatever extent you believe the Extended Church-Turing Thesis.
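For point (2), here’s a toy sampler (entirely illustrative; the recursive fragmentation rule is my invention, not a physical model) showing the kind of efficient classical sampling meant: draw a list of shard masses from a coarse-grained model, with the output list possibly long but each sample cheap to produce.

```python
import random

def sample_shards(mass=1.0, min_mass=0.05, rng=random):
    """Split a mass at a random point, recursively, until pieces are small.

    A stand-in for sampling from a coarse-grained shard-size distribution;
    the cut range and survival probability are arbitrary choices.
    """
    if mass <= min_mass or rng.random() < 0.3:   # sometimes a piece survives
        return [mass]
    cut = rng.uniform(0.2, 0.8) * mass
    return sample_shards(cut, min_mass, rng) + sample_shards(mass - cut, min_mass, rng)

shards = sample_shards(rng=random.Random(0))
assert abs(sum(shards) - 1.0) < 1e-9   # mass is conserved across the shards
```

Whether this particular distribution resembles real teapot shards is beside the point; what matters is that sampling from a coarse-grained model is a polynomial-time affair.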

- In your future back and forth with others on this issue, why not just demand that the inputs and outputs to their favorite experiment/device be as categorically stated as those to the integer-factoring problem/algorithm?

Just for the obvious reason that we don’t yet *have* quantum computers that achieve supremacy (or anything close to it) for factoring, or other problems in NP ∩ BQP. Once that exists, we’ll presumably no longer need this discussion. But the existing quantum supremacy experiments really do require a careful conceptual discussion of what we mean by a “problem,” and by “solving” the problem, in theoretical CS—a discussion I’m happy to have, but a discussion nonetheless!

There are different problems that can be solved.

We know that a real-life “perfect” coin has a 50/50 probability: perfect in the sense that the coin is unbiased and that there’s enough chaos in the system (which is the case when a human tosses the coin).

We’re never talking about reproducing a given instance of a coin-toss experiment (since no one can throw the same coin in exactly the same way twice either), but about capturing the asymptotic behavior of the system when the coin is tossed over and over.

We don’t need to model the coin down to the atomic level, plus all the molecules in the room, plus all the molecules in the human tosser (including his/her entire brain), … in order to capture the fact that a coin toss gives a 50/50 probability. Modeling all that stuff would buy us nothing. So simulating such a coin toss on a computer is trivial: you just generate a random number between 0.0 and 1.0, and if it’s smaller than 0.5, it’s heads, and tails otherwise.
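That trivial simulation, spelled out (a minimal sketch; the seed and sample size are arbitrary):

```python
import random

def toss(rng=random):
    """One fair coin toss: a uniform draw in [0, 1), heads below 0.5."""
    return "heads" if rng.random() < 0.5 else "tails"

rng = random.Random(42)          # seeded only for reproducibility
n = 100_000
heads = sum(toss(rng) == "heads" for _ in range(n))
print(heads / n)                 # approaches 0.5 as n grows
```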

But this is all different from saying: “here’s a real-life coin, which we know is biased in some way (say, heads comes up only 40% of the time, but you’re not given that actual distribution). I give you this coin, and you’re allowed to measure it any way you want (imagine that the coin’s density is non-uniform), and then you have to model the tossing of this coin such that your simulation gives the same biased distribution as the real coin.”
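A sketch of how that task might look in code (all names and numbers are illustrative; in reality the bias estimate would come from physical measurements of the coin, and here a simulated biased coin stands in for the real one):

```python
import random

def estimate_bias(observed):
    """Estimated probability of heads from a sample of observed tosses."""
    return sum(observed) / len(observed)

def simulate_toss(p_heads, rng=random):
    """One toss of the modeled (biased) coin."""
    return rng.random() < p_heads

rng = random.Random(1)
# Stand-in for observations of the physical coin: hidden bias of 0.4.
observed = [rng.random() < 0.4 for _ in range(50_000)]

p_hat = estimate_bias(observed)                      # measurement phase
heads = sum(simulate_toss(p_hat, rng) for _ in range(50_000))   # model phase
```

The interesting part of the challenge is of course the measurement phase, not the two-line simulation that follows it.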

- I suppose the objection now is that it’s impossible to construct ideal fluids in the lab, so we could never demonstrate “bucket supremacy” on this counting problem. I’m not really sure I agree with that. Further, we could pose similar questions using other idealized dynamical systems that are easier to build but extremely hard to simulate (like a double pendulum). If the correct answer depends on the accuracy of the simulation, then classical algorithms (i.e., numerical integration methods) will still scale exponentially.
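To make the double-pendulum point concrete, here is a minimal sketch of the standard classical approach, Runge-Kutta numerical integration (the unit masses and lengths, initial angles, and step size are illustrative choices of mine). The per-step cost is constant; the commenter’s worry is that chaotic sensitivity forces the step count to grow with the precision demanded of the answer.

```python
from math import sin, cos

G = 9.81  # SI gravity; unit masses and rod lengths assumed throughout

def deriv(state):
    """Equations of motion for a frictionless double pendulum (m = l = 1)."""
    t1, t2, w1, w2 = state
    d = t1 - t2
    den = 3.0 - cos(2.0 * d)
    a1 = (-3.0 * G * sin(t1) - G * sin(t1 - 2.0 * t2)
          - 2.0 * sin(d) * (w2 * w2 + w1 * w1 * cos(d))) / den
    a2 = 2.0 * sin(d) * (2.0 * w1 * w1 + 2.0 * G * cos(t1)
                         + w2 * w2 * cos(d)) / den
    return (w1, w2, a1, a2)

def rk4_step(state, dt):
    """One classical fourth-order Runge-Kutta step."""
    k1 = deriv(state)
    k2 = deriv(tuple(s + 0.5 * dt * k for s, k in zip(state, k1)))
    k3 = deriv(tuple(s + 0.5 * dt * k for s, k in zip(state, k2)))
    k4 = deriv(tuple(s + dt * k for s, k in zip(state, k3)))
    return tuple(s + dt / 6.0 * (a + 2 * b + 2 * c + e)
                 for s, a, b, c, e in zip(state, k1, k2, k3, k4))

def energy(state):
    """Total energy; conserved exactly by the dynamics, drifts numerically."""
    t1, t2, w1, w2 = state
    kinetic = w1 * w1 + 0.5 * w2 * w2 + w1 * w2 * cos(t1 - t2)
    potential = -2.0 * G * cos(t1) - G * cos(t2)
    return kinetic + potential

state = (1.0, 0.5, 0.0, 0.0)      # initial angles (radians), at rest
e0 = energy(state)
for _ in range(2000):             # integrate 2 time units at dt = 1e-3
    state = rk4_step(state, 1e-3)
```

Note that the energy drift per step shrinks polynomially with the step size; what blows up in a chaotic system is the number of accurate digits you need in the *state* to pin down a specific trajectory, which is exactly the precision objection discussed above.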
