I like such very general arguments and I do make them myself. E.g. my suggestion (#48) that realistic quantum processes in nature do not involve quantum error-correction/fault tolerance, and that therefore computationally superior QC are not required for their simulation, is also a “philosophical” non-technical suggestion. (And you offered some counter arguments in #55 #60 to which I offered some counter-counter arguments.) As I said, such arguments of a very general nature are interesting, and so are attempts to make counter arguments and to find a methodology for discussing them. So if you see some problem with my recent counter argument #117, I’d be happy to hear it.

As for my “a priori” argumentation “against” quantum fault tolerance you are absolutely correct. Physicists (and CS people and mathematicians) might see (and actually do see) my argumentation in the same light, and to a large extent I agree.

Part of the challenge is to translate rather general ideas into technical, mathematically formulated conjectures. Even then it is very difficult to distinguish conditions that would cause QEC/FT to fail from conclusions drawn from the (hypothetical) failure of QEC/FT. (This is what Greg referred to as tautological kibitzing, a term I actually like, as mathematics is a sort of tautological kibitzing.)

In the end, experimental QEC or some other experimental physics will decide these issues (and as there is a wide consensus now that QC are possible, this may well be the outcome). But it would be nice to develop, in mathematical language, an appealing story of how QC fail, a little like the highly developed, terrific story of how QC succeed. (And it is also nice to discuss these matters from a philosophical point of view, as you and I do.)

Gil (Kalai) said:

If we want to simulate realistic noisy quantum processes (including all [Scott’s] examples) there is no a priori reason to expect that the full power of quantum computers will be needed. If these processes do not involve any phenomena similar to the highly entangled QEC states or any fault tolerance it is not clear why fault tolerance is required for their simulations.

Scott (Aaronson) said:

Why do we find it so useful to use almost-noiseless, fault-tolerant computers to simulate things as noisy and chaotic as (say) the earth’s climate, or the airflow around a wing?

To reconcile these two points of view (and to suggest an explicit answer to Scott’s question): large-scale classical simulations typically *do* include either an explicitly random element (that “state of sin” called a random number generator) or else an explicitly information-destroying element (like artificial viscosity in simulations of turbulent fluid flow).

In consequence, conceiving of classical simulations as “almost-noiseless” is somewhat misleading. The mathematical reality is that *most* of the effort and sophistication in the design of classical simulation algorithms is focussed upon the efficient destruction of irrelevant information!
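As a concrete (and entirely illustrative) sketch of this kind of deliberate information destruction, here is a minimal finite-difference solver for Burgers’ equation with an explicit artificial-viscosity term. The grid size, the viscosity coefficient `eps`, and the time step are my own choices for the sketch, not anything from the discussion above:

```python
import numpy as np

def burgers_step(u, dx, dt, eps):
    """One explicit step of u_t + u*u_x = eps*u_xx on a periodic grid.
    The eps*u_xx term is the artificial viscosity: it deliberately
    destroys small-scale information so the forming shock stays smooth
    at the grid resolution instead of blowing up."""
    up = np.roll(u, -1)                      # u[i+1] (periodic)
    um = np.roll(u, 1)                       # u[i-1]
    adv = u * (up - um) / (2 * dx)           # central advection term
    diff = eps * (up - 2 * u + um) / dx**2   # viscous smoothing term
    return u + dt * (diff - adv)

n = 200
length = 2 * np.pi
x = np.linspace(0.0, length, n, endpoint=False)
dx = length / n
eps = 0.05                   # artificial-viscosity coefficient
dt = 0.2 * dx**2 / eps       # well inside the diffusive stability limit
u = np.sin(x)                # smooth data that steepens into a shock
for _ in range(2000):
    u = burgers_step(u, dx, dt, eps)
# with the viscosity term the amplitude stays bounded and decays;
# with eps = 0 the same central scheme develops grid-scale oscillations
```

The point of the sketch is the trade itself: the diffusion term throws away exactly the small-scale detail that would otherwise make the computation intractable, which is the “efficient destruction of irrelevant information” described above.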

What are we to make of this? My own view pretty much matches Gil’s (as I understand it) — to me it seems plausible that noisy quantum systems are not algorithmically harder to simulate than noisy classical systems. In particular, the role that artificial viscosity plays in classical simulations is played in quantum systems by the realization of noise processes as covert measurement processes.

In consequence, just as fluid-mechanical simulations need not encode microscopic turbulent eddies, quantum simulations need not encode exponentially high-order correlations.

From this point of view, zero-viscosity fluid flow and zero-noise quantum dynamics both are computationally intractable, for much the same underlying reasons. Conversely, viscous fluid flow, and noisy quantum systems, both can be simulated with (possibly immense) classical resources.

From this point of view, simulating quantum systems is easier than simulating classical systems, for the common-sense reason that the mathematics of quantum mechanics has more sophisticated tools for information destruction! 🙂

A “description of interacting physical systems that’s vastly more efficient than writing down the entangled wavefunction” is the set of states known to quantum chemists as Slater determinants, which are manipulated by algorithms that are collectively known as the Löwdin rules.

If we consider these Slater determinants as geometric objects, they turn out to be objects that are known to differential geometers as Grassmannians, which have the interesting curvature property of being Einstein manifolds (their metric tensor is proportional to their Ricci tensor).
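To make the Löwdin-rules remark concrete, here is a minimal sketch of the simplest such rule: the overlap of two N-electron Slater determinants reduces to an N × N determinant of single-orbital overlaps. The function name and the specific matrices are my own illustration, not anything from the comment:

```python
import numpy as np

def slater_overlap(A, B):
    """Overlap <Phi_A | Phi_B> of two Slater determinants, where the
    columns of A and B hold the occupied-orbital coefficients in a
    common orthonormal basis. The N-electron inner product collapses
    to the determinant of the N x N orbital-overlap matrix."""
    return np.linalg.det(A.conj().T @ B)

rng = np.random.default_rng(0)
# Two determinants built from the same 3-dimensional occupied space,
# one mixed by a unitary within that space: physically the same state,
# so the overlap has magnitude 1.
A, _ = np.linalg.qr(rng.standard_normal((6, 3)))  # orthonormal columns
Q, _ = np.linalg.qr(rng.standard_normal((3, 3)))  # 3x3 orthogonal mix
B = A @ Q
print(abs(slater_overlap(A, B)))  # ~1.0
```

The efficiency claim is visible in the shapes: the determinants live in a 6-orbital, 3-electron space, yet the overlap computation never touches the (combinatorially large) many-electron expansion.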

IMHO, it is pretty astonishing how little is known about these quantum state-space manifolds from a QIT point of view, given that the fundamental articles on quantum chemistry are among the most-cited in all of the scientific literature.

A good starting point is a recent pre-print by Beylkin, Mohlenkamp, and Pérez.

In the case of Scott’s argument, I think I understand what he claims:

“if quantum mechanics can’t be used to efficiently sample any probability distribution beyond what a classical computer can efficiently sample (forget for the moment about deciding all languages in BQP), then there must be an efficient classical algorithm to simulate the outputs of a quantum process.”

What I say is that by “a quantum process” we should mean “a quantum process that can be simulated to start with” (and not “any quantum process we can imagine”).

Scott continues:

“But that, in turn, means that there’s some complete description of interacting physical systems that’s vastly more efficient than writing down the entangled wavefunction (i.e., that requires polynomially many rather than exponentially many bits).”

And, also here, when Scott talks about “interacting physical systems” he really refers to “interacting physical systems that we can describe to start with” (not anything that we can imagine). So this paragraph can be read as a reference to the inability to describe complex quantum systems, and not to the existence of a short description for an arbitrary quantum system.

But the remainder of this post outlines a philosophy-based physics that is pretty much exactly opposite. So maybe the overall lesson is that philosophy-based quantum-informatic physics does not unveil “the truth” but rather unveils an overview of all the possibilities (this overview being just as valuable).

We’ll begin by revisiting a “spooky informatic mystery of classical physics” whose main virtue is that it is an *obvious* mystery. Presumably when the spooky mysteries of quantum physics are unveiled, they too will seem obvious, so hopefully our time is not wasted when we review these obvious classical mysteries.

Our spooky classical informatic mystery concerns von Neumann machines. We start with a von Neumann machine that has a one-hour power supply. Assuming (classically) that matter is infinitely divisible, we instruct the machine to build two duplicates of itself, each half the size of the original, with double the clock speed.

Well, you can see how this game goes … after two hours we have constructed an infinite binary tree of von Neumann machines. By including an extra gigabyte of RAM in each machine, and a communication link between nodes, we can search the graph for short proofs of P≠NP, or in general, solve any NP-complete problem we want.

So on purely philosophical-informatic grounds, we (reluctantly) conclude that perhaps matter is not infinitely divisible. Too bad!

Let’s weaken our infinite-divisibility assumption. Basically, let’s substitute an exponentially large state-space dimension for infinite divisibility. Specifically, let’s assume that the state-space of matter is exponentially large (and complex … it is a quantum state-space).

Bolting on some other assumptions (like linearity and POVM-based measurement) we find that in this restricted quantum state-space we can (presumably) no longer solve NP-complete problems, but we can still solve some classes of problems that are non-classically hard (like database search and factoring).

According to the rules of philosophy-based informatic physics, we must conclude (reluctantly) that the quantum state-space of matter is not exponentially large … because the ultra-conservative quantum-informatic view is that spooky capabilities like non-classically fast database searching are just too “informatically weird” for Nature to embrace.

What could this smaller (but still quantum) state-space be? For reasons that Scott set forth very clearly in his Scientific American article, we have to be careful. We cannot assume that the state-space is linear and the dynamics is nonlinear … this leads to NP-complete computing and/or non-causal communication. But we *can* assume that the quantum state-space is low-dimensional and curved, and that the dynamics is simply ordinary quantum mechanics projected onto this curved space.

AFAICT, none of the traditional informatic no-go theorems apply in this latter case. Perhaps someone can comment upon this?

In any case, we are therefore led (by this highly uncertain philosophical physics reasoning) to entertain the idea of a world in which quantum state-space is curved and dynamical, much like the state-space of general relativity. IMHO this would be a perfectly fun world to live in … which is one of the main goals of philosophy. 🙂

Even supposing that Nature’s quantum space is exactly linear, exponentially large, and non-dynamical (which seems like a mighty big assumption when we say it that way), it surely is a very important fact about quantum systems that they can be well-approximated by a curved state-space of lower-dimensionality (as the quantum chemists are teaching us).

To return to a classical analogy: even if the simplex algorithm *sometimes* requires exponentially many steps to run, the fact that it *usually* requires only polynomially many steps is a very important fact about the world, from both a mathematical and a practical point of view.

(My “counter argument” below is just regarding Scott’s argument that can be read as QC being “obviously possible” in the context of QM.)

For scientific computing we can consider two major issues:

A) What can be done,

B) What is the required computational complexity.

Thus, in the context of modeling and simulation, computational complexity is sort of a second-order matter (albeit, extremely important).

Scott’s argument can be rephrased as follows. First he argues that if computationally superior quantum computers are impossible (in principle), then the computational complexity required to simulate physics is captured by classical computing. Let’s take this part for granted.

Next Scott claims that this means that quantum physics is vastly *simpler* than we think. There is a miraculous polynomial description of exponentially long vectors.

However, there is an alternative interpretation:

If the computational complexity required to simulate physics is captured by classical computing, this can instead mean that quantum physics is much *harder* than we think: we will not be able to describe (or prescribe, by computers or by experiments) quantum processes whose simulation requires superior computational complexity.

I do not think these two interpretations are the same. (I am not sure whether either of them contradicts early intuitions about QM, but the second one, especially, does not seem to.)

(Here is an example: the computational complexity required to predict the weather the day after tomorrow, based on measurements conducted tomorrow, is greater than what is required to predict the weather 100 years from now based on the same measurements. This is because the latter task is harder.)
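A toy version of this asymmetry (my own illustration, not from the comment) is visible in any chaotic map: a *detailed* short-horizon forecast must track the state to enormous precision, because tiny disagreements are amplified exponentially, while long-run questions can only be answered statistically and are insensitive to that precision:

```python
def logistic(x):
    """The fully chaotic logistic map x -> 4x(1-x) on [0, 1]."""
    return 4.0 * x * (1.0 - x)

# Two initial states agreeing to 12 decimal digits.
a, b = 0.3, 0.3 + 1e-12
worst = 0.0
for _ in range(100):
    a, b = logistic(a), logistic(b)
    worst = max(worst, abs(a - b))
# within 100 steps the 1e-12 disagreement is amplified to order one,
# so a detailed 100-step forecast would need >12 digits of input data
```

For the long-horizon analogue of the “100 years” prediction, one would give up tracking the trajectory and instead compute time-averaged statistics, which converge regardless of the 1e-12 perturbation.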
