- Is the relativization barrier for BPP=BQP just the one mentioned in your Algebrization paper with Avi Wigderson? In particular Theorem 5.11 (iv)?

No, algebrization is an extension of the original relativization barrier. The existence of oracles relative to which BPP=BQP and BPP≠BQP was already shown by Bernstein and Vazirani in 1993.

- Why doesn’t the existence of an FPRAS for #P with an NP oracle imply that BQP \subseteq BPP^NP \subseteq \Sigma_3?

That’s an excellent question with a technical answer: it’s because the FPRAS is for #P, not for GapP. In other words, it’s for sums of exponentially many **nonnegative** terms, not sums of both positive and negative terms that can cancel each other out. But the latter (i.e., GapP) is what’s relevant for calculating amplitudes and probabilities in a quantum computation.
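To make the cancellation point concrete, here’s a toy numerical sketch (my own illustration with invented parameters, not anything from the original discussion): sampling-based estimation achieves small *relative* error for a sum of nonnegative terms, the #P setting, but fails badly when the terms nearly cancel, the GapP setting, because the true sum can be exponentially smaller than the individual terms.

```python
import random

random.seed(0)

N = 100_000   # number of terms in the sum
SAMPLES = 1_000  # how many terms we actually look at

def sample_estimate(terms):
    """Unbiased estimate of sum(terms) from a uniform random sample."""
    picked = random.choices(terms, k=SAMPLES)
    return sum(picked) * len(terms) / SAMPLES

# Regime 1: all terms nonnegative (the #P setting).
positive = [random.random() for _ in range(N)]
true_pos = sum(positive)
rel_err_pos = abs(sample_estimate(positive) - true_pos) / true_pos

# Regime 2: signed terms that almost perfectly cancel (the GapP setting).
# The true sum is just positive[-1], but every term is of order 1.
signed = positive + [-x for x in positive[:-1]]
random.shuffle(signed)
true_signed = sum(signed)
rel_err_signed = abs(sample_estimate(signed) - true_signed) / abs(true_signed)

print(f"relative error, nonnegative terms: {rel_err_pos:.3f}")
print(f"relative error, cancelling terms:  {rel_err_signed:.3f}")
```

The first relative error comes out small, while the second is huge: the sampling noise is set by the magnitudes of the terms, which dwarfs the nearly-cancelled true sum. That, in miniature, is why Stockmeyer-style approximate counting handles #P but not GapP.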

Incidentally, this distinction between #P and GapP is at the core of the hardness arguments that Alex Arkhipov and I made for BosonSampling, and which have since been adapted to other quantum supremacy experiments.

There’s something claiming some connection between quantum computing and gravity. I haven’t figured out what it’s actually saying. Just putting it here in case it interests anyone.

“It seems to me that a QC with 10,000,000 qubits is relying on the linearity of the wave function in an extreme regime compared to a QC with 50 qubits.

It would be really miraculous to get a (double) “exponential” free lunch out of nature.”

If quantum unitary evolution broke down (a breakdown of linearity alone would do it, not just some specific nonlinearity), QM would be able to solve #P problems, which most people think require exponential time. So a breakdown of unitary evolution would actually be a boon to QC, not a limitation.

“With the wave function encoding so much information about reality, it seems that reality has to have some way to account for it (in the two-slit interference experiment, what exactly is accounting for the interference pattern?)

This must be true, even if it means something really un-intuitive, like the QM reality we observe is the “output” of a simulation, and the wave function we can’t observe is accounted for as the internals of the simulation (at the higher level), and it could even be classically computed, taking exponentially long for the simulator, but seeming magical and instantaneous to the simulated (us).”

What if the (exponential) state space of QM arises from turbulence-like behavior with no smallest length scale? It could even explain the use of complex numbers in QM. If there is a diffusion through length scales induced by unstable modes with imaginary momenta, shouldn’t the diffusion constant be imaginary (or at least complex)? Wouldn’t more complex, simulation-like structures be ruled out unless P=NP?

Scott #20:

(1) Is the relativization barrier for BPP=BQP just the one mentioned in your Algebrization paper with Avi Wigderson? In particular Theorem 5.11 (iv)?

(2) https://cs.stanford.edu/~trevisan/cs254-10/lecture08.pdf

Why doesn’t the existence of an FPRAS for #P with an NP oracle imply that BQP \subseteq BPP^NP \subseteq \Sigma_3?

Science works by mapping “observables” in the physical world to explicit mathematical objects in the models.

But what about the reverse?

I.e., must any “implicit” mathematical object needed to simulate the physical world also map to some (unobservable) entity in the physical world?

With the wave function encoding so much information about reality, it seems that reality has to have some way to account for it (in the two-slit interference experiment, what exactly is accounting for the interference pattern?)

This must be true, even if it means something really un-intuitive, like the QM reality we observe is the “output” of a simulation, and the wave function we can’t observe is accounted for as the internals of the simulation (at the higher level), and it could even be classically computed, taking exponentially long for the simulator, but seeming magical and instantaneous to the simulated (us).

As an example, we could run a gigantic instance of some sort of “game of life” either on a QC or a classical computer (made of transistors or water pipes and valves). The details of this machinery (and its efficiency) wouldn’t matter to “beings” living in that game of life instance; they would have no idea what it takes to update/compute/select the successive states of their universe (*).
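To make the example concrete, here is a minimal sketch (my own, not part of the original comment) of a Game of Life update: the entire “physics” of that universe is this one rule, and nothing in the rule reveals what hardware executes it.

```python
from collections import Counter

def step(live):
    """Advance a Game of Life universe by one generation.
    `live` is the set of (x, y) coordinates of live cells."""
    # Count, for every cell, how many of its eight neighbours are alive.
    counts = Counter(
        (x + dx, y + dy)
        for (x, y) in live
        for dx in (-1, 0, 1)
        for dy in (-1, 0, 1)
        if (dx, dy) != (0, 0)
    )
    # Standard rules: birth on exactly 3 neighbours, survival on 2 or 3.
    return {cell for cell, n in counts.items()
            if n == 3 or (n == 2 and cell in live)}

# A "blinker" oscillates with period 2; the rule, not the substrate
# (transistors, water pipes, or anything else), determines this.
blinker = {(0, 0), (1, 0), (2, 0)}
assert step(step(blinker)) == blinker
```

Whether `step` runs on silicon, plumbing, or something stranger is invisible from inside the grid, which is exactly the substrate-independence the comment is pointing at.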

And, in a recursive manner, those “beings” would end up trying to recreate their own instance of the game of life, reproducing all the properties they observe, and they would have to do this either with a simulation mimicking the machinery of the upper level of their reality (inefficiently, like us building classical computers to simulate QM systems) or by somehow tapping the implicit power of the upper level of their reality (like us building a QC).

Of course, just like us, those beings aren’t really “building” anything… it all unrolls in a deterministic way; it’s about whether a universe, its rules, and its initial conditions have the power to “groom” complexity (i.e., which states are linked together).

(*) It’s not even clear whether there’s any need for a simulator: each state of the universe exists in the infinite stack of all possible states, and the explicit “act” of linking contiguous states (what we call dynamical evolution) is maybe totally unnecessary; all the states exist at once (a space-time block). Maybe “consciousness” is the implicit chaining of related states (for each possible chaining rule, a different universe exists, with different experiences)… well, that’s my own crazy take on Max Tegmark’s idea of reality being purely mathematical.

So maybe nature’s way to realize quantum effects on a large scale (aka a QC) was first to evolve carbon based classical computers (our brains, and in particular, Scott’s brain)…

What about an analogy to the fact that even though the Maxwell equations are linear (and this is coming from QM being linear, right?) there is a practical limit to the superposition of EM fields (https://en.wikipedia.org/wiki/Schwinger_limit)?

In “extreme” modes, the nature of the fabric of time and space (with virtual particles, etc, which aren’t covered by the base theory) starts to creep in to create unexpected side effects and limitations.

It seems to me that a QC with 10,000,000 qubits is relying on the linearity of the wave function in an extreme regime compared to a QC with 50 qubits.

It would be really miraculous to get a (double) “exponential” free lunch out of nature.

On a meta level, if such a free lunch were indeed available, you’d think we would see it around us… after all, nature did a pretty good job of packing so much classical computational power into the human brain (in terms of neurons and connections), even compared to our current best silicon chips!

If a computer (quantum or not) “could” solve the halting problem, is that similar to solving NP-complete problems in polynomial time? It seems related to CTCs also.

Exactly the same connectivity issues arise with classical computer architecture. Have you ever seen the inside of a Cray—just a giant tangle of connections among processors arrayed on the outer perimeter? Or for that matter, the inside of a brain, after which the Cray was modeled? 🙂

- Is there a relationship between randomness and determinism?

Err, that one precludes the other? 🙂

The more interesting question is whether “randomness” and “determinism” exhaust all the possibilities—my personal inclination is to say no. For more, see my Ghost in the Quantum Turing Machine essay.
