The assumption is wrong for optical quantum computing, due to the dual-rail representation:

It is possible for a cavity to contain a superposition of zero or one photon, a state which could be expressed as a qubit c0 |0⟩ + c1 |1⟩, but we shall do something different. Let us consider two cavities, whose total energy is ℏω, and take the two states of a qubit as being whether the photon is in one cavity (|01⟩) or the other (|10⟩). The physical state of a superposition would thus be written as c0 |01⟩ + c1 |10⟩; we shall call this the

dual-rail representation.

This is from section 7.4.1 in Nielsen and Chuang. The important observation for the phase shift issue is stated in section 7.4.2:

Note that the dual-rail representation is convenient because free evolution only changes |ψ⟩ = c0 |01⟩ + c1 |10⟩ by an overall phase, which is undetectable.

It sounds like a voluntary decision. OK, for a single qubit, the state |00⟩ will be abundant and useless, and |11⟩ will be extremely rare. But for larger state spaces, any representation where each tensor product basis state (occurring in the superpositions) has the same number of 1s would have been a valid alternative too. According to Scott’s lecture notes, the name dual-rail representation arose “since the two channels, when drawn side-by-side, look like railway tracks”.
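The “overall phase” observation can be checked numerically. Here is a minimal sketch (my own, not from the book), assuming two equal-energy cavities and units where ω = 1:

```python
import numpy as np

# Dual-rail qubit: amplitudes over the basis states |01> and |10>.
c0, c1 = 0.6, 0.8j
psi = np.array([c0, c1])

# Free evolution: both basis states carry one photon of the same energy,
# so each picks up the identical phase exp(-i*w*t) -- an overall phase.
w, t = 1.0, 2.7
evolved = np.exp(-1j * w * t) * psi

# The overall phase is undetectable: all measurement probabilities agree.
assert np.allclose(np.abs(evolved) ** 2, np.abs(psi) ** 2)

# By contrast, in the single-cavity encoding |0> vs |1> (zero vs one photon)
# the two basis states have different energies, so a relative phase drifts:
single_rail = np.array([c0, c1 * np.exp(-1j * w * t)])
drift = np.angle(single_rail[1] / single_rail[0]) - np.angle(c1 / c0)
print(drift)  # relative phase -w*t = -2.7, which IS observable
```

The same check generalizes to the “equal number of 1s” representations mentioned above: any two basis states with the same total photon number accumulate the same free-evolution phase.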

Is the assumption also wrong for liquid NMR? This was the other room-temperature physical realization of a quantum computer described in Nielsen and Chuang. The way it is supposed to be operated is such that the frequencies are still realistic for controlling the phase shifts, but the energy gaps are way below the thermal energies; see for example section 7.7.1: “… produce an NMR signal at about 500 MHz when placed in a magnetic field of about 11.7 tesla”. So the states are not protected from thermal state flips, and are basically guaranteed to be destroyed in thermal equilibrium. The question is just how fast they will be destroyed. Apparently this has been measured experimentally, and it happened on time scales much longer than the period corresponding to 500 MHz. It should also be possible to estimate this theoretically based on phonon interactions. However, the NMR QC and Chuang Wikipedia articles indicate that “the field of liquid state NMR quantum computing fell out of favor due to limitations on its scalability beyond tens of qubits due to noise” already by the end of 2002. (Will solid-state NMR rise some day?)

But liquid NMR contains lessons for how to deal with the trade-off between temperature (state flips) and speed (phase shifts): we don’t need to control it for an arbitrary amount of time or an arbitrary number of qubits; it is enough to control it long enough, for a sufficient number of qubits, until quantum error correction can be applied.

This lesson shows that I was too pessimistic when I wrote: “Maybe one can get it down to sqrt(n)-times (i.e. the average case) by clever engineering”. One can get it down to O(1)-times, assuming the operations required for error correction (like “measure and discard bad qubits” or “pump in fresh |0⟩ qubits”) can be done fast enough. (Scott’s lecture notes also list “apply gates to many qubits in parallel” and “do extremely fast and reliable classical computation”.)

But can one also get it down to 0-times? The dual-rail representation discussed above suggests that this might be possible. It could be regarded as a special case of “decoherence-free subspaces”. Error suppression and prevention techniques also include “dynamical decoupling”, which is a generalization of the refocusing technique used in liquid NMR (one of my other lessons from liquid NMR). (My third and final lesson from liquid NMR was that quantum computing is still possible even if the signal-to-noise ratio is below 1%. One just repeats the quantum computation a sufficient number of times. Not scalable, but also used in the Sycamore experiment, in a certain sense.)

I am skeptical here (whether one can get it down to 0-times). For example, the refocusing technique used in liquid NMR itself has to operate at 500 MHz, and the required accuracy of its own control scales with the number of qubits. And if we try to use a dual-rail-like representation outside of the optical domain, then most single-qubit errors will throw us out of the comfortable decoherence-free subspace. On the positive side, each single-qubit error can at most add 2 to the factor, so the technique is still good for ensuring that the requirements for the speed and accuracy of the classical control don’t escalate too much.

To end this, I want to come back to my initial assumption that a low temperature compared to the energy difference between the states allows them to be protected from random state flips. This is clear for transitions from the lower energy state to the higher energy state. But why should this prevent transitions from the higher energy state to the lower energy state? This is because in the solid state, the temperature is mostly transported by phonons, and phonons are bosons. And bosons provoke stimulated emissions (or in this case stimulated relaxations), which are well known from lasers. You might object that phonons will be unable to trigger the transitions because they have a momentum while the transition (which can be triggered by microwave photons) should have none. You are right for a single phonon, but multiple phonons can still trigger the transition. (Which may explain why this stimulated relaxation is nevertheless a surprisingly slow process.)

My own explanation is that the difference between the energy levels of your states must be sufficiently bigger than 15 mK * k_B to be safe from random thermal state flips, and those differences in energy levels lead to phase shifts that would become too big if you couldn’t control your qubits faster (or at least better) than 312.5 MHz.

I claimed that Pagel’s explanation would not be very different from my sketched explanation, but my mental picture of a quantum computer had a subtle flaw. In my mental picture, each qubit had a phase, and phase errors of the individual qubits directly changed the phase of the complex numbers describing the superposition. But that is not how things work. If I simultaneously apply 3 phase shift gates to 3 qubits (to model the phase errors), then the phases are added together first, before they change the phase of the complex numbers describing the superposition. What I don’t like about this is that now the number of qubits has an influence on the impact of the phase errors. It seems like in the worst case, if n is the number of qubits, the quotient of the computation speed by the temperature now has to be n-times as big for a reliable quantum computation. Maybe one can get it down to sqrt(n)-times (i.e. the average case) by clever engineering, but it is still annoying. So the better route would be to reject the assumption in my sketched explanation that the difference between the energy levels of the two states of a single qubit must be sufficiently bigger than 15 mK * k_B to be safe from random thermal state flips.
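A small numerical sketch of this phase-adding behavior (my own illustration; the three angles model the per-qubit phase errors):

```python
import numpy as np

# Phase-shift gate diag(1, e^{i*theta}), applied independently to each qubit.
def phase_gate(theta):
    return np.diag([1.0, np.exp(1j * theta)])

thetas = [0.1, 0.2, 0.3]
U = phase_gate(thetas[0])
for th in thetas[1:]:
    U = np.kron(U, phase_gate(th))

# On the basis state |111> the individual phases add up first:
e111 = np.zeros(8); e111[7] = 1.0
assert np.isclose(np.angle((U @ e111)[7]), sum(thetas))  # 0.6, not 0.1 or 0.3

# So for a superposition like (|000> + |111>)/sqrt(2), the relative phase
# error between the two branches is the SUM of the per-qubit errors:
ghz = np.zeros(8); ghz[0] = ghz[7] = 1 / np.sqrt(2)
out = U @ ghz
rel = np.angle(out[7] / out[0])
print(rel)  # 0.6 = 0.1 + 0.2 + 0.3
```

This is the worst case mentioned above: for such a state the accumulated relative phase grows linearly with the number of qubits.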

Happily, I am not at all up to speed on current approaches for the physical realization of quantum computing. I am still at the level from 20 years ago, as described in chapter 7 “Quantum computers: physical realization” of Nielsen and Chuang. All of the examples presented in that chapter are obviously unable to allow scalable quantum computing:

As these examples demonstrate, coming up with a good physical realization for a quantum computer is tricky business, fraught with tradeoffs. All of the above schemes are unsatisfactory, in that none allow a large-scale quantum computer to be realized anytime in the near future. However, that does not preclude the possibility, and in fact …

Probably already Sycamore uses a better way to limit the impact of the thermal environment. (Even in case anybody should be able to understand what I am saying here, please consider the possibility that people already understood those thoughts 20 years ago, and had good reasons to pursue physical realizations of scalable quantum computers nevertheless. Keep in mind that any conclusions about the feasibility of scalable quantum computing might be wrong in practice, independent of what can be proven in theory. For example, Mermin’s theorem of 1968 from the paper titled “Crystalline order in two dimensions” (often incorrectly cited as the Mermin-Wagner theorem) seems to show that 2D crystals cannot exist. But graphene and other 2D crystals do exist. And Gerhard Gentzen proved the consistency of Peano arithmetic by finite means, even though people believed that Gödel’s theorem proved this to be impossible. I was serious when I said at Gil Kalai’s blog: “But at the current moment, the field progresses nicely, so why unnecessarily disrupt that progress?” The money poured into quantum computing and quantum information science at the moment is well spent. It is also responsible for results like Ewin Tang’s, which stay valuable even if scalable quantum computers cannot be built.)

“No such assumption is necessary, since the state of each detector can simply be viewed as another of the possibly-hidden variables for the associated particle.”

I am not sure I understand your point. Do you claim that Bell’s theorem works if the hidden variables are a function of the detectors’ settings?

“It is of course impossible for, say, Alice’s hidden variables to depend on the state of Bob’s detector, because they are spacelike separated.”

It is not impossible:

1. In classical EM the electric and magnetic field configuration at Alice (A) depends on the past/retarded charge distribution/momenta at Bob (B). You can find the exact formulas here (Feynman’s lecture, equations 21.1):

http://www.feynmanlectures.caltech.edu/II_21.html

2. But the present/instantaneous state of B also depends on the past state of B (deterministic theory).

From 1 and 2 it follows that A and B are not independent, even if spacelike separated.
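For reference, the formula referred to in point 1 (Feynman Lectures II, Eq. 21.1) gives the field of a point charge in terms of its retarded position, with e_{r'} the unit vector toward the apparent (retarded) position and r' the retarded distance (my transcription; please check against the linked page):

```latex
\mathbf{E} = \frac{-q}{4\pi\varepsilon_0}
  \left[
    \frac{\mathbf{e}_{r'}}{r'^{2}}
    + \frac{r'}{c}\,\frac{d}{dt}\!\left(\frac{\mathbf{e}_{r'}}{r'^{2}}\right)
    + \frac{1}{c^{2}}\,\frac{d^{2}}{dt^{2}}\,\mathbf{e}_{r'}
  \right],
\qquad
c\,\mathbf{B} = \mathbf{e}_{r'} \times \mathbf{E}
```

Every term depends on the retarded position and motion of the distant charge, which is the dependence exploited in points 1 and 2.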

So, according to classical EM, the states of A and B cannot be independent. However, those are microscopic states (positions/momenta of electrons/quarks, E and B fields). In order to determine the macroscopic/observable consequences, you need to solve those equations, which cannot be done with the computational power we have. So we need to rely on experiment. We just record the detectors’ orientations, submit them to a statistical test, and see if they are independent or not. I agree that they are independent.

Unfortunately, we cannot directly measure the hidden variables (they are hidden, right?) so we cannot be sure if they are independent of the detectors’ settings. So, Bell’s theorem cannot rule out classical electromagnetism. In other words, classical electromagnetism is a superdeterministic theory.

Since Alice and Bob have interacted in the past (they are working on the same experiment, after all), it is highly likely their own internal states are interdependent (there is of course no direct dependence between them due to the spacelike separation). So an experimental protocol of some sort is necessary to eliminate this interdependence.

The independence assumption means that the hidden variables (say the spins of the particles) are independent of the states of the detectors, not that the states of the detectors are independent of each other. The hidden variables depend on the state of the source emitting those particles, more exactly, on the electric and magnetic field configuration at the locus of the emission.

In classical electromagnetism the fields in a certain region depend on the position/momenta of all field sources (electrons and quarks) including those in the distant detectors. So, it is a mathematical certainty that the hidden variables are NOT independent of the detectors’ states, as those detectors’ states are ultimately described in terms of the electrons and quarks inside them. This observation is generally true and can be applied to any experimental protocol, including Quasars or whatever.

Alice and Bob can indeed be assumed to be independent of each other because we only care about some macroscopic states, a regime that is reasonably well described by Newtonian mechanics with contact forces only. The independence assumption fails whenever you need to describe physics in terms of fields or long-range forces, such as electromagnetic or gravitational systems. ’t Hooft’s cellular automaton interpretation is just an example of a discrete field theory, and the reason it cannot be ruled out by Bell is the same as in the case of electromagnetism.

As for independence, Bell’s Theorem only requires that Alice and Bob’s measurements can be independent, not that they must be. Of course perfect independence would require fine tuning, so we have to settle for approximate independence; but we can arrange for the correlation to be negligibly small. This doesn’t have anything to do with superdeterminism, as it is simply part of the experimental protocol.
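For concreteness, the quantum correlations at issue can be computed directly. A minimal sketch (singlet state, standard CHSH measurement angles; the function names are mine):

```python
import numpy as np

# Singlet state (|01> - |10>)/sqrt(2) in the basis |00>, |01>, |10>, |11>.
psi = np.array([0.0, 1.0, -1.0, 0.0]) / np.sqrt(2)

def spin(theta):
    """Spin measurement along angle theta in the x-z plane: cos(t)*Z + sin(t)*X."""
    return np.array([[np.cos(theta), np.sin(theta)],
                     [np.sin(theta), -np.cos(theta)]])

def corr(a, b):
    """Quantum correlation <A(a) x B(b)> for the singlet; equals -cos(a - b)."""
    return float(psi @ np.kron(spin(a), spin(b)) @ psi)

# CHSH combination with angles a=0, a'=pi/2, b=pi/4, b'=-pi/4.
a, a2, b, b2 = 0.0, np.pi / 2, np.pi / 4, -np.pi / 4
S = corr(a, b) + corr(a, b2) + corr(a2, b) - corr(a2, b2)
print(abs(S))  # 2*sqrt(2) ~ 2.828, beyond the local-hidden-variable bound of 2
```

The derivation of the bound 2 is exactly where the independence assumption under discussion enters: it assumes the hidden variables are uncorrelated with the choice of measurement angles.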

“The best argument I see for thinking that superdeterminism requires some sort of miraculous fine tuning of the universe’s initial state is that ’t Hooft’s theory actually does so”

1. This is not a good argument.

2. I do not think ‘t Hooft implies that, but more importantly:

3. It is trivial to find distant systems that are not independent, yet no fine-tuning is involved. Two orbiting stars are such an example.

Please take a look at my previous post (136) to find such examples. In fact, the only theory I can think of that does imply Bell’s independence assumption (except for fine-tuning situations) is rigid body Newtonian mechanics with contact forces only (billiard balls). Any modern theory (local field theories like classical electromagnetism, general relativity) does not allow distant systems to be independent and this is obvious from their formalism.

(from https://arxiv.org/pdf/0908.3408.pdf)

“If, on the other hand, Alice wishes to rearrange her measuring device from measuring one operator σ1 to measuring another operator σ2 that does not commute with σ1, she does something that cannot be expressed in terms of beables alone. The rotation of her device is described not by a changeable, but by a superimposable operator. Somewhere along the line, superimposable operators must have come into play. Therefore, Alice cannot make such a transition if the quasar had only been affected by a changeable operator. This particular disturbance of the quasar is not of the right kind to cause Alice (or rather her measuring device) to go into a superimposed state. From the state she is in, she cannot measure the new operator σ2.

…

Any perturbation that we would like to consider, would be easiest described by a superimposable operator, not just by a beable or even a changeable.

…

Is this contradicted by the Bell inequalities[3]? There may be different ways to circumvent such a conclusion. One is, as stated above, that a pure changeable operator would replace the beables for quasar A into other beables, and thus not allow Alice to turn her detector into the non-diagonal eigenstate needed to measure the new operator. We must then assume the quasars to be in an entangled state from the start.”

In other words, the universe needs to be in a special state in which all modifications must be changeable rather than superimposable.

Considering the alternative case where Alice’s detector settings can be altered by a changeable operator:

“The above considerations lead us to realize that the set of states we use to describe physical events are characterized by two important extensive quantities: the total energy and the total entropy. In all states that we consider, both these quantities are very small in comparison with the generically possible values. Any ontological state of our automaton, characterized by beables, is a superposition of all possible energies and all possible entropies. The state we use to describe our statistical knowledge of the universe has very low energy and very low entropy. It appears that, if in any ontological state we make one local perturbation, the energy will not change much, but the entropy increases tremendously, thus allowing particles α and β at t=t1 to enter into a modified, (dis)entangled state. If we perturb the quantum state, both energy and entropy change very little, α and β stay in the same state, but then Bell’s inequality needs not be obeyed.”

In this case, it seems we have the Many Worlds Interpretation, which is indeed local, and reproduces quantum mechanics on generic initial conditions. But it does not satisfy counterfactual definiteness (because, in ’t Hooft’s terminology, it is not possible to determine which perturbations are changeables and which are superimposables).

@Scott, 135:

From ’t Hooft’s earlier papers (like the one I quoted), I actually got the impression that he thought P=BQP (or at least P/qpoly = BQP/qpoly).

Also thanks for leaving the comment thread open for so long.

gentzen #70: Let me repeat the claim and the excuse for the missing explanation:

15 mK * k_B / h = 312.5 MHz, where k_B is the Boltzmann constant, h is the Planck constant, and 15 mK is the temperature of 15 millikelvin at which Sycamore was operated. Assume I would try to explain why those 312.5 MHz are a lower bound for how fast the quantum bits must be operated (or at least controlled) for extended quantum computations. …
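The arithmetic behind the 312.5 MHz figure is easy to check with the exact SI values of the constants (my own check):

```python
# k_B * T / h for T = 15 mK, the temperature at which Sycamore was operated.
k_B = 1.380649e-23   # Boltzmann constant, J/K (exact since the 2019 SI)
h = 6.62607015e-34   # Planck constant, J*s (exact since the 2019 SI)
T = 15e-3            # temperature in kelvin

f = k_B * T / h
print(f / 1e6)  # ~312.5 (MHz)
```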

…

By coincidence, I don’t have access to Lienhard Pagel’s explanation of that claim at the moment, and no longer enough time to properly write down …

Now I have access to Lienhard Pagel’s explanation again. In “Information ist Energie,” section “2.5.8 Quantencomputing,” page 44, the claim is mentioned for the first time:

Wo liegen nun die Vorteile des Quantencomputings? Zuerst sollen einige technische Vorteile genannt werden:

…

3. Quantensysteme, die bei Raumtemperatur funktionieren, müssen sich gegenüber dem thermischen Rauschen der Umgebung durchsetzen. Wie aus Abbildung 4.4 ersichtlich, kommen bei Raumtemperatur nur relativ hochenergetische Quanten in Frage, weil sie sonst durch die Wärmeenergie der Umgebung zerstört würden. Diese Systeme können wegen ΔEΔt ≈ h dann nur schnell sein, auch als elektronische Systeme.

Google’s translation to English feels faithful to me:

What are the advantages of quantum computing? First, some technical advantages should be mentioned:

…

3. Quantum systems operating at room temperature must prevail over the thermal noise of the environment. As shown in Figure 4.4, only relatively high-energy quanta are possible at room temperature because they would otherwise be destroyed by the thermal energy of the environment. Because of ΔEΔt ≈ h, these systems can only be fast, even as electronic systems.

(Figure 4.4 shows the energy and time behavior of natural and technical processes. The horizontal axis shows time, the vertical axis energy. Shown processes include supernovae, gamma-ray bursts, nuclear bombs, ultrafast lasers, CMOS, neurons, and many different types of radiation. All radiation processes lie on a straight line in that double-logarithmic plot, and below that line is the forbidden region.)

To me, Pagel’s explanation is both disappointing and fascinating at the same time. The explanation itself is not very different from my sketched explanation, only less concrete. (In a certain sense, Pagel’s explanation is stretched out over different sections and examples, which makes it hard for me to reproduce it here.) What is fascinating is that Pagel considers his claim to be an obvious advantage of quantum computing (that doesn’t need serious justification), rather than as a serious technical obstacle for practical implementation of quantum computing (that might not apply to all possible implementations). This feels similar to how Svante Arrhenius estimated the atmospheric warming effect of CO2 in 1896 and “saw that this human emission of carbon would eventually lead to warming. However, because of the relatively low rate of CO2 production in 1896, Arrhenius thought the warming would take thousands of years, and he expected it would be beneficial to humanity.”

The following passage from page 104 sheds some light on Pagel’s expectations:

Es muss angemerkt werden, dass die Energiemenge E, die für ein Bit aufgewendet wird, von den technischen Gegebenheiten abhängig ist und keineswegs prinzipiellen Charakter hat. Über die Unbestimmtheitsrelation kann nun berechnet werden, wie schnell ein solches Bit mit der Energie E umgesetzt werden kann:

Δt = h / (k_B T ln 2)    (4.34)

Diese Zeit liegt bei etwa 100 Femto-Sekunden. Nun kommt die Transaktionszeit ins Spiel. Bei kürzeren Transaktionszeiten ist die Bit-Energie ohnehin größer als die thermische Energie bei Zimmertemperatur und das Bit kann sich gegenüber der thermischen Energie seiner Umwelt behaupten.

Google’s translation to English, with two minor adjustments:

It must be noted that the amount of energy E spent on a bit depends on the technical conditions and is by no means of a fundamental nature. The uncertainty relation can now be used to calculate how fast such a bit can be reset with the energy E:

Δt = h / (k_B T ln 2)    (4.34)

This time is about 100 femtoseconds. Now the transaction time comes into play. With shorter transaction times, the bit energy is anyway greater than the thermal energy at room temperature and the bit can prevail over the thermal energy of its environment.
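Plugging room temperature into Eq. (4.34) confirms the order of magnitude Pagel quotes (my own check; with T = 300 K the value comes out closer to 230 fs, so "about 100 femtoseconds" holds only as an order of magnitude):

```python
import math

k_B = 1.380649e-23   # Boltzmann constant, J/K
h = 6.62607015e-34   # Planck constant, J*s
T = 300.0            # room temperature, K

dt = h / (k_B * T * math.log(2))   # Eq. (4.34): Delta t = h / (k_B T ln 2)
print(dt * 1e15)  # ~230 femtoseconds
```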

Pagel cannot possibly expect that some classical control of a quantum computer would be able to operate anywhere near a timescale of 100 femtoseconds. So he probably imagines a quantum computer as “quantum all the way down,” just like an Everettian like David Deutsch might suggest. Therefore, the extremely fast timescale doesn’t feel like a serious technical obstacle to him. The irony is that all current and probably all future quantum computers crucially depend on the interaction between a huge macroscopic classical control and extremely well isolated and calibrated quantum states (i.e. the exact opposite of “quantum all the way down”). All proposals for error correction depend on classical measurement interactions, so the Copenhagen interpretation seems very suitable to describe those implementations of quantum computation. Maybe N. David Mermin’s “Copenhagen Computation: How I Learned to Stop Worrying and Love Bohr” hints at some physical constraint with respect to implementations of quantum computers.
