“Bell’s theorem still acts like an immutable part of the laws of nature. If superdeterminism *specifically* conspires to uphold Bell’s theorem, then Bell’s theorem is still baked into the world, in that very specific way, and any complete superdeterministic theory will have to explicitly require somewhere that events that don’t uphold Bell’s theory do not occur.”

Bell’s theorem is based on an unjustified assumption: that the state of the detector at the time of detection and the state of the source at the time of emission (which determines the hidden variable) are independent (not correlated). There is absolutely no reason to accept this assumption as true. The physical systems we call “source” and “detector”, being large aggregates of massive/charged particles (electrons and quarks), interact both gravitationally and electromagnetically and, as a consequence of this interaction, are expected to display some correlations. The burden of proof rests on Bell’s theorem’s supporters to show that, despite those correlations, the states remain independent enough to justify the assumption. Until such a proof is presented, the most reasonable position is to reject Bell’s theorem as unsound, being based on a premise with unknown truth value. Hence, superdeterminism is perfectly reasonable.

“Superdeterminism is the claim that the macro-universe looks classical and deterministic but if you look deeper you see it is actually quantum and nondeterministic. But if you look deeper still it becomes deterministic again.”

Correct.

“But that means that the underlying deterministic process has to simulate a quantum process. That cannot be efficient in either time complexity or memory space complexity. For example if I am running a quantum computer to factor a million digit number it is actually a classical program running on a deeper deterministic process. The underlying classical computer must be vastly more powerful than the quantum computer it is simulating.”

I think you are placing an equals sign between classical physics (General Relativity, classical electromagnetism) and a “classical” computer (a device working with a limited number of bits). The word “classical” is probably at the root of this confusion. Classical theories based on a continuous space-time “process” an infinite amount of information in a limited time: the velocity of an object is a real number with an infinite number of digits, and it is transformed instantly into another such number when a force is applied. So a classical theory could very well “simulate” a quantum one.

“And how does the parts of the deep classical computer know that they are part of a quantum computer? It seems that they must store a record of their past interactions in order to correlate their current interactions so as to produce on aggregate results consistent with valid factors of my million digit number.”

As shown above, classical physics does not work like a classical computer, so your question makes no sense. If there is a device built to calculate the factors of a number, it will do so, regardless of whether it is “quantum” or “classical”. And, sure, a classical deterministic state does “remember” the past. By evolving the theory’s equations you can get from the present state to any past state (if the theory is reversible, as classical EM is) or future state. Nothing is lost.
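To make the reversibility point concrete, here is a minimal sketch (a hypothetical toy model, not any particular physical theory): a second-order reversible cellular automaton. Run it forward, swap the two time layers, run the same rule again, and the initial state comes back exactly; nothing about the past is lost.

```python
# A second-order reversible cellular automaton (toy model; the rule --
# XOR of the current layer's two neighbours, XORed with the previous
# layer -- is an arbitrary choice). The construction is invertible by design.
def step(prev, cur):
    """Advance one time step; returns the new (prev, cur) pair."""
    n = len(cur)
    nxt = [cur[(i - 1) % n] ^ cur[(i + 1) % n] ^ prev[i] for i in range(n)]
    return cur, nxt

s0 = [0, 1, 1, 0, 1, 0, 0, 1]   # arbitrary initial two layers
s1 = [1, 0, 0, 1, 0, 1, 1, 0]

prev, cur = s0, s1
for _ in range(50):              # evolve 50 steps forward
    prev, cur = step(prev, cur)

prev, cur = cur, prev            # time reversal: swap the two layers
for _ in range(50):              # the same rule now runs backwards
    prev, cur = step(prev, cur)

print((cur, prev) == (s0, s1))   # True: the initial state is recovered
```

The present pair of layers determines every past and future layer, which is exactly the sense in which a reversible deterministic state “remembers” everything.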

“The problem really comes down to the fact that there’s asymptotically more events in spacetime than there is information capacity in the initial state.”

This is false. In a deterministic theory (say, classical electromagnetism) there is no increase of information. You can calculate any future (or past) state given the present one, so one does not need an infinite amount of information (assuming, of course, that the universe is finite). There are also no “events” in such a theory; it’s just charges moving around and fields changing magnitude and direction.

Superdeterministic theories are just a subclass of deterministic theories so the above applies.

“One other thing I don’t get about superdeterminism is if the correlations between particles are now all in the initial conditions, wouldn’t we expect them to disappear after a while? In particular, what does it even mean for there to be correlations between particles in QFT? Particles can be created and destroyed after all. The relevant dynamic equations, namely The Standard Model + General Relatively seem to me to be too complex and too non-linear to allow highly fine-tuned correlations in the initial conditions to exist for all time.”

It is not true that superdeterminism requires “highly fine-tuned correlations in the initial conditions”. This is a straw-man argument. Think about a system of two massive objects orbiting each other. Their positions are not independent variables (they are correlated), the objects can be arbitrarily far away, and there is no nonlocality involved. We don’t explain such correlations by postulating finely-tuned conditions at the Big Bang, but by understanding that distant objects interact (locally) through fields (gravitational, electromagnetic) and those interactions cause correlations.
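A toy calculation (with made-up masses and radii, circular orbits assumed) illustrates this: the positions of two bodies orbiting their common centre of mass stay perfectly anticorrelated at all times, purely as a result of their local interaction, with no fine-tuning of initial conditions.

```python
import math

# Two bodies on a circular orbit about their common centre of mass
# (hypothetical masses and radius). m1*r1 = m2*r2 ties the positions together.
m1, m2 = 1.0, 3.0
r1 = 3.0
r2 = r1 * m1 / m2

def positions(t):
    """x-coordinates of both bodies at orbital phase t (opposite sides)."""
    return r1 * math.cos(t), -r2 * math.cos(t)

ts = [2 * math.pi * k / 100 for k in range(100)]
xs1, xs2 = zip(*(positions(t) for t in ts))

# Pearson correlation of the two x-coordinates over one full orbit.
mean1, mean2 = sum(xs1) / len(xs1), sum(xs2) / len(xs2)
cov = sum((a - mean1) * (b - mean2) for a, b in zip(xs1, xs2))
var1 = sum((a - mean1) ** 2 for a in xs1)
var2 = sum((b - mean2) ** 2 for b in xs2)
corr = cov / math.sqrt(var1 * var2)
print(corr)   # ≈ -1.0: perfectly anticorrelated, yet entirely local
```

The anticorrelation holds however far apart the bodies are; it is a consequence of the dynamics, not of a special initial state.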

In the case of a Bell test we also have interactions between source and detectors (both EM and gravitational), so their states have to be correlated in some way. Nobody has tried to perform the required calculations, so we don’t know whether those correlations could reproduce QM’s statistics, but the superdeterministic hypothesis says they might. Obviously, one is not limited to EM/gravitational interactions. ’t Hooft proposes a model based on discrete physics (his Cellular Automaton interpretation), but the basic mechanism is the same: you have distant objects interacting through some fields.

Regarding your point about the Standard Model: superdeterminism implies that the Standard Model is an emergent, statistical theory based on a deterministic fundamental theory. So the mathematical features of the Standard Model could be just an artifact of the data loss implied by the move from the exact theory to the statistical one.

“Whereas superdeterminism is a new, much more virulent insanity against which many otherwise intelligent people seem to have no antibodies.”

As you probably remember I strongly disagree with this view. Let me explain.

1. It is important to keep in mind the “minimal” superdeterministic theory: the theory with the minimal number of assumptions that would still make it superdeterministic. Fighting against a specific superdeterministic model proves nothing, because that model could be flawed while superdeterminism is still true.

2. A minimal superdeterministic theory must deny Bell’s independence assumption, namely that the settings of the detectors at the time of detection are independent of the properties of the entangled particles. If we agree that:

a. the settings of the detectors are nothing but their physical state (position/momenta of particles + field configuration for example), and

b. the properties of the entangled particles are determined at the source at the moment of emission,

then it follows that the minimal superdeterministic theory requires that the state of the source at the moment of emission (position/momenta of particles + field configuration) and the state of the detectors at the moment of detection (position/momenta of particles + field configuration) must not be independent.

It is easy to notice that if minimal superdeterminism is true, Bell’s theorem is busted, as one cannot arrive at Bell’s results. One needs to check which states of the detectors are possible for each possible state of the source and then sum over those states. This, of course, is not possible without doing a lot of calculations, which would be different for different theories.
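A deliberately contrived toy model (illustration only, not a proposed physical theory) shows how the arithmetic changes once the independence assumption is dropped: with the same local, deterministic outcome functions, averaging over a hidden variable uncorrelated with the settings respects the CHSH bound of 2, while correlating the settings with the hidden variable exceeds it.

```python
import itertools

# Contrived illustration. The hidden variable lam ranges over the four
# possible setting pairs; outcomes are local and deterministic.
def A(a, lam):                   # Alice's outcome: her setting + lam only
    return +1

def B(b, lam):                   # Bob's outcome: his setting + lam only
    return -1 if (b == 1 and lam == (1, 1)) else +1

lams = list(itertools.product([0, 1], repeat=2))

def E(a, b, independent):
    """Correlation E(a,b). With independence, lam is averaged uniformly
    (Bell's assumption); without it, the settings are forced to coincide
    with lam -- the superdeterministic correlation."""
    if independent:
        return sum(A(a, lam) * B(b, lam) for lam in lams) / len(lams)
    return A(a, (a, b)) * B(b, (a, b))

def chsh(independent):
    return (E(0, 0, independent) + E(0, 1, independent)
            + E(1, 0, independent) - E(1, 1, independent))

print(chsh(independent=True))    # 2.0 -- respects the CHSH/Bell bound
print(chsh(independent=False))   # 4 -- the bound no longer applies
```

Nothing here is nonlocal; only the assumption that the setting distribution is independent of the hidden variable has been given up.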

So, if you still hold to your position, provide a good reason for believing that minimal superdeterminism, as defined above, necessarily implies a “virulent insanity”, fine-tuning of the early universe, or any other assertion you have made against superdeterminism.

Could anybody give me a hint why sometimes (e.g. Rainer #23) randomness in nature is seen as an enabler for free will?

I think it is because the randomness is not just “chaotic”: there is a deterministic equation describing its evolution (the Schrödinger equation), so it would allow a process like (biological) evolution over billions of years to develop the ability to “load the dice” and enable certain macroscopic outcomes that have no (entirely) deterministic explanation but are “intended” by the subject.

Modern experiments show that it takes on the order of milliseconds to “make a decision”, and that the physical outcome of the “decision” may precede our conscious recognition of it being made, but that could still be “free will”.

I do wonder how much “free will” is actually present in animals, though; even humans hardly do anything unexpected, and mental disturbances due to brain chemistry account for much of the crazy behaviour.

Obviously randomness affects whether I can know in principle what the mind will decide next.

But in everyday language I would assume most people associate with “free” something more like “free of external influences / factors that influence the result” rather than “known in advance”.

So if it’s nature/physical laws that determine the decisions of your brain and not your mind,

does that make you any more free if nature uses randomness and not determinism?

Or in other words:

If you want a truly free will (at least to my naive understanding), then you would need some kind of mind which is an entity outside of the universe and therefore free of its laws.

To me it would not improve my freedom at all if my slave master picked his next command for me by rolling the dice instead of looking it up on an already-written list 🙂

I’m not ppnl, but if I understand your question correctly, it’s equivalent (after removing the blind and therefore superfluous demon) to asking why a container of gas in equilibrium doesn’t spontaneously separate into two regions of different temperature before a separator closes down between them at some time t0.

In most contexts, the answer is that nothing prevents that separation except probability.
