Thanks for the notes!

Set up a double-slit apparatus, like in this paper: http://iopscience.iop.org/article/10.1088/1367-2630/15/3/033018

Fire individual electrons (say 10,000) and construct a data set that records, for each electron: (a) the order in which it was fired (1st, 2nd, 3rd, and so on), (b) the time it was detected, (c) its x position, (d) its y position, and (e) its z offset/position. This can be extracted from the .mov files included with the above experiment, along with a value for the “z” offset.
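A minimal sketch of one record in that data set might look like this (the field names and units are my own assumptions, not anything specified in the paper):

```python
from dataclasses import dataclass

@dataclass
class ElectronEvent:
    shot_index: int    # (a) order fired: 1 for the 1st electron, 2 for the 2nd, ...
    t_detected: float  # (b) detection time, seconds since the run started
    x: float           # (c) x position on the detector, mm
    y: float           # (d) y position on the detector, mm
    z_offset: float    # (e) z offset/position of the apparatus, mm

# One hypothetical event: the 1st electron, detected 1.2 ms into the run
event = ElectronEvent(shot_index=1, t_detected=0.0012, x=0.31, y=-0.07, z_offset=0.0)
```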

Now, move the apparatus 1 mm in the x or y direction, and conduct the experiment again. Continue doing this to cover a “square” of 1 cm × 1 cm in the x/y plane, stopping every millimeter to redo the experiment. After the whole square is “covered”, move the apparatus backwards or forwards 1 mm and redo the square. Repeat this process, moving 1 mm in the “z” direction each time, until the experiment has been conducted throughout a 1 cm cube.
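The scan schedule above comes out to 11 stops per axis (0 through 10 mm inclusive), so 11³ = 1,331 runs of 10k electrons each. A rough sketch of the position list:

```python
STEP_MM = 1
SIDE_MM = 10  # 1 cm cube

# Every apparatus position in the cube, in scan order:
# cover the full x/y square at each z before stepping in z.
positions = [(x, y, z)
             for z in range(0, SIDE_MM + 1, STEP_MM)
             for y in range(0, SIDE_MM + 1, STEP_MM)
             for x in range(0, SIDE_MM + 1, STEP_MM)]

print(len(positions))  # 11 * 11 * 11 = 1331 experiment runs
```

At 10k electrons per position, that is roughly 13.3 million recorded events in total, which is worth knowing before committing to the scan.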

Also set up a “control”: run the same experiment with the same number of electrons fired, but with the apparatus staying in the same physical position the whole time instead of moving after every 10k electrons.

With that data, we can then conduct a statistical analysis to determine whether location affected the outcome in any way. For instance, does the 4th electron fired tend to land in a particular spot, or is its position completely random? Are all the events actually random, or are there underlying patterns?
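One simple way to run that comparison, sketched here with a hand-rolled Pearson chi-square statistic (the bin counts below are made-up placeholders, not real data): histogram the landing positions from a run at one apparatus position and compare against the stationary control's histogram.

```python
def chi_square(observed, expected):
    """Pearson chi-square statistic over matched histogram bins."""
    return sum((o - e) ** 2 / e for o, e in zip(observed, expected) if e > 0)

# Placeholder histograms: x-position bin counts for 10k electrons each,
# one run with the apparatus moved vs. the stationary control.
moved_run = [1180, 2320, 3010, 2310, 1180]
control   = [1200, 2300, 3000, 2300, 1200]

stat = chi_square(moved_run, control)
# A large statistic (relative to a chi-square distribution with
# len(bins) - 1 degrees of freedom) would suggest location mattered.
```

Identical histograms give a statistic of exactly 0; the further apart the two runs drift, the larger it grows.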

If my concept of continuous QFT fields is correct, then differences in field energies “outside” of the quantized values may somehow “skew” the results — not in a way that affects the overall scattering pattern, but perhaps in some other way. I, for one, would like to know how “random” events are placed within the confines of what’s predicted by the wave function.

https://www.sciencedaily.com/releases/2018/09/180918114438.htm

All roads lead to computational complexity theory!

(1) I think that both “Probability&Stats” *and* “Computational Logic” form an ‘abstract duality’. If both fields are equally fundamental to computer science (but at opposite ends of the ladder of abstraction), then the idea is that the field of “Computational Complexity” is a sort of *composite* (middle-out) of the other two fields: the middle between the two poles.

(2) There are ‘ladders of abstraction’ for each of the 3 main areas of Computer Science: (a) “Probability&Stats”, (b) “Computational Complexity Theory”, (c) “Computational Logic”.

If we ‘zoom-in’ on the field of “Probability&Stats” first (decomposing the field into more fine-grained sub-domains), I propose this ordering:

—

Probability theory (bottom) — Statistics (middle) — Stochastic processes (top)

—

In the other direction, ‘zooming-in’ on the field of “Computational Logic” (decomposing the field into sub-domains), I’m proposing:

—

Formal language theory (top) — Type theory (middle) — Modal logic (bottom)

—

And for “Computational Complexity Theory” my proposed decomposition and ordering is:

—

Automata (top) — Complexity classes (middle) — Information&Coding Theory (bottom)

—

Zooming out again and looking at the 3 main areas of CS as a whole, I propose this global ordering:

—

Probability&Stats (bottom) — Computational Complexity (middle) — Computational Logic (top)

—

For these 3 main areas, I decomposed each into 3 sub-fields, for a total of 9, and given the orderings I postulated, “Complexity Classes” sits dead-center on the ladder of abstraction.
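The dead-center claim can be made concrete with a small sketch, listing the nine sub-fields bottom-to-top according to the orderings proposed above:

```python
ladder = [
    # Probability&Stats (bottom pole)
    "Probability theory", "Statistics", "Stochastic processes",
    # Computational Complexity Theory (middle)
    "Information & Coding Theory", "Complexity classes", "Automata",
    # Computational Logic (top pole)
    "Modal logic", "Type theory", "Formal language theory",
]

center = ladder[len(ladder) // 2]  # index 4 of indices 0..8
print(center)  # "Complexity classes" sits exactly in the middle
```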

We then view this as an ‘abstract duality’, with “Probability&Stats”/“Computational Logic” being the two poles, such that one could construct “Complexity Classes” *both* from the top-down (starting from formal languages) *and* from the bottom-up (starting from probabilities).

Thus, everything converges to computational complexity theory! All roads lead to the ‘Complexity Classes’… 🙂
