Four updates

A few weeks ago, I was at QIP’2019 in Boulder, CO. This week I was at SQuInT’2019 in Albuquerque, NM. There were lots of amazing talks—feel free to ask in the comments section.

There’s an interview with me at the website “GigaOm,” conducted by Byron Reese and entitled Quantum Computing: Capabilities and Limits. I didn’t proofread the transcript and it has some errors in it, but hopefully the meaning comes through. In other interview news, if you were interested in my podcast with Adam Ford in Melbourne but don’t like YouTube, Adam has helpfully prepared transcripts of the two longest segments: The Ghost in the Quantum Turing Machine and The Winding Road to Quantum Supremacy.

The New York Times ran an article entitled The Hard Part of Computer Science? Getting Into Class, about the surge in computer science majors all over the US, and the shortage of professors to teach them. The article’s go-to example of a university where this is happening is UT Austin, and there’s extensive commentary from my department chair, Don Fussell.

The STOC’2019 accepted papers list is finally out. Lots of cool stuff!

35 Responses to “Four updates”

  1. fred Says:

    As an aside (Scott, I’d understand if you blocked this): an interesting interview between Joe Rogan and Andrew Yang, who is running for president and pushing for universal basic income to counter the job displacement caused by the current and ongoing rise of automation and AI. Lots of interesting questions and data.

  2. Peter Morgan Says:

    With apologies, an unhelpful aspect of your Gigaom conversation is the making of such a strong distinction between classical and quantum: that some things are classical and other things are quantum.
    A better distinction is between measurements that are modeled by functions on a phase space and measurements that are modeled by Hilbert space operators, with the crucial distinction being between commutativity and noncommutativity of the algebras of measurements. Finding ways to say this that are accessible enough for the lay public (or at the level of Gigaom) is not easy, but one elementary statement is that Planck’s constant is a universal constant, equally applicable to all physical models, with quantum fluctuations being at the same scale of the action as noncommutativity in quantum theory.
    The first reason why this matters is that classical mechanics can be presented in a Koopman-von Neumann Hilbert space formalism, in which noncommutativity of measurements is as natural as it is in quantum mechanics, which makes the distinction between what is classical and what is quantum much more delicate. [An alternative way to approach this, from the quantum end, so to speak, making systematic use of Quantum Non-Demolition Measurements, is through Tsang&Caves’ “Evading quantum mechanics: Engineering a Classical Subsystem within a Quantum Environment”, https://journals.aps.org/prx/abstract/10.1103/PhysRevX.2.031016 .]
    Focusing on the Hilbert space aspect instead of on “quantum” is also helpful because we can usefully use any finite N-dimensional Hilbert space, not just the 2^n dimensional Hilbert space of an n-qubit system. It seems possible we might match Moore’s law for the dimensionality of the Hilbert space, but not as likely for the number of qubits. We should also focus on how far the reliable dimensionality of the group of unitary transformations in a given implementation falls short of N^2.
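    To make the commutativity point concrete, here is a minimal numpy sketch (purely illustrative; the Pauli matrices are the standard ones, and nothing here goes beyond textbook linear algebra): observables modeled as functions on phase space commute, while generic Hilbert-space observables such as X and Z do not.

    # Minimal illustration: the commutator [A, B] = AB - BA vanishes for
    # "classical" observables modeled as functions on phase space (diagonal
    # matrices here), but not for generic Hilbert-space observables such as
    # the Pauli X and Z operators.
    import numpy as np

    X = np.array([[0, 1], [1, 0]])     # Pauli X
    Z = np.array([[1, 0], [0, -1]])    # Pauli Z
    f = np.diag([0.3, 1.7])            # two commuting "classical" observables
    g = np.diag([2.0, -0.5])

    def commutator(A, B):
        return A @ B - B @ A

    print(commutator(f, g))   # zero matrix: a commutative algebra
    print(commutator(X, Z))   # nonzero: a noncommutative algebra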

  3. mr_squiggle Says:

    In the Byron Reese interview, you said this:

    Some people like Roger Penrose have seized on the observation by Gödel and Turing that no machine can be a perfect oracle for mathematics, in order to say that the brain, or at least the mathematician’s brain, must be doing something that a Turing machine can’t. But the obvious problem with that argument is that humans are not perfect oracles for mathematics either, to put it very mildly.

    Thanks for saying that. I read a book by Penrose a couple of decades ago, and it was painful. My memory of it is that he seemed to spend the whole book addressing every possible objection to his theory except the obvious one.

  4. anonymous Says:

    It is very depressing for me to see that on one hand, universities are claiming that they are having a hard time recruiting faculty, but on the other, I am on track to be rejected from every single one of the 30 postdoc positions I have applied to. The issue is of course that universities are having a hard time recruiting faculty that meet their very high standards, standards which make it impossible to increase faculty hiring when conferences only accept a certain number of papers.

  5. Tamás V Says:

    In the GigaOm interview, and in general, wouldn’t it be less confusing to say only that entanglement is a correlation, without mentioning that it’s “instantaneous”? Isn’t it rather a “timeless” correlation?

    If Alice and Bob each possess one qubit of an EPR pair, and Alice measures hers at time t and gets |0>, is there any experiment (including the possibility that they compare their records when they next meet) that can single out time t as the time when Bob’s qubit really changed in a way that he’d surely get |0> if he measured? (Other than saying it’s obvious because Alice did something at time t.)
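
    For what it’s worth, a small numpy sketch of the standard textbook calculation (nothing specific to this thread; the state and projectors are the usual ones) makes the question concrete: Bob’s local statistics are the same whether or not Alice has already measured, so only the later comparison of records reveals the correlation.

    # Sketch: Bob's reduced density matrix for the EPR pair (|00> + |11>)/sqrt(2)
    # is I/2 both before and after Alice's Z-basis measurement (averaged over
    # her outcomes), so nothing Bob can do locally singles out the time t at
    # which Alice measured.
    import numpy as np

    phi = np.zeros(4)
    phi[0] = phi[3] = 1 / np.sqrt(2)          # (|00> + |11>)/sqrt(2)
    rho = np.outer(phi, phi)                  # joint density matrix

    def bob_marginal(rho4):
        """Trace out Alice (the first qubit) from a 2-qubit density matrix."""
        return np.einsum('ijik->jk', rho4.reshape(2, 2, 2, 2))

    print(bob_marginal(rho))                  # I/2, before Alice measures

    P0 = np.kron(np.diag([1, 0]), np.eye(2))  # Alice gets |0>
    P1 = np.kron(np.diag([0, 1]), np.eye(2))  # Alice gets |1>
    rho_after = P0 @ rho @ P0 + P1 @ rho @ P1
    print(bob_marginal(rho_after))            # I/2 again: identical statistics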

  6. Scott Says:

    Tamás #5: Well, I try to make contact with where people currently are, even if there are misconceptions in it. Indeed, if you could measure a “speed of entanglement,” or a “time at which Alice’s measurement affected Bob’s state,” or anything else of that kind, then special relativity would be wrong and you would have superluminal communication.

  7. Tamás V Says:

    Scott #6: Well, that’s why I generously allowed that Alice and Bob can check their records later as part of the experiment, to avoid superluminal communication and save special relativity 🙂

    In high school, I read lots of popsci books with chapters that tried to explain special relativity to a wider audience, taking into consideration “where people were”. The net result was that I had zero chance of understanding it, and ended up with more questions than answers. I could understand the basics of the basics only 20 years later 🙂

    Your intro to QC in the interview was the most concise, precise and understandable I’ve ever seen (I shared it on LinkedIn immediately, or rather, instantaneously). I’m sure you could find a way to explain entanglement too without mentioning the word “instantaneous”.

  8. gentzen Says:

    Scott, you wrote:

    I didn’t proofread the transcript and it has some errors in it, but hopefully the meaning comes through

    It would be interesting to know which those errors are. How about this?

    It is now Google, IBM, Microsoft, Intel, a bunch of startup companies, all are investing on a scale of hundreds of billions of dollars.

    That would be more than 10% of the net worth of Alphabet (Google), Amazon, Apple, … and much more than the cost of past and future supercolliders.

    This also seems to have some inaccuracies:

    At a bare minimum, so there are many different approaches to quantum computing, but if you’re doing superconducting qubits, which is maybe the most popular approach today, then at a bare minimum, you need to cool everything down to 10 mKB, or so, so that your chip superconducts and you see the quantum behavior, so that means that you need to see an enormous cooling system.

    Is “10 mKB” the new unit for milli-Kelvin bytes? Also, isn’t this just the temperature that you reach with a dilution refrigerator, i.e. they just cooled it to the lowest temperature available at a reasonable cost, not because they were strictly forced to do that (1 K would be cold enough for most superconductors, 4 K is cold enough for mercury, and 9 K is still cold enough for niobium), but because they could do it, and because it somehow helped increase the quality of the qubits?

  9. Scott Says:

    gentzen #8: Yes, “billions” should be “millions,” and “10 mKB” should be “10 millikelvin.” And I’m certain that I said those things correctly in the interview. And they’re good examples of why I took care to forewarn people about errors in the transcript. Unfortunately, as soon as I start emailing in corrections, I’m then responsible for anything I didn’t correct, and I don’t have time for that.

  10. Scott Says:

    Tamás #7: If you want to read me explaining entanglement “the right way,” why don’t you check out my undergrad lecture notes or my American Scientist article or any of the innumerable other places where I’ve tried to do that!

  11. Joe Shipman Says:

    I’m still waiting for someone to express the progress in building computers using the metric “what integers can be factored by Shor’s algorithm, without cheating, today?”

  12. Scott Says:

    Joe #11: Then you’ll be waiting a long time, because that’s a stupid metric. (Or rather: anyone willing to use that metric today, is probably someone you shouldn’t trust.) It’s almost exactly analogous to judging the Manhattan Project in 1942, 1943, 1944 by the metric “how big an explosion can you make, today?” I.e., it’s something that’s going to be basically flat for a long time and then undergo a step change, under the assumption that everything is progressing as it should—because you need a critical mass in the one case and to exceed the fault-tolerance threshold in the other.

    No, that doesn’t mean scalable QC is around the corner; it just means you can’t conclude anything one way or the other from the abandonment of factoring circus-stunts at the end of the liquid NMR era almost 20 years ago. If you want to have a real discussion about experimental progress (or where it’s stalled), look at coherence times and 2-qubit gate fidelities, and at the ability to maintain those fidelities in integrated systems.
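
    To put rough numbers on that last point (a back-of-the-envelope sketch with made-up fidelities, not data from any particular device): without error correction, a circuit of G two-qubit gates at per-gate fidelity f succeeds with probability on the order of f^G, which is why the “biggest number factored” metric stays flat until fidelities clear the fault-tolerance threshold.

    # Illustrative only: success probability ~ f**G for a circuit of G
    # two-qubit gates at per-gate fidelity f, with no error correction.
    for f in (0.99, 0.999, 0.9999):
        for G in (10**3, 10**6, 10**9):
            print(f"fidelity {f}, {G:>10} gates: success ~ {f**G:.3g}")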

  13. Raoul Ohio Says:

    Scott #12.

    Agree that biggest number factored is a stupid metric.

    On the other hand, it is totally easy to understand. So there’s that. Also dimensionless – always good.

    Does anyone know if they ever got past 15 = 3 * 5 ?

  14. Scott Says:

    Raoul #13: 21=3*7 was done with some precompiling/cheating. But none of this matters. No really, it doesn’t. The obsessive focus on an “easy to understand” but currently meaningless metric is what set naïve people up to get impressed by the claims of factoring 6-digit numbers or whatever using quantum annealing—something that not only had nothing to do with Shor’s algorithm, but likely had nothing to do with quantum mechanics at all. So it’s done real damage to the field.

  15. Tamás V Says:

    Raoul Ohio #13: There is also a practical aspect. If D-Wave could suddenly factor 2048-bit integers within 1 day, the CISO wouldn’t take much comfort in knowing that it’s not even a real quantum computer. And most likely many people would be very interested to find out where exactly the speedup comes from, even if it’s only a constant factor.

  16. Andrei Says:

    Scott,

    In the interview with Adam Ford, in regards to the double-slit experiment you say:

    “To say that again, like decreasing the number of paths that the photon could take to reach a certain spot, you can increase the chance that it gets to that spot. This is the thing that violates any conventional understanding of probability.”

    In my opinion, the difficulty of understanding the double-slit experiment classically has its origin in using the wrong classical model. This model is Newtonian mechanics of the rigid body. The particles are represented by bullets or marbles and the barrier as a concrete wall. Indeed, explaining the single-particle interference pattern using such a model is impossible.

    The problem is that there are a lot of physical phenomena that cannot be explained with bullets either, like magnets, electromagnetic induction, or black holes. You need a field theory for them. Let’s see how strange the two-slit experiment looks when a field model is used.

    So, we have an electron approaching the barrier, which consists of a great number of electrons and nuclei. The trajectory the electron takes depends on the electric and magnetic fields existing at its location, and those fields are determined by the positions and velocities of the electrons and nuclei inside the barrier. It seems obvious to me that the electric and magnetic fields generated by a barrier with one slit will be different from those generated by a barrier with two slits. Just like the quantum amplitudes, those fields may add up or cancel each other. So, I see no reason to be perplexed by the fact that closing one slit lets the electron go into a region it could not reach before. The fields, being different, exert a different force on the electron, thereby changing its trajectory.

    I know that you used photons in your example but those are EM waves classically so I think that the experiment with electrons is more interesting to explain.

  17. Gerard Says:

    @Andrei #16

    I don’t see how that makes sense.

    The double-slit experiment would normally be done with an electrically neutral and non-magnetic barrier so there are no EM fields outside of the barrier material.

    Also it is known experimentally that low energy electrons do not penetrate any significant distance into solid matter.

    So the barrier material is doing just what it appears to be doing: allowing particles to pass outside of it and preventing them from passing into it.

  18. Andrei Says:

    Gerard,

    “The double-slit experiment would normally be done with an electrically neutral and non-magnetic barrier so there are no EM fields outside of the barrier material.”

    The barrier is made out of atoms. Atoms consist of charged particles (electrons and protons/quarks). On average, the barrier will be neutral (same number of electrons and protons) but those particles do not occupy the same position so what you get is a large number of dipoles. Those dipoles produce electric and magnetic fields of unlimited range.

    The EM interactions between two macroscopic objects are difficult to see at large distance because the attractive and repulsive forces between very large groups of particles average out. But for a single electron the interaction should be noticeable.

    “Also it is known experimentally that low energy electrons do not penetrate any significant distance into solid matter.”

    The electrons do not penetrate the barrier, they interact from a distance with the fields produced by the charged particles in the barrier.

    “So the barrier material is doing just what it appears: allowing particles to pass outside of it and preventing them from passing inside it.”

    The electron is prevented from entering the barrier because there is a large electric repulsion between it and the electrons in the barrier. But this interaction takes place at a distance. The electrons don’t just bump into other electrons; their trajectories curve as a result of the Lorentz force. The way a trajectory curves will obviously depend on the electric and magnetic fields present at that location, and those fields depend on the positions and momenta of the field sources.

  19. Ajit R. Jadhav Says:

    Andrei #18 and #16:

    The fact that you appreciate the existence of atoms in the barrier is, say, appreciated! Your general line of reasoning has some really good points to it, too.

    However, to understand the QM mysteries right, don’t focus too much on the double-slit experiment. Instead, look for the light-matter interactions.

    Light, of course, can be seen as a local, propagating disturbance within an EM field (at least the basic element of analysis, i.e. the plane wave, can be). But as people soon figured out, if you apply the classical EM theory to the cavity radiation problem, an inevitable implication is the ultraviolet catastrophe, which doesn’t occur in reality. This problem (for one) brought out the limitation of using the classical EM theory to model light-matter interactions. So did the photoelectric effect, with its curious dependence on the frequency of light rather than its intensity. Classical EM fails to explain light-matter interactions. That’s why QM was born.

    To cut a long story short, the outcome was that people figured out that it’s the classical EM phenomena that should be explained on the basis of the QM principles, rather than the other way round, i.e., rather than trying to explain the QM phenomena on the basis of the classical EM principles.

    Demonstrating how the limitations of the EM approach lead to problems in the double-slit experiment is relatively more difficult, because the moving electron’s contribution to the overall EM field in the interference chamber doesn’t actually interact much with matter.

    In case you didn’t know, in actual experiments with electrons there is no chamber of the kind they sketch in textbooks. It’s more like the inside of a TEM (transmission electron microscope): a huge, vast cavity (compared to the effective size of the electron), with a very thin wire serving as the middle separating portion of the barrier wall, and huge open spaces on both sides of this wire. In other words, the “width” of the open “slits” on either side of the wall is far bigger than the opaque (to electrons) spacing provided by the wire in between the two “slits”. Effectively, rather strong and nonuniform EM fields act as barriers on the _other_ sides of the “slits”, by way of a “wall” on that side. But they are so distant from their atomic sources and so strong that the lone electron’s tiny field would have next to no chance to interact with any piece of matter. About the only interaction the traveling electron ends up having is with the detector material, when it lands there. But that event leads to its absorption, and so there is no chance further “downstream” to see the effect of _this_ interaction of the electron’s fields with matter. All in all, a tough proposition for an analysis from the viewpoint of classical EM field interactions. [And I hope I didn’t get the description of the actual chamber too wrong.]

    It is much better to look at situations where the matter-light interactions are more extensive, as in atomic gas emission/absorption spectra. That leads you to Bohr.

    Finally, just as an aside, it’s often very fruitful to figure out why developments occurred in the order in which they historically did. You learn not just interesting bits of history, but also get an invaluable chance for (re)organizing the conceptual structure of your knowledge—if you want to.

    Best,

    –Ajit
    PS: Sorry for yet another long reply… But his idea and approach were so interesting….

  20. Gerard Says:

    @Andrei #18

    If electric or magnetic dipole fields were produced by ordinary solid matter, those fields would be measurable and their existence would be well known, which is obviously not the case.

    I’m not going to try to give a complete explanation for why they don’t exist but I’ll mention a couple of points.

    First, the charge distribution of atoms tends to be spherically symmetric, with the positive charges in the nucleus surrounded by an electron cloud, so there’s not normally a net dipole moment.

    Some molecules do have dipole moments, for example water molecules, and these play a role in things like the Van der Waals force, but that is a very short range effect.

    Secondly, even if you have a collection of polarized molecules, such as a glass of water, the distribution over orientations will be uniform, so there will still be no dipole field any significant distance away from the material.
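
    To put a rough scale on the “short range” point: an ideal dipole field falls off as 1/r^3, so even the dipole moment of a polar molecule produces a negligible field a macroscopic distance away. A quick illustrative estimate (on-axis field of a single water-molecule dipole; the dipole moment is a textbook value, the distances are arbitrary):

    # Illustrative estimate: on-axis field of an ideal dipole, E = 2*k*p / r**3.
    # p is roughly the dipole moment of one water molecule; the 1/r^3 falloff
    # is why the field is negligible at macroscopic distances.
    k = 8.99e9    # Coulomb constant, N*m^2/C^2
    p = 6.2e-30   # water-molecule dipole moment, C*m

    for r in (1e-9, 1e-7, 1e-6, 1e-3):   # 1 nm, 100 nm, 1 micron, 1 mm
        E = 2 * k * p / r**3
        print(f"r = {r:.0e} m: E ~ {E:.2e} V/m")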

  21. Andrei Says:

    Ajit:

    “The fact that you appreciate the existence of atoms in the barrier is, say, appreciated! Your general line of reasoning has some really good points to it, too.”

    Thanks!

    “However, to understand the QM mysteries right, don’t focus too much on the double-slit experiment. Instead, look for the light-matter interactions.”

    I have formal training in QM and I am aware of the history of its discovery. Yet I find most arguments against classical physics (the general framework of classical physics, not necessarily a particular theory like classical electrodynamics, which could certainly be improved or replaced) unconvincing.

    Take for example the case of the stability of atoms. I’ve learned that the classical atom cannot be stable because the electron accelerates, radiates energy and falls into the nucleus. So, classical EM is doomed! Then I discovered a classical model of the atom, the “free-fall” model, published in good-quality peer-reviewed journals. The idea was that once you take into account the spin of the electron, the atom becomes stable because the spin causes the electron to avoid the nucleus. There is not much literature on that subject; it was mostly ignored.

    A different idea is to assume the existence of an external EM field. A theory named stochastic electrodynamics makes use of this and has made some progress in explaining quantum behavior. The point is that when the electron accelerates it takes energy from the field, so that it never falls into the nucleus. This external field also determines the property of spin, the wave-like behavior, etc.

    I cannot comment on your arguments regarding the relative strength of the various fields encountered by the electron in a two-slit experiment; I think a simulation would be required to see what is significant and what is not. But even if you are right, there is always the possibility of developing a new classical field theory that does the job. So, the problem is not that one cannot imagine a classical explanation for this experiment. The problem is to see whether one can reproduce the results quantitatively.

  22. Ajit R. Jadhav Says:

    Andrei # 21:

    1. I googled “free-fall model of atom” and got to the Wiki on Gryzinski as well as the YouTube video: https://www.youtube.com/watch?v=P2IsIkSn5bk . I was not at all aware of anything on it.

    The two immediate questions that struck me were: (i) How do they explain the phenomenon of quantum entanglement, if the electron is deterministically going to return after a certain distance in the radial direction (i.e. going by the YouTube video alone)? (ii) What is the nature of the physical mechanism connecting a quantum system to be measured (the System), and a measuring apparatus (the Instrument), which leads to the observed results regarding measurement?

    On second thoughts, also this question: (iii) Do the equations of the theory apply equally well to the universe taken as a whole, when the latter is regarded as a thermodynamically isolated system? Please note, the catch here is this: if the equations of a theory crucially depend on something external to the system in question (i.e., if they depend on some agent in the thermodynamically external environment, an agent which has some daemon-like capabilities to suitably alter the force-fields or potentials existing within the system or the interaction at the System-Environment boundary), then such a weakness of the theory can get exposed only when you consider it from this viewpoint, by applying it to the universe as a whole.

    2. But, apart from it all, it was good to know that you have a formal background in QM (which I don’t) but are still willing to pursue lines of thought that seek to explain the QM riddles. Such an “attitude” is well appreciated.

    3. As to the tendency to regard QM as presenting arguments against classical mechanics (CM), meaning even the principles that have their basis in CM, this is unfortunate. Pop-sci folks do paint such a picture, and it was a tremendous surprise for me when I “discovered” for myself that the potential in the Schrodinger equation was perfectly classical, and more: each development in QM (I mean up to Schrodinger’s theory—I don’t know anything about the later developments, including Dirac’s) was at its core fundamentally based only on the CM _principles_: be it the principle of energy conservation, the idea of conservative forces producing potentials, the Hamiltonian theory, the Poisson brackets (which I don’t understand as well as I should), the wave phenomena, etc., etc., etc.

    But that doesn’t mean that classical physics taken as a body of specific conclusions, i.e. its contents or the particular conclusions drawn in it before 14 December 1900, is therefore sufficient all by itself. So, in that sense—the emphasis on the _contents_, on the specific physical conclusions of the classical approach—the pop-sci folks _are_ right. The mathematical techniques and _many_ physical ideas suggested by classical physics continue to hold, but certain even fundamental physics theories in it (such as the Maxwellian synthesis) fall short if they are taken on an “as is” basis. The pop-sci folks are quite right in highlighting this aspect.

    Now, from my PoV, any attempt such as Gryzinski’s should be welcome—provided it does not just solve _a few_ aspects of QM (such as the stability of atoms) well, but also goes on to explain _all_ the known QMcal phenomena, including, for example, entanglement, a mechanism for the measurement process, etc. For instance, can you use Gryzinski’s approach in analyzing a qubit or a QC better, as compared to the standard textbook QM? That is the sort of thing I have in mind here.

    Still, thank you very much for pointing out the free-fall model, and generally for an engaging and thoughtful (and not just a thought-inducing!) conversation.

    Best,

    –Ajit

  23. Andrei Says:

    Gerard,

    “If electric or magnetic dipole fields were produced by ordinary solid matter those fields would be measurable and there existence would be well-known, which they obviously are not.”

    My point is exactly that these fields are measurable and they manifest themselves as “quantum effects”. The particles seem to “know” what happens in distant regions. Sure, you can postulate non-local interaction (Bohm), or multiple universes, or QBist-style solipsism. It seems to me, though, that the fields we already know are there, and that are used to compute the Hamiltonian that enters QM’s formalism, are a better choice. You could treat a two-slit experiment as an (n+1)-body problem with n particles in the barrier. In this case you will need to explicitly add the EM interactions between all the particles.

    “Some molecules do have dipole moments, for example water molecules, and these play a role in things like the Van der Waals force, but that is a very short range effect.

    Secondly even if you have a collection of polarized molecules, such as a glass of water, the distribution over orientations will be uniform so there will still be no dipole field any significant distance away from the material.”

    I think it is difficult to estimate what is significant and what is not without performing a simulation. You also need to take into account that the properties of molecules, like the dipole moments, are average quantities. The fields could be much stronger for short periods of time, and they could significantly affect the trajectory of an electron, yet when measured over long periods of time they would cancel out.

    Even if one can show that classical electromagnetism in its present embodiment cannot provide a quantitative explanation for quantum phenomena there is no reason to dismiss the possibility that an improved version of the theory might work. It still seems to me that the logic behind rejecting the possibility of a classical explanation based on the observation that bullets are not an appropriate model is flawed.

  24. Gerard Says:

    @Andrei #23

    > My point is exactly that these fields are measurable and they manifest themselves as “quantum effects”.

    If these fields existed they would affect many things besides the double-slit experiment.

    You would need to take them into account in engineering designs for things like electron guns, photomultiplier tubes, etc.

    They would also affect electron microscopes where you should essentially be able to “see” such fields.

    If you really want to replace quantum mechanics with a fully classical theory the amount of phenomena you’re going to have to explain is enormous.

  25. Don Reba Says:

    Personally, I chose to quit academia after completing my PhD because I valued being able to create working software, and I have never seen a CS professor who was any good at programming.

  26. James Warren Says:

    Ajit, Andrei, and Gerard,

    “If you really want to replace quantum mechanics with a fully classical theory the amount of phenomena you’re going to have to explain is enormous.”

    Yes, it means that your classical explanation is constrained to be fundamentally extremely simple if it’s going to explain everything.

    Here’s a simple derivation (albeit very hand-wavy) of quantum mechanics from classical physics, just by adding extra time dimensions.

    – Assume that there exist extra time dimensions. It probably shouldn’t matter how many of them there are.

    – Extra time dimensions are unstable. Stable perturbations Wick-rotate to exponentially growing modes.

    – Unstable modes have imaginary mass and momentum.

    – If the instabilities obey a diffusion equation, then the diffusion constant will be imaginary and ‘ultrahyperbolic’ diffusion obeys the Schrödinger equation.

    – The instability of extra time dimensions causes objects to split exponentially many times generating a kind of many worlds.

    – Unstable modes can have both positive and negative frequencies. These correspond to the wave function and its complex conjugate. Stable configurations occur when these modes cancel each other out, so that stable configurations occur with Born rule probabilities.

    – In principle, quantum gravity should be readily solvable: just find solutions to GR with extra time dimensions and background noise. However, curved spacetime, and in particular CTCs, are likely to lead to a much larger state space.

    Obviously this idea is crude and I expect some of it to break down (I’d put the least confidence on the derivation of the Born rule), but it seems capable of explaining all of QM.

  27. Andrei Says:

    Ajit,

    “How do they explain the phenomenon of quantum entanglement, if the electron is deterministically going to return after a certain distance in the radial direction (i.e. going by the YouTube video alone)?”

    I do not understand this question. What has entanglement to do with the trajectory of the electron in an atom?

    “What is the nature of the physical mechanism connecting a quantum system to be measured (the System), and a measuring apparatus (the Instrument), which leads to the observed results regarding measurement?”

    I suppose this question is not related to the free-fall model but is a general one about measurement in QM. My opinion (which, I want to be clear, is a personal, non-mainstream one) is that the system and the measurement device interact by means of fields. In an experiment dealing with EM phenomena, those will be electric and magnetic fields. These fields have infinite range, so the distance between the system and the measurement device is not an issue. If you add determinism, this implies that the system “knows” how it will be measured. This can in principle explain, in a classical framework, the entanglement experiments, like EPR, Bell, etc.

    “But that doesn’t mean that classical physics taken as a body of specific conclusions i.e. its contents or the particular conclusions drawn in it before 14 December 1900, are therefore sufficient all by themselves.”

    Sure, by classical physics I mean a general framework based on an objective reality, not a specific theory.

    “For instance, can you use Gryzinski’s approach in analyzing a qubit or a QC better, as compared to the standard textbook QM? That is the sort of thing I have in mind here.”

    Gryzinski’s model is a specific model of the atom. It is not an all-encompassing framework that could compete with QM. But it is another example of a false claim regarding classical physics being debunked.

    I am grateful for this conversation as well!

  28. Andrei Says:

    Gerard,

    “If these fields existed they would affect many things besides the double-slit experiment.

    You would need to take them into account in engineering designs for things like electron guns, photomultiplier tubes, etc.

    They would also affect electron microscopes where you should essentially be able to “see” such fields.”

    What do you think an image recorded by an electron microscope is? It is the effect of the EM fields produced by the observed object on the trajectories of the incoming electrons. If the object did not produce such fields, the electrons would simply pass through it, like neutrinos.

    “If you really want to replace quantum mechanics with a fully classical theory the amount of phenomena you’re going to have to explain is enormous.”

    I am certainly not at that level. My knowledge about the field is modest. There are people working in that direction, like ‘t Hooft with his “Cellular Automaton Interpretation” of QM and a team working on a modified version of classical electromagnetism (stochastic electrodynamics). ‘t Hooft’s theory can be found on arxiv:

    https://arxiv.org/pdf/1405.1548.pdf

    A book describing the other approach (The Emerging Quantum) can be found here:

    https://loloattractor.files.wordpress.com/2014/11/luis_de_la_pec3b1a_ana_marc3ada_cetto_andrea_valdc3a9bookzz-org.pdf

    I have not made the claim that I can explain all QM, just that a particular experiment (the two-slit experiment) can also be understood classically.

  29. Gerard Says:

    > What do you think an image recorded by an electron microscope is? It is the effect of the EM fields produced by the observed object on the trajectories of the incoming electrons. If the object did not produce such fields, the electrons would simply pass through it, like neutrinos.

    It depends on the type of microscope (TEM vs. SEM) and the mode of operation.

    The most common SEM mode is to look at secondary electrons.

    Yes, of course all of this is mediated by EM fields but again these are very short range interactions.

    You are claiming (unless I misunderstood you) that matter should produce dipole fields a significant distance away from the material. If these fields existed you would expect them to affect electron microscope images in ways that are not described by the current theory.

  30. Ajit R. Jadhav Says:

    Andrei #27:

    Hmm… Looks like you have thought a lot about these issues and the nature of theorizations involved in such models. … However, I still think that the measurement problem is where one would like to see a more detailed description.

    >> “I do not understand this question. What has entanglement to do with the trajectory of the electron in an atom?”

    If entanglement is present between two or more particles in a system, it should result in trajectories that are different from the case when entanglement is absent.

    In real atoms, there _always_ is an interaction between _any_ pair of electrons (and between _any_ pair made up of an electron and a proton too), which goes on to imply that all electrons (and all protons) in an atom _always_ come as entangled with all the others.

    The idea of separable states is just a practically useful approximation technique; it is a mathematical device that helps simplify calculations. This approximation makes sense only when the interaction terms can be neglected. But theoretically, all particles always interact, and so all particles in the physical universe always come as entangled—in principle entangled.

    Coming back to the practical issues: separability becomes a poor approximation especially when applied to single atoms, because the constituent particles are, in the position representation, sufficiently close to each other that the interaction term can no longer be neglected. That’s why I raised that question.

    Of course, I don’t know whether the _finite_ maximum radial displacement which they showed for the electron in the video (i.e. the finite extent after which the electron “returns” towards the nucleus) was calculated after taking the interaction terms into account or not. I _presumed_ that they didn’t. On second thoughts, maybe I was wrong. … Could someone please clarify?

    >> “If you add determinism this implies that the systems “knows” how it will be measured. ”

    🙂 That is the crux of the measurement issue. …

    Look, the System itself may “know” how it is going to get measured, but that’s only one “half” of the story. The Instrument is the other “half.” How does the Instrument also “know” which measurement result to produce in an act of measurement, if (i) each such measurement will have only a probabilistically determined (!) outcome, with each measurement picking up only one of the eigenfunctions of the System at a time, not all of them, and (ii) an infinitely long sequence of measurements will still come to honor the measurement postulate? … How do the free-fall or EM models fare on this count? Have they worked out a _physical mechanism_ for the measurement postulate? If yes, how does it go? Any references? Thanks in advance.

    Warm regards,

    –Ajit

  31. Andrei Says:

    Gerard,

    “Yes, of course all of this is mediated by EM fields but again these are very short range interactions.

    You are claiming (unless I misunderstood you) that matter should produce dipole fields a significant distance away from the material. If these fields existed you would expect them to affect electron microscope images in ways that are not described by the current theory.”

    What I am claiming is that those dipole fields might be the reason for the “wave-like” behavior of quantum particles. This behavior is easily observed in some regimes (slow, low-mass particles) and more difficult to observe with fast, heavy particles. In QM we would say that the wavelength of such particles is smaller. Classically you would expect the same thing, because a fast particle stays in the field for a shorter time, so its trajectory is less influenced by the field, while a heavier particle is displaced less, as a consequence of Newton’s second law.
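
    (For reference, the standard QM statement behind “the wavelength of such particles is smaller” is the de Broglie relation lambda = h/(m*v). A tiny illustrative calculation, with round numbers chosen only for the example:)

    # De Broglie wavelength lambda = h / (m * v): heavier or faster particles
    # have shorter wavelengths, so interference is harder to observe.
    h = 6.626e-34    # Planck's constant, J*s
    m_e = 9.109e-31  # electron mass, kg

    for m, v, label in [(m_e, 1e6, "electron at 10^6 m/s"),
                        (m_e, 1e7, "electron at 10^7 m/s"),
                        (1e-3, 1.0, "1 g object at 1 m/s")]:
        print(f"{label}: lambda ~ {h / (m * v):.2e} m")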

    I don’t know much about the technology behind the electron microscope, but I am sure that the behavior of those electrons is correctly described by QM, so they would be correctly described by a classical interpretation of QM as well. So, if the shape of the observed object is such that interference/diffraction effects are expected, you will see them there, and you will have evidence for those long-ranged fields.

    A somewhat related classical explanation of the wave-like behavior is that there exists an EM field that permeates the entire universe and originated at the Big Bang. It is the equivalent of the so-called “quantum fluctuations”, the QFT vacuum, and it is the main assumption of the theory of stochastic electrodynamics (link above). This field (the zero-point field, ZPF) is the reason behind the Casimir force. In this case, the incoming electron is not directly influenced by the dipole fields produced by the barrier, but the barrier leaves an imprint on the ZPF, which in turn determines the electron’s trajectory.

  32. Andrei Says:

    Ajit,

    I think you have a wrong understanding of what entanglement means. Entanglement is a type of correlation. The reason for that correlation might be classical as well as non-classical. So, there is no reason to expect any conflict between the existence of those correlations and a classical description of the world that includes well-defined trajectories. A proof that this is possible is the existence of de Broglie–Bohm theory, where the electrons do have classical-like orbits and yet the theory perfectly reproduces non-relativistic QM.

    De Broglie–Bohm theory is also deterministic, and this provides a proof that the quantum probabilities need not be fundamental: they could be interpreted as a lack of knowledge regarding the complete state of the system. This should answer your second point.

    Separability is an assumption that is usually false in both classical and quantum physics. It is useful in those regimes where Newtonian mechanics of the rigid body (bullets, billiard balls and the like) is a good approximation. Newtonian gravity, GR, classical electromagnetism do not allow one to split a system into independent subsystems due to the existence of infinitely ranged forces/fields.

  33. Ajit R. Jadhav Says:

    Andrei #32:

    If I decide to reply to you at all, I guess the best place to do it would be at my blog, not here.

    –Ajit

  34. Joe Shipman Says:

    Scott, you misunderstood the spirit of my question, and your use of the word “stupid” was inappropriate and your use of the word “obsessive” was inappropriate. The correct answer to my question seems to be either “a 4-bit number was factored using Shor’s algorithm but it is not possible today to implement Shor’s algorithm for anything larger”, or “even the factorization of 15=3×5 by Shor’s algorithm was sorta cheating”, and any reluctance on your part to admit that in those terms can only have been due to mistaking me for the kind of troll who would have used that admission to retort “ha ha Quantum Computing is bogus” rather than simply “thanks, that’s what I thought”, which is what I would have responded.

    Your “Manhattan Project in 1942-44” analogy is fine. My point is that you and others shouldn’t be so reluctant to come right out and say how much the various theoretical advances have bought us in practice for various quantum algorithms (Shor’s and Grover’s being the most well-known).

    To put it another way: trying to establish “quantum supremacy over classical computing” by figuring out some kind of special problem for which a quantum system outperforms a classical computer is a perfectly fine thing to do, but some of us are interested in whether even the much weaker “quantum supremacy over technologically unaided humans” is a thing yet. Shor’s algorithm may be very far away from being able to factor integers that I can’t factor in my head, but perhaps some other quantum algorithms are now at the point where they can outperform unaided humans on some well-defined computational questions? Why is it “stupid” to want to know this, or “obsessive” to notice that the various popular writings on Quantum Computing don’t actually get around to addressing it?

  35. Scott Says:

    Joe #34: I used the words “stupid” and “obsessive” in reference to the “how big a number has been factored?” metric (and the emphasis on that metric), not in reference to any individual—certainly not you. 🙂 I apologize if it came across wrong. I’m sorry that various sources on the Internet basically lied to you about the current state of QC implementation (or so it seems)—it’s been a serious problem for over a decade, and this blog is one of the main places that’s been trying to push back. So I’m glad you’re coming here for the real deal!