Research (by others) proceeds apace

At age 39, I already feel more often than not like a washed-up has-been in complexity theory and quantum computing research. It’s not intelligence that I feel like I’ve lost, so much as two other necessary ingredients: burning motivation and time. But all is not lost: I still have students and postdocs to guide and inspire! I still have the people who email me every day—journalists, high-school kids, colleagues—asking this and that! Finally, I still have this blog, with which to talk about all the exciting research that others are doing!

Speaking of blogging about research: I know I ought to do more of it, so let me start right now.

  • Last night, Renou et al. posted a striking paper on the arXiv entitled Quantum physics needs complex numbers. One’s immediate reaction to the title might be “well duh … who ever thought it didn’t?” (See this post of mine for a survey of explanations for why quantum mechanics “should have” involved complex numbers.) Renou et al., however, are interested in ruling out a subtler possibility: namely, that our universe is secretly based on a version of quantum mechanics with real amplitudes only, and that it uses extra Hilbert space dimensions that we don’t see in order to simulate complex quantum mechanics. Strictly speaking, such a possibility can never be ruled out, any more than one can rule out the possibility that the universe is a classical computer that simulates quantum mechanics. In the latter case, though, the whole point of Bell’s Theorem is to show that if the universe is secretly classical, then it also needs to be radically nonlocal (relying on faster-than-light communication to coordinate measurement outcomes). Renou et al. claim to show something analogous about real quantum mechanics: there’s an experiment—as it happens, one involving three players and two entangled pairs—for which conventional QM predicts an outcome that can’t be explained using any variant of QM that’s both local and secretly based on real amplitudes. Their experiment seems eminently doable, and I imagine it will be done in short order. (For a toy numerical illustration of this style of Bell-type argument, see the sketch after this list.)
  • A bunch of people from PsiQuantum posted a paper on the arXiv introducing “fusion-based quantum computation” (FBQC), a variant of measurement-based quantum computation (MBQC) and apparently a new approach to fault-tolerance, which the authors say can handle a ~10% rate of lost photons. PsiQuantum is the large, Palo-Alto-based startup trying to build scalable quantum computers based on photonics. They’ve been notoriously secretive, to the point of not having a website. I’m delighted that they’re sharing details of the sort of thing they hope to build; I hope and expect that the FBQC proposal will be evaluated by people more qualified than me.
  • Since this is already on social media: apparently, Marc Lackenby from Oxford will be giving a Zoom talk at UC Davis next week, about a quasipolynomial-time algorithm to decide whether a given knot is the unknot. A preprint doesn’t seem to be available yet, but this is a big deal if correct, on par with Babai’s quasipolynomial-time algorithm for graph isomorphism from four years ago (see this post). I can’t wait to see details! (Not that I’ll understand them well.)
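
To see the flavor of such Bell-type arguments in code: below is a minimal numpy sketch, not of the Renou et al. three-player test (their Bell functional is more involved), but of the textbook two-party CHSH experiment that it generalizes. Quantum mechanics on the singlet state achieves a CHSH value of 2√2, while any local classical account is capped at 2; the measurement settings are the standard optimal ones.

    import numpy as np

    # Pauli matrices and the singlet state (|01> - |10>)/sqrt(2)
    X = np.array([[0, 1], [1, 0]])
    Z = np.array([[1, 0], [0, -1]])
    psi = (np.kron([1, 0], [0, 1]) - np.kron([0, 1], [1, 0])) / np.sqrt(2)

    def E(A, B):
        # Correlation <psi| A (tensor) B |psi> of +/-1-valued observables
        return psi @ np.kron(A, B) @ psi

    # Alice measures Z or X; Bob measures the two rotated combinations
    B1 = (Z + X) / np.sqrt(2)
    B2 = (Z - X) / np.sqrt(2)

    S = E(Z, B1) + E(Z, B2) + E(X, B1) - E(X, B2)
    print(abs(S))  # ~2.828 = 2*sqrt(2) > 2: no local classical model does this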

39 Responses to “Research (by others) proceeds apace”

  1. Bram Cohen Says:

    On the knot thing, isn’t knot comparison something where we have algorithms that work well in practice, so that it’s unsurprising, although certainly exciting, to have such a result? Also, isn’t one of the knot polynomials BQP-complete to calculate, making for the odd situation that it seems harder to calculate the thing that’s meant to help with comparison than to do the comparison itself? Finally, does the algorithm work for comparing any two knots/links, or only for the unknot?

  2. I Says:

    Scott, maybe you should rephrase “can’t be explained using any variant of QM that’s both local and secretly based on real amplitudes” as “can’t be explained using any variant of QM that’s both local and secretly based on real amplitudes via hidden qubit entanglement”. You mention the extra qubit earlier, but it bears repeating that they are using a particular version of real quantum mechanics. It doesn’t seem like the argument could work for a QM based on complexified real vector spaces, since those don’t introduce unnecessary dimensions. At least, not in the same way as the construction in the paper.

  3. Scott Says:

    Bram #1: I don’t have any special information beyond what’s in the brief abstract! Having said that:

    – Yes, like with graph isomorphism, my personal guess would be that unknottedness is in P (“just untangle the thing with your hands, how hard could it be?” 🙂 ). But proving it is another matter entirely!

    – The problem that’s BQP-complete is estimating the Jones polynomial of a knot at a given root of unity. Yes, a priori this is neither necessary nor sufficient for determining unknottedness, even though the Jones polynomial is often useful for distinguishing knots in practice.

    – I assume the algorithm as it stands doesn’t do general knot isomorphism, because otherwise the abstract would’ve said so!

  4. Greg Kuperberg Says:

    Bram #1 – Even though no hard examples of unknottedness were ever known, the situation is indeed highly analogous to graph isomorphism. The question was open for so long that it would be surprising and fantastic to finally see a quasipolynomial-time algorithm. My guess is that it does not give a uniformly fast algorithm for comparing any two knots. It might possibly generalize to an algorithm that is fixed-parameter tractable if you take one knot as fixed and make the other one the input.

    As for unknottedness and BQP, you should look at my old paper on this. It is BQP-complete to approximate the Jones polynomial, not to calculate it. And, this is key, the approximation is already BQP-complete with exponentially bad error. That means (using Scott’s result that PostBQP = PP, and some other ingredients) that it is #P-complete to approximate the Jones polynomial with any serviceable standard of error. I explain in the introduction that what was seen as a possible quantum advantage in computational topology was, in my view, misinterpreted, and is better understood as an obstruction to a quantum advantage.

    Anyway, Marc Lackenby’s claimed result would imply what I long thought might happen, that you can know whether a diagrammed knot is the unknot long before you can calculate or estimate its Jones polynomial.

  5. Jalex Stark Says:

    I #2: Their theorem is stronger than ruling out some particular “hidden qubit entanglement” model, although they do discuss this model in their paper. Mathematically speaking, they give a linear functional (a Bell inequality) that witnesses a separation between two sets of tripartite correlations. These correlation sets correspond to two kinds of measurement scenarios; roughly speaking, these are:
    1. Alice, Bob, and Charlie have subsystems which are in tensor product with each other. Alice and Charlie are not allowed to have any shared entanglement at the start of the experiment. In one round, with no communication, Alice, Bob, and Charlie all make measurements from some devices with finitely many measurement settings and report the results.
    2. The same as above, but the matrices describing the measurement devices have only real numbers in them.

    So if your theory of real quantum mechanics has subsystems carved out by tensor products, has measurement probabilities given by projection matrices and Born’s rule, and allows for subsystems to be unentangled from each other, then it is ruled out by their theorem.
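
    As a minimal code sketch of scenario 1 (with placeholder states and measurement bases, not the actual Renou et al. ones): each party measures its own tensor factor, and outcome probabilities come from the Born rule. Scenario 2 is the same with every matrix constrained to be real.

      import numpy as np

      # Born-rule probabilities p(a,b,c) = <psi| P_a (x) Q_b (x) R_c |psi>
      # for three parties, each acting on its own tensor factor.
      rng = np.random.default_rng(0)

      def random_projectors(real=False):
          # Projectors onto a random orthonormal basis of C^2 (or R^2)
          M = rng.normal(size=(2, 2)).astype(complex)
          if not real:
              M += 1j * rng.normal(size=(2, 2))
          basis, _ = np.linalg.qr(M)
          return [np.outer(basis[:, k], basis[:, k].conj()) for k in (0, 1)]

      psi = rng.normal(size=8) + 1j * rng.normal(size=8)
      psi /= np.linalg.norm(psi)  # a random 3-qubit state

      P, Q, R = random_projectors(), random_projectors(), random_projectors()
      total = sum((psi.conj() @ np.kron(np.kron(P[a], Q[b]), R[c]) @ psi).real
                  for a in (0, 1) for b in (0, 1) for c in (0, 1))
      print(total)  # probabilities over all 8 outcomes sum to 1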

  6. maline Says:

    The version of “real QM” that is being ruled out in the paper is one where each “local system” has its own “extra qubit”, and two such systems can be prepared in such a way that their extra qubits are not entangled.

    This assumption is already kind of weird: how far would you need to go to get to a new qubit? Or are we going to put one at every point in space, making a Hilbert space whose dimensionality is the cardinal 2^C?

    If we want to consider the possibility that QM might actually be real, it would be much more natural to assume a single universal qubit, basically equivalent to the global phase in complex QM. The paper explicitly does not rule out this case.

    Such a universal qubit need not be considered problematically nonlocal, because we can just as well express the same idea by saying that there is a qubit at each point, BUT the relative angles between them are fixed. The only way the Schrodinger Hamiltonian acts on the extra qubit(s) is through the global matrix J that replaces i, so any such fixed conditions will be conserved. Furthermore, we are free to mentally rotate these angles in any sufficiently smooth way – this is the electromagnetic gauge transformation!

  7. Vanessa Says:

    “At age 39, I already feel more often than not like a washed-up has-been in complexity theory and quantum computing research”

    That’s pretty depressing to hear when I am almost 38 and only starting my career in theoretical compsci! At least I’m pretty sure I still have burning motivation…

  8. Guan Says:

    Is Marc Lackenby’s talk publicly available to someone like me, following the link you gave?

    If not, would you please blog-summarize whatever new things you hear about it? Your explanations are more human, enjoyable, and approachable. (I am taking a crash course on knots and topology now, knowing very little about them except a few of Martin Gardner’s puzzles.)

    (If you feel that … well, in that case, I should have hung myself, as a 25-year-old, non-CS major layman)

  9. matt Says:

    Regarding unknottedness, there are knot diagrams of the unknot (i.e., ways of drawing the unknot on the blackboard) such that, in order to reduce them to the zero-crossing diagram by Reidemeister moves, you need to increase the crossing number before, ultimately, being able to reduce it to 0. I forget the details, though…. So, just untangling it with your hands might be hard.

  10. I Says:

    Jalex Stark #5
    Yeah, that seems like a much better way to phrase things, after skimming the paper some more. Sorry to say this, Scott, but Jalex’s post was a better summary of the paper’s result than your own, or even than the authors’ abstract.

    Thanks Jalex. And thanks for sharing these interesting papers Scott.

  11. Nobody Says:

    Why not talk to someone who has undergone similar difficulties, someone older and wiser, like your own professor Umesh Vazirani, or someone close to you at MIT or UT Austin? They may have their own philosophy or practical tricks to cope with it, which you could try and then share with us.

  12. Scott Says:

    Jalex Stark #5: Thanks for clarifying! On reflection, though, in a popular summary like this one, I’m fine to sacrifice generality for greater concreteness about what sort of thing we’re trying to rule out.

  13. Scott Says:

    Guan #8: No, I think you’d need to ask someone at UC Davis if you wanted to hear the talk. I’m hoping to get a report after it though! (It unfortunately conflicts with my teaching.)

  14. Jalex Stark Says:

    I #10, Scott #12:

    Thanks for the kind words. I didn’t mean to imply that the narrower explanation was wrong with respect to its intended audience. I am a little worried that the authors have undersold their results *to other experts*, who may believe that this is “just” a separation between two contrived models. Probably it would help their case if we could “upgrade” the model to a “normal” tripartite Bell inequality.

    To pick on Miguel Navascues a little, I think that most working quantum theorists would say that [this 2014 paper](https://www.nature.com/articles/ncomms7288.pdf?origin=ppub) says very little about the nature of reality. But I would like those same theorists to recognize that the present paper says significantly more. (I really don’t want to disparage the linked paper. These “almost quantum correlations” are mathematically pretty interesting, and they were a good place to build up some of the tools used in the more recent work.)

  15. P Peng Says:

    With graph isomorphism, there is a “natural” way to represent a graph using its adjacency matrix. With just the number of vertices, the input “size” of the problem is known.

    For the unknot problem, how is the input size determined when discussing the time scaling of the problem? What is the “standard” way to encode a 3-dimensional jumbled knot? This feels non-trivial to me, almost to the point that choosing an “appropriate” encoding could greatly simplify the problem.

    First attempt / starting point: Literally represent a 3D jumbled knot as a circular list of points giving a path in space, so that the knot is effectively “discretized” into a kind of polygon. We’d somehow need to decide what level of discretization is necessary so as not to accidentally change the knot topology. I’m unsure how to even show this can be done efficiently (and it should be efficient, or we’d be cheating a bit by pushing non-trivial work into the encoding, right?). This makes the size complexity very unclear.

    Attempt at simplifying the representation: Assuming we can somehow get such a 3D discretized path, create a flattened/planar knot diagram (I’m assuming here that “flattening” a knot can be done efficiently). Represent each crossing as two vertices in a graph, with a directed edge connecting them (allowing us to distinguish the over/under parts of the crossing), then connect the vertices as the knot connects the over/under parts of the different crossings. Now we have a directed graph. Choose an arbitrary ordering of the vertices (the very step that makes graph isomorphism difficult, but I cannot see a way to avoid it), and represent the directed graph as an adjacency matrix with 0,1 entries. So this requires O(n^2) space for a knot diagram with n crossings.

    Further reducing the encoding: The adjacency matrix will be _very_ sparse, because each vertex connects to exactly three other vertices, and one of those connections could even be implied by the order in which we label the vertices. So the space could be reduced with some kind of sparse encoding that is just the list of connections; this would require representing the vertex labels, so we’d now be at about O(n log n) space for a knot diagram with n crossings.
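
    To make that concrete, here is a rough sketch of such a sparse, list-of-crossings encoding; it essentially mirrors the standard “PD code” (planar diagram) convention from knot theory, in which the 2n edge segments of an n-crossing diagram are numbered and each crossing records its four incident edges:

      from collections import Counter

      # A knot diagram as a list of crossings. Each crossing records its
      # four incident edge labels, counterclockwise from the incoming
      # under-strand (the usual PD-code convention). An n-crossing diagram
      # has 2n edges, so this takes about O(n log n) bits, as estimated above.
      trefoil = [
          (1, 4, 2, 5),
          (3, 6, 4, 1),
          (5, 2, 6, 3),
      ]

      # Sanity check: every edge label appears exactly twice in the diagram.
      counts = Counter(e for crossing in trefoil for e in crossing)
      assert all(v == 2 for v in counts.values())
      print(len(trefoil), "crossings,", len(counts), "edges")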

    The first step has so many unknowns that I’m unsure the rest is even fair game.

  16. Scott Says:

    P Peng #15: There are at least two natural ways to encode knots—as crossing diagrams in the plane, or as collections of line segments in R^3—and one can clearly convert from one to the other in polynomial time.

  17. P Peng Says:

    Scott #16: I am probably missing something, but it feels like that answer is assuming away the very thing that makes the encoding issue difficult.

    There does not appear to be a single “obvious” / “natural” conversion from a 3D encoding to a crossing diagram, for the very reason that the “trivial” way of losing a dimension via some kind of projection may give exponentially more crossings than some other method, even just projecting along a different direction. After all, it may be the unknot, and so it could require 0 crossings when projected in one direction, but maybe order (number of 3D line segments) crossings in another direction. So different conversions can lead to input sizes differing by more than a polynomial amount.

    Actually, I think that helped clarify what is bothering me here. Both of us, in our initial attempts to describe the input size, arrived at something that is polynomial in the number of crossings. So our intuitions led us to treat “number of crossings” as the correct summary of input size for the “unknot” problem, in the same way that the number of vertices is a summary of the input size for graph isomorphism. However …

    *** Crossings are not a good measure of input size for the unknot problem (unlike vertices for graph isomorphism), because the number itself is not invariant under the morphisms we are interested in.

    That finally hits on what was bothering me. So writing this out has made me start to feel like specifying a 3D path with line segments is, in some sense, the “correct” input size. We could discuss an input path of N points connected by line segments (with points specified on a discrete grid cube of size at most exponential in N, so that coordinates do not push the size past O(poly(N))), and allow transformations that move the points as long as the line segments do not cross. Decision question: can an input of N segments be transformed into a planar triangle (an unknot)? Now it feels like we could discuss polynomial time.

  18. Jalex Stark Says:

    P Peng #15: If you think that the field of knot algorithms was founded by people who don’t understand very simple things about specifying algorithmic problems, a good way to remedy your confusion would be to go to Google Scholar and type in “knot algorithms”. https://scholar.google.com/scholar?hl=en&as_sdt=0%2C22&q=knot+algorithms&btnG=

  19. P Peng Says:

    Jalex Stark #18: ?? I apologize if my questions, which came from wanting to understand the relevant input size here, were interpreted as disparaging an entire field. That is of course not at all my intention.

    Very simple issues like this are often not discussed because they have long since been resolved, or are trivial when reworded in a more appropriate manner. I was never claiming the founders of a subfield are “people who don’t understand very simple things”; I am saying that __I__ don’t know how to resolve these very simple things… and would like to learn how to resolve them.

  20. Scott Says:

    P Peng: It’s known that the number of crossings and the number of line segments are polynomially related to each other (although I believe there are open problems if you want more precise bounds). Thus, either can be taken as “the input size” when you’re studying the complexity of knot problems. For more, see e.g. The Knot Book by Colin Adams.

  21. ike Says:

    that part about complex numbers is kinda funny, because you can rewrite the schrödinger equation as a real equation of fourth order. don’t take my word for it: schrödinger did it himself in “Quantisierung als Eigenwertproblem – Dritte Mitteilung”. in fact, he starts out with a real equation that he can’t solve, so he tries to simplify things, and only manages to reduce the order by going complex. you’ll see that it’s all very straightforward.

    so who am i to believe? the straightforward real schrödinger equation equivalent to the complex case, or some contrived scenario using highly esoteric methods that one has to investigate first?

  22. Scott Says:

    ike #21: Whenever two serious results seem to conflict with each other, your first guess should be that you didn’t understand at least one of them, as is the case here.

    You can even write the Schrodinger equation as a linear real differential equation, by just separating out the real and imaginary components of each amplitude and doubling the Hilbert space dimension. But that’s not the relevant issue: the problem with that separation is that it won’t respect the tensor-product structure of your Hilbert space.
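
    Concretely, here is a small numpy sketch of that doubling (each amplitude a+bi becomes the real 2x2 block [[a, -b], [b, a]]), along with the way it fails to respect tensor products:

      import numpy as np

      # Map a d-dimensional complex unitary U = A + iB to the 2d-dimensional
      # real orthogonal matrix [[A, -B], [B, A]].
      def realify(U):
          A, B = U.real, U.imag
          return np.block([[A, -B], [B, A]])

      S = np.diag([1, 1j])                      # the S gate, a phase of i
      RS = realify(S)
      assert np.allclose(RS @ RS.T, np.eye(4))  # real and orthogonal

      # The catch: realifying a two-qubit gate gives an 8x8 matrix, but the
      # tensor product of the two realified one-qubit gates is 16x16. The
      # doubled "imaginary" direction is global, not one copy per subsystem.
      print(realify(np.kron(S, S)).shape)           # (8, 8)
      print(np.kron(realify(S), realify(S)).shape)  # (16, 16)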

  23. Jalex Stark Says:

    P Peng #19:
    Sorry about that! I think I misread “could possibly greatly simplify the problem” in your post #15 as something meaning “could render the Lackenby work meaningless”. I think talking about the “quantum physics needs complex numbers” result put me into a frame of defending innovative work from shallow takes that come from a lack of familiarity with the fundamentals.

    I see now that you were asking basic questions, in a good-spirited manner, about the foundations of a result that Scott mentioned, with the aim of enriching your own understanding and that of other commenters. I would like to encourage rather than discourage such activity, so again: I’m sorry for reacting so harshly and so quickly to something you didn’t even say!

  24. Tamás V Says:

    Scott #22: I think ike’s point is that, technically, you can rewrite quantum mechanics using only real numbers, but it would be just a pain in the a$$ to deal with such a construct, as there wouldn’t be such elegant machinery to use as the complex Hilbert space.

  25. Tamás V Says:

    Gauss had the opinion that all the fuss about the “reality” of complex numbers was due to naming. Unfortunately, his proposal to rename “imaginary” to “lateral” was not accepted back then.

    If one formerly contemplated this subject from a false point of view and therefore found a mysterious darkness, this is in large part attributable to clumsy terminology. Had one not called +1, −1, √−1 positive, negative, or imaginary (or even impossible) units, but instead, say, direct, inverse, or lateral units, then there could scarcely have been talk of such darkness. — Gauss (1831)

    I think he had a point, because if we find it perfectly fine to name the numbers in the backwards direction “negative”, why can’t we name the numbers in the sideways direction “lateral”? Then we just have to define addition and multiplication in an elegant way (via geometrical vector addition and rotation), and we’re done, no confusion.

  26. bertgoz Says:

    Regarding the Renou et al. paper, how does this affect the fact that the gate set Toffoli + Hadamard is universal? That is, using an extra qubit to keep the imaginary part of complex amplitudes as a real amplitude.

  27. Tamás V Says:

    Scott, may I ask why you don’t treat the Renou et al. paper like you did with other papers in the past that “proved” their fancy points by basically stating that the predictions of quantum mechanics are correct? (And then the natural reaction was: Ok, so what?) I don’t see what makes this paper different.

    To me it’s like a banker pondering whether the debts can be “simulated” by positive numbers, but in a way that the bank’s overall position can still be calculated using the “+” operator. And of course, the result is a disaster (even when such a simulation is possible, due to the mess such a representation would cause in the bank’s IT systems).

  28. Scott Says:

    bertgoz #26:

      Regarding Renou et al. paper, how does this affect the fact that the gate set Toffoli + Hadamard is universal?

    It doesn’t! That gate set remains universal in the sense of being able to approximate any orthogonal matrix (which suffices for quantum computation), although not in the sense of being able to approximate any unitary matrix (since it’s limited to reals).
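
    As a quick sanity check in numpy (a trivial sketch, nothing deep): both gates are real matrices satisfying G G^T = I, so any circuit over them can only ever build orthogonal matrices.

      import numpy as np

      H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
      toffoli = np.eye(8)
      toffoli[[6, 7]] = toffoli[[7, 6]]  # swap |110> and |111>

      # Both gates are real and orthogonal, so circuits over {Toffoli, H}
      # always implement real orthogonal transformations.
      for G in (H, toffoli):
          assert np.allclose(G @ G.T, np.eye(len(G)))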

    As I said several times, Renou et al.’s result only becomes relevant once we also care about spatial locality. We’ve known for decades that real and complex QM are equivalent to each other from a computational complexity standpoint.

  29. Scott Says:

    Tamás V #27:

      Scott, may I ask why you don’t treat the Renou et al. paper like you did with other papers in the past that “proved” their fancy points by basically stating that the predictions of quantum mechanics are correct?

    When I complain about such papers, usually they’re either (1) hyping up the results of an experiment as somehow “shocking,” when it was obvious to everyone that they’d simply reconfirm QM, or else (2) hyping up the theoretical discovery of a “new quantum effect” that’s actually an old or obvious effect.

    Whatever else is good or bad about it, this paper doesn’t do either of those things. It’s a theoretical proposal for a new experiment to distinguish complex QM from any local simulation via real QM. To be clear: I fully expect that, when the experiment is done, the result will agree 100% with complex QM and will rule out the local real simulation, with no surprise whatsoever. But the mere fact that such a distinguishing experiment exists is something I hadn’t known, and indeed I kicked myself for not having thought of it.

  30. Tamás V Says:

    Scott #29: thanks for answering, fair enough. However, this for me qualifies as “hyping up”:

    So, does the imaginary unit i “exist”? […] If we took the standard quantum formalism and restricted the Hilbert spaces to be real, possibly of larger dimensions, could we still explain the same phenomena? […] For years this was considered the final answer to our question: in quantum theory complex numbers are only convenient, but not necessary. Here we prove this conclusion wrong.

    First of all, the “conclusion” is obviously right (we don’t need complex numbers and Hilbert spaces to describe quantum mechanics, if we are masochistic enough).

    Second, complex numbers aren’t more mysterious than negative numbers (one can use Gauss’s “lateral unit” idea to argue that). So in the 21st century, why would anyone get the idea of restricting complex amplitudes to just real ones in the first place? Go immediately for natural numbers; those are the ones almost everyone believes to “exist”.

    So my problem is that I feel this paper is trying to sell itself on the wrong philosophical grounds.

  31. fred Says:

    Wondering why quantum mechanics needs complex numbers is missing the point.

    Complex numbers are just a tool to represent rotations, i.e. something that has a phase in it.
    i is a 2D rotation of 90 degrees, and two rotations of 90 degrees make a rotation of 180 degrees, a.k.a. i*i = -1.
    Similarly you can represent 3D rotations with quaternions.
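
    In matrix form, that’s a two-line check:

      import numpy as np

      # The 90-degree rotation matrix plays the role of i: applied twice, it
      # gives the 180-degree rotation, i.e. minus the identity (i*i = -1).
      J = np.array([[0, -1],
                    [1, 0]])
      assert np.array_equal(J @ J, -np.eye(2, dtype=int))

      # More generally, a + bi corresponds to the rotation-and-scaling block
      # a*I + b*J, and complex multiplication becomes matrix multiplication.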

    The real mystery is the wave/particle duality of reality: things have a built-in wavelength, i.e., there’s a phase that depends on the distance traveled. And when different possibilities/paths exist and are indistinguishable, those paths interfere, until some sort of interaction ruins the “indistinguishability” and the wave-like behavior is lost (the phases become noise-like).

  32. bertgoz Says:

    Fred #31: Implicit in your statement is the condition that things (particles, atoms) are identical, which I also consider quite mysterious.

  33. gentzen Says:

    Tamás V #27:

    Scott, may I ask why you don’t treat the Renou et al. paper like you did with other papers in the past that “proved” their fancy points by basically stating that the predictions of quantum mechanics are correct?

    One reason may be that Nicolas Gisin is a coauthor, and that Marc-Olivier Renou did his PhD in Gisin’s group. Scott may list as many good properties of this paper as he wants; they would still not explain why he put in the time to evaluate it.

    So who is Gisin, that his presence as a coauthor gives weight to a paper proposing an experimental test of a question related to the foundations of quantum mechanics? In 1997, he (and his group) showed violation of Bell’s inequality outside of highly controlled laboratory conditions (using standard optical fiber over a distance of more than 10 km). In 2003 he achieved something similar for quantum teleportation. But those achievements don’t fully explain his weight. His talk delivered at the first John Stewart Bell prize award ceremony does a better job: he showed how applications of quantum weirdness (like quantum key distribution) could be decoupled from the validity of quantum mechanics (or from the absence of backdoors in the equipment) and be based solely on the presence of the observable effects.

    But even this last part somehow doesn’t capture my impression of him from reading Nicolas Gisin’s short book Quantum Chance. He somehow goes beyond the notion of true randomness or “bit strings with proven randomness” and tries to capture the experimentally observable nature of quantum chance itself, independent of any interpretation of quantum mechanics (or even of the validity of quantum mechanics). Something like: it is a nonlocal randomness, and because the nonlocal correlations of quantum physics are nonsignalling, it has to be random, since otherwise it would allow faster-than-light communication.

  34. Tamás V Says:

    gentzen #33: Sure, I also spotted Gisin as a co-author. And recently he has gotten into numbers, it seems. It’s difficult to convince me by authority. Ok, Gauss is an exception 🙂

    My only memory of Gisin is that I froze in my chair when he said it’s only a matter of time until quantum computers break any post-quantum crypto algorithm. (Sure, I wish he were right, of course, but still…)

  35. Clint Says:

    fred #31:
    It seems the Renou et al. paper is saying that complex numbers are (part of) the point.
    In classical mechanics, yes, complex numbers are “just a tool”.
    But try to explain the Stern–Gerlach experiment, and complex numbers become indispensable and inevitable for describing (two-state) systems with more than two incompatible observables.
    And it is not just that there is a wave structure; it is a different kind of wave structure than the classical one, because the complex exponential has a wavelength AND a constant absolute value, preserving, for example, that no information about position is knowable for a definite momentum.
    What is remarkable about the Renou et al. result is that complex numbers (the structures that we call as such) are a verifiable part of those “real mysteries”, and not just a tool we use to work with those mysteries.
    I guess to me, the fact that (the structure of the things we call) complex numbers is right at the heart of QM is something like the mystery of the fine structure constant. Complex numbers have all of these “perfect” qualities that are “just right” for what we need in a working model of the universe (see Scott’s paper on this).

  36. Tamás V Says:

    Clint #35: For me, complex numbers are a tool here (a special tool though, see later), invented centuries ago, and the mystery is that this tool seems to capture something deep and essential about the (structure of the) physical world. It’s as if a round ball were essential to the beauty of soccer: I’d never try to prove that point by replacing the ball with a Rubik’s cube and saying “See, it doesn’t work, no spectators anymore!”. That would be just so cheap. Complex numbers are beautiful on their own, and I once heard a physicist say that we feel beauty when we encounter something that reflects a deep law of nature.

  37. gentzen Says:

    fred #31, Clint #35:
    The connection of complex numbers to phase certainly is of physical importance, not least because it explains an important instance of gauge freedom in the equations. And those freedoms of the complex numbers (like the fact that replacing every instance of i by -i would give identical predictions, or like that gauge freedom) are one reason to ask whether one can get rid of them. As long as they are there, the freedom they bring is most likely there too, and hence prevents a unique “best” formulation.

    Now you may think that this discussion is all academic, because there is no reasonable natural formulation of QM without complex numbers anyway. But things are not necessarily so clear cut. See my (rather long) comment on the formulation of QM in section “2.1 The Ehrenfest picture of quantum mechanics” via (6), (7), and (8) in A. Neumaier’s Foundations of quantum physics II. The thermal interpretation:

    … appreciate the beauty and depth of (6), (7), and (8). That beauty goes deeper than my parenthetical remark. Note for example that those equations don’t contain i. (How could they, given that they unify classical and quantum mechanics?)

    After appreciating the beauty, the difficult next step would be to understand why this is still not enough. Even though i no longer occurs explicitly in the equations, and all beables are real-valued, complex numbers will continue to play a key role behind the scenes. How physical are those real-valued beables? I once wrote: “I admit that it is often easier to compute with the vector potential instead of the actual fields. But the actual fields are measurable, at least in principle, while the vector potential is not.” Are those real-valued beables more like the actual fields, or more like the vector potential? They still share properties with the vector potential, i.e., some gauge freedom is still left. Only the complex phase which explained the gauge freedom is more hidden now.

  38. fred Says:

    Clint #35

    “preserving, for example, that no information about position is knowable for a definite momentum.”

    My understanding is that any wave “packet” has that property: e.g., even a super-localized, spike-like wave packet (exact position) has to have arbitrarily broad frequency content (by the Fourier transform), so momentum/energy is unknown. And a wave packet with a pure frequency (known momentum/energy) has to be infinite in extent (unknown position).
    And the actual wavelength of such waves comes from the mystery of quantization: if continuous spaces (position, energy, …) get turned into discrete spaces, an absolute scale appears.
    E.g., if electrons have stable orbits, their waves have to interfere constructively with themselves along the orbits, which ties together the wavenumber and the energy.
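
    Numerically, with discretized Gaussian packets (a toy check): the product of the position spread and the frequency spread stays pinned near 1/2 no matter how narrow or wide the packet is.

      import numpy as np

      # Spread in position times spread in frequency for Gaussian packets:
      # the narrower the packet, the broader its spectrum, with dx*dk ~ 1/2.
      x = np.linspace(-50, 50, 4096)
      for sigma in (0.5, 1.0, 2.0):
          psi = np.exp(-x**2 / (2 * sigma**2))
          k = 2 * np.pi * np.fft.fftshift(np.fft.fftfreq(x.size, d=x[1] - x[0]))
          spec = np.abs(np.fft.fftshift(np.fft.fft(psi)))**2
          p_x = psi**2 / np.sum(psi**2)
          p_k = spec / np.sum(spec)
          dx = np.sqrt(np.sum(p_x * x**2))  # standard deviations (zero mean)
          dk = np.sqrt(np.sum(p_k * k**2))
          print(f"sigma={sigma}: dx*dk = {dx * dk:.3f}")  # ~0.500 each time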

  39. asdf Says:

    Fred #31 or anyone:

    > The real mystery is about the wave/particle duality of reality

    Does the Hilbert space where the quantum state lives automatically fall out of this? That the state of a combined system is the tensor product of the states of its components? That is more mystifying to me than wave-particle duality.

    There are all kinds of simple models like lattice gases, the Ising model, etc., where you can get various classical equations like Navier–Stokes out by making the mesh fine enough. It’s been bugging me whether anything like that could end up having complex probability amplitudes as a continuum limit, like in quantum mechanics. But maybe it can’t, because in those models the dimension stays the same as the system evolves, which means you can simulate them in polynomial time, whereas we believe BQP > P.

    So where does the proliferation of dimensions come from in QM?
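
    One way to see the dimension counting concretely (a toy sketch): composing systems by tensor product multiplies state-space dimensions, so n qubits need 2^n amplitudes, whereas a classical configuration of an n-site lattice model needs only O(n) numbers.

      import numpy as np

      # Tensor products multiply dimensions: n qubits -> 2^n amplitudes.
      plus = np.array([1.0, 1.0]) / np.sqrt(2)  # a single-qubit |+> state
      state = np.array([1.0])
      for n in range(1, 6):
          state = np.kron(state, plus)
          print(n, "qubits ->", state.size, "amplitudes")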