Links, proofs, talks, jokes

For those who haven’t yet seen it, Erica Klarreich has a wonderful article in Quanta on Hao Huang’s proof of the Sensitivity Conjecture. This is how good popular writing about math can be.

Klarreich quotes my line from this blog, “I find it hard to imagine that even God knows how to prove the Sensitivity Conjecture in any simpler way than this.” However, even if God doesn’t know a simpler proof, that of course doesn’t rule out the possibility that Don Knuth does! And indeed, a couple days ago Knuth posted his own variant of Huang’s proof on his homepage—in Knuth’s words, fleshing out the argument that Shalev Ben-David previously posted on this blog—and then left a comment about it here, the first comment by Knuth that I know about on this blog or any other blog. I’m honored—although as for whether the variants that avoid the Cauchy Interlacing Theorem are actually “simpler,” I guess I’ll leave that between Huang, Ben-David, Knuth, and God.

In Communications of the ACM, Samuel Greengard has a good, detailed article on Ewin Tang and her dequantization of the quantum recommendation systems algorithm. One warning (with thanks to commenter Ted): the sentence “The only known provable separation theorem between quantum and classical is sqrt(n) vs. n” is mistaken, though it gestures in the direction of a truth. In the black-box setting, we can rigorously prove all sorts of separations: sqrt(n) vs. n (for Grover search), exponential (for period-finding), and more. In the non-black-box setting, we can’t prove any such separations at all.
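To make the black-box Grover separation concrete, here's a toy Python sketch (mine, not from Greengard's article) that simulates Grover's amplitude dynamics directly on the N amplitudes: with one marked item among N=16, roughly ⌊(π/4)√N⌋ = 3 oracle queries already suffice to find it with high probability, versus the ~N/2 queries a classical algorithm needs on average.

```python
import math

def grover_success_prob(n_items, marked, iterations):
    # Uniform superposition over n_items basis states
    amp = [1 / math.sqrt(n_items)] * n_items
    for _ in range(iterations):
        amp[marked] = -amp[marked]          # oracle query: flip sign of marked item
        mean = sum(amp) / n_items
        amp = [2 * mean - a for a in amp]   # diffusion: inversion about the mean
    return amp[marked] ** 2                 # probability of measuring the marked item

N = 16
k = math.floor(math.pi / 4 * math.sqrt(N))  # ~sqrt(N) quantum queries
p = grover_success_prob(N, marked=5, iterations=k)
print(k, round(p, 3))  # 3 queries, success probability ~0.961
```

Of course, this only illustrates the upper bound; the content of the separation theorem is the matching lower bounds (Ω(√N) quantum, Ω(N) classical), which hold precisely because the input is accessed only through queries.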

Last week I returned to the US from the FQXi meeting in the Tuscan countryside. This year’s theme was “Mind Matters: Intelligence and Agency in the Physical World.” I gave a talk entitled “The Search for Physical Correlates of Consciousness: Lessons from the Failure of Integrated Information Theory” (PowerPoint slides here), which reprised my blog posts critical of IIT from five years ago. There were thought-provoking talks by many others who might be known to readers of this blog, including Sean Carroll, David Chalmers, Max Tegmark, Seth Lloyd, Carlo Rovelli, Karl Friston … you can see the full schedule here. Apparently video of the talks is not available yet but will be soon.

Let me close this post by sharing two important new insights about quantum mechanics that emerged from my conversations at the FQXi meeting:

(1) In Hilbert space, no one can hear you scream. Unless, that is, you scream the exact same way everywhere, or unless you split into separate copies, one for each different way of screaming.

(2) It’s true that, as a matter of logic, the Schrödinger equation does not imply the Born Rule. Having said that, if the Schrödinger equation were leading a rally, and the crowd started a chant of “BORN RULE! BORN RULE! BORN RULE!”—the Schrödinger equation would just smile and wait 13 seconds for the chant to die down before continuing.

63 Responses to “Links, proofs, talks, jokes”

  1. Joshua B Zelinsky Says:

    The link that is supposed to be to Knuth’s comment instead links to Knuth’s homepage.

  2. Benjamin Says:

    I am intrigued by the Hilbert scream comment, but lack even the elementary background to understand it.

    Perhaps I am the wrong audience for your blog, but can you by chance explain (or point to an introductory reading list)? My background is in CS, but with a bit of a quantum computation course.

  3. Paul Topping Says:

    I didn’t know you had already critiqued IIT. I am going to read your posts and, hopefully, the FQXi video if and when it surfaces. I never considered IIT seriously but I love a good scientific take-down.

    The whole idea behind IIT sounds crazy to me. How can some formula say anything much about something as complex as the human brain without being based on a theory of its operation? Just saying that it is complicated, and then offering a formula that associates a number with it, is just not nearly enough.

    Dare I suggest that Tononi is taking advantage of our, hopefully temporary, lack of knowledge about the brain? If so, I hope for his sake that he isn’t doing it consciously.

  4. Scott Says:

    Joshua #1: Thanks! Fixed.

  5. Scott Says:

    Benjamin #2: Try my undergrad lecture notes.

    It would sort of kill it if I had to explain it. If someone else wants to, though… 🙂

  6. James Cross Says:

    We do have one way of addressing some of the Pretty Hard Problem. We can correlate subjective reports (heaven forbid!) and performance on tasks to brain states measured by fMRI, brain waves, etc. That will work to some degree for humans who can self-report, as long as we agree humans are conscious. Progress can also be made with measurements of people suffering various brain injuries or abnormalities.

    That won’t give us a nice formula that can tell us if the computer on the desktop has become conscious or whether my cat is, but it is something.

  7. James Gallagher Says:

    Cool to know Don Knuth is alive and well and still contributing.

    re Born Rule and Schrödinger equation

    The only reason this is still such a big confusion is that no one seems to want to accept the origin of fundamental randomness in the universe.

    We have the correct mathematical structure for Quantum Mechanics, yet few people want to agree that the evolution is generated by Nature-driven quantum jumps (at a rate close to Planck time on average).

    Once you inject randomness into the evolution of the Schrödinger equation like this all other known results follow.

    (It’s a mathematical equation and the Nature driven random jumps give you the same mathematical universe as many worlds, except only one exists, the Nature we experience)

  8. Scott Says:

    James #7: Bohmian mechanics (or Bell’s beable theories) sound vaguely like what you want; the difference is that they propose actual equations for the evolution of the single world, instead of just words. In any case this has little to do with my remark, which was not about interpretation of QM per se, but about the different question of how to ‘derive’ the mathematical form of the Born rule from the Schrödinger equation. Further confident assertions about the interpretation of QM will be left in the moderation queue.

  9. James Gallagher Says:

    lol, stop being such a curmudgeon. I mean, here in the UK we just got Boris as Prime Minister, and you’re trying to be all serious and stuff about QM ideas.

  10. Scott Says:

    James #9: That’s actually about the best response you could’ve given, so bravo! Yes, you might be right that the world is going to hell so nothing matters anymore. Even if so, though, on this blog we’re still concerned with how much math and science can be correctly understood before the end—for the dignity of the species, if you like, a way to put on our best face for any future alien archeologists combing through the wreckage.

  11. Miquel Says:

    I saw the title of the post and I was expecting to read about Lev Gordeev’s 2nd attempt at proving that NP=PSPACE (learnt about this via Timothy Chow on FOM)

    https://arxiv.org/abs/1907.03858

    I must say that reading about Knuth’s still amazing intellectual contributions was pretty good too 🙂

  12. mjgeddes Says:

    Some solid leads about consciousness may finally be starting to coalesce.

    Japanese researcher Ryota Kanai has just published a paper, “Information Generation as a Functional Basis of Consciousness”, offering ideas similar to my own! The author specifically mentions counter-factual information generation and temporal models as the basis for consciousness.

    https://osf.io/7ywjh/

    “we propose that a core function of consciousness be the ability to internally generate representations of events possibly detached from the current sensory input.”

    “consciousness emerged in evolution when organisms gained the ability to perform internal simulations using internal models”

    “we propose that information generation corresponds to top-down predictions in the predictive coding framework.”

    My own ideas are somewhat similar; at least, I’m following the same general lines as Kanai! I proposed:

    Consciousness=TPTA (Temporal Perception & Temporal Action)

    TP (Temporal Perception, Bottom-Up, Discriminative)
    TA (Temporal Action, Top-Down, Generative)

    Subjective model of time:

    TP – Past: Memory, Perception
    TA – Future: Planning, Imagination

    Predictive Coding = {TP, TA}

    So rather than IIT, perhaps what we need is TIG (Temporal Information Generation). I think this may really be starting to get somewhere…

  13. The boy from the emperor's new clothes story Says:

    Until there is an actual derivation of the Born Rule quantum mechanics cannot be considered a full scientific theory because “observer” and “measurement” are too fuzzy.

  14. ppnl Says:

    (1) In Hilbert space, no one can hear you scream. Unless, that is, you scream the exact same way everywhere, or unless you split into separate copies, one for each different way of screaming.

    Reality is just a ray in Hilbert space.

  15. Ted Says:

    In the Communications of the ACM article, when Greengard says “The only known provable separation theorem between quantum and classical is sqrt(n) vs. n. Proofs of stronger separation between classical and quantum are relative to an oracle”, which theorem is he referring to? If it’s the optimality of Grover’s algorithm for black-box search, then that quote seems misleading to me, because that result also assumes an oracle.

    (Also, why does he say that IBM “unveiled the first commercially available quantum computer in January 2019”? IBM launched the Q Experience back in 2016, and D-Wave released the D-Wave One all the way back in 2011 (putting aside questions of whether it counts as a “real” quantum computer).)

  16. Scott Says:

    Miquel #11: O ye of little faith! 🙂 Discussion of yet more claimed proofs of NP=PSPACE, etc. is a perfect example of what you should not expect to find on this blog—not unless a claim has received massive attention or done something else that gives me no other choice.

  17. Scott Says:

    TBFTENCS #13: You don’t understand what “scientific theory” means.

    Was natural selection not a scientific theory before Mendelian inheritance was understood? Is it still not one, since we can’t derive from first principles why a few billion years is enough time to get humans, etc. from the primordial soup?

    Was Newtonian gravity not a scientific theory before Einstein explained how gravitational influence can get transmitted at a finite speed?

    Is GR not a scientific theory until we fully understand what happens at black hole singularities?

    Is the Standard Model not a scientific theory until we can explain the values of the coupling constants?

    Science is never finished. Every scientific theory leaves massive gaps in understanding, to be hopefully elucidated by future theories. In the case at hand, decoherence theory and other later developments let us say substantially more about the relation between unitary evolution and the Born rule than the founders of QM could’ve said, though still not as much as everyone would like. (People vehemently disagree on the extent to which there’s still a problem, but everything I’ve said in this comment goes through if there is still a problem and even a severe one.)

    Welcome to the business! 🙂

  18. Scott Says:

    Ted #15: You’re right, that’s a mistake in the article, which I’d noticed but then forgotten about. I’ll add something about it in the post.

    (As for who had, has, or will have “the first commercial QC,” maybe I’ll pass on relitigating that one? 🙂 The D-Wave machine doesn’t give real quantum speedups, if you do a fair comparison against Quantum Monte Carlo simulations. The IBM machine might give real quantum speedups, but we don’t publicly know yet, and if it does then probably not yet useful ones.)

  19. T Says:

    I thought the Grover search lower bound was unconditional, and that we definitely know quantum computers are at least quadratically better than classical. No??

  20. T Says:

    1. In Hilbert space, no one can hear you scream. Unless, that is, you scream the exact same way everywhere, or unless you split into separate copies, one for each different way of screaming.

    2. It’s true that, as a matter of logic, the Schrödinger equation does not imply the Born Rule. Having said that, if the Schrödinger equation were leading a rally, and the crowd started a chant of “BORN RULE! BORN RULE! BORN RULE!”—the Schrödinger equation would just smile and wait 13 seconds for the chant to die down before continuing.

    Can you mathematically quantify those?

  21. Scott Says:

    T #19: Yes, we do know that—but the reason we know it is that it’s a black-box separation (i.e., only about the number of queries the quantum and classical computers make to the input bits). And if we’re talking about black-box separations at all, then we also know larger ones than quadratic. (For partial functions—those with a promise on the input—superpolynomial separations go all the way back to the work of Bernstein-Vazirani and Simon. Even for total Boolean functions, superquadratic separations were achieved a few years ago.)

  22. Scott Says:

    T #20:

      Can you mathematically quantify those?

    Absolutely. The first joke is only ~150 milliyuks, but the second is nearly a full yuk.

  23. Job Says:

    In the black-box setting, we can rigorously prove all sorts of separations: sqrt(n) vs. n (for Grover search), exponential (for period-finding), and more. In the non-black-box setting, we can’t prove any such separations at all.

    I was going to ask why the use of Grover’s to evaluate OR isn’t an example of a quadratic speedup without a black box.

    But I remember that using AND/OR trees you can get O(n^0.75) with bounded error, so the quantum speedup over classical isn’t quadratic. Is that right?

  24. Scott Says:

    Job #23: No, that’s not the point. The point is that the Grover speedup is a black-box speedup. That doesn’t mean something otherworldly or alien. It just means that a fast classical algorithm wouldn’t even have time to read the entire input, and that that (as opposed to some deep insight about the difficulty of processing the input once it has been fully read) is why we know how to prove a separation.

  25. Bram Cohen Says:

    Another link: We’re doing a proof of space programming competition with $100,000 in prize money, which has some very interesting and new underlying CS theory https://www.chia.net/2019/07/07/chia-network-announces-pos-competition.en.html

  26. fred Says:

    To cover all the topics at once:

    Schrödinger wrote extensively about the mystery of consciousness from the point of view of an apparent breaking of symmetry (similar to the measurement problem):

    “Assume two human bodies, A and B. Put A in some particular external situation so that some particular image is seen, let us say the view of a garden. At the same time B is placed in a dark room. If A is now put into the dark room and B in the situation in which A was before, there is then no view of the garden: it is completely dark (because A is my body, B someone else’s!). This is a flagrant contradiction, for there is no more adequate ground for this phenomenon, considered in general and as a whole, than there would be for one side of a symmetrically loaded balance to go down. […] For philosophy, then, the real difficulty lies in the spatial and temporal multiplicity of observing and thinking individuals. If all events took place in one consciousness, the whole situation would be extremely simple. There would then be something given, a simple datum, and this, however otherwise constituted, could scarcely present us with a difficulty of such magnitude as the one we do in fact have on our hands.”

  27. fred Says:

    A recent article about deriving the Born rule – https://www.quantamagazine.org/the-born-rule-has-been-derived-from-simple-physical-principles-20190213/ .

  28. AdamT Says:

    Hi Scott, can you please shoot me an email next time another one of these FQXi forums comes up or anything similar? Judging from the speakers and the titles of the talks… I desperately want to be a fly on the wall… 🙂

    Can’t wait for the videos!

  29. Pavel Says:

    Hi Scott,

    in the closing remarks of your IIT slides you write that any theory of the form “sufficient complicatedness implies consciousness” will be a failure.
    I know your arguments published on this blog years back, but I don’t understand how the statement above follows.

    Can you elaborate a little bit more? Thanks.

  30. Bill Jeffries Says:

    Sorry to be a little slow on such an intelligent blog comment section, but I don’t quite get the scream comment, other than to suppose/guess it contrasts deterministic interpretations: Bohm (less sure about this one) vs. the MWI (more clear). Any elucidation will be appreciated.

  31. T Says:

    I feel I am humorously challenged.

  32. Scott Says:

    Alright, alright, everyone who didn’t get my scream-based restatement of the no-cloning theorem: screaming differently in different branches will cause decoherence.

  33. Scott Says:

    Pavel #29: All I meant was that my reductio ad absurdum of IIT seems generalizable to any theory whatsoever that claims that once something is sufficiently complicated, or interconnected, or whatever, then it’s conscious. For it’s easy to think up examples of systems whose complicatedness, or interconnection, or whatever would vastly exceed that of the human brain, yet that do nothing that anyone would want to call intelligent, let alone conscious. Unless, of course, we want to follow Tononi’s route of wildly redefining terms like “consciousness” to fit our theory, even to the point of severing any connection with how people originally used the terms.

  34. AdamT Says:

    Scott #33,

    “For it’s easy to think up examples of systems whose complicatedness, or interconnection, or whatever would vastly exceed that of the human brain, yet that do nothing that anyone would want to call intelligent, let alone conscious.”

    I get that this was somewhat tongue-in-cheek, but doesn’t this just mean IIT advocates (or whoever) need a more *complicated* definition of complicatedness, interconnectedness, or whatever??

    More seriously, the point is that mere definitions are not what progress on the Pretty Hard Problem would look like. Rather, it is some theory that would do well both at explaining which physical systems in which configurations give rise to consciousness, in a way that matches our intuitions, AND at illuminating the problem enough to give guidance on constructing such physical systems, or insight on where in the universe to look for unknown examples of consciousness.

  35. Scott Says:

    AdamT #34: No, it wasn’t tongue-in-cheek. I personally don’t understand how any proposed consciousness measure could possibly capture what’s been understood by the term for centuries, if it said nothing about—to take two examples—

    (1) intelligent behavior (including but not limited to what the Turing Test tries to measure), and
    (2) unpredictability and ability to surprise external observers.

    Note that no sort of “complicatedness” or “interconnectedness” seems to imply either of the above.

    On the other hand, I have much less confidence in this than I do in the narrower statement that it’s absurd to treat Tononi’s Φ as a measure of consciousness rather than graph expansion.

  36. James Cross Says:

    Scott #35

    I can’t see why even intelligent behavior and unpredictability cannot be done by something that is not conscious.

    It’s hard for me to envision any functional capability that can only be done by a conscious being and that could never be done by an unconscious machine.

    Can you think of one?

    That means that even if we thought we created a conscious machine we would have no way to verify it was conscious.

  37. Scott Says:

    James #36: Necessary ≠ sufficient

  38. Adam Treat Says:

    Scott #35,

    “… proposed consciousness measure could possibly capture what’s been understood by the term for centuries”

    Ok, maybe we *do* have to start with some definitions, since the term “consciousness” in Western Philosophy is usually too ill-defined to talk meaningfully about your Pretty Hard Problem, let alone Chalmers’s Hard Problem.

    Specifically, Western Philosophy usually fails to delineate two very different aspects of what is usually labeled consciousness:

    1) The ability to experience, to perceive, to have qualia.

    2) The ability to use #1 to divide the world into subject and object, i.e., subjectivity/sentience.

    I think Chalmers’s Hard Problem is mostly interested in characterizing and investigating #1 in terms of physical systems. And as your original post on IIT pointed out so well, it is always possible for opponents of any purported answer to the Hard Problem to dismiss it by invoking Philosophical Zombies. That is why it is such a *hard* problem after all!

    Your refinement of Chalmers’s effort into the Pretty Hard Problem aims to take away this rebuttal by dismissing Philosophical Zombies and saying: look, if we could come up with a physical theory that correctly categorizes systems such that it satisfies our general intuition of “consciousness” *and* also has explanatory power of some kind (it goes about illuminating things beyond our intuition in some way), then this would be Good Progress™ and Philosophical Zombies be damned!

    Your newest set of slides indicates you think *any* approach that follows IIT into defining “consciousness” as some sort of statement about the complexity of “information flow” in a system will be incapable of satisfying two key aspects of what *your* intuition says about “consciousness”, i.e., that it somehow has to do with intelligence and unpredictability. I can see why you’d think that if these are indeed two key ingredients of your intuition’s definition of “consciousness,” but I doubt that others’ intuitions will agree these are necessary (let alone sufficient) criteria. I for one do not.

    No, I am much more interested in the Pretty Hard problem as applied to #2. Why? Because I think any physical system that demonstrates #2 will have a distinct *behavior* that for me is highly correlated with my intuition of *sentience* if not consciousness. That is, any physical system that demonstrates #2 will be biased towards behaving in the world with agency such that it minimizes its own suffering and maximizes its own happiness.

    On this planet alone we have an abundance of a variety of non-human physical systems that seem to satisfy/not-satisfy our intuitions for #1 and #2 to varying degrees. A rock vs a dog. A chicken vs a pool of water. An ant vs a pile of dirt. We can (and do) argue endlessly the extent to which these systems exhibit #1 and #2.

    And precisely *this* is what I want to see progress on the Pretty Hard problem give insight to! Intelligence and Unpredictability might be high on your list, but I have no problem granting sentience if not “consciousness” to seemingly unintelligent and/or predictable behavior that nevertheless seem to exhibit various amounts of #1 and #2. It is not totally clear to me that some other scheme along the lines of IIT’s ‘complexity of information flow’ or whatever might *not* be responsible for #2. Perhaps it is. Perhaps if you take any system that has enough “complexity of information flow” or an internal representation of ‘self’ vs ‘other’ that you’ll get #2 and sentience.

  39. James Cross Says:

    #38 Adam

    The problem is how to prove that an entity experiences qualia or divides the world into subject/object since both of those things are part of subjective experience.

    I would probably be willing to grant consciousness to any entity – machine, power grid, or amoeba – that could be proven to have those things but fundamentally I don’t see how it is possible to prove consciousness exists outside my own.

    That was why I called the “hard” problem an “unserious” problem because it isn’t really amenable to scientific investigation.

    https://broadspeculations.com/2019/07/14/the-hard-but-unserious-problem-of-consciousness/

  40. Adam Treat Says:

    James #39,

    Of course you are right that without the ability to directly experience the qualia of another there is no way to 100% verify the subjective experience of another. But we already make this adjudication in our day to day lives to a great extent, in our legal system to a lesser extent and in our religious/spiritual lives to a mixed extent. And we do so by looking at the behavior of physical systems. I see the answer to your objection in adjudicating consciousness/sentience rooted in judgement based upon behavior.

    We look at the behavior of other physical systems (human, non-human, inanimate) and ascribe consciousness/sentience or not. In order to make progress on the Pretty Hard problem, first I think we need to identify what about physical systems gives rise to this distinguishing behavior.

    It is my hypothesis that there are precisely three necessary and sufficient conditions that give rise to such behavior in physical systems that we feel compelled to confer the label sentience:

    #1 qualia as said above
    #2 ability to use #1 to divide the world into self vs other
    #3 ability to distinguish biased states that subjectively increase own happiness and decrease own suffering

    The biggest question I have about sentience among living organisms is whether plants/trees exhibit #2. My current hypothesis is they do not. Although some plants/trees seem to exhibit #3 I do not think they have a mind/brain/nervous system that allows them to subjectively divide the world or have an internal representation of self vs other. But I don’t honestly know for sure. What I *do* think I know is that many non-human animals seem to exhibit precisely #1, #2, and #3 and that rocks utterly do not or at least lack the capacity to act upon the world in accordance.

    Let’s say that tomorrow researchers in AI make a big breakthrough, and we are confronted by a neural net that seems to hold an internal representation of itself and begins to act in ways that seem selfish or preferential to itself—trying to preserve or extend its own existence, to avoid destruction, or to avoid things we could empathize with as painful for it. Personally speaking, I would be hard pressed not to grant it the label sentient.

  41. James Cross Says:

    Adam #40

    I can generally agree with some of what you write.

    #3 may need to be rephrased or rethought a little bit. If I rush towards an armed gunman in a school, I’m not likely to be increasing my happiness or decreasing my suffering, but it would be a supremely conscious action.

    Personally I tend to think you need a brain for consciousness in biological entities. I think the brain evolved, and with it consciousness, to map, monitor, and control the body itself, its environment, and the relationship of the body with its environment. The self arises from this basic function. I like the radical plasticity theory to account especially for the higher levels of consciousness in mammals and birds.

    https://www.ncbi.nlm.nih.gov/pmc/articles/PMC3110382/

  42. James Cross Says:

    Adam #40

    One other thing I see rarely if ever commented on is the fact that biological beings manifest consciousness while expending amazingly little energy. It strikes me that there may be some clue in that. Namely, that consciousness may have been a cheaper solution, in terms of energy, than the zombie alternatives for evolving complex and adaptive behavior.

  43. mjgeddes Says:

    Excellent points about IIT Scott, and I think your own idea that consciousness is somehow connected to the flow of time is intuitively very strong.

    If you look at Ryota Kanai’s CIG (Counter-Factual Information Generation), this is an information-theoretic approach that already looks way, way more convincing to me than IIT, and it’s barely even been developed yet. Whereas IIT is vague and fuzzy, CIG is sharp and clear. Whereas IIT has no connection to intelligent behavior and free will/unpredictability, CIG *does* naturally connect to these points.

    CIG slideshow here:
    https://www.slideshare.net/ryotakanai/creating-consciousness

    Look at the 3-fold subjective time division: Past, Present, Future. Notice how it naturally correlates with 3 types of counter-factuals:

    Past: Correlational counter-factuals
    Present: Causal/Interventional counter-factuals
    Future: Logical Counter-factuals

    I think consciousness is a generative model about one’s own time-evolution (past > present > future), represented as trees of counterfactuals. And there’s some kind of participation in the arrow of time generating the information….

  44. Jacob Says:

    So this is slightly off topic but I feel like anything tangentially Born rule related is fair game in this comment section, so here it goes:

    https://www.quantamagazine.org/quantum-darwinism-an-idea-to-explain-objective-reality-passes-first-tests-20190722/

    Thoughts?

  45. Ted Says:

    I’m afraid I’m still a bit confused by the comment in your edit, “In the non-black-box setting, we can’t prove any such separations at all.” Doesn’t e.g. Bravyi, Gosset, and König’s paper “Quantum advantage with shallow circuits” give an unconditional, non-black-box proof of an (admittedly very small) separation between constant-time quantum and logarithmic-time classical?

    Sorry, I’m not trying to nitpick – I just want to make sure that I understand these subtle results correctly.

  46. Anonymous Says:

    What if the finite universe is a being that is conscious, with free will, and of course very intelligent? And what if particles are not just building blocks, but immature consciousnesses that may, over a very long time, grow up to be a universe like their parent?

    The binding problem is such a big problem for consciousness theories, and an even bigger problem for free-will theories, that maybe consciousness and free will are centralized in the brain in a rare high-energy, high-mass fundamental particle.

    Maybe this homunculus particle communicates with the brain using an electromagnetic code, like a more complicated short-range Bluetooth.

    If something like that was found and soul particles turn out to be real then soul particles could be moved out to a more durable, capable body engineered for almost any environment in the universe. Virtual reality will be easy too but most importantly pain and death will be mostly a thing of the past!

  47. Scott Says:

    Ted #45: The Bravyi et al. result is specifically about extremely low-depth circuits—and there we again do know how to prove separations. What we don’t know how to prove is that, with no depth restriction, some natural, explicit problem requires (say) n^2 classical or quantum gates rather than only O(n).

  48. fred Says:

    The problem when talking about consciousness is that we’re reaching the limits of language/science.

    Words are tags for conceptual symbols in our brains.
    But words are only defined in terms of other words (dictionaries are directed graphs), which in itself is a paradox – i.e., how does it get bootstrapped? There have to be words which can’t be defined in terms of others (like nodes with all outgoing edges in the dictionary graph), mapping to rock-bottom truths/perceptions about being, which everyone shares to some degree, and language slowly builds on top of those leaf nodes.
    We may try to describe those special words in terms of other words, but can’t succeed (such descriptions are always circular when examined closely).
    It’s like trying to describe to a blind person what we mean by “shape” and “color” in the sentence “the shape and color of objects are obviously distinct, yet we can’t separate them!”.
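    The dictionary-as-directed-graph picture above is easy to play with concretely. Here is a minimal sketch (the mini-dictionary and word choices are invented for illustration); it orients each edge from a word to the words used in its definition, so the undefined “primitive” words show up as nodes with no outgoing edges:

    ```python
    # Toy "dictionary graph": each word maps to the set of words used in its
    # definition (an outgoing edge per defining word). Entries are made up.
    definitions = {
        "bachelor":  {"unmarried", "man"},
        "unmarried": {"not", "married"},
        "married":   {"joined", "man"},
        # "man", "not", "joined" have no entries: they are undefined primitives
    }

    # Collect every word that appears anywhere in the graph.
    all_words = set(definitions)
    for used in definitions.values():
        all_words |= used

    # Primitive / leaf words: those with no outgoing definition edges.
    primitives = sorted(w for w in all_words if w not in definitions)
    print(primitives)  # ['joined', 'man', 'not']
    ```

    Under the opposite edge orientation (from a defining word to the words it helps define), the same primitives appear as the nodes with only outgoing edges, matching the phrasing in the comment.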

    Is the ability of language tied to consciousness?
    Can non-conscious systems ever come up with their own language? If so, would it be a hint that consciousness is a basic building block of the world?
    If there’s such a thing as talking zombies, how did their language get bootstrapped?
    Can a language that’s perfectly circular (no leaf symbols) ever appear? How? All at once?

    And do our own creations implicitly inherit our own dictionary, so that AIs may always be able to mimic consciousness convincingly?
    E.g., going back to blindness, both a seeing and a non-seeing person have an internal symbol for the color blue. For a blind person, ‘blue’ is a total mystery (though the irony is that ‘blue’ is also a total mystery for a seeing person!); a blind person only knows of blue because color-seeing people have supplied him/her with their own (imperfect) definitions for it. With enough such definitions, could a blind person pass a “blue” Turing test?

    Regardless of all that metaphysical stuff, we can learn a lot by training our own mind in the art of observing itself, without assuming anything about the origins of consciousness.
    Even a modest amount of mindfulness training quickly reveals that our sense of self (as in “I was feeling so self-conscious after they caught me with my hand in the tip jar!”) is just an appearance, made up of perceptions arising in the space of consciousness: the feeling that we’re located at a particular point in space behind our eyes and our ears, feelings in our face, etc. All these perceptions add up to create a belief that there is some permanent center, an “I” that is the author of thoughts and volition (themselves appearances in consciousness). But that belief can be lifted by observing all those perceptions for what they are.
    That’s not to say that consciousness is unrelated to some sort of feedback loop (e.g., Doug Hofstadter’s book “I Am a Strange Loop”).

  49. STEM Caveman curious Says:

    Scott, when you asked Tang (according to the article) to prove a lower bound on classical algorithms for the recommendation problem, what did you have in mind, e.g., what computation model and what complexity measure? Unconditional runtime lower bound doesn’t sound like something assignable to a student so I’m guessing you meant a conditional bound (as in fine grained complexity or hardness of approximation) or a measure different from runtime.

  50. Scott Says:

    STEM Caveman #49: I meant a lower bound on number of queries to the input data. (Or if you like, a lower bound on runtime, but a sublinear one, ~√n or something, keeping in mind that Kerenidis and Prakash’s quantum algorithm needs only ~log(n).) Such lower bounds, when true at all, are almost always unconditionally provable.

  51. Michael Says:

    Scott, I’m curious: as a mathematician, what do you think of the 8/2(2+2) controversy:
    https://heavy.com/news/2019/08/viral-math-problem-solution-answer/

  52. Mateus Araújo Says:

    I heartily agree with the comment about the Schrödinger equation and the Born rule. In fact, in the very paper where Schrödinger introduced his equation he also discussed at length the mod-squared amplitudes. He didn’t get the correct interpretation for it (Born did), but it was already obvious to Schrödinger that the mod-squared amplitude was the meaningful physical quantity.

    This historical accident supports the idea that, mathematically speaking, the Born rule is obvious. The non-obvious part is the definition of measurement and probabilities, which, ironically enough, is often ignored in the derivations of the Born rule.

  53. George McKee Says:

    Sounds like FQXi was a fun conference that didn’t report any breakthroughs. Maybe a few little insights are all anyone could ask for.

    But I wouldn’t go so far as to say “ANY THEORY of the form ‘sufficient complicatedness / interconnection / etc. ⇒ consciousness’ is doomed to failure”. There are theories that contain emergent properties, such as cycles in random graphs, where the property emerges with higher and higher likelihood as the complicatedness of the graph increases.

    Arguably this is exactly how consciousness evolved, as random genetic mutations caused random patterns of connectivity to evolve in brains, most of which were killed off by natural selection. At some point in the surviving giant family tree of species a pattern appeared that supports consciousness in adult animals, and here we are.

    I don’t think we have a good way to characterize theories that contain this kind of emergence. For example, what’s the difference between the class of finite state machine components of a Universal Turing Machine that makes the full device universally powerful, and the class of FSMs that don’t yield Universality when equipped with the other TM parts? There’s some magic going on in the FSM’s state transition table that I’ve never seen described.

    There are many ways to enumerate FSMs, and each enumeration method generates a “complexity” measure on its associated TM. Some enumeration methods will generate UTMs sooner or later, and some won’t. Likewise theories of brain-behavior evolution will generate their own complexity measures. All of them should predict no consciousness for worms and protozoa (although Rupert Glasgow’s “Minimal Selfhood” theory would disagree), while some of them will predict the emergence of consciousness at some point and assign a complexity level to that point.

    If you refine the statement to say that “complexity is fundamental” theories are all doomed, I wouldn’t totally disagree. But complexity measures on theories where consciousness is emergent (the way closing a switch in a circuit suddenly gives rise to all kinds of important dynamics) can provide lots of important insights, and those complexity theories are not doomed at all.

  54. Scott Says:

    Michael #51: That’s one of the most aggressively stupid “controversies” I’ve ever seen! It’s exactly like demanding the “true” parsing of one of those ambiguous newspaper headlines—e.g., “Complaints Over NBA Referees Growing Ugly.” Division and multiplication, like human language, are non-associative.

    Having said that: the answer is 1. For if it were 16, it would seem inexplicable that the 8/2 wasn’t parenthesized. 🙂

  55. STEM Caveman abstracts Says:

    @Michael #51, it’s clearer if one replaces the numbers by letters. If “a/b(c+d)” were meant to equal a(c+d)/b, it would have been written in that order, or (as in Scott’s reply) with additional parentheses as (a/b)(c+d). If forced to parse it without further information, the default reading is a/(b(c+d)).

    The heuristic is that in a ratio of products, (xyz…)/(abc…) the parts belonging to the numerator (resp., denominator) are consolidated unless specifically indicated otherwise, i.e., by extra parentheses.
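    The two readings are easy to make explicit in code, where the parenthesization must be written out and the ambiguity disappears (a small sketch; Python, like most programming languages, has no implicit multiplication at all):

    ```python
    # The viral expression 8/2(2+2), written out under its two possible readings.
    # Python forces explicit parentheses, so there is nothing left to argue about.
    left_to_right = (8 / 2) * (2 + 2)   # "/" and "*" at equal precedence, left to right
    tight_binding = 8 / (2 * (2 + 2))   # implicit multiplication binds tighter
    print(left_to_right, tight_binding)  # 16.0 1.0
    ```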

  56. STEM Caveman thanks Says:

    @Scott 50, thanks. I thought it was query complexity (since in that setting nontrivial unconditional bounds exist) but that wasn’t obvious from the CACM article.

  57. marshall flax Says:

    Scott, I’m wondering if you could prevail upon Prof. Knuth to publish the TeX source of his Huang paper. Normally, talk of “best practices” is silly — but this would be a counterexample, I’m sure.

  58. Scott Says:

    marshall #57: I’ve had one conversation with Knuth in my life (15 years ago); no particular “in” with him. You can ask him for the TeX as well as I…

  59. Wes Hansen Says:

    mjgeddes #12:

    I, too, think information generation is very important, and there exist studies to back it up. Published in PLOS ONE in 2015, “Neurocognitive and Somatic Components of Temperature Increases during g-Tummo Meditation: Legend and Reality” is a study of g-tummo yoga as practiced in Tibet. These monks and nuns are able to elevate their core body temperature to the point of drying freezing wet bedsheets, three in succession, while sitting in freezing environments. The key point here is the effect the “neurocognitive component” has on core body temperature! From the concluding remarks of the paper:

    “However, the results of Studies 1 and 2 also suggest that the neurocognitive component (“internalized attention” on visual images) of the MFB practice may facilitate elevation in CBT beyond the range of normal body temperature (into the fever zone), whereas the CBT increases during FB vase breathing alone were limited, and did not exceed the range of normal body temperature.”

    So it seems as though focused imagination, which is what the “neurocognitive component” really is, can effect changes in core body temperature above and beyond the somatic component; this links consciousness to both physics and information – information incorporated into the Gibbs free energy equation. Two other things I would point out:

    1) The authors of the study tend to make light of the temperature increase needed to dry the sheets, but these meditators are heat-generators embedded in a massive, massive heat sink; considerable temperature increase is required simply to maintain core body temperature, let alone dry freezing wet sheets draped over their torso;

    2) I would also point out that these are emotionally generated imaginings grounded in the Tibetan Buddhist imaginaire, which correlates nicely with Will Tiller’s PsychoEnergetics model.

  60. Tommaso Says:

    Hi Scott, I was wondering if you had a look at https://arxiv.org/pdf/1908.02499.pdf and if you have any comment about it.

    (I don’t want to flame, just I’m curious about whether you think Gil Kalai really makes a point)

  61. David Pearce Says:

    James #39, you say, plausibly,
    “I don’t see how it is possible to prove consciousness exists outside my own. That was why I called the ‘hard’ problem an ‘unserious’ problem because it isn’t really amenable to scientific investigation.”

    But we can (in principle) test the sentience of our fellow creatures by rigging up reversible thalamic bridges and doing a partial “mind-meld”. Compare the craniopagous Hogan sisters:
    https://www.youtube.com/watch?v=VGslJPaxbD8
    Testing the (in)sentience of classical digital computers may be more of a challenge. I think they’ll always be micro-experiential zombies that can’t solve the phenomenal binding problem; but that’s another story.

  62. Sniffnoy Says:

    And, certificate’s expired again…

  63. Bennett Standeven Says:

    So the crux of Kalai’s argument seems to be that it is impossible to build a superior computer out of low-level components that are asymptotically limited. (This is called assumption (B) in the paper.)
    He writes that it “requires special attention and it can be regarded as both a novel and a weak link of our argument. There is no dispute that we can apply asymptotic computational insights to the behavior of computing devices in the small and intermediate scale when we know or can estimate the constants involved. This is not the case here. The constants depend on (unknown) engineering abilities to control the noise. I claim that (even when the constants are unknown) the low-level asymptotic behavior implies or strongly suggests limitations on the computing power and hence on the engineering ability.”

    The problem is that this is known to be false; classical computers are supposed to be described by P, but the intermediate-scale components supposedly lie in LDP, a provably smaller class. As far as I can tell, Kalai has never even attempted to argue that this premise should hold in the case of quantum computers, when it fails for most other computational models.

    Of course, there is the additional problem that LDP contains, for example, the problem of evaluating the permanent of an “intermediate-scale” (say, 500 x 500) matrix, because this is just a polynomial of constant degree (500^2 = 250,000). In general, an asymptotic argument requires having an asymptotic variable; so “intermediate-scale” should be defined as, say, log(n) instead of some fixed constant.
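    For concreteness: the permanent of an n×n matrix is a polynomial of degree n in its n² entries, and Ryser’s inclusion-exclusion formula evaluates that polynomial as an explicit (exponentially long) sum. A minimal sketch, using a toy 2×2 test matrix of my own choosing:

    ```python
    from itertools import combinations

    def permanent(A):
        """Permanent of a square matrix via Ryser's formula, O(2^n * n^2) time."""
        n = len(A)
        total = 0
        for k in range(n + 1):
            for cols in combinations(range(n), k):
                # Product over rows of the sum of entries in the chosen columns.
                prod = 1
                for row in A:
                    prod *= sum(row[j] for j in cols)
                total += (-1) ** k * prod
        return (-1) ** n * total

    print(permanent([[1, 2], [3, 4]]))  # 1*4 + 2*3 = 10
    ```

    For a fixed n this is a fixed arithmetic expression in the entries; the asymptotic hardness of the permanent only shows up once n is treated as a growing variable, which is exactly the point about needing an asymptotic variable.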
