Talk, be merry, and be rational

Yesterday I wrote a statement on behalf of a Scott Alexander SlateStarCodex/rationalist meetup, which happened last night at MIT (in the same room where I teach my graduate class), and which I’d really wanted to attend but couldn’t.  I figured I’d share the statement here:

I had been looking forward to attending tonight’s MIT SlateStarCodex meetup as I hardly ever look forward to anything. Alas, I’m now stuck in Chicago, with my flight cancelled due to snow, and with all flights for the next day booked up. But instead of continuing to be depressed about it, I’ve decided to be happy that this meetup is even happening at all—that there’s a community of people who can read, let’s say, a hypothetical debate moderator questioning Ben Carson about what it’s like to be a severed half-brain, and simply be amused, instead of silently trying to figure out who benefits from the post and which tribe the writer belongs to. (And yes, I know: the answer is the gray tribe.) And you can find this community anywhere—even in Cambridge, Massachusetts! Look, I spend a lot of time online, just getting more and more upset reading social justice debates that are full of people calling each other douchebags without even being able to state anything in the same galactic supercluster as the other side’s case. And then what gives me hope for humanity is to click over to the slatestarcodex tab, and to see all the hundreds of comments (way more than my blog gets) by people who disagree with each other but who all basically get it, who all have minds that don’t make me despair. And to realize that, when Scott Alexander calls an SSC meetup, he can fill a room just about anywhere … well, at least anywhere I would visit. So talk, be merry, and be rational.

I’m now back in town, and told by people who attended the meetup that it was crowded, disorganized, and great.  And now I’m off to Harvard, to attend the other Scott A.’s talk “How To Ruin A Perfectly Good Randomized Controlled Trial.”


Update (Nov. 24) Scott Alexander’s talk at Harvard last night was one of the finest talks I’ve ever attended. He was introduced to rapturous applause as simply “the best blogger on the Internet,” and as finally an important speaker, in a talk series that had previously wasted everyone’s time with the likes of Steven Pinker and Peter Singer. (Scott demurred that his most notable accomplishment in life was giving the talk at Harvard that he was just now giving.) The actual content, as Scott warned from the outset, was “just” a small subset of a basic statistics course, but Scott brought each point alive with numerous recent examples, from psychiatry, pharmacology, and social sciences, where bad statistics or misinterpretations of statistics were accepted by nearly everyone and used to set policy. (E.g., Alcoholics Anonymous groups that claimed an “over 95%” success rate, because the people who relapsed were kicked out partway through and not counted toward the total.) Most impressively, Scott leapt immediately into the meat, ended after 20 minutes, and then spent the next two hours just taking questions. Scott is publicity-shy, but I hope for others’ sake that video of the talk will eventually make its way online.

Then, after the talk, I had the honor of meeting two fellow Boston-area rationalist bloggers, Kate Donovan and Jesse Galef. Yes, I said “fellow”: for almost a decade, I’ve considered myself on the fringes of the “rationalist movement.” I’d hang out a lot with skeptic/effective-altruist/transhumanist/LessWrong/OvercomingBias people (who are increasingly now SlateStarCodex people), read their blogs, listen and respond to their arguments, answer their CS theory questions. But I was always vaguely uncomfortable identifying myself with any group that even seemed to define itself by how rational it was compared to everyone else (even if the rationalists constantly qualified their self-designation with “aspiring”!). Also, my rationalist friends seemed overly interested in questions like how to prevent malevolent AIs from taking over the world, which I tend to think we lack the tools to make much progress on right now (though, like with many other remote possibilities, I’m happy for some people to work on them and see if they find anything interesting).

So, what changed? Well, in the debates about social justice, public shaming, etc. that have swept across the Internet these past few years, it seems to me that my rationalist friends have proven themselves able to weigh opposing arguments, examine their own shortcomings, resist groupthink and hysteria from both sides, and attack ideas rather than people, in a way that the wider society—and most depressingly to me, the “enlightened, liberal” part of society—has often failed. In a real-world test (“real-world,” in this context, meaning social media…), the rationalists have walked the walk and rationaled the rational, and thus they’ve given me no choice but to stand up and be counted as one of them.

Have a great Thanksgiving, those of you in the US!


Another Update: Dana, Lily, and I had the honor of having Scott Alexander over for dinner tonight. I found this genius of human nature, who took so much flak last year for defending me, to be completely uninterested in discussing anything related to social justice or online shaming. Instead, his gaze was fixed on the eternal: he just wanted to grill me all evening about physics and math and epistemology. Having recently read this Nature News article by Ron Cowen, he kept asking me things like: “you say that in quantum gravity, spacetime itself is supposed to dissolve into some sort of network of qubits. Well then, how does each qubit know which other qubits it’s supposed to be connected to? Are there additional qubits to specify the connectivity pattern? If so, then doesn’t that cause an infinite regress?” I handwaved something about AdS/CFT, where a dynamic spacetime is supposed to emerge from an ordinary quantum theory on a fixed background specified in advance. But I added that, in some sense, he had rediscovered the whole problem of quantum gravity that’s confused everyone for almost a century: if quantum mechanics presupposes a causal structure on the qubits or whatever other objects it talks about, then how do you write down a quantum theory of the causal structures themselves?

I’m sure there’s a lesson in here somewhere about what I should spend my time on.

55 Responses to “Talk, be merry, and be rational”

  1. JonW Says:

    About the hemispherectomy question (“what it is like to be a severed half-brain”): that is amazing. My young son had a hemispherectomy for intractable seizures (he's doing great now, 5 years on), and I had nightmares about this exact scenario–is there *another* person we're ethically responsible for, who lives in my son's now-disconnected hemisphere? (They leave the tissue in, connected to a blood supply, to prevent the remaining hemisphere from moving within the cranium.) At first, any neurosurgeon or neurologist I talked to was unable to understand; they waffled about how my question was really about the soul. (It's not! If we know that a single hemisphere connected to a brain stem can support consciousness–since post-hemispherectomy patients are conscious like the rest of us–then it's natural to ask the same about the disconnected hemisphere.) Or they dismissed it as not something that will affect my son. (I know! It's the (potential) other person in there that I'm worried about.) That hemisphere would not be bereft of all sensory input, as the peripheral visual field from the opposite side is routed through there.

    Eventually at a conference a neurosurgeon explained to me that there were structures responsible for consciousness deeper in the brain stem than the cerebral hemisphere itself that is disconnected, and without these structures the disconnected hemisphere could not experience anything subjectively–and that’s not even taking into account the fact it *does* get beaten up during the surgery, with parts of the temporal lobe actually removed to allow the surgeon access to perform the rest of the disconnection–plus it is severely epileptic in the first place, and it will only get worse after being disconnected from most meaningful input. (An EEG shows only disorganised electrical activity in the disconnected hemisphere.) I’d be lying if I said I still never thought about it though. Thanks for posting the link Scott, I had never encountered anyone before who had had the thought independently.

  2. wolfgang Says:

    If you want even more philosophical issues with split-brain patients, this YouTube clip is about an interesting case:
    http://www.youtube.com/watch?v=PFJPtVRlI64

  3. Raoul Ohio Says:

    You can never be too busy to read up on the plans for an outpost on Pluto:

    http://www.forbes.com/sites/brucedorminey/2015/11/23/will-nasa-ever-send-astronauts-to-pluto/

  4. Rahul Says:

    Dumb question: What’s a rationalist meetup? Is a rationalist different from a rational person, which I assume all of us are to various degrees?

  5. Scott Says:

    Rahul #4: I hope my update answers your question! I’d define it as a community, centered around blogs like LessWrong and SlateStarCodex (but also holding various meatspace events), that’s interested in topics like cognitive biases, skepticism, applied rationality and “lifehacking,” effective altruism, and transhumanism. The community is sort of on the fringes of academia, and is open to anyone, though its membership tends to be drawn very heavily from students and recent graduates of major universities.

  6. Jay Says:

    JonW, Wolfgang,

    Actually, this question is not SA's best. First, I don't know what source he consulted to credit Carson with inventing functional hemispherectomy (Rasmussen and Villemure are usually credited for that). Of course that's a detail, but still…

    More substantially, there's a lot of confusion between split-brain (Wolfgang), hemispherectomy (JonW), and functional hemispherectomy (the procedure detailed by the neurosurgeon in #1).

    *Hemispherectomy, as the name indicates, is the complete removal of one cortical hemisphere. Of course there is no disconnected consciousness in that case (unless one thinks the soul is both immaterial and composed of two parts, each with access to a specific cortical hemisphere).

    *Split-brain is a condition which follows corpus callosotomy. As the name indicates, that procedure severs the corpus callosum, the largest pathway connecting the two cortical hemispheres. In this case both hemispheres keep access both to sensory input and to subcortical nuclei (including the reticular activating system that regulates wakefulness), and we can then demonstrate what looks like two streams of consciousness, as discussed in Wolfgang #2.

    *Functional hemispherectomy, as the name does not indicate, is (for the affected hemicortex) both the removal of a large portion of the parietal and temporal lobes and the cutting of the white-matter tracts that connect what remains of the frontal and occipital lobes with the rest of the central nervous system. It is very unlikely that an organized stream of consciousness could survive this. But even if it could, there is every reason to think it would be at best a minimally conscious state (no access to the ascending reticular activating system), with no working memory (no access to the hippocampus), asomatognosic (no awareness of body parts) and anosognosic (unable to perceive a functional deficit) because of the parietal damage, and with completely flat mood (because it is deprived of most sources of neuromodulators).

    One could describe the latter as a little Buddha in a deep meditative state rather than comatose, but the point is that there is no indication of a hidden, suffering awareness, which is what one could fear from the question as SA asked it.

  7. Rand Says:

    So I’m reading the update as a “Hey Scott, I’d like to join the mainland”? (http://slatestarcodex.com/blog_images/ramap.html)

    I still hope you're shooting for Mathematics country, rather than Social Justice land. (Though there you get to hang out with Kate and Rob Bensinger, Miri Mogilevsky and possibly Chana Messinger, who are all cool people.) I mean, I still want somebody to give me arguments that P = BPP. (And apparently Babai believes that Graph Isomorphism isn't in P?)

  8. Scott Says:

    Rand #7: Really? That's all you want, is the arguments for believing P=BPP? The central argument has always been this: suppose there exists a really good pseudorandom number generator, which, for any fixed k, stretches an O(log n)-bit seed into an n-bit output, in such a way that no n^k-time algorithm can distinguish the result from a truly random n-bit string with non-negligible bias. Then clearly P=BPP, since looping over all the possible O(log n)-bit seeds takes only polynomial time, and by assumption, the average result of feeding your randomized algorithm the PRNG's output strings will be close to the average result if you had fed your algorithm truly random n-bit strings like you were supposed to.

    So then, all that remains is to give arguments for why such good pseudorandom generators should exist. Here, work in the 80s and 90s showed that the requisite PRGs can be constructed from any sufficiently hard problem (more precisely: any uniformly-computable function with a strong circuit lower bound). For example, Impagliazzo and Wigderson proved that, if there's a function computable in deterministic exponential time that also requires nonuniform circuits of size 2^Ω(n), then the requisite PRGs exist and hence P=BPP. But no one seriously believes that, say, simulating a Turing machine for 2^n time steps could be done by subexponential-size circuits.

    So, that’s the main reason for believing P=BPP.
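
    In code, a minimal sketch of that seed-enumeration loop (with prg and bpp_algo as hypothetical placeholders for the assumed PRG and the given randomized algorithm, not anything from an actual library):

    ```python
    from itertools import product

    def derandomize(bpp_algo, prg, x, n, seed_len):
        """Deterministically simulate a BPP algorithm by majority vote over
        the outputs of the assumed PRG on all 2**seed_len possible seeds.
        If seed_len = O(log n), there are only poly(n) seeds to loop over."""
        seeds = ["".join(bits) for bits in product("01", repeat=seed_len)]
        accepts = 0
        for seed in seeds:
            r = prg(seed, n)       # assumed: an n-bit string that fools bpp_algo
            if bpp_algo(x, r):     # run the BPP algorithm with r as its "coins"
                accepts += 1
        return 2 * accepts > len(seeds)   # accept iff a majority of seeds accept
    ```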

    Where does Babai say he believes graph isomorphism is not in P? Is it in the video (which I still haven’t watched)? If not, can you give me a link?

  9. Rand Says:

    That’s not all I want, but it’s of a class with the other things I come to this blog for. (Generally speaking, intuitions on interesting open problems in Complexity Theory and a place to turn when somebody claims a major result in theoretical CS.)

    I don't quite follow your argument. I know that a good pseudorandom number generator would show that P=BPP (I think that's shown in detail in the Arora-Barak book?) but my intuition certainly says that such a PRG shouldn't exist. (I assume no unconditionally indistinguishable PRG can exist – stretching mere log(n)-bit seeds makes that trivially impossible – and that the computational complexity is the issue here.)

    You’re saying that the “function computable in deterministic exponential time that also requires nonuniform circuits of size 2^Ω(n)” *must* exist?

    Claim about Babai’s view on GI \in P was posted on Reddit here: https://www.reddit.com/r/math/comments/3sdixw/babais_breakthrough_on_graph_isomorphism/
    (I can't vouch for its accuracy, but I have no reason not to trust the poster.)

  10. Scott Says:

    Rand #9: No, I didn't say such a function “must” exist, but it would seem like a crazy world if it didn't. After all, we already know the function requires 2^n time to compute by any uniform algorithm. So, all you need to believe is that allowing different algorithms, depending on the input length, isn't suddenly going to give you some subexponential way to simulate a Turing machine for an exponential number of steps. As I said, if you accept that, then [IW] showed that you have a hard function that you can use to construct the requisite PRG. (Certainly if you had a strong cryptographic one-way function, that would suffice; [IW]'s contribution was to show that a much weaker assumption suffices as well.)

    Regarding GI, I’d prefer a direct quote from Babai. E.g., maybe he only said that his particular approach hits a barrier at quasipolynomial? (It might also depend on exactly when you ask him… 🙂 ) In any case, based on CS theory’s experience with primality and other problems, I suspect that GI is in P, but not with any great knowledge or conviction; it’s certainly plausible that quasipolynomial would be the truth.

  11. Jay Says:

    >But no one seriously believes that, say, simulating a Turing machine for 2^n time steps could be done by subexponential-size circuits.

    I don't get this one. You're talking about *non-uniform* circuits, i.e. circuits that can take superpolynomial time to construct, right? Why would it be surprising that spending exponential time (to construct polynomial-size circuits) allows matching functions that take exponential time to compute?

  12. JonW Says:

    Jay #6: not sure if you thought that *I* was confused, but in fact, contra your description, a functional hemispherectomy is a kind of hemispherectomy in the everyday language of neurosurgeons. (The variety you refer to as simply "hemispherectomy" is usually called "anatomical hemispherectomy" to distinguish it from the functional variant–which in turn can be subdivided into various surgical techniques, not all following the same path towards disconnection.) You're correct that it is completely wrong to credit Carson with inventing this surgery, though many early pediatric patients had the surgery in his hands (I've met several). And you're correct that the discussion is not at a very high level–my amazement was that I had found people who had had the same worry (though whether it occurred to them through the surgery of a loved one or as an abstract philosophical game is not clear).

  13. Rahul Says:

    Scott #5:

    I've never gotten the rationalist movement. Aren't most of the generally good blogs we read, e.g. yours or Andrew Gelman's, essentially "rationalist"?

    E.g., the same sort of statistical debunking that the other Scott A. does also happens on a lot of decently good statistics blogs, like Gelman's.

    My point is, I can grok tags like TCS or CS or Statistics or Genetics, but "rationalist" seems so broad. Doesn't it subsume all the previous tags & more, at least the good ones?

    Or do you mean "rationalist" is an epistemological movement, in the sense that it's trying to analyze and deconstruct our cognition, biases, etc., i.e. the problem of what makes us rational or not so rational, and how we can change our cognitive processes?

    Maybe I still need to read more to understand the rationalist movement. But to me it comes out as cultish and sort of puzzling. It’s like “hey we are “rational” and hence better than the rest of you” whereas in reality there’s a lot of people out there who are indeed rational but just advertise whatever they do as “TCS” or something instead of “rationality” per se.

  14. Scott Says:

    Rahul #13:

      Maybe I still need to read more to understand the rationalist movement. But to me it comes out as cultish and sort of puzzling. It’s like “hey we are “rational” and hence better than the rest of you” whereas in reality there’s a lot of people out there who are indeed rational but just advertise whatever they do as “TCS” or something instead of “rationality” per se.

    Well yes, that was exactly my thinking for a decade or so! All "rationalism" adds to the mix is explicitly trying to build a community around the nerdy people who are interested in the topics I mentioned earlier (cognitive biases, effective altruism, transhumanism…), using a combination of blogs, summer programs, group houses, and meetups in places like the Bay Area and Boston. What changed in the last year or two is that I've been fairly impressed by this community—not by the abstract idea of one, but by the actual community that exists now. (Similar to my evolution regarding Wikipedia and MathOverflow: they weren't obviously good ideas, but damned if they didn't work…)

    As for cultishness … well, Eliezer seemed to deliberately cultivate a guru persona in many of his writings (back when he was blogging), writings that are nevertheless great. But I don’t get that vibe at all from (say) Scott Alexander’s blog, or from the real-life meetups I attended. It’s more like a dorm room bull session vibe—just “kids” (can I call them that? I’m 34 🙂 ) talking about interesting things, more willing than most to change their minds if you give them a good argument.

  15. Scott Says:

    Jay #11: It's a fair question, and obviously there's no proof it's impossible (or else P=BPP would be a theorem rather than a conjecture!). But we have very little experience, in complexity theory, with nonuniformity dramatically changing the status of "natural" computational problems. Yes, you're right, you can invest exponential (or even more) precomputation time in constructing your polynomial-size circuit. But then there's a fearsome bottleneck: that polynomial-size circuit needs to decide the accepting or rejecting of exponentially many exponential-time Turing machines. Intuitively, this seems to require either exponentially many bits (to cache the behaviors of exponentially many machines), or else exponential time (to simulate the machines from scratch)—but in this case you have neither.

    Note also that, by the interactive proof results, if EXP is in P/poly then EXP=MA, which is almost (not quite) as massive a complexity class collapse as you can have without contradicting a known hierarchy theorem.

  16. Rahul Says:

    The foodie in me can’t help asking what was for dinner?! 🙂

  17. Rahul Says:

    Is the dinner-discussion statement "in quantum gravity, spacetime itself is supposed to dissolve into some sort of network of qubits," or any of the other points in the discussion (e.g. "a dynamic spacetime is supposed to emerge from an ordinary quantum theory"), falsifiable?

    i.e. Is there empirically testable content in this sort of conjecture?

  18. Scott Says:

    Rahul #16: Salmon, broccoli, rolls, + chicken nuggets for Lily.

    Rahul #17: Yes, I think these ideas are testable in principle. For example, if you could engineer a black hole to extreme precision (knowing the complete quantum state of the infalling matter), then wait ~10^67 years for the hole to evaporate, collecting all the outgoing Hawking radiation and routing it into your quantum computer for analysis, then it's a prediction of most quantum theories of gravity that you'd observe the radiation to encode the state of the infalling matter (in highly scrambled form), and the precise way in which the state was scrambled might let you differentiate one theory of pre-spacetime qubits at the Planck scale from another one. (Note, however, that some experiments would also require jumping into the black hole as a second step, in which case you couldn't communicate the results to anyone else who didn't jump into the hole with you.)
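
    A rough sanity check on the ~10^67-year figure, assuming the textbook evaporation-time formula t ≈ 5120πG²M³/(ħc⁴) for a solar-mass hole (ignoring greybody factors and extra particle species):

    ```python
    import math

    G, hbar, c = 6.674e-11, 1.055e-34, 2.998e8   # SI units
    M_sun = 1.989e30                              # solar mass in kg
    year = 3.156e7                                # seconds per year

    t_evap = 5120 * math.pi * G**2 * M_sun**3 / (hbar * c**4)
    print(f"{t_evap / year:.1e} years")           # roughly 2e67 years
    ```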

  19. Rahul Says:

    Scott #18:

    Thanks. So it sounds like these theories are pure speculation.

    Unlike most of core physics, which has empirical tests, or even TCS, where rigorous logic provides a way to "prove" a conjecture like P=NP, the sort of hypotheses Mark Van Raamsdonk is making here are impossible to verify in any practical or logical sense (ignoring experiments that need one to wait for 10^67 years)?

    PS. Cosmology operates on huge time scales & QM on very tiny length scales, but they both provide abundant indirect tests for most of their theories: tests that are readily accessible on human time scales.

  20. Rand Says:

      Note also that, by the interactive proof results, if EXP is in P/poly then EXP=MA, which is almost (not quite) as massive a complexity class collapse as you can have without contradicting a known hierarchy theorem.

    At which point Graph Isomorphism \in coMA becomes trivial and you concede to Babai that Graph Isomorphism probably isn’t in P?

    (Regarding the massiveness of the collapse: Do we know that NEXP can’t collapse to NP as well?)

  21. Rand Says:

    (Correction to above: We know that GI \in coAM. GI \in coMA would be a stronger result, unless I’m mistaken. [The rest should still work.])

  22. James Cross Says:

    It's interesting to combine the rationalist movement and quantum entanglement in the same discussion. Isn't what QM tells us that the world is quasi-rational at best, or perhaps not rational at all?

    My own rationalist side has led me to buy into the GiveWell approach and I recently committed to a regular monthly contribution. I highly encourage others to take a look at it.

    Regarding the severed hemisphere I wrote something related to this a while back.

    http://broadspeculations.com/2015/08/30/blindsight/

    The severed hemisphere is somewhat analogous to someone suffering from blindsight.

    It seems consciousness does require structures deep in the brainstem to be working; however, that doesn't seem to be sufficient to produce consciousness. In the case of blindsight the damaged part of the cortex is processing visual information but it doesn't reach consciousness. I would guess the severed hemisphere is too damaged to be conscious but that is mostly a guess.

  23. Jay Says:

    >polynomial-size circuit needs to decide the accepting or rejecting of exponentially many exponential-time Turing machines.

    Really? Sure for each n there are exponentially many machines, one for each instance, but can we take for granted that a significant proportion of these machines would take exponential time to compute?

  24. Scott Says:

    Rahul #19: Three points.

    (1) Again and again in the history of science, people took ideas that were "testable in theory but obviously never testable in practice," and then found incredibly clever ways to test them in practice after all. Examples: the existence of atoms, the existence of quarks, the reality of the Big Bang, etc. That's why to me, it makes all the difference in the world whether people can explain how you'd test something given a mere 10^67 years and alien-civilization-level control over a galaxy.

    “So you’re saying there’s a one in a million chance you’ll go out with me? But that means … there’s a chance!” 🙂

    I worry more about ideas that people still wouldn’t know how to test, even if you gave them a googol years, a particle accelerator the size of the universe, and whatever else.

    (2) Part of the appeal of physics has always been that it lets you make unbelievably bold-seeming extrapolations. (E.g., Archimedes’ “give me a lever long enough and I will move the earth,” which was entirely correct in the sense he meant it.)

    A modern example: No one has ever been near a black hole, let alone waited the hyper-mega-astronomical amount of time it would take to observe its Hawking radiation. Yet physicists “across the political spectrum” (string theorists, anti-string-theorists, and everyone in between) all agree that the Hawking radiation exists, and that we know its exact temperature and spectrum, and exactly how long a black hole of a given mass will take to radiate away (up to low-order corrections).

    What makes them so confident is that Hawking’s prediction relied only on basic principles of GR and quantum field theory that are extremely well-understood, in a regime outside the event horizon where there’s no reason for those principles not to apply.
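
    To put a number on "exact temperature": the standard formula T = ħc³/(8πGMk_B) gives, for a solar-mass black hole, a Hawking temperature around 6×10^-8 K, far colder than the CMB. A short check of the arithmetic:

    ```python
    import math

    G, hbar, c, k_B = 6.674e-11, 1.055e-34, 2.998e8, 1.381e-23   # SI units
    M_sun = 1.989e30                                             # kg

    T_hawking = hbar * c**3 / (8 * math.pi * G * M_sun * k_B)
    print(f"{T_hawking:.1e} K")                                  # ~6.2e-8 K
    ```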

    And the prediction of Hawking radiation actually had a practical consequence! It was why physicists were confident that, even if the LHC produced miniature black holes, those black holes wouldn’t gobble up the earth but would radiate harmlessly away.

    So, do you think physicists are right about this? Do you agree that Hawking radiation is real? If so, then it seems to me you’ve already agreed to the basic principle, and whether we might also someday gain an equal confidence in theories about the exact quantum state of the Hawking radiation, etc. is just haggling over details. 🙂

    (3) There actually is a part of your worry that I agree with. Namely, I believe that when you have neither actual experiments nor rigorous math to guide you, you’re constantly in danger of lapsing into nonsense, and constantly need to be on guard against it happening. On the other hand, history shows that it’s possible to make progress with experiments but no rigorous math, and also to make progress with rigorous math but no experiments.

    It’s not a coincidence that I got (peripherally) involved in this quantum gravity stuff precisely when it became closely connected to quantum information, which is rigorous math that I understand. (Which is emphatically not to say that nothing rigorous happened until a few years ago! But I didn’t understand the relevant rigorous theories, and therefore had nothing to contribute.)

  25. Scott Says:

    Rand #20: I actually don’t know a relation either way between the question of whether EXP=MA, and the question of whether GI is in P.

    NEXP=NP is ruled out by the Nondeterministic Time Hierarchy Theorem (e.g., NP ⊆ NTIME(2^n) ⊊ NTIME(2^(2n)) ⊆ NEXP, with the middle inclusion strict because 2^(n+1) = o(2^(2n))).

  26. Scott Says:

    Jay #23:

      Really? Sure for each n there are exponentially many machines, one for each instance, but can we take for granted that a significant proportion of these machines would take exponential time to compute?

    The proportion of the machines for which it's actually a hard problem could be vanishingly small, but as long as the number of such machines itself grows exponentially (and there seems to be no good reason why it shouldn't), that's enough. E.g., it would suffice if, out of the 2^n machines, 1.001^n of them were effectively unpredictable without their behavior being hardwired into the circuit.

  27. Jay Says:

    Ok, thanks for the instructive discussion.

  28. fred Says:

    Scott #24
    “No one has ever been near a black hole”.

    Maybe a dumb question, but do we now have indisputable evidence that black holes do exist as described by the theory?

    I know we've observed the effects of bodies at the center of the galaxy with trajectories and masses that make them candidates for being black holes, but my understanding is that at any distance beyond the event horizon, the gravitational field is just what it would be if the object weren't "collapsed".

  29. Michael Musson Says:

    Speaking of hierarchy collapse, I have often wondered if some of the proofs might require steps that cause a quasi-indeterminacy due to the need to squash exponentials and worse. For example, a proof for P vs. NP that requires jumping into a black hole at some step to see the outcome, so that the answer is definitely there but inaccessible to the outside universe.

  30. Scott Says:

    fred #28: What astronomers can observe are the accretion disks around objects such that, if they're NOT black holes, then general relativity must be wildly wrong. But GR predicts exactly on the money, for example, the loss of energy due to gravitational waves when two neutron stars orbit each other, so there's good reason to trust it even in situations of high gravity.

    Sometimes people get into weird debates over terminology — e.g., if it turned out people were massively wrong about the black hole interior for quantum gravity reasons (say, there was a firewall at the event horizon), would the objects still be “black holes”? I would say that even then, we could be extremely confident that the objects would BEHAVE like black holes from an external observer’s standpoint, in the sense that the surrounding spacetime is incredibly curved, stuff goes in, nothing but Hawking radiation comes out, etc. So it would still seem fair to call them “black holes,” and just say that we’d been wrong about their interiors.

  31. Rahul Says:

    Scott #24:

    Those are great arguments.

    I don't really have a good response to those, except to sheepishly admit that now I'm even more confused as to how to tell a genuinely novel physics idea apart from sheer crackpottery.

    In the absence of either rigorous logic or empirical falsifiability, what are we really left with as the gold standard of a scientific theory?

    PS. Is Van Raamsdonk's theory really "derivable" from the known knowns in the same spirit in which Hawking derived Hawking radiation from well-tested GR and quantum field theory?

    I.e., independent empirical verification is one thing, but does coming up with Hawking radiation based on accepted, empirically tested theories require any leaps of faith? Is there a logical gap or speculation involved?

    Somehow it seems to me that statements like "spacetime itself is supposed to dissolve into some sort of network of qubits" are far more fantastic and speculative than Hawking coming up with his radiation argument from what else was known in the 1970s. That sounded more like a natural consequence of what we knew, just one that needed Hawking's genius to see it. In some sense the laws of logic predicted it.

    Do Van Raamsdonk's ideas really evolve naturally from the body of knowledge that we do know? Or are they more speculative?

  32. Douglas Knight Says:

      And the prediction of Hawking radiation actually had a practical consequence! It was why physicists were confident that, even if the LHC produced miniature black holes, those black holes wouldn’t gobble up the earth but would radiate harmlessly away.

    Well, it shouldn’t have. As discussed on this blog, miniature black holes are too small to eat anything significant, even if they don’t evaporate.

    And I don’t think your claim is historically accurate. My memory is that people emphasized the empirical argument that cosmic rays have more energy than LHC and they don’t do any harm.

  33. Scott Says:

    Douglas #32: OK, you’re right of course. While Hawking radiation is one conclusive reason why mini black holes wouldn’t be a danger, the facts that they would’ve already arrived in cosmic rays and swallowed the earth if they were, and that they would take trillions of years to reach a reasonable mass, are additional conclusive reasons. Here is a pretty good article that gives all three arguments.

  34. Scott Says:

    Rahul #31: The current ideas about spacetime emerging out of networks of qubits are speculative, not established science—I don’t know anyone who disputes that. But they’re not crackpot speculations. They grew out of things like AdS/CFT, which in turn grew out of what we know about GR and quantum field theory. And I think the general idea that the smooth spacetime manifold of GR needs to be replaced by SOMETHING when you probe it at the Planck scale is extremely solid.

    The day there’s a mechanical procedure for distinguishing good scientific ideas from bad ones, is probably the day we can all retire and be replaced by AIs. 🙂 But some obvious criteria include whether the proponents can clearly explain the established theories that they’re building on, how they deal with criticism and problems with their theory, and whether they can at least explain what it would take to test their theory.

    I don’t know van Raamsdonk’s ideas in particular well enough to comment, but in general, quantum gravity research can (sometimes) be pretty rigorous in determining what would follow from given starting assumptions, but is rarely free from speculation in the assumptions themselves.

  35. Scott Says:

    Michael #29: If it were possible to see why P≠NP, but only by jumping into a black hole, so that I couldn’t tell anyone else and would die right afterward, I suppose I’d do it, but only if I’d lived a long and satisfying life beforehand.

  36. fred Says:

    Scott #35
    I would only jump into a black hole if I could see Chaitin’s constant on the other side!

  37. Scott Says:

    fred #36: Why? It’s just going to look to you like a meaningless string of 1s and 0s. (Unless you can violate the Extended Church-Turing Thesis, but in that case, maybe you could compute Omega for yourself anyway.)

  38. fred Says:

    Scott #37,
    Well, after reading the wiki
    https://en.wikipedia.org/wiki/Chaitin's_constant#Relationship_to_the_halting_problem
    I was under the impression that knowing (one) Chaitin's constant is equivalent to solving the halting problem (although not very efficiently), and that therefore I'd be able to get the answer to any math conjecture or NP-hard problem?

    I’m just fascinated by the idea that a sequence of 1s and 0s that has such a compact definition can be so powerful (and therefore uncomputable).

  39. Scott Says:

    fred #38: Sorry, edited my comment to clarify. It’s true that knowing Omega would make the halting problem “computable,” but only because it would tell you when you could stop dovetailing over all possible Turing machines and conclude that your particular machine must run forever. In other words: extracting the relevant information, for an n-state Turing machine, would in general take something like BusyBeaver(n) time. So it would still be “uncomputable for all practical purposes.”
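
    A conceptual sketch of that dovetailing procedure, with `enumerate_programs` and `runs_within` as hypothetical stand-ins for a prefix-free universal machine (the loop would, as noted above, typically need on the order of BusyBeaver(n) stages before it terminates):

    ```python
    from fractions import Fraction
    from itertools import islice

    def halts(target, omega_bits, enumerate_programs, runs_within):
        """Given the first n bits of Omega, decide whether `target` (a program
        of length <= n) halts.  `enumerate_programs()` yields every program,
        and `runs_within(p, t)` reports whether program p halts within t steps."""
        n = len(omega_bits)
        omega_lower = Fraction(int(omega_bits, 2), 2 ** n)       # 0.b1 b2 ... bn
        t = 0
        while True:
            t += 1
            # Stage t of the dovetail: run the first t programs for t steps each.
            halted = [p for p in islice(enumerate_programs(), t) if runs_within(p, t)]
            mass = sum(Fraction(1, 2 ** len(p)) for p in halted)
            if mass >= omega_lower:
                # Any still-unhalted program of length <= n would contribute at
                # least 2**(-n) more, pushing the total past Omega -- so every
                # length-<= n program that will ever halt has already halted.
                return target in halted
    ```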

  40. asdf Says:

    The Mark Van Raamsdonk video "Gravity and entanglement" linked from the Cowen article is really good, and not too technical for people who only know some basic QM:

    https://www.youtube.com/watch?v=WQU9yOtWrQk

    The Brian Swingle one is a little harder to follow and I've only watched a few minutes of it, but I'll try to watch the rest when I get a chance. ( https://www.youtube.com/watch?v=wLOwjfblJGI )

  41. Tim May Says:

    I’m all for rationality.

    But my experience has been that people who argue claiming that they are “rationalists” are either Marxists or Libertarians. (I’m the latter, have been since 1967 or so, and know a lot of the Transhumanist/Less Wrong/Extropians/etc. “milieu.” Doesn’t change my opinion.)

    I have pretty strong political and social views. But Science was almost my first love, and remains so.

    I loved Scott’s book. His apologies for being a male to the SJWs not so much.

    Lastly, though I remain skeptical of the "everything is a vibrating string" stuff, the newer results loosely described as "ER = EPR" are really interesting. As someone said, this is not just crackpot stuff, nor is it woo-woo stuff. There seems to be something there about entropy, entanglement, causality, horizons, and quantum mechanics.

    Someone mentioned that QM seems to show reality to be _less_ rational, even non-rational, or irrational. Scott takes one approach to showing that QM is the only thing that actually makes reality rational. And examples abound of how Newtonian/Galilean/etc. rationality would not allow the universe to actually work.

    On this day of the 100th anniversary of Einstein's presentation of GR, when some in the press are calling it the greatest and most beautiful theory of all, I am more persuaded than ever that quantum theory is the overall winner. Shouldn't call it "the winner," but it seems to be more basic to reality than anything else.

    Maybe it’s the difference between the L1 norm and the L2 norm. Scott has the best discussion of this I’ve seen.

  42. luca turin Says:

    “I’m sure there’s a lesson in here somewhere about what I should spend my time on.”

    Funny, I was talking to a theoretical physicist colleague just yesterday who reveres you and ardently hopes you will turn your 16-inch naval guns to something “other than TCS and complexity”.

  43. Aula Says:

    Scott #34: "I think the general idea that the smooth spacetime manifold of GR needs to be replaced by SOMETHING when you probe it at the Planck scale is extremely solid."

    I don’t doubt that’s true, but I don’t think it’s a very good way to understand the issue. I would rather say that since there is a completely classical (ie. non-quantum) argument that special relativity is fundamentally incompatible with gravity (meaning every-day-observable effects of gravity, not something specific to general relativity) and since ordinary quantum field theory is entirely dependent on SR, it’s obvious that any possible theory of quantum gravity can’t be based on QFT. And I really do mean any possible theory of QG; any theory of QG that matches Newtonian gravity in the non-relativistic non-quantum limit, no matter how badly it contradicts GR in the relativistic non-quantum limit, would be an enormous breakthrough.

  44. Chris Blake Says:

    When I read Scott Alexander’s piece on hypothetical hardball questions for the next Republican debate, I found it funny to interpret the entire piece as an elaborate setup for the final question he proposed asking of Donald Trump:

    “My question for you is: WHY DIDN’T YOU CALL IT THE TRUMP CARD?!?!!!!111111111asdfdf”

    That was a genuine laugh out loud moment for me.

  45. Raoul Ohio Says:

    luca turin #42:

    Interesting suggestion. Scott perhaps knows as much about TCS + Complexity as anyone, and is not likely to be looking for a new field to take up. Nevertheless, I can guess how the unnamed physicist was thinking, perhaps something like the following:

    TCS+C is a fascinating extrapolation of basic concepts, following the path of discovering “What’s out there”. The intellectual excitement is palpable.

    You can say the same thing about TLC (Topology of Large Cardinals), ST (String Theory), MWI, etc. TLC and TCS+C have in common starting with key useful facts (basic functional analysis; analysis of algorithms) and going off into the ozone. There is absolutely a chance that something useful will be found in the ozone. In contrast, ST has a remote chance of ever being worthwhile; MWI has no chance.

    Current students might wonder: can a really smart person find happiness in boring old practical science, full of real-world complications? Or is all the excitement in speculative stuff, where anything you can dream up goes?

    It is possible. A great example is Subrahmanyan Chandrasekhar. Many times he studied up on a major topic of the day, figured out much of what everyone was trying to do, organized and summarized the material in a book, and moved on to something new. Each time the entire field took a big jump forward; now grad students could read his book and be prepared for the next generation of problems. See the obituary in "Physics Today", or https://en.wikipedia.org/wiki/Subrahmanyan_Chandrasekhar

  46. AdamT Says:

    There is something very self-referential about Scott's question re: qubits and infinite regress, and about your paraphrase of it. It's kind of like asking how information theory can describe information theory, and it makes me think of formal mathematical systems being insufficient to prove their own consistency, etc. You need a more powerful formal system or a higher-order one. Maybe we need a higher-order quantum description… ??

  47. Tim May Says:

    luca turin #42,

    About hoping that Scott turns his guns towards physics over complexity theory, Lenny Susskind said at his recent Stanford lecture that his main focus in the big program is more complexity theory than some of the other areas.

    (I was able to catch lectures by Polchinski, Preskill, Van Raamsdonk, and Susskind at Stanford in the last couple of years. What a great place and time I find myself in.)

    A lot of the stuff about tensor networks and “cuts” in sketches of Escher-like diagrams is beyond me, but there seems to be something potentially there that is not just woo-woo.

    Exciting times. Complexity theory seems unrelated, except as Scott has hinted at in his book, and utterly impractical (10^89 years, come on!), but then entropy looked similarly difficult to actually consider a century or so ago.

  48. luca turin Says:

    Tim May #47

    Exciting indeed! You lucky man. It pains me to think that I will never properly understand this stuff (I am 62 and not a physicist) but even the simplified versions convey a strong feeling of something both beautiful and fundamental. I’ll keep watching those guys on YouTube…..

  49. Rahul Says:

    If Van Raamsdonk's work is not wacky crackpottery, any thoughts on why multiple scientific journals turned it down? These are respectable journals, right? So I'm sure the peer reviews were by knowledgeable people.

    I mean, sure, lots of turning down happens in the routine process of scientific publishing, but this sort of work seems very high impact, just so long as it is legit.

    I mean it doesn't even have to be right; it just has to meet the bar of legitimate scientific discourse to get published, when the potential impact is so revolutionary.

  50. James Cross Says:

    #41 Tim

    Actually it just means we need irrationality to make the world work.

  51. Rahul Says:

    James #50

    Great point. A world based on Rationality alone might be a scary place.

    Aren’t emotions like sentimentality, attachment etc. at odds with pure rational decision making?

  52. Tim May Says:

    #50 James Cross, I assume this is a joke, but of course QM is anything but irrational. Not “local realism,” of course, but this is not the same thing as not rational or irrational.

    Somewhere along the line, years after I was in college, I “internalized” the point that local realism (for instance, a box either has X in it or does not have X in it, be it a marble or a cat or a spin state) works at human scales and is deemed to be part of our “rationality.” But it manifestly DOES NOT WORK at smaller scales (nor at any scales, actually, but at large scales or long times, decoherence dominates).

    And when one realizes that it is this QM-type of non-local realism that makes for stable atoms, to name but one of a hundred such examples, one realizes that QM is really more "rational" than the classical world is.

    Another way to put this is that there really are only two choices for the "norm" of reality: the L1 "Manhattan geometry" norm or the L2 ("shortcut through the diagonal") norm. (There are reasons why other norms can't work. Scott's book alludes to them, and real mathematicians no doubt can elaborate.) Both Lucien Hardy and Scott have made this point poignantly (and pointedly), and maybe it's buried somewhere in John von Neumann's papers, but this seems to be key. So, L1 is classical physics, L2 is quantum physics. There are no other "physical realities." A really, really bright mathematician/physicist could have (perhaps) "predicted" a lot of quantum physics 200 years ago, especially with just Young's light-diffraction results. But the best we had, Newton, Lagrange, Laplace, Maxwell, Einstein, etc., did not.

    The revolution of QM made this much clearer.
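
    A tiny numerical illustration of the L1-vs-L2 point, assuming nothing beyond standard numpy: a stochastic matrix preserves the 1-norm of a probability vector, while a unitary preserves the 2-norm of an amplitude vector.

    ```python
    import numpy as np

    p = np.array([0.25, 0.75])               # classical probabilities (L1 world)
    S = np.array([[0.9, 0.2],
                  [0.1, 0.8]])               # columns sum to 1: a stochastic map
    print(np.sum(S @ p))                     # ~1.0 -- the 1-norm is preserved

    a = np.array([1, 1j]) / np.sqrt(2)       # quantum amplitudes (L2 world)
    H = np.array([[1, 1],
                  [1, -1]]) / np.sqrt(2)     # the Hadamard gate, a unitary
    print(np.linalg.norm(H @ a))             # ~1.0 -- the 2-norm is preserved
    ```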

    And, in my own opinion, the "second revolution" of Bell, quantum computing, and entanglement, which was largely a realization of implications already laid out by around 1930, has led to all this cool new stuff. Some say it was already known way back in the 1930s, but I think it took a while for people to think about the implications. And even probably-wrong ideas like the AMPS "firewall" theory have served to trigger this huge flurry of papers and theories. Oftentimes a "wrong" theory stimulates fresh analysis. Einstein, a critic of QM, was a major contributor just through his ER and EPR papers, two separate analyses.

    Lenny Susskind had a hilarious line at his Stanford lecture several weeks ago. About ER and EPR, “they didn’t call him Einstein for nothing.” The whole lecture hall erupted with laughter.

  53. Douglas Knight Says:

    Scott 33,
    A few comments on that article and then a general complaint about the whole discussion.

    I think it is a serious problem with the article that it suggests that the results of LHC would be stationary with respect to the Earth.

    The article mentions the rate of evaporation in 4 macroscopic dimensions. That is a very weird detail to throw in. We obviously don't have 4 macroscopic dimensions. And if we're going to talk about extra dimensions at all, why 1 extra, rather than 6? But the situation with 10 macroscopic dimensions is much worse, both for Hawking radiation and for the rate of growth of a black hole. Probably not reassuring. [Maybe the idea is that the evaporation time is so small that the extra dimension is macroscopic on that scale?]

    I also have a general objection to this whole discussion, including my original comment. Why are we talking about micro black holes at all? Because someone once suggested that they might be a problem? If we are to seriously engage with the possibility of danger from LHC, we should generalize from micro black holes to all exotic new physics. The argument about cosmic rays is robust to this generalization, while the other two arguments are specific to black holes.

    Thus if we are going to talk about micro black holes, we should not pretend that the LHC is the reason, and so we should not restrict to LHC-sized ones. But then the argument about the rate of growth, aside from being much more elementary, has the advantage of applying to astronomically larger black holes than those that would evaporate.

  54. Anonymous Says:

    #50 and #51: You might find Julia Galef's talk about this interesting, especially the part around the 30-minute mark.

  55. James Cross Says:

    #54

    My comment isn’t based so much on rationality vs. emotions.

    It is based more on the limits of rationality: that ultimately, at bottom, the world will never be completely understood. There is something that will always escape the grasp of our rational mind.

    It is the fundamental idea of existentialism.