Me interviewed by John Horgan (the author of “The End of Science”)

You can read it here.

It’s long (~12,000 words).  Rather than listing what this interview covers, it would be easier to list what it doesn’t cover.  (My favorite soda flavors?)

If you read this blog, much of what I say there will be old hat, but some of it will be new.  I predict that you’ll enjoy the interview iff you enjoy the blog.  Comments welcome.

115 Responses to “Me interviewed by John Horgan (the author of “The End of Science”)”

  1. For Your Reading Pleasure | Not Even Wrong Says:

    […] John Horgan has a wonderful, very long, interview with Scott Aaronson. Highly recommended as a way to avoid work and learn all sorts of interesting things from and about Scott, whose blog you should be reading anyway. If you want to discuss this, you likely can do so with the man himself here. […]

  2. Peter Allenbach Says:

    You say Fermat could have been unsolvable before Wiles. Is this true? See Beauty of Doing Mathematics, Serge Lang, page 54.

    Great blog, btw.

  3. Scott Says:

    Peter #2: Well, in some sense we now know that it “couldn’t have been” unsolvable, because Wiles solved it! All I meant was that we had no theorem telling us there must be either a proof or a disproof of FLT, until Wiles came along and produced the proof. As a practical matter, anyone knowledgeable would presumably have bet at strong odds that FLT was both true and provable—CERTAINLY after Frey’s and Ribet’s work in the 1980s.

  4. Kevin Stangl Says:

    This is not relevant at all to the current discussion… but whatever ended up happening with the Australian advertisement that plagiarized your lecture to sell printers? Did you end up bringing a lawsuit?

    Also, I love your blog and am currently procrastinating on my machine learning homework by reading these notes:
    http://www.scottaaronson.com/democritus/lec9.html

  5. Sniffnoy Says:

    I have to wonder if your answer to “Do you believe in the Singularity?” was a bit misaimed. Like, have most of the people reading this heard of Eliezer Yudkowsky or I. J. Good or the intelligence explosion hypothesis? I suspect that Ray Kurzweil and the accelerating change hypothesis are probably more familiar to most readers, and that they’re not going to know what you’re talking about when you talk about the “Singularity community” being “weirdo nerd cult that worships a high-school dropout and his Harry Potter fanfiction”.

    Or maybe Eliezer Yudkowsky is more well-known now than I realize, I don’t know. (Though I’d bet that if they do know who he is, most readers are still not going to distinguish between the different singularity hypotheses, and will just lump him in with Kurzweil.)

    By the way, regarding prediction machines and what free will means when they exist — there’s an interesting short story on that subject by Paul Torek…

  6. RN Says:

    As a follow-up question regarding quantum computation in physics: Could theoretical computer science help achieve a unified theory at an even more fundamental level than quantum physics? (If that level exists.) As an intuition pump, perhaps particular structures such as 4-dimensional Hilbert space might be computationally efficient in some sense (e.g. for propagating information through space). That might allow subsequent algorithms which operate on these structures to have a lower Kolmogorov complexity, which would make them more likely to give rise to what we can observe, by Occam’s Razor / conjunction of probabilities.

  7. mjgeddes Says:

    Fantastic interview Scott! I rate you as the second greatest living philosopher of science today – behind only myself 😉

    Regarding consciousness and the mind-body problem, I think I’ve cracked it in a purely *practical* sense (in the sense that I think I know how to write a computer program that I’m reasonably confident would be conscious), but I admit I’m still far from clear about the theoretical/philosophical issues surrounding the topic.

    My best guess about consciousness is that it’s an information-processing system for self-modelling and control. That is to say, I think the mind forms a rough model of itself in order to coordinate all the different sub-agents in order to carry out action plans, and that model is what we interpret as consciousness. The action plans take the form of narratives; I think ‘narrative’ is the basic data structure used by consciousness to express our high-level plans and goals.

    I think the attention-schema theory of Michael Graziano (which is similar to my own ideas) is the best theory of consciousness in the field of cog-sci so far, link summarizing the theory here:

    http://journal.frontiersin.org/article/10.3389/fpsyg.2015.00500/full

    The above ideas may or may not be compatible with IIT. I think IIT does have much to recommend it as well: it deals with information, and it’s simple and mathematically precise. The problem with IIT, though, is that it may be too ‘low-level’, and that it seems a bit *too* simple, since it’s so general. I think your own objections seemed reasonable as well.

    Is consciousness physical? A very tough question! 😉 I do have a great deal of sympathy with your own view that although consciousness can be investigated scientifically, consciousness will never be reducible to a physical explanation. We may simply have to accept that consciousness is a fundamental property of the universe, as David Chalmers suggests.

    On the other hand, if consciousness is non-physical, then you have the problem of how it could exert causal influence. So that seems like a reasonable argument that it might be reducible to a physical explanation after all.

    I think it’s still an open question – like I say, personally, I do have a great deal of sympathy for the position of yourself (Scott) and David Chalmers.

    But I don’t think these philosophical issues affect the correctness of my basic idea of consciousness as a self-modelling and control system.

  8. Scott Says:

    RN #6:

      Could theoretical computer science help achieve a unified theory at an even more fundamental level than quantum physics? (If that level exists.)

    Well, I think one of the main reasons to be interested in quantum computation is to test quantum mechanics itself in the “regime of computational hardness,” where it’s never been directly probed before. Personally, I expect that QM will pass with flying colors—but if there were a more fundamental level underneath QM, as people like Fredkin and ‘t Hooft and Wolfram expect, then this would be one obvious way to discover it.

  9. Sean H. Says:

    There were so many different topics covered that these comments could be another version of “ask me anything,” so I’m curious what your readers will find most interesting.

    For Section 14 of the interview on the singularity, I have to say that I didn’t follow the discussion at all. It does seem like a lot of the information paradox problem goes away if quantum effects prevent black holes from becoming exact singularities, and instead there’s just a bunch of matter collapsing and bouncing around. My question is whether any evidence against black holes being exact singularities is provided by the announcement this week that the LIGO black hole collision gravitational wave came with a gamma ray burst less than a second later. If two singularities collide in empty space it’s hard to see how this would happen, but if there is still matter and no finite event horizon then I could imagine how the collision might be powerful enough to eject it at high energy.

  10. Scott Says:

    Sean #9: As far as I know, essentially everyone in this field proceeds on the assumption that black holes aren’t “exact singularities”—i.e., that what GR calls a singularity actually gets resolved at the Planck scale into something else. But crucially, if GR is even approximately true—say, true in the regions far from the not-quite-singularity—then the information “paradox” remains in full force. I.e., you still have the problem of explaining how the same quantum information can do “double duty”: how it can hang around the not-quite-singularity for as long as the black hole exists (?), as GR far from the not-quite-singularity appears to require, but then at the same time, also get emitted in the Hawking radiation. You still need something like black-hole complementarity to answer questions like that, which then leads to the firewall problem.

  11. Scott Says:

    Sniffnoy #5: Yeah, you might be right. But certainly, it was on my mind that John Horgan’s last Q&A, the one immediately preceding mine, was with Eliezer!

  12. Jaffe Says:

    “For one thing, P vs. NP is the only one of the seven Clay problems that has obvious practical implications.”

    What about the Navier–Stokes existence and smoothness problem?

  13. Scott Says:

    Jaffe #12: We had a detailed discussion in the comments of this post, but briefly: I don’t think so. We’ve long understood exactly how and why the Navier-Stokes equations break down at the microscopic scale—so even if there were a finite-time blowup in the equations, it probably wouldn’t be physically relevant. By contrast, it’s weird (though admittedly possible) to imagine a proof of P=NP that wouldn’t be relevant to practical algorithm design.

  14. anon Says:

    But what is the answer to that great question that John Horgan didn’t ask: what are your favourite soda flavours?

  15. Anonymous Says:

    You said in the interview:

    For example, there are certain kinds of error-correcting codes that we know not to exist only because, if they did, then there would be even better quantum error-correcting codes—but the latter we know how to rule out.

    Care to expand a bit? What kinds of codes are you talking about? Where can I read more about these results?

  16. Scott Says:

    anon #14: Root beer, black cherry, sarsaparilla, Inca Kola (sold in South America), egg creams, almost anything suitably exotic (coconut soda, mint soda, lychee soda)

  17. Scott Says:

    Anonymous #15: The result I was talking about comes from this 2002 paper by Kerenidis and de Wolf. For more results of the same kind, even if it’s slightly outdated by now, you can’t do better than this 2009 survey by Drucker and de Wolf.
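
    (A rough paraphrase of the headline result there, with the exact parameters suppressed; see the paper for the precise statement: any 2-query locally decodable code must have exponential length,

      \[ C : \{0,1\}^n \to \{0,1\}^m \ \text{(2-query LDC, constant } \delta, \varepsilon\text{)} \;\Longrightarrow\; m = 2^{\Omega(n)}, \]

    and the proof goes through quantum information: the two classical queries can be simulated by a single quantum query, and quantum bounds on random access codes then rule out any subexponential-length code.)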

  18. xerxes Says:

    “…the Allies still could’ve bombed the train tracks to Auschwitz and Sobibor, had they wanted to.”

    Could they? Please explain how, with the technology of the 1940s, when getting a bomb to land within 500 yards of its target was a triumph, it was feasible to cause damage to a 5 foot wide train track that could not be repaired in a day.

  19. Scott Says:

    xerxes #18: I concede that the degree of Allied guilt for the failure to disrupt the Holocaust is a complicated question, with serious military historians who argue both sides. So for example, maybe you’re right that bombing the train tracks would have caused at most minor damage, and one would instead need to bomb the trains or the camps themselves, killing some Nazi victims in order to make it harder to kill future ones. Maybe even if the death camps had been successfully disrupted, the Nazis would’ve found it easy to respond by just shooting the remaining Jews and Gypsies under their control (or poisoning them in mobile vans), as they did in the East.

    To me, though, the striking fact is that the Allies never even tried to disrupt the Final Solution, despite having reliable information about it from 1942 onward. It’s not that they tried one tactic, it didn’t work or the Nazis took countermeasures, so then they tried a second, etc.—it’s that the process never even got to Step 1. The idea of bombing the infrastructure of the Holocaust was repeatedly considered, but rejected, for reasons that weren’t at all limited to technical difficulty: they also included “irrelevance to the war effort,” and incredulity by Churchill and others that the reports of mass killings were actually true and not exaggerated. At one point, the Allies actually bombed an IG Farben factory 5 miles east of Auschwitz-Birkenau—the inmates at Auschwitz saw the explosions—while leaving Auschwitz itself untouched. In 1944, FDR finally caved and created the War Refugee Board, which was credited with saving about 200,000 Jews, even though its members expressed remorse that the board was created so late and empowered to do so little.

    Meanwhile, this all happened in a context where the US, UK, and other Allied countries had repeatedly refused calls to accept Jewish refugees in the 30s and early 40s, and where the British had also turned hundreds of thousands of Jews away from then Mandate Palestine, consigning them to their deaths in Europe. (The Grand Mufti of Jerusalem, who the British considered the leader of the Palestinian Arabs at that time, was a Hitler ally who enthusiastically supported the Holocaust while it was happening, asked Adolf Eichmann “what he could do to help,” and threatened additional murderous riots if the British honored the Balfour Declaration by letting more Jews into Palestine.) Also, the New York Times systematically buried news about the Holocaust on its back pages: the propaganda value of proving just how monstrous the Nazis actually were, was outweighed by the fear of an antisemitic backlash if people got the idea that Hitler was being fought “for the Jews’ sake.”

    All in all, it seems obvious that the sane world could have done a lot more. Implications for today are left as exercises for the reader.

  20. Raoul Ohio Says:

    Scott:

    Totally great interview. I am about 30 minutes in, and had to stop and write down this gem:

    “In fact, QM isn’t even “physics” in the usual sense: it’s more like an operating system that the rest of physics runs on as application software.”

    Because you don’t waste time editing your remarks to avoid exposing yourself on contentious issues, you will no doubt be attacked by all sorts of people about whatever. I suggest also not wasting any time debating them.

  21. Cat Freak Says:

    Scott? This may have been answered elsewhere but is there video of the conference? I want to see the IIT people squirm!

    Also, how about a requiem for Prince? Or weren’t you a fan? 🙁

  22. Scott Says:

    Cat Freak #21: No, sorry, I don’t think there was video at that conference! But what I said in my talk is pretty much entirely contained in my two blog posts about IIT.

    It’s always sad when a beloved musician dies at an early age, but I can’t honestly say that I knew anything about Prince, other than that he was a guy who at one point changed his name to an unpronounceable symbol that’s not even in the Unicode character set. I’m just learning more about him now that it’s all over the news…

  23. Jon Lennox Says:

    The closest Unicode approximation to the Prince symbol is apparently Ƭ̵̬̊. (Quality depends on your browser’s Unicode renderer.)

  24. Raoul Ohio Says:

    R.O. add-on to “9. Do you ever worry, like some theoretical physicists, that our universe is a simulation created by superintelligent aliens?”:

    If that is the case, then the superintelligent aliens might be a simulation run by supersuperintelligent aliens, and so on, recursively. In summary: “It’s Supersimulation all the way down”.

  25. Raoul Ohio Says:

    Re 10. Could quantum-computation research help physicists achieve a unified theory? and “There are a few reasons why I think quantum computing ideas have been showing up lately in fundamental physics.”

    The R.O. take is that QC ideas are basic mathematics, and they show up in UFT in accordance with what Eugene Wigner called the “Unreasonable Effectiveness of Mathematics in the Natural Sciences”, see https://www.dartmouth.edu/~matc/MathDrama/reading/Wigner.html

  26. Raoul Ohio Says:

    Scott:

    Being someone who struggles to write out things (that I am somewhat of an expert on) in a coherent fashion, I was wondering how long it took you to write all those answers.

  27. Donald Says:

    One of the central themes in complexity theory appears to be that it’s nearly impossible to tell the difference between data coming from (to use crude terms) a truly random process and a fake random process. So how are Bell experiments able to circumvent this apparent complexity-theoretic barrier, and imply that randomness in quantum mechanics must be genuine randomness rather than some form of classical deterministic pseudorandomness?

  28. Scott Says:

    Raoul #26: From when John Horgan sent me the questions to when I sent my responses was almost three months—though once he’d prodded me into seriously working on it, it was “only” a couple weeks on top of teaching and all the other things I need to do.

    Incidentally, thanks for the sage advice!

  29. Scott Says:

    Donald #27:

      So how are Bell experiments able to circumvent this apparent complexity-theoretic barrier, and imply that randomness in quantum mechanics must be genuine randomness rather than some form of classical deterministic pseudorandomness?

    Please check out my American Scientist article “Randomness Rules in Quantum Mechanics”, which is about exactly that excellent question!
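
    (For a concrete sense of what such an experiment certifies, here is the standard CHSH statement, not specific to the article: any strategy in which the two outcomes are determined by shared classical information (including classical pseudorandomness), with no communication between the two sides during the measurements, must satisfy

      \[ \left| \langle A_0 B_0 \rangle + \langle A_0 B_1 \rangle + \langle A_1 B_0 \rangle - \langle A_1 B_1 \rangle \right| \;\le\; 2, \]

    whereas measurements on entangled qubits can reach 2√2 ≈ 2.83. So an observed violation can’t be reproduced by any local deterministic or pseudorandom mechanism, however computationally sophisticated; that’s the sense in which the randomness is certified as genuine.)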

  30. James T. Says:

    If you skip to the bottom of my long comment I have a rather crass specific question about P versus NP, which surprisingly I haven’t seen expressed elsewhere on the web, but comes down to thinking about whether living in Russell Impagliazzo’s Algorithmica would really be such a huge deal.

    One reason for occasional humility among computer scientists is that in the postwar American period the really big gains in productivity growth occurred in the first three decades after WWII, before anyone had a personal computer. (Contrary to Davos Man claims that technology and skill gaps are the cause of structural unemployment and inequality, this period was marked by generally low unemployment and shared economic growth, but that’s another topic.) Of course the big gains in life expectancy came in the first half of the 20th century. Humanity has made all kinds of amazing inventions and discoveries that no longer seem magical, like cars, airplanes, TV, radio, antibiotics, vaccines, microwaves, nuclear power, air conditioning, etc., but that greatly increased how much economic output we could produce. Productivity decreased in the 1970s for reasons that still aren’t fully explained, and increased again (although not to postwar levels) in the mid-90s, which is usually attributed to information technology and the internet. Today, counter to singularity proponents’ claims that robots will soon take all the jobs (i.e., a massive spike in productivity) and that we’ll live forever, productivity growth is low by historical standards and life expectancy is actually decreasing for large demographic groups in America.

    It’s very hard for me to see how a perfect 3SAT algorithm would override all these trends, because I don’t see how it could be as much of an advance as the invention of the computer itself, let alone the collective invention of everything listed above, which taken together surpassed the computer. Let’s say for the sake of argument that we discovered P=NP. It would be awesome to replace all our approximation algorithms for scheduling, routing, and machine learning with exact algorithms, and to automate mathematical discovery so that math geeks would need to be just as humble about their special prowess as chess geeks and trivia geeks.

    Here’s my crass economic question: do you think that the discovery of a polynomial time algorithm for 3SAT (of course with a small exponent and constant) would increase annual productivity growth by a full percentage point?

  31. Scott Says:

    James T. #30: I don’t feel like I have a good handle on what a fast 3SAT algorithm would do to GDP growth—not because I don’t understand what we could do if we could solve 3SAT efficiently in the worst case, but just because I don’t understand economics! I.e., the world would seem to me like a different place, one where computers would never need to resort to brute-force searches, computational crypto no longer existed, Bitcoin immediately collapsed (there’s an economic implication for you… 🙂 ), neural nets could quickly get set with the optimal weights, software could be super-rapidly debugged, and we could do data compression down to the time-bounded Kolmogorov complexity limit. But how would those things, in turn, affect GDP or productivity or other metrics economists care about? Dunno … anyone else care to field that one?

  32. Douglas Knight Says:

    Speaking of blowing up train tracks, I love this otherwise irrelevant WWII training video.

  33. Anonymous Programmer Says:

    In 1999, John Horgan followed up The End of Science with The Undiscovered Mind: How the Human Brain Defies Replication, Medication and Explanation as if he realized there really still is a fundamental gaping hole in science — the theory of mind.

    Galaxies and brains are somewhat similar — one has about a hundred billion stars, the other about a hundred billion neurons.

    Until recently, understanding galaxies was not thought to be too difficult: just use classical physics on a computer to predict the movement of stars — just like planets in the solar system.

    Except it didn’t work out that way: stars on the edge of the galaxy were moving too fast. Unseen and unexpected players like dark matter and a central black hole had massive influence in galaxies.

    Maybe the brain also has unseen and unexpected players, instead of just neurons made of ordinary matter with no quantum entanglement.

    Ordinary matter decoheres rapidly at room temperature, but dark matter, being made of WIMPs, might decohere a lot less rapidly given that it interacts a lot less, and could go for long stretches of coherent entanglement at room temperature. Maybe there is a WIMP in every neuron.

    A lot of entangled WIMPs, and maybe an entangled mini black hole too, could turn the brain into an awesome real quantum computer at room temperature.

    And maybe WIMPs or mini black holes are conscious with real free will and can partially control their own quantum evolution.

    Going the other way maybe black holes at the center of galaxies are conscious with free will and are quantum entangled with dark matter all throughout the galaxy for faster than light control of the entire galaxy.

    Finally, of course, maybe all the black holes at the center of galaxies are all quantum entangled to form one huge consciousness with free will — God — controlling our entire finite universe — although there may be other universes that our god doesn’t control.

  34. mjgeddes Says:

    Anon#33

    Trying to equate physical properties to consciousness is looking in the wrong place. Quantum physics has got nothing to do with consciousness. It’s not only the wrong type of thing, it’s on the wrong scale as well.

    Scott’s point was that physical properties are just the wrong ‘type’ of stuff for explaining consciousness. The physical world is about causal processes in space and time, whereas the phenomenal world is about information and teleology (goals and values). There’s a 3rd type of existence, and that’s mathematics, which is about logical relationships.

    I tend to agree with Scott that the ultimate origins of mathematics, matter and mind can’t be explained by science. This is because they constitute the 3 primitive ‘types’ or ‘states’ of existence itself. Since we are inside existence, we can’t reason without implicit or explicit reference to the primitive properties of existence. So we simply have to take them as a given and reason from there.

    If we think of the universe as a self-modelling system, then 3 levels of recursion manifest themselves. We note the principle that ‘infinite recursion is always at most 3-levels deep’. The 3 levels of the universe are none other than…

    (huge drum-roll)

    Mind (Object level),
    Matter (Meta-level),
    Math (Meta-Meta level)!

    Everything terminates in Math (the meta-meta level), since mathematics can fully represent any aspect of existence (it’s the 3rd recursion).

  35. James Cross Says:

    #30 and #31

    I can’t see how faster algorithms would have much impact at all on productivity except in some marginal areas. Current algorithms and processors are plenty fast enough for most tasks.

    “It would be awesome to replace all our approximation algorithms for scheduling, routing, and machine learning with exact algorithms…”

    Sure. However, most of what business does isn’t really these things. It is mostly moving data from one bucket to another. There would probably be marginal economic impact in areas heavily reliant on that type of processing, but even then it might not be much if the approximations were good enough to begin with.

    I am not sure what measures for productivity you are looking at but productivity will grow more slowly in economic downturns because there is a reduced market for product.

  36. xerxes Says:

    “…the Allies never even tried to disrupt the Final Solution…”

    What was WW2 in Europe, if not a total effort to disrupt every single thing that Nazi Germany was doing? Total, that is, in the sense of total war.

    But I do acknowledge that you have withdrawn from your unconsidered views about bombing the train tracks.

  37. Scott Says:

    xerxes #36: Yes, I’m aware that the Allies fought and defeated the Nazis in Europe. 🙂 My grandfather was part of it, clearing mines in North Africa and Italy in the US Army. Most of his company was killed, but he made it, which I find difficult to explain except by reference to the fact that had he not, I wouldn’t be here to write about it!

    But my basic point stands: the Allies entered the war for reasons that, while excellent ones, had essentially nothing to do with the Nazi extermination campaign against Jews and other undesirables. Then, having done so, they refused repeated pleas to divert even an ε² fraction of the war effort to directly disrupting that campaign. I conceded that I don’t know whether bombing the train tracks would’ve been effective—it was probably worth trying (alongside other disruption tactics); it might have helped and might not have. I didn’t “withdraw” more broadly than that.

  38. Vadim Says:

    Great interview, Scott. I’ve nothing clever to add, but I enjoyed reading it.

  39. xerxes Says:

    Let’s leave our fathers’ various achievements out of the discussion.

    Different countries entered the war for different reasons at different times. In particular the notion of Allies makes no sense until the very end of 1941, while France and Great Britain didn’t so much enter the war, as begin it, by declaring war on Germany in September 1939. It’s unsurprising that this had nothing to do with the Nazi extermination campaigns: those campaigns did not begin until later.

    Given your concession, which is not a withdrawal, that you have no reason to believe that bombing the tracks would have been effective, why should it have been done? Casualty rates among RAF and USAAF bomber crews were ghastly; it would have been criminal for their commanders to have sent them out on missions that were expected to achieve nothing. And what other disruption tactics should have been tried? You don’t say, so let me point one out: bombing IG Farben, manufacturers of Zyklon B.

    Oh, wait a minute, they did that.

  40. Scott Says:

    xerxes #39: I have limited patience for a discussion in which you’ve repeatedly engaged in “defense attorney / debate team behavior”—i.e., trying to score points on technicalities without needing to address the actual substantive questions. So maybe one more exchange and then let’s both spend our time more usefully.

    You triumphantly announce that the Allies bombed IG Farben, but of course, the reasons why they bombed IG Farben facilities had nothing whatsoever to do with the manufacture of Zyklon B—they targeted oil works, rubber works, etc. I’ve been inside an IG Farben building in Frankfurt involved in the manufacture of Zyklon (there’s a whole memorial in front of it and all), which wasn’t bombed at all.

    You also say I conceded I have “no reason to believe” that bombing the train tracks would’ve been effective, which is a slippery mischaracterization: I equally have no reason to believe that it would’ve been ineffective. Even if it had set back the gassings by a few weeks as the tracks were repaired, that would’ve meant tens of thousands of lives. Better yet, how about carpet-bombing Auschwitz and its surroundings—trying to hit the crematoria, the commandants’ quarters, the train platform, the trains, the tracks, anything you could? Just as important as the physical damage, it seems to me, would be the psychological effect: there were many lower-level Nazis who were ambivalent about the extermination program, even as they participated in it. If they got the feeling—which they never did—that their own lives were in danger, and that the Allies would do whatever they could to disrupt, harass, and endanger them, that might have tipped many of them into active non-involvement.

    As for the risk to the pilots, very well: I say offer the mission to the pilots (including the Jewish ones, who existed), explain it, and ask who’s willing to accept it voluntarily. I bet you’d get takers.

  41. Scott Says:

    Incidentally, for those who are interested, there’s a Hacker News thread discussing my interview.

    FWIW, I was amused by how much of the discussion there centers less around what I said than around my “standing” to say it at all (the same issue that plagues Eliezer Yudkowsky, as I mentioned in answer #14). I.e., since I’m just a quantum-computing theorist, what right do I have to offer an opinion about questions in social science or philosophy—even if I’m directly asked for my opinion?

    I couldn’t help wondering: would it have helped if I’d name-dropped the respected social scientists and philosophers who agree with the various positions I defended? Like Steven Pinker, or Mark Balaguer, the philosopher who wrote a book entitled Free Will As An Open Scientific Problem? Like, obviously there are other social scientists and philosophers who disagree with those positions—but am I not allowed to be sufficiently engaged and interested in what my colleagues in other fields argue about, that I agree with some of them more than others?

    If John Horgan had asked me whether I preferred Hillary Clinton or Donald Trump, and I answered Hillary, would I also have been chewed out for not having sufficient expertise in politics and economics to be entitled to an opinion?

  42. John Sidles Says:

    Scott (in comment #19) asserts (in agreement with many) “Failure to disrupt the Holocaust is a complicated question …”

    Such questions are sufficiently tough as to have many good answers — and there’s no reason (that is obvious to me) why everyone’s answer should be the same.

    For me, a particularly illuminating, challenging, up-to-date account is Sanford L. Segal’s Mathematicians under the Nazis (2014). The question that Segal’s account motives us (me at least) to ask is this: Given that the 20th century — like every previous century — witnessed holocausts of suffering that were largely ignored even by highly intelligent people (and commonly even denied outright)

    What holocausts of suffering are presently occurring, whose remediation might reasonably be accelerated by a greater expenditure of effort and treasure, if only the required appreciation and public will-to-sacrifice could be summoned?

    One answer is provided by surveys like McGrath, Saha, Chant, and Welham’s survey “Schizophrenia: A Concise Overview of Incidence, Prevalence, and Mortality” (2008; PubMed finds this work free-as-in-freedom). This is a devastating disorder with a (median-estimate) lifetime morbid risk of 7.2/1000, that is, equivalent to a population of about fifty million people worldwide.

    Painful parallels  Schizophrenic people are easy to ignore, for reasons that sound familiar: they are genetically inferior; their appearance among one’s ancestors is to be denied; their high mortality is of little concern to healthy folk; they dress and talk oddly, all-in-all they are ‘not like us’, so why should we care about them? Ouch.

    Where will the paths to quantum supremacy lead us?  These parallels were on my mind at Lee Hood’s recent symposium “Emerging Technologies for Systems Biology”, in particular Pamela Sklar’s survey of “Networks and Schizophrenia”.

    Is there overlap here with discussions of arcane topics like “Paths to Quantum Supremacy”? Yes, inarguably so (as it seems to me), in that any path that takes us toward understanding and demonstration of Quantum Supremacy, concomitantly takes us toward understanding of the networks underlying schizophrenia, and demonstrations of healing of those networks.

    Conclusion  We have ample humanitarian reason to proceed as swiftly as feasible along the paths to Quantum Supremacy, while equipping our mathematical and physical understanding with the broadest feasible (yes, Gil Kalai-style!) appreciation of the obstructions that we are likely to encounter.

    And we should appreciate too that in overcoming these obstructions — or in technologically routing around them! — the various paths toward Quantum Supremacy may lead us to some surprising and hopefully wonderful destinations.

    If so, that’s good!

    Because is there anything really unusual, in foreseeing that the destination of a journey — like the journey toward Quantum Supremacy — should be a surprise, and foreseeing too, that the obstructions to that journey will turn out to be the most valuable aspect of it? 🙂

  43. Anthony Says:

    Great interview, Scott!

    Any thoughts on this recent preprint, which seems to shoot down Mulmuley’s approach to the P vs. NP question?
    http://arxiv.org/abs/1604.06431

  44. Scott Says:

    Anthony #43: As it happens, I’m currently finishing a huge survey article about P vs. NP, “coming soon to a blog near you”! And I had some advance warning that the important paper you link to was coming out, and was patiently waiting for it, in order to incorporate it into my section on GCT.

    Alongside other recent work (much of it by the same authors) in the same direction, this new paper basically kills the idea of using GCT to prove strong complexity class separations via occurrence obstructions. (E.g., irreps that occur one or more times in the representation corresponding to the permanent orbit closure, but don’t occur at all in the representation corresponding to the determinant orbit closure.) This is very bad news for the program that Mulmuley and Sohoni originally outlined.

    It remains possible that one could use GCT to separate complexity classes by finding multiplicity obstructions (that is, irreps that occur in both representations, but with higher multiplicity in the permanent one). The trouble is, people don’t really know how to do that: if it’s possible at all, it will be probably much harder than it would’ve been had occurrence obstructions been available.
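
    (To spell out the condition these obstructions certify, in rough GCT notation of my own paraphrasing: if the padded permanent lay in the determinant’s orbit closure, then for every irrep λ the coordinate-ring multiplicities would have to satisfy

      \[ \mathrm{mult}_\lambda\!\left(\mathbb{C}\!\left[\overline{\mathrm{GL}_{n^2}\cdot \mathrm{per}}\right]\right) \;\le\; \mathrm{mult}_\lambda\!\left(\mathbb{C}\!\left[\overline{\mathrm{GL}_{n^2}\cdot \det_n}\right]\right). \]

    An occurrence obstruction is a λ with positive multiplicity on the permanent side and zero on the determinant side; a multiplicity obstruction is any λ violating the inequality.)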

    Meanwhile, experts tell me that GCT “construed more broadly”—i.e., the effort to apply algebraic geometry and representation theory to complexity theory in whatever way—remains alive and well, with a whole community of mathematicians now working on it, and with an agenda now broader than Mulmuley and Sohoni’s original vision (even though Mulmuley and Sohoni’s evangelism played a crucial role in getting the whole thing started).

  45. Anthony Says:

    Thanks a lot for the explanation!
    Looking forward to reading the review.

  46. BHP Says:

    Was Gödel’s letter really a precursor to P vs. NP and not EXP vs. NEXP? Assuming your input is a true-or-false mathematical statement x and a maximum allowed proof length t (of course represented in binary rather than unary, as it always would be in practice), it intuitively seems like the problem “determine whether there’s a ZFC proof of x of length at most t” should be interreducible with the NEXP-complete “bounded halting problem.”

  47. Scott Says:

    BHP #46: Yeah, that’s one reason why I wrote that Gödel’s question was “essentially” P vs. NP. To get an NP-complete problem, you’d want the input to be an encoding of a mathematical statement, together with an upper bound on the proof length encoded in unary notation. And then the problem will be NP-complete assuming the formal system is consistent (if it’s inconsistent, then the problem is solvable in O(1) time!).
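
    (Making the encoding explicit, in notation of my own choosing: the NP-complete language would be something like

      \[ L \;=\; \{\, \langle \varphi, 1^t \rangle \;:\; \text{there is a ZFC proof of } \varphi \text{ with at most } t \text{ symbols} \,\}, \]

    where writing the bound t in unary keeps the witness (the proof itself) polynomial in the input length. With t written in binary, as BHP notes, the witness can be exponentially longer than the input, which is what pushes the problem up to NEXP.)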

  48. BHP Says:

    Hi Scott, great blog! I was wondering if you saw this preprint from earlier in the month claiming new lower bounds on the amount of heat generated by quantum computations, which can exceed the tight classical bounds? Intuitively it’s hard for me to see how this could be possible if the quantum computer could always just run in “classical mode,” but if true would this place any important new limits on quantum computers?

    http://arxiv.org/abs/1604.03749

  49. Scott Says:

    BHP #48: No, I hadn’t seen that paper, and from a quick read, I confess I don’t understand it at all. Couldn’t we imagine (say) a quantum cellular automaton, which (given an appropriate setting of its initial state) performs universal quantum computation in a completely unitary fashion, with no thermal bath, no auxiliary systems, no nothing? And wouldn’t that be a counterexample to this paper’s claim? Or better: what assumptions about the need for auxiliary systems, etc. is this paper quietly making, which would rule out the completely unitary universal QC? Again, this might just be my own confusion—I don’t know!

    But some people have told me that there’s a system—a relic from a bygone age—that can occasionally be invoked to evaluate papers like this one, without the customary need for everyone to instantly render judgment in blog comments or Slashdot or Twitter on the basis of 10-second scans of the title and abstract. Apparently this “peer review” system is still used in a few obscure corners of math, CS, and physics, just like there are a few people who still use fountain pens. I hesitate to sound like a fuddy-duddy, but this might be one of the rare cases where that ancient system could serve as more than a formality, a stamp confirming what everyone on social media already knew.

  50. James Gallagher Says:

    Free will? There has to be another potential (i.e., another mathematical term) in the Schrödinger equation, or otherwise no.

    Maybe evolution discovered some weakly interacting thingy that modern physics hasn’t otherwise detected?

  51. John Sidles Says:

    Scott reminds Shtetl Optimized readers (#49):

    “[Peer review] can occasionally be invoked to evaluate papers like this one [arXiv:1604.03749, “The thermodynamic cost of quantum operations”].”

    Shtetl Optimized commonly concerns itself with scientific questions for which the peer-reviewed literature is simultaneously (1) the best available public guide to the worth of an idea, and (2) sufficiently inconsistent and incoherent as to provide only marginal guidance.

    Vacuous examples  Considering the thermodynamic properties of field-theoretic vacua, and restricting our attention to the peer-reviewed literature, we have as examples:

    (1) QED vacua  Liang and Czarnecki’s “Photon-photon scattering: a tutorial” (2012, arXiv:1604.03749) shows us how errors both great and small afflict even peer-reviewed conclusions reached by motivated teams of well-qualified researchers, while articles like Vaccaro and Barnett’s “Information erasure without an energy cost” (2011, arXiv:1004.5330) remind us that loopholes can be found even in fundamental thermodynamic principles that have been studied for centuries.

    (2) QCD vacua  Still greater difficulties in regard to the QCD vacuum, that remain even after many thousands of peer-reviewed analyses, are the focus of the Clay Institute’s Millennium Prize Problem “Yang-Mills Existence and Mass Gap”.

    (3) Gravitational vacua  The dozens of recent peer-reviewed articles that tackle the difficulties of reconciling QED vacua with Black Hole horizons can’t all be right, can they?

    Peer-reviewed fictions  Ted Chiang’s one-page Nature essay “What’s expected of us” (2005) — which discusses “Predictor Machines” — tackles multiple issues that have been central to Shtetl Optimized discourse, including complexity, simulability, and informatic coupling via the QED vacuum.

    Superficially Chiang’s “What’s expected of us” appears to be “mere” speculative fiction, and perhaps this is why works like Mark Balaguer’s Free Will As An Open Scientific Problem (of comment #41) commonly do not cite Chiang’s article.

    Yet in scientific fact, the “Predictor Machines” of Chiang’s “What’s expected of us” are real — as the Nature editors no doubt were well aware! For up-to-date details, see for example Rigoni et al “Happiness in action: the impact of positive affect on the time of the conscious intention to act” (Frontiers in Psychology, 2015) and Schultze-Kraft et al “The point of no return in vetoing self-initiated movements” (PNAS, 2016).

    Existing as they do at the nexus of multiple Shtetl Optimized concerns, these works are great fun to read (for me anyway).

    For which appreciation, thank you Ted Chiang! 🙂

    And it is evident too, that themes of Ted Chiang’s essay (both fundamental and applied) overlap naturally with the medical and humanitarian themes of Pamela Sklar’s survey “Networks and Schizophrenia” (of comment #42).

    Open questions  How far into the future can Predictors foresee Actors (both in principle and in practice)? How effectively can Predictors be decoupled from Actors (both in general quantum mechanics frameworks and in the restriction of those frameworks to shared QED/QCD/gravitational vacua)? Do Predictor/Actor relations alter fundamentally when Actors are granted access to random quantum processes, and Predictors are required merely to sample indistinguishably from Actor actions (a case that encompasses Actors who BosonSample)?

    Conclusion  Considered as basis for rational deduction, the peer-reviewed literature amounts to a set of axioms for a deductive system that (regrettably) is very far from consistent and very far from complete. Yet these same axioms are fundamental to the best deductive system we have, and so we must proceed as best we can.

    In particular, it is prudent to respect both the Preskill/Aaronson ideal of Quantum Supremacy, and the well-founded reasons (that peer-reviewed quantum field theory literature supplies) for Kalai-style skepticism of that ideal’s realizability.

  52. Raoul Ohio Says:

    Scott #44:

    A review of P vs. NP will presumably address the question of “Would P = NP have any *practical* applications?”. This comes up all the time, and has been argued in this blog.

    I know that you do not agree with the obvious arguments for “Probably Not”, but I don’t think they can be dismissed out of hand. I think something like “Can’t Say” or “No Clue” are the best answers.

    For anyone who has not given this issue any thought, contrast the following possibilities:

    1. P = NP and NP is O(n^4)

    2. P = NP and the important parts of NP are \Omega(n^1000).

    Keep in mind that 10^200 is a likely upper bound for all the computation cycles that can occur in the history of the universe.

    A common refutation of this argument is that “Usually it turns out that something like O(n^6) occurs” or “I don’t think case 2 would happen”. I also don’t think case 2 would happen, but furthermore, I also don’t think P = NP. So if it turns out that P = NP, it is reasonable to conclude that our collective intuition is not so good.
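
    As a quick numerical sanity check of case 2 (a minimal sketch; the 10^200 figure is the one quoted above):

      # Minimal sketch: even for n = 2, an n^1000-step algorithm already needs
      # more operations than the ~10^200 total computation cycles estimated to be
      # available in the history of the universe (the bound quoted above).
      TOTAL_OPS = 10**200
      for n in (2, 10, 100):
          steps = n**1000                   # cost of a hypothetical Theta(n^1000) algorithm
          print(n, steps > TOTAL_OPS)       # prints True for every n >= 2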

    P.S. John Sidles: Can you help me out and cast this argument into more elegant terms?

  53. Rahul Says:

    It’s interesting that peer review in computer science is still fairly robust, whereas in disciplines like psychology or medical trials we see daily horror stories about the sort of crap that makes it into top journals in spite of peer review.

    Andrew Gelman’s blog is a good place to learn about the huge problems with peer review.

    http://andrewgelman.com/

  54. mjgeddes Says:

    James #50

    You’re committing the ‘type error’ I called out earlier in the thread. You simply won’t find any insights into mind-body problems in physics. The physical world is about symmetries, transforms and fields (geometrical operations basically), whereas the mental world is about goals, values and information.

    The free will question actually strongly supports the standard ‘strong AI’ view that the mind is a classical Turing machine (computation).

    If you could in principle simulate a person without the simulation being conscious, then I would say that the person could not be said to have free will. They would be proven to be ‘mere mechanism’.

    But, if simulating a person *necessarily* generated consciousness (that is to say, the simulation *is identical to* the person), then I would say that the person has free will. This is because you could not predict what the person would do next without actually instantiating the person.

    Also relevant is Stephen Wolfram’s idea of ‘Computational irreducibility’.

    “Wolfram terms the inability to shortcut a program (e.g., a system), or otherwise describe its behavior in a simple way, “computational irreducibility”. The empirical fact is that the world of simple programs contains a great diversity of behavior, but, because of undecidability, it is impossible to predict what they will do before essentially running them. The idea demonstrates that there are occurrences where theory’s predictions are effectively not possible. Wolfram states several phenomena are normally computationally irreducible.”
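
    To make that concrete, here is a minimal sketch (my own illustration, not from Wolfram) of Rule 110, an elementary cellular automaton that is Turing-complete and whose long-run behavior, as far as anyone knows, can only be obtained by actually running it:

      # Rule 110: each cell's next state is a fixed function of (left, self, right).
      RULE110 = {(1,1,1): 0, (1,1,0): 1, (1,0,1): 1, (1,0,0): 0,
                 (0,1,1): 1, (0,1,0): 1, (0,0,1): 1, (0,0,0): 0}

      def step(cells):
          n = len(cells)
          return [RULE110[(cells[(i-1) % n], cells[i], cells[(i+1) % n])] for i in range(n)]

      cells = [0]*40 + [1] + [0]*40     # start from a single live cell
      for _ in range(20):               # no known shortcut: just run it and watch
          print(''.join('#' if c else '.' for c in cells))
          cells = step(cells)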

  55. Pascal Says:

    To Sean #9, Scott #10 and anyone who knows about black holes: how is the gamma ray burst compatible with the theory that nothing can escape from black holes? If black holes aren’t actual singularities, as Scott #10 suggests, would that allow stuff to escape from them?

    Another question I had about these two black holes and others is, how do we know they’re black holes, and what do we mean exactly by that? I thought that the “no escape” clause was the defining feature of black holes (with the exception of “quantum evaporation”), but if there are other exceptions, as with this gamma ray event, things get confusing…
    I know that if a star is too massive, it will not be able to form a neutron star at the end of its life, because neutrons would be crushed by its gravitational force. But after all, neutrons are not elementary particles: they could conceivably be crushed into their quark constituents. According to Wikipedia, it is not known whether such “quark stars” exist.

    I don’t want to sound like a conspiracy theorist, but how do we know that the so-called “black holes” that have been discovered so far really are black holes rather than quark stars or some other strange thing?

  56. Faibsz Says:

    Scott #19:
    “The Grand Mufti of Jerusalem, who the British considered the leader of the Palestinian Arabs at that time, was a Hitler ally”

    In fact, the Mufti was in exile in Lebanon (sucking up to the French) and then in Iraq, on the run from the British after the Arab revolt in Palestine (1936-39).
    He was, indeed, friends with Hitler, and responsible for the recruitment and training of local Muslims for the Waffen SS in Bosnia.

  57. Scott Says:

    Pascal #55: You’re asking about extremely well-studied questions (i.e., not ones where it’s likely that anything simple has been overlooked).

    When people talk about gamma rays, etc. escaping from black holes, they always mean escaping from the accretion disk around the black hole—i.e., not from inside the event horizon! Which, of course, is perfectly consistent with GR. The only thing that gets emitted by the black hole itself is Hawking radiation, but that happens so slowly that it would take 10^67 years for a black hole the mass of our sun to evaporate.

    The reason astronomers are confident that they’re black holes, is basically that they know (from the objects’ interaction with other nearby objects) how much mass is concentrated in how small of a region—but GR is very clear that anything that dense must be a black hole, since anything else would just collapse to a black hole. In any case, whatever residual doubt remained was laid to rest by the recent LIGO announcement, of two orbiting black holes emitting gravitational waves with precisely the characteristic signature for two orbiting black holes (a signature that GR doesn’t predict for anything else).
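
    (For reference, the standard order-of-magnitude estimate behind that number, not spelled out in the interview: the Hawking evaporation time of a black hole of mass M is roughly

      \[ t_{\mathrm{evap}} \;\approx\; \frac{5120\,\pi\,G^{2}M^{3}}{\hbar c^{4}}, \]

    which for a solar-mass black hole, M ≈ 2×10^30 kg, works out to on the order of 10^67 years.)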

  58. Pascal Says:

    Scott #57:

    > but GR is very clear that anything that dense must be a black hole,
    > since anything else would just collapse to a black hole.

    I suppose that to do this computation you would need to know not only about GR but also about the forces that oppose collapse? This has surely been studied in detail for neutron stars, but I was wondering what is known about forces opposing collapse for denser and stranger objects, such as the hypothetical quark stars.

    My apologies if this is an extremely well-studied question; you may take this as an opportunity to give a non-controversial answer for once!

  59. Scott Says:

    Pascal #58: Neutron stars—and “quark stars,” if they exist—are two examples of things that can happen when your mass density is not quite at the Schwarzschild limit, but getting close. Once you’ve passed the Schwarzschild limit, however, by definition that means that the mass density warps spacetime so much that even a light ray from the interior can’t escape to asymptotic infinity. And that, in turn, is what’s meant by a black hole: the theorem of Hawking and Penrose from the 1960s says that once you have that, you’ll also get a singularity in the interior (or rather, a “singularity as far as classical GR can say”), which is a black hole’s other characteristic feature. No detailed consideration of the “forces opposing collapse” is needed for these conclusions—just the basic principles of GR, and the values of G and c.
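
    (For concreteness, the standard textbook formula, not from Scott’s comment: the Schwarzschild radius of a mass M is

      \[ r_s \;=\; \frac{2GM}{c^{2}}, \]

    so “passing the Schwarzschild limit” means packing the mass inside a sphere of radius smaller than r_s, which is about 3 km for the mass of the Sun.)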

    I’m sure someone who actually knows GR can add more and/or correct whatever I got wrong!

  60. Pascal Says:

    Thank you Scott #59! I understand from your answer that the “forces opposing collapse” are not relevant to the existence of the black hole, but may still be relevant to understanding the state of the matter trapped inside the black hole.

  61. Scott Says:

    Pascal #60: No, because once you have a black hole, the Penrose-Hawking singularity theorems tell you that whatever state of matter you started with, it will quickly evolve into “pure hairless geometry” (just an event horizon and a singularity). Of course this assumes GR continues to hold at a macroscopic scale in this situation, but we have no good reason to think it doesn’t.

  62. John Sidles Says:

    mjgeddes (in #54) argues for

    “The standard ‘strong AI’ view that the mind is a classical Turing machine (computation).”

    This view has serious moral implications and even practical research consequences as follows.

    The behavior of such Turing-Actors is predictable even by simple machines such as Ted Chiang’s ‘Predictor’ devices (of #51), provided that the cognitive resources available to the Actor are upper-bounded.

    This requires not only that the internal memory of the Actor be bounded, but also (and very significantly) that the Actor is strictly forbidden to communicate with other Actors, and strictly forbidden even to look at a completely dark night sky (the latter is forbidden on the grounds that quantum noise in retinal cells exposed to the cosmic microwave background is a forbidden computational resource).

    The peer-reviewed literature provides a plain word for the level of cognitive isolation that is required to ensure predictability, and that word is torture. This implies:

    Corollary to the Classical Strong AI hypothesis
    Human-level cognition and behaviors become predictable solely under conditions of physical isolation and sensory deprivation that are sufficiently strict as to constitute torture.

    This corollary is a strong reason why (as it seems to me) discussions that associate free will to biological brains perforce must grapple with the very same “vacuous” quantum field theory considerations (of #51) that make scalable quantum computing and BosonSampling so exceedingly difficult, and Kalai-style quantum skepticism so very credible.

  63. Pascal Says:

    Scott #61: are the Penrose–Hawking theorems capable of taking into account, say, the existence of the strong nuclear force?

    I see from the wikipedia entry that there are in the hypotheses of these theorems some “energy conditions” which make it possible to take non-gravitational fields into account, but I don’t know if that includes nuclear forces (those must surely be relevant to the study of the stability of neutron stars and other dense objects).

  64. Scott Says:

    Pascal #63: This discussion will very rapidly exceed my knowledge, but yes, the nuclear forces (like all other known forces) satisfy the energy condition.

  65. fred Says:

    Scott,

    ““utopia” could only mean an infinite number of sentient beings living in simulated paradises of their own choosing, racking up an infinite amount of utility.”

    That’s actually probably way harder to achieve than it seems, because without first experiencing challenge and pain, one can’t possibly have dreams and aspirations.

    E.g., if we put a newborn human mind in a simulation that was maximizing his happiness, moment to moment, it’s very unlikely he/she would develop into a full-grown human mind.
    Then there’s the issue of even defining our own happiness… especially hard if we’re presented with no constraints at all.
    Seems like the probability of degeneration into some “bad” local “optimum”, like drug addiction or destructive OCD, is pretty high. In the end we’re social creatures, and very few would be happy alone… so in most cases the utopias would be just indistinguishable from the real world (I think what drives us is the quest for happiness, not happiness itself).

    So this would probably only work in small doses, and isn’t that what video games are for? Now being redefined by practical virtual reality?
    And there are already plenty of examples of people getting lost in those “perfect” worlds and developing serious psychosis/addictions.
    Then there is also a reverse trend to “gamify” the real world instead – many unpleasant aspects of reality can be re-mapped into customizable gaming systems, for the sake of driving up productivity and consumerism.

  66. John Sidles Says:

    In regard to Pascal’s #63, Vaccaro and Barnett’s article “Information erasure without an energy cost” (2011, arXiv:1004.5330, see #51 above) constructs a class of thermodynamic parameters that do NOT carry an energy cost, but DO carry an entropy cost. Such parameters are a natural candidate for extended classes of “black hole hair”.

    Regrettably (especially for nonexperts like me) Vaccaro and Barnett’s article does not explicitly consider the implications of their construction for “black hole hair” theorems, and I join with Scott (#64) in wishing for expert commentary upon this exceedingly tricky class of (field theory) × (gravity) × (thermodynamics) problems.

    Conclusion  Even when we don’t look beyond Standard Model field theories, we find reasons to anticipate that the last word on black hole thermodynamics problems remains to be said.

  67. alto Says:

    Hi!

    I’ve just read the interview they did with you; it’s very long, but very interesting.

    You mentioned the work of Judith Rich Harris. I’ve read some interviews with her, and I have to tell you that I don’t think that her work is the next big thing.

    What I mean is that I agree and disagree with her.

    I agree with the idea that the way we are is mostly genetically determined.

    I struggled with this idea for a long time, but I think it is mostly true.
    I used to believe in the past that nurture was the most important thing in the development of a person, but now I think genetics is the most important thing.

    What made me change my opinion was seeing a lot of similar people with very different backgrounds.

    People tend to say that smartphones, computers and tablets are making children less sociable.
    I used to believe that, but then I realized that when I was a child I didn’t have a PC or a smartphone or a tablet, and I preferred to read books rather than hang out with other children of my own age. Also, a lot of people younger than I am have grown up with all these things, and yet they are more sociable than I am. So I came to believe that sociability is mostly an inherited thing.

    I also agree with the part about divorce being very bad for children, because I’ve seen it very often. But this seems to go against her idea that parents cannot influence their children.

    Now, I disagree with her on many things. For instance, when she says that the group of children who surround someone in childhood matters more than parents in the development of children: if parents can’t alter the development of children, then how can some children with whom the person only spends a part of his time change him?

    It also doesn’t seem right. When I was a child, all of my classmates liked to play soccer, yet I never felt the impulse to play soccer with them; I didn’t like it.
    I was not good at it.
    I was a very lonely child in school and I used to hang out with a group of outcasts like me. We used to play soccer apart from the other children. That was the only thing we had in common. One of them wanted to be a bullfighter; I never wanted to be a bullfighter. The other one was a Gypsy, and I never felt interested in Gypsy culture. Neither of the two liked to read, but I never wanted to stop reading books just because they didn’t like it. We were very different and we became even more different as time went by.

    Another example is how Scott became a researcher and kept doing “nerdy” things despite the fact that most of his classmates at school didn’t like “nerdy” things.

    Another thing that seems to go against her ideas is the recent research on epigenetics. In one study, researchers replaced the anxious mother of a newborn rat with another, more confident mother, and the rat grew up to be a confident adult.

    So it seems that, although nurture changes very little in our behaviour, it changes certain important things about someone. Another important thing is that although children don’t learn anything from being given rewards and punishments, they learn certain things from the example of their parents, from the parents just being themselves. If the parents tried to fake it, however, I think it would not work. For instance, if the parent doesn’t like to read, but tries to read in front of the child to make the child want to read a book, the child will see that the parent doesn’t like to read and will conclude that reading is boring.

    So, to make a long story short, most of the social researchers that Judith criticizes are wrong and have an agenda.

    Judith is also wrong and has an agenda.

    Two wrongs don’t make a right.

    The social scientist who can look at things in an unbiased way, without prejudices, and who can revolutionize the field is yet to come.

  68. James Gallagher Says:

    mjgeddes #50

    I don’t think the “type” error argument is as strong as it was when Gilbert Ryle wrote the compelling “Concept Of Mind” in 1949. I have studied some philosophy and it does help to straighten out one’s thoughts now and again, all the more easily these days, when there are many competing views immediately available, such as the many online discussions of Searle’s Chinese Room and the like, rather than being limited to the few (but deep) academic papers and seminars of the past.

    However, I think I am making a fundamentally simpler point in this debate: that to have free will (and/or be conscious) there have to be at least two independent but interacting systems. This is not just a flaky restatement of the old mind-body duality; it is an attempt at an actual scientific assertion, namely that there must be extra mathematical term(s) in our best-understood scientific theory, quantum mechanics. I don’t think an isolated system can be aware of itself; I’m pretty sure of that. I think you need two independent but interacting systems.

    And let’s face it, if I don’t have free-will but can type this stuff – the world is pretty crazy! 🙂

  69. Pascal Says:

    Scott #64 (and John #66) : you have enough knowledge to convince me of the existence of black holes. Thanks!

    By the way, Wikipedia says that the energy conditions have been experimentally shown to be violated in certain circumstances, such as the Casimir effect. But I suppose that effect is not relevant to what’s going on inside a black hole (?).

  70. Anonymous Programmer Says:

    If panpsychism and libertarian free will are true, then in our universe every quantum is conscious, with free will. The more massive and complicated the quantum, the greater its consciousness (sensations per second, or time perception) and free will (decisions per second).

    Quantum theory would have extra terms in both quantum evolution (free-will thought) and measurement (free-will action). For a low-energy quantum like an electron, standard quantum theory would do just fine, because it is like a dreaming infant just starting to develop its mind, not yet awake much or doing purposeful things.

    But a very massive quantum with a lot of entanglement, high frequency, and short wavelength would deviate more from standard quantum theory, because it would be a more mature being with more understanding and memory, able to do increasingly difficult feats of free thought and free will, deviating from randomness.

    The goal of all such quantum souls would usually be to eventually be allowed to become a universe of their own, like their godparent the universe (which might take billions of years), and then also to be the godparent of a googol newbie souls also aspiring to be their own universes.

  71. jonathan Says:

    Scott,

    I wonder if you could go into more detail about your Singularity “skepticism.”

    Your view seems to be that the idea makes sense in principle, and is even very likely to happen if we make it that far; but you don’t think it’s of first-order importance because it’s a long way off, there are many more immediate (even intermediate) problems to solve first, and we’re not even sure how we could productively make progress on it at this point.

    These all seem to follow from your belief that AI is a long way off. But I really don’t understand why you think this is the case. I’m no expert, but it does seem that we’ve made lots of progress on (components of) the AI problem recently. Also, I know several people who work in AI who are much more optimistic. Even if we’re talking about (say) 100 years, instead of 20, that still seems soon enough to worry about right now, particularly given the stakes and the importance of getting it right the first time.

    I think the main arguments you’ve advanced to defend your skepticism about near timelines are (1) the slow progress made thus far, which, extrapolated into the future, suggests many centuries or even millennia, and (2) your extreme uncertainty about the problem and what it would take to solve it.

    But I’m not sure that (1) is consistent with recent observations, or that (2) is cause for complacency. If we don’t understand the problem enough to come up with a reasonable estimate of how much time it will take, then we also don’t know whether there are just a few theoretical innovations that will get us there in a hurry.

  72. luca turin Says:

    “The first half of the twentieth century is and always will be ‘the present,’ and whatever this is now is the future!”

    Thank you Scott, I’ve obscurely felt that way all my life (born in ’53), and never put it into words. What a wonderful insight!

  73. Anonymous Bosch Says:

    Scott, I think your answer to the simulation question was a bit too quick with respect to the first case. The gods of most religions are almost always interested in being worshipped by humans, and this makes the lack of evidence of their existence quite puzzling (and cause for skepticism). One would think that a god who wanted love and devotion would be a little less coy about it, especially when the penalty for lacking devotion is often destruction/hellfire. By contrast, there are many reasonable explanations why simulation builders may want to hide evidence that may nevertheless be, in principle, discoverable – maybe the simulation is an experiment, or a form of non-interactive entertainment.

    The other big difference between religion and the simulation argument is, to put it bluntly, that religion is inconsistent with science, while the simulation argument seems to be consistent with science. That of course doesn’t mean it’s true, but it’s scientifically plausible, unlike religion. The fact that we haven’t identified any evidence of it so far doesn’t distinguish the simulation argument from any other scientific hypothesis that has been neither proven nor disproven.

    As to your second case, in which you don’t care about empirically inaccessible simulating aliens – do you feel the same way about the various theories that posit an empirically inaccessible multiverse?

    My personal view of the simulation argument is pretty much my view of the Singularity (and I think you agree on the latter) – it’s not impossible, but we’re not scientifically advanced enough to start worrying about it in earnest yet, so any commentary on it is for entertainment purposes only, for now.

  74. Scott Says:

    jonathan #71: You did a very good job of summarizing my view.

    I don’t have strong grounds for my skepticism about human-level AI in the near future—just, like, enough grounds to justify to myself not immediately dropping everything I do to worry about this. As such, while I’ll share my personal hunches if asked, I’m totally fine if those hunches aren’t compelling to others, and if others do want to drop everything else to worry about AI risk.

    On the other hand, I would strongly contend that the “AI soon” side doesn’t have compelling grounds for its views either—that we’re both just trying to extrapolate from wildly-inadequate data points to an event that there’s never been anything like in human history, and that’s also not governed by any well-established physics or math (which makes it very different from, say, human-caused climate change).

    Sure, one could say that there are impressive advances in AI in recent years, especially Watson and DeepMind and other systems that exploit deep learning. But one could point to almost any time in the last 65 years and say that impressive advances in AI were happening then (regardless of whether people appreciated it at the time). So this doesn’t really give much evidence about “how much longer until human-level”: could another thousand years of similarly-impressive advances be needed?

    Personally, I find that skepticism about “AI soon” emerges as a more-or-less inevitable byproduct of trying to maintain consistency across all my beliefs. E.g., I think of humanity’s future as involving the interplay of many different factors: continued innovation in software and other industries, degradation of (what’s left of) the environment and flooding of coastal areas, demographic changes, the continued rise of Islamism and continued economic ascendance of China, probably some unforeseen scientific breakthroughs, etc. I also think of scientific challenges like building a scalable quantum computer, proving P≠NP, or understanding the dark matter as being near the limits of what I can imagine happening in my lifetime, and I think of human-level AI as vastly harder than any of those. And I’m well-aware that human-level AI is a single factor that would render everything else irrelevant. And I have a meta-heuristic that tells me to be extremely skeptical of all single-factor hypotheses about the future, to treat them as science-fiction plots or very-low-probability events rather than as plausible.

    Now admittedly, if I were an Inca around 1520, it would no doubt seem to me like there would be many factors affecting the future of the Inca empire: wars against neighboring peoples, the corn harvest, internal Inca power struggles, further development of the Khipu ‘writing’ system, etc. When in reality, there would soon be a single factor—namely, the arrival of the Spanish—that would render everything else irrelevant.

    So then the main issue is simply that I don’t yet see any Spanish ships on the horizon! And while I do know a few Incas who speculate about the Inca empire being conquered by another civilization, those people disagree among themselves whether the conquerors will be men or beasts or gods, and whether they’ll come up from the ground, or down from the mountaintops, or from the sun or the moon, and no one seems to have any idea what countermeasures to take. Maybe these white-skinned Spanish sailors could be essential allies protecting us Incas from the conquerors?

    I will say this, though: if I ever become genuinely convinced that a Singularity is imminent, I’ll drop everything else I’m doing and work on it! I can’t compartmentalize at all, and would never be able to handle the cognitive dissonance of agreeing intellectually that a coming Singularity would render everything else irrelevant, while still spending my own time doing something else.

  75. fred Says:

    Scott #74

    “the continued rise of Islamism and continued economic ascendance of China”

    I’m quite surprised you put those two things in the same sentence…

  76. Scott Says:

    fred #75: I wasn’t expressing a value judgment about either of them (or trying to tie them together)—just mentioning them as two sociopolitical developments that are happening and that one can plausibly predict will continue to happen.

  77. Scott Says:

    Anonymous Bosch #73: Thanks! I don’t particularly disagree with anything you say.

    I’d stress, though, that if someone admits their hypothesis has zero empirical evidence so far, but insists that it could have evidence in the future, it helps if they can at least specify what sort of evidence they have in mind, and what we’d need to do to acquire it. So for example, the hypothesis that the dark matter is the lightest supersymmetric particle, Roger Penrose’s hypothesis that sufficiently large masses trigger an objective reduction of the wavefunction, the hypothesis that a “pure” libertarian society (or pure communist society) would be a paradise—all have obvious testable implications.

    By contrast, even assuming that the world is a simulation being run by superintelligent aliens, and that this is testable, I have a hard time seeing how to extract a “robust prediction” from that hypothesis, without any auxiliary assumptions about the aliens’ motivations. Peter Shor likes to joke that we’ll confirm the hypothesis when we try to test quantum gravity, and instead “crash the universe,” since quantum gravity is just an inconsistent edge case that the universe’s programmers never bothered to account for. It’s hard to design a test that doesn’t sound like a joke!

    And yes, absolutely, I feel the same way about the multiverse. As this blog’s archives will confirm, I’ve maintained for years that multiverse speculation is fine, but it needs to play by exactly the same rules as any other kind of science. I.e., if there are novel, non-obvious observable consequences for this world, then wonderful, go and check them! If you can’t find any, try to simplify the theory by cutting the other worlds out, and don’t ascribe them a great deal of reality in any case.

    One final remark: in talking about the deep differences between gods and simulating aliens, you might be biased toward “modern” religions, or Abrahamic ones! The Greek and Roman vision of the gods, as “toying with humans for shits and giggles,” actually seems like an excellent fit to a literal reading of the simulation hypothesis.

  78. fred Says:

    Scott #77

    “By contrast, even assuming that the world is a simulation being run by superintelligent aliens, and that this is testable,[…] we’ll confirm the hypothesis when we try to test quantum gravity, and instead “crash the universe,””

    I sure hope they’re running the world in debug mode and not in release mode.

  79. jonathan Says:

    Scott:

    Thanks for the detailed response! I’m mostly in the same boat as you, though I’m much less confident in my belief that “the singularity is far”, and correspondingly more liable to wonder whether I should drop everything and work on (F)AI.

    Incidentally, given that you think AI is pretty far off, what do you think about alternative routes to the Singularity? I’m thinking mainly of ways to directly improve human cognition, with the potential to produce intelligence explosion style feedback effects; things like genetic engineering, augmenting mental capabilities through cybernetics, and possibly brain uploads. Do you see all of these technologies as beyond our current planning horizon?

  80. Scott Says:

    jonathan #79: We could already increase human intelligence, a lot, were it not for the obvious ethical difficulties! Even if we leave aside old-fashioned selective breeding—which could plausibly produce some pretty dramatic results within a few generations—we’ve already identified plenty of genes that seem implicated in intelligence somehow, and we could edit them today into a human embryo, were we willing to accept the sorts of “failure rates” that we accept with rats and mice (as we’re not, and shouldn’t be).

    My guess is that gene therapy on humans will become more and more of a reality within our lifetimes—starting with treating terrible genetic impairments (where the moral case is the strongest and the counterarguments are weakest), and then slowly creeping from there into “enhancement” territory (as more and more traits get redefined as “impairments”), as we’ve seen happen with many ordinary drugs.

    The “bioethicists,” of course, will fight this process every step of the way, and seem likely to succeed in slowing it down by at least several decades (probably longer in some countries than in others). But it’s far from obvious to me that it’s worth fighting! Yes, I’m scared about tinkering this directly with human nature, but I’m even more scared about humans as they currently exist continuing to destroy the planet. And gene therapy available to everyone seems infinitely preferable to the cruder eugenics that most educated progressives advocated in the early twentieth century, before the Nazi horrors (understandably) swung the pendulum about 10 quadrillion light years in the opposite direction. And if there existed a completely safe, routine, effective intervention that could give my kid (say) the intelligence of Alan Turing, I’d consider myself a terrible parent if I didn’t use it—my inaction would seem little different from letting my kid chug lead paint.

    So yes, absolutely, these are issues that we as a society should be paying attention to and debating! (The answers are far from obvious.)

    Brain uploads are a different matter; that seems at least as hard as AGI (almost tautologically so, since brain-uploading would be a way to achieve AGI).

    As for cybernetics, do you mean like Google Glass? 🙂 If there existed a less clunky version—say, one that could be installed inside your skull, and give you direct neural access to Google and Wikipedia and a calculator, etc.—I’d certainly be interested in having that installed. And such a thing could certainly increase my “effective intelligence,” although “only” in the same sort of way that Google and Wikipedia themselves increased my “effective intelligence.”

  81. John Sidles Says:

    Scott foresees:

    The “bioethicists,” of course, will fight this process [of gene therapy] every step of the way.

    With comparable justice, mightn’t bioethicists foresee:

    The “rationalists,” of course, will employ scare-quote criticism [of bioethics] every step of the way.

    Seriously, is there any real shortage of high-quality bioethical research? How do “scare-quote” critiques contribute positively to bioethical discussions that already have plenty of overlap with real-world applications of quantum science … with more applications to come, no doubt?

    In my reading of history, a more scrupulous attention to ethics would have been no bad thing even in Turing’s generation! 🙂

  82. mjgeddes Says:

    Scott,

    I think the solution to AGI is what I call ‘the ontological approach’ (or ‘the automated philosopher’). We are simply looking for a general method of ‘reality modelling’.

    This can be precisely defined. By ‘reality modelling’, I mean an ONTOLOGY or DOMAIN MODEL (including CLASS DIAGRAMS) of all reality – a specific technical example of a modelling language would be UML (Unified Modelling Language). So we are looking for a modelling language general enough to encapsulate all of reality.

    As you should know from programming, once you have the domain model, the ‘hard bit’ is actually done! The rest is just ‘cranking the handle’.

    So in my view, nearly all AGI researchers (including MIRI etc.) are looking at the problem at the wrong level of abstraction. The only bits humans should be involved with are the ‘interfaces’ (the *symbolic representation* of the things we want, expressed in terms of a high-level modelling language). The rest of the process should be handled by automated software.

    For example, take two scientific fields that are very important for AGI research: ‘decision theory’ and ‘Bayesian inference’. My view is that it’s the AGI program itself that should be working all that out, not humans. This is because the level of abstraction is too low! These fields are not on the symbolic level.

    Instead, we want a symbolic modelling language in which we can encapsulate our desire to have decision theory and Bayesian inference solved. Attempting to solve these problems ourselves won’t work (we’re just not smart enough).

    Over the past decade, I fully completed the top-level domain model of reality. It decomposes into 27 super-classes. There are three key principles:

    *Reality should be considered to be a self-modelling system
    *Infinite recursion is always at most 3 levels deep
    *The structure of knowledge is a fractal.

    Any given knowledge domain of non-finite complexity can be viewed in this fashion. There are always 3 levels of abstraction:

    *The structural level: The intrinsic properties of an object. ‘What’s the object made of?’
    *The functional level: Extrinsic properties, relations between the object and other objects, ‘What does the object do?’
    *The representational level: How to symbolically represent the object, ‘How can we talk about the object?’

    It is only the representational (symbolic) levels that humans should be concerned with. These are the ‘interfaces’ (or maps) of knowledge. We want to express our (human) desires in terms of the maps or interfaces, and leave all the technical (lower-level) details to our automated software.

  83. Luke G Says:

    mjgeddes,
    “As you should know from programming, once you have the domain model, the ‘hard bit’ is actually done! The rest is just ‘cranking the handle’.”
    What about Go? The domain model is easy, but writing a program that plays Go well is extremely hard. One would expect AGI to be strictly more difficult than Go.

  84. mjgeddes Says:

    Luke #83

    Surely the lesson from AlphaGo is that writing a program to play strong Go turned out to be far easier than everyone expected?

    AlphaGo perfectly illustrates the points I made in my above post! Virtually all the hard work was done on the symbolic level: a high-level planning method (Monte Carlo Tree Search) and getting the right initial representations for machine learning.

    Once the lower-level machine learning methods got to work (‘cranking the handle’ with pattern recognition and prediction), the system did most of the learning itself, even with very little prior knowledge of Go.

  85. Raoul Ohio Says:

    In the above discussion of “The Singularity”, I missed exactly what singularity was being discussed. Can anyone fill me in? A couple potential singularities that have been receiving press in recent years include:

    1. When we eat enough vitamins, we will all live forever.

    2. The computers taking over. (In an old SF story, this caused all the phones in the world to ring. As it turns out, we just wake up with Windows 10 installing itself. I don’t think any SF author predicted that.)

    3. As AI passes some “tipping point”, everything will zoom through the stratosphere.

    It is not clear that these will all be a “good thing”, and I will probably keep plugging away at stuff that I am good at rather than jumping on board.

    Here is some “singularity info” for those without a tech background. I assume the word singularity is borrowed from function theory and/or differential equations.

    In function theory, singularities are classified (from not bad at all, …, to major bummer) as jump, pole, logarithmic, and essential. I think poles are what people talking about singularities have in mind.

    The simplest mathematical model of whatever it is you are yammering about is a scalar function of time, x(t). The evolution of x(t) is likely to be controlled by an ordinary differential equation. Linear ODEs have no singularities, so nonlinear is de rigueur. Wearing your physicist hat, you are required to try the easiest thing first, the simplest nonlinear ODE:

    x’ = x^2,

    with initial condition x(t_0) = x_0.

    Defining t_s = t_0 + 1 / x_0, the solution to the IVP (Initial Value Problem) is

    x(t) = 1 / (t_s – t).

    If you graph |x(t)|, you will see why this is called a “pole”. The pole is at t = t_s. In the old days, this was illustratively called “finite time to blowup”.

    If you have some data, say x(t_1), x(t_2), x(t_3), you can try to fit these to a curve for 1 / (t_s – t), and thereby estimate t_s, the time of the singularity. Usually it turns out that the quantity x is loosely defined, so it is hard to get good data.

    Here is one that might be doable: Can anyone estimate the relative difficulty of a computer beating the best human in chess and the best human in go? These have well-defined times, so maybe you can predict when this particular singularity will occur. For example, if mastering chess (1997) is 1 unit of hard, and mastering go (2016) is Z units of hard, then

    t_s = 1997 [(2016/1997)Z – 1] / [Z – 1],

    so if Z = 5, the singularity is predicted for late 2020. Of course, if Trump or Cruz is President, there might be another kind of singularity about then.
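
    For anyone who wants to play with the fit, here is a minimal Python sketch (the milestone years are the ones above; the assumed form x(t) = c / (t_s - t) and the difficulty ratio Z are of course just toy inputs, not real data):

      # Toy "finite time to blowup" fit from two milestones: chess (1997) = 1
      # unit of hard, go (2016) = Z units of hard, assuming difficulty grows
      # like x(t) = c / (t_s - t). Solve (t_s - 1997)/(t_s - 2016) = Z for t_s.
      def singularity_year(z, t_chess=1997.0, t_go=2016.0):
          return (z * t_go - t_chess) / (z - 1.0)

      for z in (2, 5, 10, 50):
          print(f"Z = {z:3d}  ->  estimated pole at t_s = {singularity_year(z):.1f}")

    With Z = 5 this gives t_s ≈ 2020.75, matching the “late 2020” figure above.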

  86. John Sidles Says:

    Lessons from the past for the present (#74):

    If I were an Inca around 1520 […] there would soon be a single factor — namely, the arrival of the Spanish — that would render everything else irrelevant. […] So then the main issue is simply that I don’t yet see any Spanish ships on the horizon!

    Recent genomic research suggests that *FAR* worse than the depredations of Spanish soldiers was the devastation wrought by introduced diseases. High-resolution genome surveys like Llamas et al., “Ancient mitochondrial DNA provides high-resolution time scale of the peopling of the Americas” (2016), point toward widespread extinction-level population losses:

    None of the 84 lineages they [Lamas and colleagues] found are even traceable past contact because not a single living person who belongs to any of them has been found.

    This level of complete population extermination is far beyond what a few hundred Spanish soldiers could feasibly achieve; and indeed, far beyond what any of the technology-assisted genocidal holocausts of the 20th century ever did achieve.

    Implication  The first Horseman of the Apocalypse, namely Pestilence, is still abroad in the 21st century, and humanity’s emerging genomic history suggests that this particular Horseman may yet appear as “a Spanish ship on our horizon”.

    A personal note  In the spring of 2015 my son and I were hiking in the still-snowy high Cascades, where we came upon an obviously sick out-of-hibernation bat fluttering weakly on an icy lake. My son and I reported the encounter to the state Wildlife Commission, but we did not attempt to capture the bat for biopsy (a decision that we subsequently regretted).

    Now in the spring of 2016, biopsy of a similar out-of-hibernation bat has sadly confirmed that the deadly plague of White-Nose Syndrome (WNS) has spread to Pacific Northwest bat populations. 🙁

    So yes, we mammals remain highly vulnerable to pestilence-driven extinction events.

    Respecting diversity  Does this mean we should all “drop everything else we’re doing and work on this problem” (in Scott’s phrase of #74)? Especially because quantum/nanoscale phenomena are so intimately bound-up in the development of next-generation biomedical research technologies?

    Here it is best (as it seems to me) for a diversity of opinion to prevail. But the plain lesson of genomic history is that “Spanish ships of pestilence” are real.

  87. jonathan Says:

    Raoul:

    I think of the “Singularity” as referring to the intelligence explosion idea — i.e. that if a species figures out how to directly increase its intelligence, it can then use its greater intelligence to figure out further ways to increase its intelligence, etc.

    For instance, suppose that the intelligence of our computers is x, which is increasing at growth rate g, which is a function of the intelligence of AI researchers y:

    dx/dt = g(y) * x

    where y is fairly constant over time, and g(y) is probably quite small, given the difficulty of getting AI to human-level intelligence.

    Now suppose that at some point, x rises above y. Then the computers would take over the AI research, and their intelligence would start increasing at rate:

    dx/dt = g(x) * x

    Of course, what happens next depends on the function g. Suppose that g is linear in x, i.e. g(x) = ax. Suppose we normalize x(0)=1. Then:

    x(t) = 1 / (1-at)

    So x blows up as t -> 1/a. That’s your singularity.
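
    If it helps to see the blow-up numerically, here’s a crude forward-Euler sketch (the coefficient a = 1 and the step size are purely illustrative):

      # Integrate the post-takeover toy model dx/dt = a*x^2, whose exact
      # solution x(t) = 1/(1 - a*t) has a pole at t = 1/a.
      a = 1.0
      x, t, dt = 1.0, 0.0, 1e-6        # x(0) = 1, small Euler step

      while t < 0.999 / a:             # stop just short of the pole
          x += a * x * x * dt
          t += dt

      # Euler lags slightly behind the exact solution this close to the pole,
      # but the blow-up as t -> 1/a is unmistakable.
      print(f"t = {t:.3f}   numeric x = {x:.0f}   exact 1/(1 - a*t) = {1/(1 - a*t):.0f}")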

  88. jonathan Says:

    Raoul:

    Ah, I see that I neglected to read the second half of your comment. It seems we are thinking along similar lines 😉

  89. Raoul Ohio Says:

    Following up on jonathan:

    That is a good stage 1 for math modeling (try to solve the simplest system that plausibly captures the key elements). Here is a slightly better version:

    p(t): intelligence of AI programs
    r(t): intelligence of (human) AI researchers.

    This system is likely controlled to first order by

    dp/dt = Ap + Br
    dr/dt = C + Dp

    for positive constants A, B, C, and D, and an initial condition

    Ap ≪ Br.

    This system is linear, so it will not have a singularity, but the growth is controlled by exp(λt) with

    λ = (1/2) [A + sqrt(A^2 + 4BD)].

    I will think about this some more and attempt to identify the leading nonlinear term.
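
    In the meantime, a quick numerical sanity check of that growth rate (the constants below are arbitrary, chosen only to exercise the formula):

      import numpy as np

      # Linear toy model: dp/dt = A*p + B*r, dr/dt = C + D*p. The homogeneous
      # part has matrix [[A, B], [D, 0]]; its dominant eigenvalue should match
      # lambda = (1/2)*(A + sqrt(A^2 + 4*B*D)).
      A, B, C, D = 0.1, 0.5, 1.0, 0.2

      M = np.array([[A, B],
                    [D, 0.0]])
      lam_numeric = max(np.linalg.eigvals(M).real)
      lam_formula = 0.5 * (A + np.sqrt(A**2 + 4 * B * D))
      print(lam_numeric, lam_formula)   # the two numbers should agree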

  90. John Sidles Says:

    This Shtetl Optimized discourse having enjoyably touched upon the intersection of science, morality, and politics, perhaps Shtetl Optimized readers will enjoy the addition of judicial components to the mix.

    Plaintiffs KELSEY CASCADE ROSE JULIANA; et al. — namely 24 children and climate scientist James Hansen — alleged prolonged and systematic violation of:

    Constitutional rights to life, liberty and property, and their right to essential public trust resources, by permitting, encouraging, and otherwise enabling continued exploitation, production, and combustion of fossil fuels.

    Last week, in a far-reaching decision, U.S. Magistrate Judge Thomas Coffin ruled FOR the plaintiffs, and AGAINST fossil interests and the US Government who sought to dismiss the case (PDF decision here).

    Judge Coffin’s closely-reasoned, science-respecting opinion is well worth reading by all who perceive in anthropogenic climate change a ‘Spanish ship on the horizon’, and who therefore seek, in science and law, relief from that risk.

    Good on ya, Judge Coffin! 🙂

    On the other hand, perhaps these risks may be/will be effectively addressed without legal action … by arch-conservative politicians? … or by the invisible hand of unregulated markets? … or by rational self-interest of individual citizens? … or by the prophesied return of the Messiah? Ouch.

  91. Raoul Ohio Says:

    John Sidles #90:

    Excellent development!

    Who wants to join me in a class action lawsuit against the makers of smartphones for making it too dangerous to drive in cities, due to hordes of people walking in front of cars with 100% of their attention devoted to poking at their phones?

    I also plan to sue Stephen Wolfram, because he has figured out how to predict the future using cellular automata, but failed in his civic responsibility to prevent electronica music and e-cigarettes! That is just evil!

  92. fred Says:

    Scott,

    regarding “free will” – if determinism is obvious and trivial (i.e. all our actions follow from the universe’s initial conditions and randomness, neither of which we can ‘control’), and it’s a matter of finding out whether a mind can be simulated, what does this all mean in terms of holding someone responsible for his/her actions?
    Is it just posturing from the people who believe in “free will” and who pat themselves on the back for their capacity to correct their own behavior and resist the temptation to commit evil? (Never mind that this capacity is itself something they can’t control.)
    Holding people responsible, even if it’s a farce, may still have social value: bringing solace to the victims, serving as an example (to sway people who are lucky enough to have the capacity to be influenced by it), and isolating dangerous predators from the rest of society.

  93. John Sidles Says:

    Raoul Ohio (#91), in regard to US Constitutional Law, there is a fundamental characteristic that the law shares with the STEM professions, namely the objective of replicability.

    Approximating replicability is in turn dependent upon judicial temperament, a trait that is strikingly similar to concepts like mathematical maturity and scientific discipline and medical standard of practice.

    A particularly fine discussion (as it seems to me) of the Jeffersonian origins of modern-day conceptions of judicial temperament is Annette Gordon-Reed and Peter S. Onuf’s just-issued “Most Blessed of the Patriarchs”, Thomas Jefferson and the Empire of the Imagination (2016).

    I am reading this book whenever I can pry it from my wife’s hands — because she’s enjoying it too! 🙂

    The Most Blessed’s account of the origins of American constitutional practices dovetails nicely with David Fischer’s survey of Quaker thought that Scott Alexander praises in this week’s Slate Star Codex column “Book Review: Albion’s Seed”:

    The Quakers really stand out in terms of freedom of religion, freedom of thought, checks and balances, and the idea of universal equality.

    It occurs to me that William Penn might be literally the single most successful person in history. He started out as a minor noble following a religious sect that everybody despised and managed to export its principles to Pennsylvania where they flourished and multiplied. Pennsylvania then managed to export its principles to the United States, and the United States exported them to the world.

    These various reflections by Gordon-Reed, Onuf, Fischer, and Alexander alike are striking (to me) for their shared omission of two topics: (1) the shared 17th-century origins of Quakerism and the Spinozist Enlightenment, and (2) the sustained 21st-century objectives of Quakerism and the Spinozist Enlightenment.

    After all, the Quakers are still with us (aren’t they?) and the Enlightenment isn’t over (or is it?).

    Conclusions  Quakerism and the Spinozist Enlightenment alike remain alive and vital in the 21st century, and these movements presently are focusing their energetic attentions largely upon environmental stewardship and medical equality, with Constitutional Law among their primary forces for change.

    Counter-reactions  Needless to say, in the 21st as in previous centuries, a countervailing and intrinsically reactionary “outcry of the Boeotians” (as Gauss called it) is resounding loudly in the land! And the lesson of Enlightenment history is that this reactionary opposition is destined to prove as futile in the 21st century, as it has proved in the previous four centuries.

    Hopefully so, anyway! 🙂

    Quantum roles  And is there destined to be a crucial role for quantum principles in the evolving emergence of the 21st century’s enlightened “empire of the imagination”? Oh yes, assuredly so. Our physical human nature, as evolved QED-governed hot-wet-biological beings, requires this.

  94. AdamT Says:

    Scott,

    Your comments on free will when viewed from the outside looking in (a god who knew the initial state could precisely know the probabilities) rest on at least one big assumption: that the universe is finite or can be described in a finite fashion, i.e. that there exists any such initial state. If it is beginningless, with no initial state, then I think free will survives even an omniscient person.

  95. AdamT Says:

    Also, if it is impossible to describe a region of the universe that is causally disconnected, either time-like or space-like, from the larger beginningless universe, then free will survives intact even for omniscient beings, whether inside or outside the universe or the region.

  96. Anonymous Programmer Says:

    If there is no conceivable evidence you would possibly accept in order to believe in libertarian free will — then it is you who is the irrational, unscientific fundamentalist.

    Even the most die-hard atheist, like Richard Dawkins, claims that with the right empirical evidence he could believe in a god.

    Yet a fundamentalist non-believer in real free will will often accept no possible evidence, not even the most basic evidence of his own ability to choose. Whatever that person values the most must necessarily be an idol with no free will, by his own way of thinking — therefore he is an idol worshiper — which makes that person very hard for God or anyone to deeply love, and ultimately makes him an unscientific and fundamentally religious person in the worst sense.

  97. Scott Says:

    fred #92:

      what does this all mean in terms of holding someone responsible for his/her actions?

    Contrary to a misconception that’s persisted for centuries, the free-will debate has no direct or obvious implications for morality—as in none, zero. If, for example, a criminal defendant pleaded that “the initial conditions of the universe made me do it,” the judge or jury could simply respond, “well then, the initial conditions of the universe also make us find you guilty.” Or as Ambrose Bierce put it:

      “There’s no free will,” says the philosopher; “To hang is most unjust.”
      “There’s no free will,” assents the officer; “We hang because we must.”

    In discussions of free will, this “symmetry between accuser and accused” is sometimes acknowledged, but then brushed aside as a sort of joke. To me, though, the point is a fundamental one: it shows that the thought “if there’s no free will, then I can’t hold others responsible for their actions” is just a pure cognitive illusion, born of the wrong idea that it’s only other people’s freedom that gets suspended in the hypothetical, not your own.

    By analogy, radical cultural relativists usually imagine they’re making an argument for our culture to tolerate other cultures. They don’t understand or acknowledge that, if their doctrine were true, then invading and conquering other cultures would also be a valid cultural choice that no one should criticize!

    In both cases, I see this as exposing a hidden paternalism on the part of the cultural relativists and “to-hang-is-most-unjust” free-will deniers. They want to undermine the entire concept of “should” as applied to other cultures, criminal defendants, etc., while still silently invoking the concept for themselves and people like them.

  98. Scott Says:

    Anonymous Programmer #96: You completely misunderstood my position. Have you read The Ghost in the Quantum Turing Machine? I wrote an 85-page essay to defend the possibility of libertarian free-will, in the sense of in-principle unpredictability. And for that, I was accused by many in the “anti-free-will” camp of being a closet woo-woo mystic!

    (It’s like in the affair a year ago, where precisely the same position got me attacked as both a craven, boot-licking, feminist coward and an arrogant anti-feminist monster, which made me feel like I must be doing something right! 🙂 )

    People think of the “state space of the universe” as something given a-priori. It’s then an empirical question: do the transitions between states happen deterministically or nondeterministically? If that’s how you think about it, it indeed sounds like irrational dogmatism to assume the answer must be “deterministically” (or at least “probabilistically, with a calculable probability distribution”).

    But from a physics perspective, the state space of the universe is something that our theory gets to postulate—and there might even be different state spaces that lead to the same physical predictions. So, to take an example, Bohmian mechanics supplements the wavefunction of the universe with “hidden variables,” which evolve deterministically, but in a way that exactly reproduces the probabilistic predictions of QM. Personally, I don’t think there’s any fact of the matter about whether Bohmian mechanics is true or false. It’s just a picture that you can adopt, or not adopt, as you find convenient.

    Now, the example of Bohmian mechanics illustrates something much more general. We’re free to take any indeterministic theory in physics, and “determinize” it by the simple mathematical device of adding hidden variables. I.e., no matter how indeterministic the universe looked, someone could always maintain that everything that will happen in the future is nevertheless “latent” in the universe’s current state. Enlightenment will strike when you realize that, not only could that person never be proven wrong, but there’s not even anything definite that it means for the person to be wrong.

    That’s why, again, I advocate just ceasing to talk about determinism, and talking instead about the much more interesting and meaningful question of predictability. No matter what the laws of physics were, you could always add hidden variables to make them “deterministic.” But in some universes you can (in principle) perfectly predict what people will do before they do it via external probes, and in others you can’t, and that’s a genuine difference between the two kinds of universe that can’t be papered over with word-games. And it seems to me that one could do far worse than stipulating that, when people say we have “libertarian free will,” the empirical content of their claim is that we live in the second kind of universe rather than the first kind. Note that, under this conception, the libertarian free-will believers could very well turn out to be right! 🙂
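
    For concreteness, here is a toy Python sketch of the “determinize by adding hidden variables” trick (just a random walk, emphatically not Bohmian mechanics; the function names are purely illustrative):

      import random

      def stochastic_walk(steps):
          """Indeterministic law: each step flips a fresh coin."""
          x = 0
          for _ in range(steps):
              x += 1 if random.random() < 0.5 else -1
          return x

      def deterministic_walk(steps, hidden_tape):
          """Same observable law, but every outcome is already 'latent' in a
          hidden tape fixed at t = 0; given the tape, the future is determined."""
          x = 0
          for coin in hidden_tape[:steps]:
              x += 1 if coin < 0.5 else -1
          return x

      tape = [random.random() for _ in range(1000)]   # hidden part of the "initial state"
      print(deterministic_walk(1000, tape))
      print(stochastic_walk(1000))   # statistically indistinguishable from the above

    No experiment confined to watching the walk’s output can distinguish the two, which is the sense in which “determinism” here does no empirical work, while predictability-by-external-probe remains a real question.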

  99. JRT Says:

    The Copenhagen interpretation of quantum mechanics is trivially fully consistent with free will, because there is free will on the part of the conscious observer in terms of what to measure and when to measure it.

    I grant you could consider the observer and the object being measured together as part of a larger quantum system with no free will; but then, without measurement or wave-function collapse, there would just be deterministic evolution according to the Schrödinger equation, so there would be no place whatsoever for randomness or the Born rule, and you would need to believe that Bell experiment outcomes were due to a far-fetched superdeterministic conspiracy.

  100. Scott Says:

    JRT #99: It seems strange to argue that, because the Copenhagen interpretation leaves unspecified how to model the observer quantumly, therefore we get to fill that gap with libertarian free will. I.e., from a modern perspective, the step you “grant”—that the observer should be included as part of a larger quantum system—is just the step we’d take immediately, without even thinking about it! The Many-Worlders obviously do this, but even those who call themselves Copenhagenists will take the same step, provided we’re talking about any observer other than themselves! 🙂

    In an MWI-like view, you argue that probability is something subjectively perceived by observers in the individual branches. And whatever other drawbacks they might have, I’ve never heard it claimed that MWI or its cousins require a “far-fetched superdeterministic conspiracy.” Indeed, if the world just consists of a unit vector evolving by the Schrödinger equation, then Einsteinian locality would seem to be happy as a clam! The question that gets argued is nearly the opposite one: namely, does the Copenhagen interpretation implicitly require superluminal signalling—say, to propagate the collapse caused by measuring the first qubit of |0⟩|0⟩+|1⟩|1⟩?

    I tend to think no, because we should really be looking at density matrices rather than pure states—in which case, we can make perfectly clear the sense in which “Copenhagen satisfies locality.” But it’s Copenhagen, not MWI, that typically needs to play defense in these discussions!
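
    For readers who want to see the density-matrix point spelled out, here is a small numpy sketch for the (normalized) state (|00⟩+|11⟩)/√2; the helper name is purely illustrative:

      import numpy as np

      # An unread computational-basis measurement on qubit 1 leaves qubit 2's
      # reduced density matrix -- and hence all its local statistics -- unchanged.
      psi = np.zeros(4)
      psi[0] = psi[3] = 1 / np.sqrt(2)          # (|00> + |11>)/sqrt(2)
      rho = np.outer(psi, psi)                  # two-qubit density matrix

      def reduced_second_qubit(rho):
          """Partial trace over the first qubit of a 4x4 two-qubit state."""
          return np.einsum('ijik->jk', rho.reshape(2, 2, 2, 2))

      P0 = np.kron(np.diag([1.0, 0.0]), np.eye(2))   # project qubit 1 onto |0>
      P1 = np.kron(np.diag([0.0, 1.0]), np.eye(2))   # project qubit 1 onto |1>
      rho_after = P0 @ rho @ P0 + P1 @ rho @ P1      # mixture of the two branches

      print(np.allclose(reduced_second_qubit(rho),
                        reduced_second_qubit(rho_after)))   # True: no signal sent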

  101. Raoul Ohio Says:

    Modest Mother Nature’s Minions Mothball Particle Masher — Mammal leads the resistance against the wanton disclosure of the secrets of the universe:

    http://www.npr.org/sections/thetwo-way/2016/04/29/476154494/weasel-shuts-down-world-s-most-powerful-particle-collider

    This electrifying event is probably not as effective as inflicting decoherence on QC experiments, but it is more dramatic. All insurmountable experimental difficulties in QC should be attributed to “Schrödinger’s weasel”.

  102. Sniffnoy Says:

    You should have called it a “mote masher”.

  103. Raoul Ohio Says:

    Considering the dynamic duo of Maxwell’s demon and Schrodinger’s weasel suggests an interesting analogy.

    1. Newton formulated the laws of motion in 1686, after which efforts to “beat the system” by building perpetual motion machines could be analyzed. Sadi Carnot formalized the observation that perpetual motion machines do not work into the Second Law of Thermodynamics in 1824. I think the 2LT is an empirical law that can be derived in various axiomatic developments of TD. At any rate, these two events were 138 years apart.

    2. Quantum mechanics can be said to have started around 1925 or so. Quantum computing research, which can be regarded as attempting to beat the system, started around 1980 or so. I assume attempts to build a general-purpose quantum computer started soon afterwards. I have no expertise in quantum computers, but am under the impression that attempts to move past a couple of qubits keep getting foiled by “one darned thing after another”. This is surely what happens when anyone tries to actually build a perpetual motion machine based on any number of ingenious designs.

    3. One can speculate that attempts to build quantum computers might continue to be foiled by Schrödinger’s weasel (running into additional darned things). Perhaps in this case, somewhere around 1980 + 138 = 2118, someone will codify empirical observations into some kind of “second law” indicating why they don’t work. I can’t think of a snappy name for such a law yet.

  104. Sniffnoy Says:

    Some sort of law that says quantum computers can’t work… you mean, like the extended Church-Turing thesis? 😛

  105. Job Says:

    I have no expertise in quantum computers, but am under the impression that attempts to move past a couple of qubits keep getting foiled by “one darned thing after another”. This is surely what happens when anyone tries to actually build a perpetual motion machine based on any number of ingenious designs.

    Even in that sense, the QC is still just a motion machine; whether it runs perpetually depends on the algorithm. It would only be a perpetual motion machine when running Shor’s or Grover’s algorithm (or some other solver for a classically difficult problem).

    At this stage, building a QC that uses quantum interference to solve difficult problems is probably just as challenging as building a QC that uses quantum interference to solve trivial problems.

    You could argue that large scale quantum interference is equally impossible for trivial and complex problem solving, but that’s like declaring that motion is impossible – perpetual or otherwise – so the analogy is not a good fit IMO.

  106. Raoul Ohio Says:

    As an illustration of the “everything is easy once you know it” principle, check out Frank Wilczek’s “Simple guide to entanglement” from Quanta (via Wired):

    http://www.wired.com/2016/05/simple-yes-simple-guide-quantum-entanglement/

  107. Anonymous Programmer Says:

    Scott Aaronson,

    I was ranting mostly about people like Sam Harris, not you. I was confused about your position, which you helpfully clarified 🙂

  108. fred Says:

    Scott #98

    ” I.e., no matter how indeterministic the universe looked, someone could always maintain that everything that will happen in the future is nevertheless “latent” in the universe’s current state.”

    But isn’t it the case that for most people, “determinism” means some “tree-like” structure of spacetime, where cause precedes effect, etc.? We can all easily “visualize” how things would flow, even with MWI thrown into the mix.

    But if we imagine that time travel is a possibility, then things become way harder to visualize. We would have spacetime looping on itself, possibly converging into locally stable patterns or diverging ones, etc., its structure looking more like a graph than a tree. Even if still strictly “deterministic”, it would be much harder to wrap our intuition around such a model.

  109. mjgeddes Says:

    I definitely like your pragmatic, empirical approach to the mind-body problem Scott. I think that’s the right approach to the free-will conundrum. My disagreement with you is that I don’t think quantum mechanics is in any way relevant, and so I have no problem with the idea of multiple copies of minds in the form of programs. I also think the implications of your ideas run deeper than you realize!

    The only key issue for me is that there must be no computational ‘short-cut’ that enables us to predict what a program of a person’s mind does without actually running the program. That is, free will requires that a simulation of a mind must necessarily be conscious (simulation of person = person). If this condition is established, then free will is established. If it’s violated, then free will is an illusion.

    It seems to me that this requirement that we can’t predict what the program of a person’s mind does without running the program actually implies a violation of reductionism!

    My claim is that if, in our explanation of reality, we cannot *in practice* ever strip away references to mental properties in order to predict how people will behave, then *for all practical purposes* I think we have to say that reductionism has failed! For it means that there simply is no way to decompose an agent’s behaviour into physics only.

    Now I know claiming that reductionism fails is a huge claim. Don’t get me wrong. I do think consciousness is physical in the sense that it’s *composed* of physical processes. But I also think that computation is an abstraction on a higher level of organization that has an additional reality *over and above* the purely physical level.

    In short, I think reductionism is just plain flat-out wrong!

    Mathematics is another example of a failure of reductionism. If we take Platonism seriously (in the sense that we regard mathematics as having some sort of objective existence independent of minds), then it is extremely hard to see how mathematics can ‘fit’ into the physical world without denying whole branches of math.

    In short, materialists who believe in math are basically forced to deny the existence of infinite sets. But this is a highly dubious move!

    So I think in mathematics we already have a clear example of the failure of reductionism and materialism. So this makes the claim that it fails for mental properties as well not so crazy after all.

    To summarize all I’ve been saying: there simply seem to exist 3 levels of organization in reality – Mathematics, Mind and Matter, and it’s not possible to ‘reduce’ these 3 levels to 1 level.

    You could say that mathematics is the ‘base level’ (as Tegmark claims), and I think that’s a perfectly valid way of looking at things, but it’s incomplete. Because the physical and mental levels, whilst *composed* of mathematics, are simply not *reducible* to mathematics. In *practice*, we would still have to unavoidably reference mental and physical properties to explain what we observe empirically.

    In short, if we want to understand reality, my claim is that we need to think like programmers instead of physicists and mathematicians! And when we think like programmers and try to find *models* of reality in our code, we find inevitably that *in practice* we simply cannot strip out mathematical and mental properties from our ontologies and domain models.

    So why not carry this line of reasoning through to its logical conclusion and simply state, that for all intents and purposes (empirically), materialism and reductionism are false and there exist 3 separate levels of organization – physical, mental and mathematical in reality?

  110. James Gallagher Says:

    in reply to mjgeddes#109

    We only need to disconnect the past from the present deterministically. So, we may have random “jumps” occurring (on average every ~Planck time), and then, to ensure that the rest of the universe knows what has just happened, “god” rotates the entire state vector via a unitary evolution.

    Now the present and the past interact but are independent.

  111. Michele Amoretti Says:

    Finally, this weekend I watched “Fiddler on the Roof”. I have known “If I were a rich man” since I was a little kid (late 70s) and I play it on piano, but I always postponed watching the movie with different excuses (too long, difficult, it’s a musical, etc.). What a mistake! Nice plot, wonderful images and superb music – I loved it. In general, it is a beautiful representation of the conflict between tradition and modernity. Much like classical vs quantum computing. 🙂

  112. fred Says:

    Scott #97

    “it shows that the thought “if there’s no free will, then I can’t hold others responsible for their actions” is just a pure cognitive illusion, born of the wrong idea that it’s only other people’s freedom that gets suspended in the hypothetical, not your own.”

    I’m actually assuming that I’m very aware of my freedom being an illusion as well! And my own awareness of this is pure “luck of the draw”.

    We can take “others” out of the picture and focus on the way we cope with the consequences of our own choices/actions.

    It’s very common for humans to drag themselves down over decisions they have made in the past, running alternative scenarios in their own mind, “ah, if only I had done x/y/z…”.

    Western religions and The American Dream tell us that we are basically in charge and that we get what we deserve. If we’re not happy/successful, it’s probably because we didn’t try hard enough. The focus is always on the exceptional.

    Traditional eastern philosophies/religions tend to be more on the opposite side: control is an illusion, we are all in the hands of some invisible “fate”, like pieces of cork floating down a river, so acceptance is really the only path to happiness. Everything is totally inter-connected, therefore “giving credit” and “blaming” are meaningless concepts.

    It seems to me that current science is more aligned with the latter interpretation. The world is built on determinism and randomness, so it’s really pointless to drive myself crazy over my previous choices since I’ve never been in control. And even if I could magically “fix” a single arbitrary thing in my past, the consequences of this change would probably lead to an entirely different universe, which could very well be “worse” than the current one (and how do we even “rank” alternative realities from bad to good?).
    Knowing this, agonizing over my own past would be as illogical as going to see a fortune teller.

  113. Steven R Wenner Says:

    Scott, what are your thoughts on Penrose’s arguments that the operations of the brain must be non-computable because we know the truth of Gödel’s sentence (and that this idea is the fundamental insight concerning consciousness, as he discusses at great length in “Shadows of the Mind”)?

  114. Scott Says:

    Steven #113: As it happens, I just got back yesterday from a workshop in Minneapolis where I debated Penrose in person about exactly those questions. For my thoughts about them, please see (e.g.) Quantum Computing Since Democritus lecture 10.5, or Why Philosophers Should Care About Computational Complexity section 4, or The Ghost in the Quantum Turing Machine section 6.

  115. mjgeddes Says:

    I read Penrose’s books many years ago and I thought the math and physics stuff was great, but his views on the mind were just wildly implausible nonsense. He didn’t seem to have any knowledge of cognitive science at all; he was basically just firing off what seemed like random armchair philosophizing.

    The whole point of Church’s and Turing’s work on computation was supposed to be a mechanical description of everything that could reasonably be construed as ‘thought’. In other words, if it’s not computational, it’s not likely to constitute thinking!

    Penrose argues for mathematical Platonism (the view that math has objective truth independent of the mind), and he actually makes good arguments for it. But the correct conclusion to draw from that is that math as a whole is, in some sense, ‘beyond thought’. Math in its entirety must always ‘slip the net’ of mind.

    For if Platonism is correct and math exists in a ‘timeless realm’, as Penrose suggests, then it’s simply logically impossible for us to ever obtain direct knowledge of it within a naturalistic framework. Our brains are finite and time-bound, so we cannot possibly be peering into the Platonic realm. That is pure mysticism.

    As to our ability to ‘see’ the truth of Gödel sentences: we have no such ability! The whole point about a Gödel sentence is that either it or its negation can be added to the system’s axioms without contradiction. Penrose’s idea that we can somehow ‘see’ its truth is flat-out wrong.

    We *could* resolve Gödel-undecidable statements, but not with any certainty. The only way to do so would be to add new axioms, and that process cannot involve certainty, since we wouldn’t be engaging in deductive reasoning. We’d have to use *induction* and/or *abduction* to decide which new axioms to add (guesswork, in other words), and a computer can ‘guess’ just as well, so we have no advantage over computers.
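    To be concrete about the standard facts being invoked here, for a sound, recursively axiomatized theory $T$ extending Peano arithmetic, with Gödel sentence $G_T$:

        \[
        T \nvdash G_T \qquad \text{and} \qquad T \nvdash \neg G_T ,
        \]

    so both $T + G_T$ and $T + \neg G_T$ are consistent theories. Moreover,

        \[
        T \vdash G_T \leftrightarrow \mathrm{Con}(T), \qquad \text{hence} \qquad T + \mathrm{Con}(T) \vdash G_T ,
        \]

    which is exactly the ‘add a new axiom’ move: the new axiom $\mathrm{Con}(T)$ settles $G_T$, but our confidence in it is only as strong as our non-deductive confidence that $T$ really is consistent.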

    As to the empirical evidence from cognitive science, it all contradicts Penrose. The brain is clearly a highly organized, complex system that is not doing anything more exotic than signal processing. And as Robin Hanson has pointed out, effective signal processing actually requires that the brain be kept highly insulated from low-level physics.

    So it’s not at all likely that the explanation for consciousness resides in low-level physics like quantum mechanics; rather, the explanation is to be found in how all the different parts of the brain work together at a high level (i.e., classical computation).

    Quantum computing can’t do anything beyond what a classical computer can do; it can only do some things much faster. There’s no evidence for quantum computing in the brain, and therefore no reason to postulate it. So there is no need to bring quantum mechanics into the discussion; it only complicates your theory of the mind for no good reason.
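    To make the first sentence concrete: a classical computer can always simulate a quantum computation by brute force, just with a state vector that grows exponentially in the number of qubits. A minimal toy sketch in Python/NumPy (simulating the two-qubit circuit that prepares a Bell state):

        import numpy as np

        # Single-qubit Hadamard and identity gates
        H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
        I = np.eye(2)

        # CNOT on two qubits (control = qubit 0, target = qubit 1),
        # basis order |00>, |01>, |10>, |11>
        CNOT = np.array([[1, 0, 0, 0],
                         [0, 1, 0, 0],
                         [0, 0, 0, 1],
                         [0, 0, 1, 0]])

        # Start in |00>, apply H to qubit 0, then CNOT: prepares a Bell state
        state = np.zeros(4, dtype=complex)
        state[0] = 1.0
        state = np.kron(H, I) @ state
        state = CNOT @ state

        print(np.abs(state) ** 2)   # [0.5, 0, 0, 0.5]

    You get exactly the statistics a quantum computer would give you; the only cost is that the simulated state vector has 2^n entries for n qubits, which is where the exponential slowdown comes from.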