Missing the boat

This morning I got an email from Eric Klien of the Lifeboat Foundation, an organization that advocates building a “space ark” as an insurance policy in case out-of-control nanorobots destroy all life on Earth. Klien was inviting me to join the foundation’s scientific advisory board, which includes such notables as Ray Kurzweil. I thought readers of this blog might be interested in my response.

Dear Eric,

I’m honored (and surprised) that you would consider me for your board. But I’m afraid I’m going to decline, for the following reasons:

(1) I’m generally skeptical of predictions about specific future technologies, especially when those predictions are exactly the sort of thing that a science fiction writer would imagine. In particular, I consider the risk of self-replicating nanobots converting our entire planet into gray goo to be a small one.

(2) Once we’re dealing with such unlikely events, I don’t think we can say with confidence what protective measures would be effective. For all we know, any measures we undertake will actually increase the risk of catastrophe. For example, maybe if humanity launches a space ark, that will tip off a hostile alien civilization to our existence. And maybe the Earth will then be besieged by alien warships, which can only be destroyed using gray goo — the development of which was outlawed as a protective measure. I’m not claiming that this scenario is likely, only that I have no idea whether it’s more or less likely than the scenarios you’re considering.

(3) There are several risks to humanity that I consider more pressing than that of nanotechnology run amok. These include climate change, the loss of forests and freshwater supplies, and nuclear proliferation.

Best regards,
Scott Aaronson

17 Responses to “Missing the boat”

  1. Osias Says:

    Didn’t you think they were joking? I mean, isn’t it a prank? And you spent time replying seriously?

  2. Scott Says:

    Osias: Go visit the site, and look at this book by Kurzweil. These guys actually believe this stuff. They’re not joking.

  3. Greg Kuperberg Says:

    No, they aren’t joking. I actually think there is a kernel of truth to the religion of Kurzweil and others, one anticipated early on by von Neumann, Shannon, and others. One day computers may well be smarter than people. When that day comes, computers will inherit the earth. People will have no more control over what happens after that than plants and animals have now.

    But even though we can speculate about these things, they are not part of the foreseeable or plannable future. It makes no sense to try to plan for them, for exactly Scott’s reason. We don’t know whether “The Singularity” will be a good thing or a bad thing, or when it might happen, or what we can do to make it better or worse. All we can do is live in the present.

    (To make an analogy, the advent of sentient people has been good for dogs but very bad for woolly mammoths. But neither dogs nor mammoths were in any position to plan ahead. Indeed, although it may mean nothing to the computers when their time comes, we can at least set a moral precedent by treating plants and animals with respect.)

  4. Dave Bacon Says:

    But just think Scott, you could join the group, and then when they send off their ark, you could be the character in the SciFi novel who is secretly working for the nano-goo which has secretly already taken over the Earth.

  5. Thane Says:

    You’re going to be really sorry when the grey goo hits the fan.

  6. Miss HT Psych Says:

    That sounds like a Douglas Adams book… Life, the Universe and Everything (book 3 in the Hitchhiker’s Guide to the Galaxy series).

  7. aram Says:

    Just to play devil’s advocate, presumably the space ark would also address (3).

    In fact, let’s suppose that we want to maximize the expected number of humans who ever live. Then it’s reasonable to say that this expectation is dominated by the chance of settling other planets, since the expansion scenario contains so many more people that even a tiny probability of it outweighs everything else. Specifically, one possibility is that we all die in the next 1000 years; another is that we survive until the sun burns out in 5 billion years; another is that we expand exponentially through the universe until, say, the last star burns out a trillion years from now.

    I wonder at what point it becomes cost-effective to devote resources to this last problem, instead of, say, trypanosomiasis.

  8. Scott Says:

    “You’re going to be really sorry when the grey goo hits the fan.”

    Well, for a few seconds, before I’m gooified myself.

  9. Scott Says:

    “Just to play devil’s advocate, presumably the space ark would also address (3).”

    Yeah, I thought of that. But I think one needs to consider, not merely the improbability of nanobots gone wild, but also the improbability of actually building a space ark. (Think of the failure of Biosphere 2, or the lack of interest in creating a permanent human settlement in Antarctica, which would be so much easier than in space.)

    Which takes us back to my original point: I don’t know how to compare the probability of a working ark in the foreseeable future to (say) the probability of the would-be ark’s antimatter propulsion system malfunctioning and destroying the Earth during launch.

  10. Anonymous Says:

    “One day computers may well be smarter than people. When that day comes, computers will inherit the earth. People will have no more control over what happens after that than plants and animals have now.”

    Whenever this happens, I don’t see what’s really wrong with it. (I’m not saying Greg thinks it’s wrong.)

    Biological humans were not built to make long journeys into space. If our spirit lives on, and is passed on to some sort of super-intelligent robots, they could go find answers to the rest of the questions (or THE question, for that matter).

    The robots might look at us as we look at chimps now, and that’s OK.

  11. Anonymous Says:

    “And maybe the Earth will then be besieged by alien warships, which can only be destroyed using gray goo…”

    Umm… that sounds like that Voyager episode where they use the nano goo to threaten the advanced aliens with complete and utter destruction if they don’t set them free.

    I think it would be awesome if Scott were on the ark, working on the nano goo’s behalf.

  12. Scott Says:

    “But just think Scott, you could join the group, and then when they send off their ark, you could be the character in the SciFi novel who is secretly working for the nano-goo which has secretly already taken over the Earth.”

    “That gray blob at Earth coordinates? Oh, that’s just a smudge on the telescope.”

  13. Greg Kuperberg Says:

    A point: If the sentient robots of the far future look at us as we look at chimpanzees, that’s not very reassuring. After all, look at how humanity treats chimpanzees.

    Also, while we are in this mode of wild speculation, it is silly to hope to escape the robots by flying away in spaceships. If the robots wanted to, they could easily chase you down, or convince you not to escape in the first place. After all, snakes usually hide from people, but it is futile: their conception of the world is too limited for them to make good decisions in the modern world. They have surrendered their independence to humanity whether it is good for them or not.

  14. Osias Says:

    I’m reading their FAQ. I see they’re serious, but I’m not completely sure they actually believe this stuff… Is it not just another way to get our money?

    So… they intend to build space arks, find other planets, and spread across the Galaxy… Humankind itself is the actual gray goo!

  15. Anonymous Says:

    Although it’s plausible at first glance, I’m not sure it’s true that it makes no sense to plan for the singularity, or that there is nothing we can do to influence whether it turns out well or badly. You may want to look at some articles by Eliezer Yudkowsky:

    http://www.singinst.org/CFAI/
    http://www.singinst.org/LOGI/

    He has actually thought carefully about these issues rather than making off-the-cuff remarks. Considering that the possible consequences of the singularity are incalculable, I think a rational utility-maximizing person is obligated to devote some thought to the matter, even if the probability of the singularity occurring, or of being able to positively influence the outcome, is very small.

  16. Bram Says:

    We already have ‘grey goo’, aka bacteria.

  17. Michael Gogins Says:

    Isn’t the question here whether the singularity is a gate we pass through, or a gate that forbids our entry?

    Anyway, all this talk of technological singularities strikes me as a scientistic (as in scientism) apocalypse, another form of superstition.

    And what’s your take on Gödel’s dilemma?