Archive for the ‘The Fate of Humanity’ Category

97% environmentalist

Sunday, June 7th, 2015

I decided to add my name to a petition by, as of this writing, 81 MIT faculty, calling on MIT to divest its endowment from fossil fuel companies.  (My co-signatories include Noam Chomsky, so I guess there’s something we agree about!)  There’s also a wider petition signed by nearly 3500 MIT students, faculty, and staff, mirroring similar petitions all over the world.

When the organizers asked me for a brief statement about why I signed, I sent them the following:

Signing this petition wasn’t an obvious choice for me, since I’m sensitive to the charge that divestment petitions are just meaningless sanctimony, a way for activists to feel morally pure without either making serious sacrifices or engaging the real complexities of an issue.  In the end, though, that kind of meta-level judgment can’t absolve us of the need to consider each petition on its merits: if we think of a previous crisis for civilization (say, in the late 1930s), then it seems obvious that even symbolic divestment gestures were better than nothing.  What made up my mind was reading the arguments pro and con, and seeing that the organizers of this petition had a clear-eyed understanding of what they were trying to accomplish and why: they know that divestment can’t directly drive down oil companies’ stock prices, but it can powerfully signal to the world a scientific consensus that, if global catastrophe is to be averted, most of the known fossil-fuel reserves need to be left in the ground, and that current valuations of oil, gas, and coal companies fail to reflect that reality.

For some recent prognoses of the climate situation, see (for example) this or this from Vox.  My own sense is that the threat has been systematically understated even by environmentalists, because of the human impulse to shoehorn all news into a hopeful narrative (“but there’s still time!  if we just buy locally-grown produce, everything can be OK!”).  Logically, there’s an obvious tension between the statements:

(a) there was already an urgent need to act decades ago, and

(b) having failed to act then, we can still feasibly avert a disaster now.

And indeed, (b) appears false to me.  We’re probably well into the era where, regardless of what we do or don’t do, some of us will live to see a climate dramatically different from the one in which human civilization developed for the past 10,000 years, at least as different as the last Ice Ages were.

And yet that fact still doesn’t relieve us of moral responsibility.  We can buy more time to prepare, hoping for technological advances in the interim; we can try to bend the curve of CO2 concentration away from the worst futures and toward the merely terrible ones.  Alas, even those steps will require political will that’s unprecedented outside of major wars.  For the capitalist free market (which I’m a big fan of) to work its magic, actual costs first need to get reflected in prices—which probably means massively taxing fossil fuels, to the point where it’s generally cheaper to leave them in the ground and switch to alternatives.  (Lest anyone call me a doctrinaire treehugger, I also support way less regulation of the nuclear industry, to drive down the cost of building the hundreds of new nuclear plants that we’ll probably need.)

These realities have a counterintuitive practical implication that I wish both sides understood better.  Namely, if you share my desperation and terror about this crisis, the urgent desire to do something, then limiting your personal carbon footprint should be very far from your main concern.  Like, it’s great if you can bike to work, and you should keep it up (fresh air and exercise and all).  But I’d say the anti-environmentalists are right that such voluntary steps are luxuries of the privileged, and will accordingly never add up to a hill of beans.  Let me go further: even to conceptualize this problem in terms of personal virtue and blame seems to me like a tragic mistake, one on which the environmentalists and their opponents colluded.  Given the choice, I’d much rather that the readers of this blog flew to all the faraway conferences they wanted, drove gas-guzzling minivans, ate steaks every night, and had ten kids, but then also took some steps that made serious political action to leave most remaining fossil fuels in the ground even ε more likely, ε closer to the middle of our Overton window.  I signed the MIT divestment petition because it seemed to me like such a step, admittedly with an emphasis on the ε.

The End of Suffering?

Monday, June 1st, 2015

A computer science undergrad who reads this blog recently emailed me about an anxiety he’s been feeling connected to the Singularity—not that it will destroy all human life, but rather that it will make life suffering-free and therefore no longer worth living (more Brave New World than Terminator, one might say).

As he puts it:

This probably sounds silly, but I’ve been existentially troubled by certain science fiction predictions for about a year or two, most of them coming from the Ray Kurzweil/Singularity Institute types … What really bothers me is the idea of the “abolition of suffering” as some put it. I just don’t see the point. Getting rid of cancer, premature death, etc., that all sounds great. But death itself? All suffering? At what point do we just sit down and ask ourselves, why not put our brains in a jar, and just activate our pleasure receptors for all eternity? That seems to be the logical conclusion of that line of thinking. If we want to reduce the conscious feeling of pleasure to the release of dopamine in the brain, well, why not?

I guess what I think I’m worried about is having to make the choice to become a cyborg, or to upload my mind to a computer, to live forever, or to never suffer again. I don’t know how I’d answer, given the choice. I enjoy being human, and that includes my suffering. I really don’t want to live forever. I see that as a hedonic treadmill more than anything else. Crazy bioethicists like David Pearce, who want to genetically re-engineer all species on planet Earth to be herbivores, and literally abolish all suffering, just add fuel to my anxiety.

… Do you think we’re any closer to what Kurzweil (or Pearce) predicted (and by that I mean, will we see it in our lifetimes)? I want to stop worrying about these things, but something is preventing me from doing so. Thoughts about the far flung (or near) future are just intrusive for me. And it seems like everywhere I go I’m reminded of my impending fate. Ernst Jünger would encourage me to take up an attitude of amor fati, but I can’t see myself doing that. My father says I’m too young to worry about these things, and that the answer will be clear when I’ve actually lived my life. But I just don’t know. I want to stop caring, more than anything else. It’s gotten to a point where the thoughts keep me up at night.

I don’t know how many readers might have had similar anxieties, but in any case, I thought my reply might be of some interest to others, so with the questioner’s kind permission, I’m reproducing it below.

1. An end to suffering removing the meaning from life? As my grandmother might say, “we should only have such problems”! I believe, alas, that suffering will always be with us, even after a hypothetical technological singularity, because of basic Malthusian logic. I.e., no matter how many resources there are, population will expand exponentially to exploit them and make the resources scarce again, thereby causing fighting, deprivation, and suffering. What’s terrifying about Malthus’s logic is how fully general it is: it applies equally to tenure-track faculty positions, to any extraterrestrial life that might exist in our universe or in any other bounded universe, and to the distant post-Singularity future.

But if, by some miracle, we were able to overcome Malthus and eliminate all suffering, my own inclination would be to say “go for it”! I can easily imagine a life that was well worth living—filled with beauty, humor, play, love, sex, and mathematical and scientific discovery—even though it was devoid of any serious suffering. (We could debate whether the “ideal life” would include occasional setbacks, frustrations, etc., even while agreeing that at any rate, it should certainly be devoid of cancer, poverty, bullying, suicidal depression, and one’s Internet connection going down.)

2. If you want to worry about something, then rather than an end to suffering, I might humbly suggest worrying about a large increase in human suffering within our lifetimes. A few possible culprits: climate change, resurgent religious fundamentalism, large parts of the world running out of fresh water.

3. It’s fun to think about these questions from time to time, to use them to hone our moral intuitions—and I even agree with Scott Alexander that it’s worthwhile to have a small number of smart people think about them full-time for a living.  But I should tell you that, as I wrote in my post The Singularity Is Far, I don’t expect a Singularity in my lifetime or my grandchildren’s lifetimes. Yes, technically, if there’s ever going to be a Singularity, then we’re 10 years closer to it now than we were 10 years ago, but it could still be one hell of a long way away! And yes, I expect that technology will continue to change in my lifetime in amazing ways—not as much as it changed in my grandparents’ lifetimes, probably, but still by a lot—but how to put this? I’m willing to bet any amount of money that when I die, people’s shit will still stink.

Kuperberg’s parable

Sunday, November 23rd, 2014

Recently, longtime friend-of-the-blog Greg Kuperberg wrote a Facebook post that, with Greg’s kind permission, I’m sharing here.


A parable about pseudo-skepticism in response to climate science, and science in general.

Doctor: You ought to stop smoking, among other reasons because smoking causes lung cancer.
Patient: Are you sure? I like to smoke. It also creates jobs.
D: Yes, the science is settled.
P: All right, if the science is settled, can you tell me when I will get lung cancer if I continue to smoke?
D: No, of course not, it’s not that precise.
P: Okay, how many cigarettes can I safely smoke?
D: I can’t tell you that, although I wouldn’t recommend smoking at all.
P: Do you know that I will get lung cancer at all no matter how much I smoke?
D: No, it’s a statistical risk. But smoking also causes heart disease.
P: I certainly know smokers with heart disease, but I also know non-smokers with heart disease. Even if I do get heart disease, would you really know that it’s because I smoke?
D: No, not necessarily; it’s a statistical effect.
P: If it’s statistical, then you do know that correlation is not causation, right?
D: Yes, but you can also see the direct effect of smoking on lungs of smokers in autopsies.
P: Some of whom lived a long time, you already admitted.
D: Yes, but there is a lot of research to back this up.
P: Look, I’m not a research scientist, I’m interested in my case. You have an extended medical record for me with X-rays, CAT scans, blood tests, you name it. You can gather more data about me if you like. Yet you’re hedging everything you have to say.
D: Of course, there’s always more to learn about the human body. But it’s a settled recommendation that smoking is bad for you.
P: It sounds like the science is anything but settled. I’m not interested in hypothetical recommendations. Why don’t you get back to me when you actually know what you’re talking about. In the meantime, I will continue to smoke, because as I said, I enjoy it. And by the way, since you’re so concerned about my health, I believe in healthy skepticism.

Interstellar’s dangling wormholes

Monday, November 10th, 2014

Update (Nov. 15): A third of my confusions addressed by reading Kip Thorne’s book! Details at the bottom of this post.


On Saturday Dana and I saw Interstellar, the sci-fi blockbuster co-produced by the famous theoretical physicist Kip Thorne (who told me about his work on this movie when I met him eight years ago).  We had the rare privilege of seeing the movie on the same day that we got to hang out with a real astronaut, Dan Barry, who flew three shuttle missions and did four spacewalks in the 1990s.  (As the end result of a project that Dan’s roboticist daughter, Jenny Barry, did for my graduate course on quantum complexity theory, I’m now the coauthor with both Barrys on a paper in Physical Review A, about uncomputability in quantum partially-observable Markov decision processes.)

Before talking about the movie, let me say a little about the astronaut.  Besides being an inspirational example of someone who’s achieved more dreams in life than most of us—seeing the curvature of the earth while floating in orbit around it, appearing on Survivor, and publishing a Phys. Rev. A paper—Dan is also a passionate advocate of humanity’s colonizing other worlds.  When I asked him whether there was any future for humans in space, he answered firmly that the only future for humans was in space, and then proceeded to tell me about the technical viability of getting humans to Mars with limited radiation exposure, the abundant water there, the romantic appeal that would inspire people to sign up for the one-way trip, and the extinction risk for any species confined to a single planet.  Hearing all this from someone who’d actually been to space gave Interstellar, with its theme of humans needing to leave Earth to survive (and its subsidiary theme of the death of NASA’s manned space program meaning the death of humanity), a special vividness for me.  Granted, I remain skeptical about several points: the feasibility of a human colony on Mars in the foreseeable future (a self-sufficient human colony on Antarctica, or under the ocean, strikes me as plenty hard enough for the next few centuries); whether a space colony, even if feasible, cracks the list of the top twenty things we ought to be doing to mitigate the risk of human extinction; and whether there’s anything more to be learned, at this point in history, by sending humans to space that couldn’t be learned a hundred times more cheaply by sending robots.  On the other hand, if there is a case for continuing to send humans to space, then I’d say it’s certainly the case that Dan Barry makes.

OK, but enough about the real-life space traveler: what did I think about the movie?  Interstellar is a work of staggering ambition, grappling with some of the grandest themes of which sci-fi is capable: the deterioration of the earth’s climate; the future of life in the universe; the emotional consequences of extreme relativistic time dilation; whether “our” survival would be ensured by hatching human embryos in a faraway world, while sacrificing almost all the humans currently alive; to what extent humans can place the good of the species above family and self; the malleability of space and time; the paradoxes of time travel.  It’s also an imperfect movie, one with many “dangling wormholes” and unbalanced parentheses that are still generating compile-time errors in my brain.  And it’s full of stilted dialogue that made me giggle—particularly when the characters discussed jumping into a black hole to retrieve its “quantum data.”  Also, despite Kip Thorne’s involvement, I didn’t find the movie’s science spectacularly plausible or coherent (more about that below).  On the other hand, if you just wanted a movie that scrupulously obeyed the laws of physics, rather than intelligently probing their implications and limits, you could watch any romantic comedy.  So sure, Interstellar might make you cringe, but if you like science fiction at all, then it will also make you ponder, stare awestruck, and argue with friends for days afterward—and enough of the latter to make it more than worth your while.  Just one tip: if you’re prone to headaches, do not sit near the front of the theater, especially if you’re seeing it in IMAX.

For other science bloggers’ takes, see John Preskill (who was at a meeting with Steven Spielberg to brainstorm the movie in 2006), Sean Carroll, Clifford Johnson, and Peter Woit.

In the rest of this post, I’m going to list the questions about Interstellar that I still don’t understand the answers to (yes, the ones still not answered by the Interstellar FAQ).  No doubt some of these are answered by Thorne’s book The Science of Interstellar, which I’ve ordered (it hasn’t arrived yet), but since my confusions are more about plot than science, I’m guessing that others are not.

SPOILER ALERT: My questions give away basically the entire plot—so if you’re planning to see the movie, please don’t read any further.  After you’ve seen it, though, come back and see if you can help with any of my questions.


1. What’s causing the blight, and the poisoning of the earth’s atmosphere?  The movie is never clear about this.  Is it a freak occurrence, or is it human-caused climate change?  If the latter, then wouldn’t it be worth some effort to try to reverse the damage and salvage the earth, rather than escaping through a wormhole to another galaxy?

2. What’s with the drone?  Who sent it?  Why are Cooper and Murph able to control it with their laptop?  Most important of all, what does it have to do with the rest of the movie?

3. If NASA wanted Cooper that badly—if he was the best pilot they’d ever had and NASA knew it—then why couldn’t they just call him up?  Why did they have to wait for beings from the fifth dimension to send a coded message to his daughter revealing their coordinates?  Once he did show up, did they just kind of decide opportunistically that it would be a good idea to recruit him?

4. What was with Cooper’s crash in his previous NASA career?  If he was their best pilot, how and why did the crash happen?  If this was such a defining, traumatic incident in his life, why is it never brought up for the rest of the movie?

5. How is NASA funded in this dystopian future?  If official ideology holds that the Apollo missions were faked, and that growing crops is the only thing that matters, then why have the craven politicians been secretly funneling what must be trillions of dollars to a shadow-NASA, over a period of fifty years?

6. Why couldn’t NASA have reconnoitered the planets using robots—especially since this is a future where very impressive robots exist?  Yes, yes, I know, Matt Damon explains in the movie that humans remain more versatile than robots, because of their “survival instinct.”  But the crew arrives at the planets missing extremely basic information about them, like whether they’re inhospitable to human life because of freezing temperatures or mile-high tidal waves.  This is information that robotic probes, even of the sort we have today, could have easily provided.

7. Why are the people who scouted out the 12 planets so limited in the data they can send back?  If they can send anything, then why not data that would make Cooper’s mission completely redundant (excepting, of course, the case of the lying Dr. Mann)?  Does the wormhole limit their transmissions to 1 bit per decade or something?

8. Rather than wasting precious decades waiting for Cooper’s mission to return, while (presumably) billions of people die of starvation on a fading earth, wouldn’t it make more sense for NASA to start colonizing the planets now?  They could simply start trial colonies on all the planets, even if they think most of the colonies will fail.  Yes, this plan involves sacrificing individuals for the greater good of humanity, but NASA is already doing that anyway, with its slower, riskier, stupider reconnaissance plan.  The point becomes even stronger when we remember that, in Professor Brand’s mind, the only feasible plan is “Plan B” (the one involving the frozen human embryos).  Frozen embryos are (relatively) cheap: why not just spray them all over the place?  And why wait for “Plan A” to fail before starting that?

9. The movie involves a planet, Miller, that’s so close to the black hole Gargantua, that every hour spent there corresponds to seven years on earth.  There was an amusing exchange on Slate, where Phil Plait made the commonsense point that a planet that deep in a black hole’s gravity well would presumably get ripped apart by tidal forces.  Plait later had to issue an apology, since, in conceiving this movie, Kip Thorne had made sure that Gargantua was a rapidly rotating black hole—and it turns out that the physics of rotating black holes are sufficiently different from those of non-rotating ones to allow such a planet in principle.  Alas, this clever explanation still leaves me unsatisfied.  Physicists, please help: even if such a planet existed, wouldn’t safely landing a spacecraft on it, and getting it out again, require a staggering amount of energy—well beyond what the humans shown in the movie can produce?  (If they could produce that much acceleration and deceleration, then why couldn’t they have traveled from Earth to Saturn in days rather than years?)  If one could land on Miller and then get off of it using the relatively conventional spacecraft shown in the movie, then the amusing thought suggests itself that one could get factor-of-60,000 computational speedups, “free of charge,” by simply leaving one’s computer in space while one spent some time on the planet.  (And indeed, something like that happens in the movie: after Cooper and Anne Hathaway return from Miller, Romilly—the character who stayed behind—has had 23 years to think about physics.)
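(A quick sanity check on that factor, for what it’s worth: seven years is about 7 × 365 × 24 ≈ 61,000 hours, so each hour spent on Miller does indeed correspond to roughly 60,000 hours for a computer left in orbit.)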

10. Why does Cooper decide to go into the black hole?  Surely he could jettison enough weight to escape the black hole’s gravity by sending his capsule into the hole, while he himself shared Anne Hathaway’s capsule?

11. Speaking of which, does Cooper go into the black hole?  I.e., is the “tesseract” something he encounters before or after he crosses the event horizon?  (Or maybe it should be thought of as at the event horizon—like a friendlier version of the AMPS firewall?)

12. Why is Cooper able to send messages back in time—but only by jostling books around, moving the hands of a watch, and creating patterns of dust in one particular room of one particular house?  (Does this have something to do with love and gravity being the only two forces in the universe that transcend space and time?)

13. Why does Cooper desperately send the message “STAY” to his former self?  By this point in the movie, isn’t it clear that staying on Earth means the death of all humans, including Murph?  If Cooper thought that a message could get through at all, then why not a message like: “go, and go directly to Edmunds’ planet, since that’s the best one”?  Also, given that Cooper now exists outside of time, why does he feel such desperate urgency?  Doesn’t he get, like, infinitely many chances?

14. Why is Cooper only able to send “quantum data” that saves the world to the older Murph—the one who lives when (presumably) billions of people are already dying of starvation?  Why can’t he send the “quantum data” back to the 10-year-old Murph, for example?  Even if she can’t yet understand it, surely she could hand it over to Professor Brand.  And even if this plan would be unlikely to succeed: again, Cooper now exists outside of time.  So can’t he just keep going back to the 10-year-old Murph, rattling those books over and over until the message gets through?

15. What exactly is the “quantum data” needed for, anyway?  I gather it has something to do with building a propulsion system that can get the entire human population out of the earth’s gravity well at a reasonable cost?  (Incidentally, what about all the animals?  If the writers of the Old Testament noticed that issue, surely the writers of Interstellar could.)

16. How does Cooper ever make it out of the black hole?  (Maybe it was explained and I missed it: once he entered the black hole, things got extremely confusing.)  Do the fifth-dimensional beings create a new copy of Cooper outside the black hole?  Do they postselect on a branch of the wavefunction where he never entered the black hole in the first place?  Does Murph use the “quantum data” to get him out?

17. At his tearful reunion with the elderly Murph, why is Cooper totally uninterested in meeting his grandchildren and great-grandchildren, who are in the same room?  And why are they uninterested in meeting him?  I mean, seeing Murph again has been Cooper’s overriding motivation during his journey across the universe, and has repeatedly been weighed against the survival of the entire human race, including Murph herself.  But seeing Murph’s kids—his grandkids—isn’t even worth five minutes?

18. Speaking of which, when did Murph ever find time to get married and have kids?  Since she’s such a major character, why don’t we learn anything about this?

19. Also, why is Murph an old woman by the time Cooper gets back?  Yes, Cooper lost a few decades because of the time dilation on Miller’s planet.  I guess he lost the additional decades while entering and leaving Gargantua?  If the five-dimensional beings were able to use their time-travel / causality-warping powers to get Cooper out of the black hole, couldn’t they have re-synced his clock with Murph’s while they were at it?

20. Why does Cooper need to steal a spaceship to get to Anne Hathaway’s planet?  Isn’t Murph, like, the one in charge?  Can’t she order that a spaceship be provided for Cooper?

21. Astute readers will note that I haven’t yet said anything about the movie’s central paradox, the one that dwarfs all the others.  Namely, if humans were going to go extinct without a “wormhole assist” from the humans of the far future, then how were there any humans in the far future to provide the wormhole assist?  And conversely, if the humans of the far future find themselves already existing, then why do they go to the trouble to put the wormhole in their past (which now seems superfluous, except maybe for tidying up the story of their own origins)?  The reason I didn’t ask about this is that I realize it’s supposed to be paradoxical; we’re supposed to feel vertigo thinking about it.  (And also, it’s not entirely unrelated to how PSPACE-complete problems get solved with polynomial resources, in my and John Watrous’s paper on computation with closed timelike curves.)  My problem is a different one: if the fifth-dimensional, far-future humans have the power to mold their own past to make sure everything turned out OK, then what they actually do seems pathetic compared to what they could do.  For example, why don’t they send a coded message to the 21st-century humans (similar to the coded messages that Cooper sends to Murph), telling them how to avoid the blight that destroys their crops?  Or just telling them that Edmunds’ planet is the right one to colonize?  Like the God of theodicy arguments, do the future humans want to use their superpowers only to give us a little boost here and there, while still leaving us a character-forming struggle?  Even if this reticence means that billions of innocent people—ones who had nothing to do with the character-forming struggle—will die horrible deaths?  If so, then I don’t understand these supposedly transcendently-evolved humans any better than I understand the theodical God.


Anyway, rather than ending on that note of cosmic pessimism, I guess I could rejoice that we’re living through what must be the single biggest month in the history of nerd cinema—what with a sci-fi film co-produced by a great theoretical physicist, a Stephen Hawking biopic, and the Alan Turing movie coming out in a few weeks.  I haven’t yet seen the latter two.  But it looks like the time might be ripe to pitch my own decades-old film ideas, like “Radical: The Story of Évariste Galois.”


Update (Nov. 15): I just finished reading Kip Thorne’s interesting book The Science of Interstellar.  I’d say that it addresses (doesn’t always clear up, but at least addresses) 7 of my 21 confusions: 1, 4, 9, 10, 11, 15, and 19.  Briefly:

1. Thorne correctly notes that the movie is vague about what’s causing the blight and the change to the earth’s atmosphere, but he discusses a bunch of possibilities, which are more in the “freak disaster” than the “manmade” category.

4. Cooper’s crash was supposed to have been caused by a gravitational anomaly, as the bulk beings of the far future were figuring out how to communicate with 21st-century humans.  It was another foreshadowing of those bulk beings.

9. Thorne notices the problem of the astronomical amount of energy needed to safely land on Miller’s planet and then get off of it—given that this planet is deep inside the gravity well of the black hole Gargantua, and orbiting Gargantua at a large fraction of the speed of light.  Thorne offers a solution that can only be called creative: namely, while nothing about this was said in the movie (since Christopher Nolan thought it would confuse people), it turns out that the crew accelerated to relativistic speed and then decelerated using a gravitational slingshot around a second, intermediate-mass black hole, which just happened to be in the vicinity of Gargantua at precisely the right times for this.  Thorne again appeals to slingshots around unmentioned but strategically-placed intermediate-mass black holes several more times in the book, to explain other implausible accelerations and decelerations that I hadn’t even noticed.

10. Thorne acknowledges that Cooper didn’t really need to jump into Gargantua in order to jettison the mass of his body (which is trivial compared to the mass of the spacecraft).  Cooper’s real reason for jumping, he says, was the desperate hope that he could somehow find the quantum data there needed to save the humans on Earth, and then somehow get it out of the black hole and back to the humans.  (This being a movie, it of course turns out that Cooper was right.)

11. Yes, Cooper encounters the tesseract while inside the black hole.  Indeed, he hits it while flying into a singularity that’s behind the event horizon, but that isn’t the black hole’s “main” singularity—it’s a different, milder singularity.

15. While this wasn’t made clear in the movie, the purpose of the quantum data was indeed to learn how to manipulate the gravitational anomalies in order to decrease Newton’s constant G in the vicinity of the earth—destroying the earth but also allowing all the humans to escape its gravity with the rocket fuel that’s available.  (Again, nothing said about the poor animals.)

19. Yes, Cooper lost the additional decades while entering Gargantua.  (Furthermore, while Thorne doesn’t discuss this, I guess he must have lost them only when he was still with Anne Hathaway, not after he separates from her.  For otherwise, Anne Hathaway would also be an old woman by the time Cooper reaches her on Edmunds’ planet, contrary to what’s shown in the movie.)

3-sentence summary of what’s happening in Israel and Gaza

Thursday, July 24th, 2014

Hamas is trying to kill as many civilians as it can.

Israel is trying to kill as few civilians as it can.

Neither is succeeding very well.


Update (July 28): Please check out a superb essay by Sam Harris on the Israeli/Palestinian conflict.  While, as Harris says, the essay contains “something to offend everyone”—even me—it also brilliantly articulates many of the points I’ve been trying to make in this comment thread.

See also a good HuffPost article by Ali A. Rizvi, a “Pakistani-Canadian writer, physician, and musician.”

Eigenmorality

Wednesday, June 18th, 2014

This post is about an idea I had around 1997, when I was 16 years old and a freshman computer-science major at Cornell.  Back then, I was extremely impressed by a research project called CLEVER, which one of my professors, Jon Kleinberg, had led while working at IBM Almaden.  The idea was to use the link structure of the web itself to rank which web pages were most important, and therefore which ones should be returned first in a search query.  Specifically, Kleinberg defined “hubs” as pages that linked to lots of “authorities,” and “authorities” as pages that were linked to by lots of “hubs.”  At first glance, this definition seems hopelessly circular, but Kleinberg observed that one can break the circularity by just treating the World Wide Web as a giant directed graph, and doing some linear algebra on its adjacency matrix.  Equivalently, you can imagine an iterative process where each web page starts out with the same hub/authority “starting credits,” but then in each round, the pages distribute their credits among their neighbors, so that the most popular pages get more credits, which they can then, in turn, distribute to their neighbors by linking to them.
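To make the iterative picture concrete, here’s a minimal numpy sketch of that hubs-and-authorities update, run on a made-up three-page “web” (my own toy illustration, not the actual CLEVER code):

import numpy as np

# adj[i][j] = 1 if page i links to page j, for a toy three-page web
adj = np.array([[0, 1, 1],
                [0, 0, 1],
                [1, 0, 0]], dtype=float)

hubs = np.ones(3)    # every page starts with the same hub credits...
auths = np.ones(3)   # ...and the same authority credits
for _ in range(100):
    auths = adj.T @ hubs              # your authority = total hub credit of the pages linking to you
    hubs = adj @ auths                # your hub credit = total authority of the pages you link to
    auths /= np.linalg.norm(auths)    # renormalize so the credits don't blow up
    hubs /= np.linalg.norm(hubs)

print(hubs, auths)   # the equilibrium: the principal singular vectors of the adjacency matrix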

I was also impressed by a similar research project called PageRank, which was proposed later by two guys at Stanford named Sergey Brin and Larry Page.  Brin and Page dispensed with Kleinberg’s bipartite hubs-and-authorities structure in favor of a more uniform structure, and made some other changes, but otherwise their idea was very similar.  At the time, of course, I didn’t know that CLEVER was going to languish at IBM, while PageRank (renamed Google) was going to expand to roughly the size of the entire world’s economy.

In any case, the question I asked myself about CLEVER/PageRank was not the one that, maybe in retrospect, I should have asked: namely, “how can I leverage the fact that I know the importance of this idea before most people do, in order to make millions of dollars?”

Instead I asked myself: “what other ‘vicious circles’ in science and philosophy could one unravel using the same linear-algebra trick that CLEVER and PageRank exploit?”  After all, CLEVER and PageRank were both founded on what looked like a hopelessly circular intuition: “a web page is important if other important web pages link to it.”  Yet they both managed to use math to defeat the circularity.  All you had to do was find an “importance equilibrium,” in which your assignment of “importance” to each web page was stable under a certain linear map.  And such an equilibrium could be shown to exist—indeed, to exist uniquely.

Searching for other circular notions to elucidate using linear algebra, I hit on morality.  Philosophers from Socrates on, I was vaguely aware, had struggled to define what makes a person “moral” or “virtuous,” without tacitly presupposing the answer.  Well, it seemed to me that, as a first attempt, one could do a lot worse than the following:

A moral person is someone who cooperates with other moral people, and who refuses to cooperate with immoral people.

Obviously one can quibble with this definition on numerous grounds: for example, what exactly does it mean to “cooperate,” and which other people are relevant here?  If you don’t donate money to starving children in Africa, have you implicitly “refused to cooperate” with them?  What’s the relative importance of cooperating with good people and withholding cooperation with bad people, of kindness and justice?  Is there a duty not to cooperate with bad people, or merely the lack of a duty to cooperate with them?  Should we consider intent, or only outcomes?  Surely we shouldn’t hold someone accountable for sheltering a burglar, if they didn’t know about the burgling?  Also, should we compute your “total morality” by simply summing over your interactions with everyone else in your community?  If so, then can a career’s worth of lifesaving surgeries numerically overwhelm the badness of murdering a single child?

For now, I want you to set all of these important questions aside, and just focus on the fact that the definition doesn’t even seem to work on its own terms, because of circularity.  How can we possibly know which people are moral (and hence worthy of our cooperation), and which ones immoral (and hence unworthy), without presupposing the very thing that we seek to define?

Ah, I thought—this is precisely where linear algebra can come to the rescue!  Just like in CLEVER or PageRank, we can begin by giving everyone in the community an equal number of “morality starting credits.”  Then we can apply an iterative update rule, where each person A can gain morality credits by cooperating with each other person B, and A gains more credits the more credits B has already.  We apply the rule over and over, until the number of morality credits per person converges to an equilibrium.  (Or, of course, we can shortcut the process by simply finding the principal eigenvector of the “cooperation matrix,” using whatever algorithm we like.)  We then have our objective measure of morality for each individual, solving a 2400-year-old open problem in philosophy.
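In numpy, the whole computation is only a few lines; here’s a toy sketch with a made-up three-person “community” (written just to spell out the linear algebra, not as a serious implementation):

import numpy as np

# C[i][j] = how much person i cooperates with person j (made-up numbers)
C = np.array([[0.0, 1.0, 0.2],
              [0.9, 0.0, 0.1],
              [0.8, 0.7, 0.0]])

m = np.ones(3) / 3       # equal "morality starting credits"
for _ in range(1000):
    m = C @ m            # i's new credits = sum over j of C[i][j] times j's current credits
    m /= m.sum()         # renormalize each round

print(m)                 # the equilibrium: the principal eigenvector of the cooperation matrix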

The next step, I figured, would be to hack together some code that computed this “eigenmorality” metric, and then see what happened when I ran the code to measure the morality of each participant in a simulated society.  What would happen?  Would the results conform to my pre-theoretic intuitions about what sort of behavior was moral and what wasn’t?  If not, then would watching the simulation give me new ideas about how to improve the morality metric?  Or would it be my intuitions themselves that would change?

Unfortunately, I never got around to the “coding it up” part—there’s a reason why I became a theorist!  The eigenmorality idea went onto my back burner, where it stayed for the next 16 years: 16 years in which our world descended ever further into darkness, lacking a principled way to quantify morality.  But finally, this year, two separate things have happened on the eigenmorality front, and that’s why I’m blogging about it now.

Eigenjesus and Eigenmoses

The first thing that’s happened is that Tyler Singer-Clark, my superb former undergraduate advisee, did code up eigenmorality metrics and test them out on a simulated society, for his MIT senior thesis project.  You can read Tyler’s 12-page report here—it’s a fun, enjoyable, thought-provoking first research paper, one that I wholeheartedly recommend.  Or, if you’d like to experiment yourself with the Python code, you can download it here from github.  (Of course, all opinions expressed in this post are mine alone, not necessarily Tyler’s.)

Briefly, Tyler examined what eigenmorality has to say in the setting of an Iterated Prisoner’s Dilemma (IPD) tournament.  The Iterated Prisoner’s Dilemma is the famous game in which two players meet repeatedly, and in each turn can either “Cooperate” or “Defect.”  The absolute best thing, from your perspective, is if you defect while your partner cooperates.  But you’re also pretty happy if you both cooperate.  You’re less happy if you both defect, while the worst (from your standpoint) is if you cooperate while your partner defects.  At each turn, when contemplating what to do, you have the entire previous history of your interaction with this partner available to you.  And thus, for example, you can decide to “punish” your partner for past defections, “reward” her for past cooperations, or “try to take advantage” by unilaterally defecting and seeing what happens.  At each turn, the game has some small constant probability of ending—so you know approximately how many times you’ll meet this partner in the future, but you don’t know exactly when the last turn will be.  Your score, in the game, is then the sum-total of your score over all turns and all partners (where each player meets each other player once).

In the late 1970s, as recounted in his classic work The Evolution of Cooperation, Robert Axelrod invited people all over the world to submit computer programs for playing this game, which were then pitted against each other in the world’s first serious IPD tournament.  And, in a tale that’s been retold in hundreds of popular books, while many people submitted complicated programs that used machine learning, etc. to try to suss out their opponents, the program that won—hands-down, repeatedly—was TIT_FOR_TAT, a few lines of code submitted by the psychologist Anatol Rapoport to implement an ancient moral maxim.  TIT_FOR_TAT starts out by cooperating; thereafter, it simply does whatever its opponent did in the last move, swiftly rewarding every cooperation and punishing every defection, and ignoring everything earlier in the history.  In the decades since Axelrod, running Iterated Prisoner’s Dilemma tournaments has become a minor industry, with countless variations explored (for example, “evolutionary” versions, and versions allowing side-communication between the players), countless new strategies invented, and countless papers published.  To make a long story short, TIT_FOR_TAT continues to do quite well across a wide range of environments, but depending on the mix of players present, other strategies can sometimes beat TIT_FOR_TAT.  (As one example, if there’s a sizable minority of colluding players, who recognize each other by cooperating and defecting in a prearranged sequence, then those players can destroy TIT_FOR_TAT and other “simple” strategies, by cooperating with one another while defecting against everyone else.)
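For readers who’ve never seen the game spelled out in code, here’s a bare-bones Python sketch of a single IPD match and of TIT_FOR_TAT (the payoff numbers are the standard ones, but everything else here is my own illustration, not Axelrod’s or Tyler’s code):

import random

# Standard payoffs: (your points, partner's points) for each pair of moves
PAYOFF = {('C', 'C'): (3, 3), ('C', 'D'): (0, 5),
          ('D', 'C'): (5, 0), ('D', 'D'): (1, 1)}

def tit_for_tat(my_moves, their_moves):
    return 'C' if not their_moves else their_moves[-1]   # cooperate first, then copy the partner's last move

def always_defect(my_moves, their_moves):
    return 'D'

def play_match(strategy1, strategy2, end_prob=0.01):
    h1, h2, score1, score2 = [], [], 0, 0
    while True:
        m1, m2 = strategy1(h1, h2), strategy2(h2, h1)
        p1, p2 = PAYOFF[(m1, m2)]
        score1, score2 = score1 + p1, score2 + p2
        h1.append(m1); h2.append(m2)
        if random.random() < end_prob:   # small constant chance that the game ends each turn
            return score1, score2

print(play_match(tit_for_tat, always_defect))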

Anyway, Tyler sets up and runs a fairly standard IPD tournament, with a mix of strategies that includes TIT_FOR_TAT, TIT_FOR_TWO_TATS, other TIT_FOR_TAT variations, PAVLOV, FRIEDMAN, EATHERLY, CHAMPION (see the paper for details), and degenerate strategies like always defecting, always cooperating, and playing randomly.  However, Tyler then asks an unusual question about the IPD tournament: namely, purely on the basis of the cooperate/defect sequences, which players should we judge to have acted morally toward their partners?

It might be objected that the players didn’t “know” they were going to be graded on morality: as far as they knew, they were just trying to maximize their individual utilities.  The trouble with that objection is that the players didn’t “know” they were trying to maximize their utilities either!  The players are bots, which do whatever their code tells them to do.  So in some sense, utility—no less than morality—is “merely an interpretation” that we impose on the raw cooperate/defect sequences!  There’s nothing to stop us from imposing some other interpretation (say, one that explicitly tries to measure morality) and seeing what happens.

In an attempt to measure the players’ morality, Tyler uses the eigenmorality idea from before.  The extent to which player A “cooperates” with player B is simply measured by the percentage of turns on which A cooperates with B.  (One acknowledged limitation of this work is that, when two players both defect, there’s no attempt to take into account “who started it,” and to judge the aggressor more harshly than the retaliator—or to incorporate time in any other way.)  This then gives us a “cooperation matrix,” whose (i,j) entry records the total amount of niceness that player i displayed to player j.  Diagonalizing that matrix, and taking the eigenvector corresponding to its largest eigenvalue (the principal eigenvector), then gives us our morality scores.

Now, there’s a very interesting ambiguity in what I said above.  Namely, should we define the “niceness scores” to lie in [0,1] (so that the lowest, meanest possible score is 0), or in [-1,1] (so that it’s possible to have negative niceness)?  This might sound like a triviality, but in our setting, it’s precisely the mathematical reflection of one of the philosophical conundrums I mentioned earlier.  The conundrum can be stated as follows: is your morality a monotone function of your niceness?  We all agree, presumably, that it’s better to be nice to Gandhi than to be nice to Hitler.  But do you have a positive obligation to be not-nice to Hitler: to make him suffer because he made others suffer?  Or, OK, how about not Hitler, but someone who’s somewhat bad?  Consider, for example, a woman who falls in love with, and marries, an unrepentant armed robber (with full knowledge of who he is, and with other options available to her).  Is the woman morally praiseworthy for loving her husband despite his bad behavior?  Or is she blameworthy because, by rewarding his behavior with her love, she helps to enable it?

To capture two possible extremes of opinion about such questions, Tyler and I defined two different morality metrics, which we called … wait for it … eigenmoses and eigenjesus.  Eigenmoses has the niceness scores in [-1,1], which means that you’re actively rewarded for punishing evildoers: that is, for defecting against those who defect against many moral players.  Eigenjesus, by contrast, has the niceness scores in [0,1], which means that you always do at least as well by “turning the other cheek” and cooperating.  (Though note that, even with eigenjesus, you get more morality credits by cooperating with moral players than by cooperating with immoral ones.)
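In code, the difference between the two metrics comes down to the range in which the niceness scores live before the eigenvector is taken.  Here’s one way it might look (my own sketch with made-up numbers and a simple linear rescaling, not Tyler’s actual implementation):

import numpy as np

# coop_frac[i][j] = fraction of turns on which player i cooperated with player j
# (made-up numbers: players 0 and 1 are nice to each other and cruel to player 2)
coop_frac = np.array([[0.0, 1.0, 0.1],
                      [1.0, 0.0, 0.1],
                      [0.9, 0.9, 0.0]])

def principal_eigenvector(M):
    vals, vecs = np.linalg.eig(M)
    v = vecs[:, np.argmax(vals.real)].real   # eigenvector with the largest eigenvalue
    return v if v.sum() >= 0 else -v         # the overall sign and scale are arbitrary

eigenjesus = principal_eigenvector(coop_frac)           # niceness scores in [0,1]
eigenmoses = principal_eigenvector(2 * coop_frac - 1)   # niceness rescaled to [-1,1]
print(eigenjesus, eigenmoses)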

This is probably a good place to mention a second limitation of Tyler’s current study.  Namely, with the current system, there’s no direct way for a player to find out how its partner has been behaving toward third parties.  The only information that A gets about the goodness or evilness of player B, comes from A and B’s direct interaction.  Ideally, one would like to design bots that take into account, not only the other bots’ behavior toward them, but the other bots’ behavior toward each other.  So for example, even if someone is unfailingly nice to you, if that person is an asshole to everyone else, then the eigenmoses moral code would demand that you return the person’s cooperation with icy defection.  Conversely, even if Gandhi is mean and hateful to you, you would still be morally obliged (interestingly, on both the eigenmoses and eigenjesus codes) to be nice to him, because of the amount of good he does for everyone else.

Anyway, you can read Tyler’s paper if you want to see the results of computing the eigenmoses and eigenjesus scores for a diverse population of bots.  Briefly, the results accord pretty well with intuition.  When we look at eigenjesus scores, the all-cooperate bot comes out on top and the all-defect bot on the bottom (as is mathematically necessary), with TIT_FOR_TAT somewhere in the middle, and generous versions of TIT_FOR_TAT higher up.  When we look at eigenmoses, by contrast, TIT_FOR_TWO_TATS comes out on top, with TIT_FOR_TAT in sixth place, and the all-cooperate bot scoring below the median.  Interestingly, once again, the all-defect bot gets the lowest score (though in this case, it wasn’t mathematically necessary).

Even though the measures acquit themselves well in this particular tournament, it’s admittedly easy to construct scenarios where the prescriptions of eigenjesus and eigenmoses alike violently diverge from most people’s moral intuitions.  We’ve already touched on a few such scenarios above (for example, are you really morally obligated to lick the boots of someone who kicks you, just because that person is a saint to everyone other than you?).  Another type of scenario involves minorities.  Imagine, for instance, that 98% of the players are unfailingly nice to each other, but unfailingly cruel to the remaining 2% (who they can recognize, let’s say, by their long noses or darker skin—some trivial feature like that).  Meanwhile, the put-upon 2% return the favor by being nice to each other and mean to the 98%.  Who, in this scenario, is moral, and who’s immoral?  The mathematical verdict of both eigenmoses and eigenjesus is unequivocal: the 98% are almost perfectly good, while the 2% are almost perfectly evil.  After all, the 98% are nice to almost everyone, while the 2% are mean to those who are nice to almost everyone, and nice only to a tiny minority who are mean to almost everyone.  Of course, for much of human history, this is precisely how morality worked, in many people’s minds.  But I dare say it’s a result that would make moderns uncomfortable.

In summary, it seems clear to me that neither eigenmoses nor eigenjesus correctly captures our intuitions about morality, any more than Φ captures our intuitions about consciousness.  But as they say, I think there’s plenty of scope here for further research: for coming up with new mathematical measures that sharpen our intuitive judgments about morality, and (if we like) testing those measures out using IPD tournaments.  It also seems to me that there’s something fundamentally right about the eigenvector idea: all else being equal, we’d like to say, being nice to others is good, except that aiding and abetting evildoers is not good, and the way we can recognize the evildoers in our midst is that they’re not nice to others—except that, if the people who someone isn’t nice to are themselves evildoers, then the person might again be good, and so on.  The only way to cut off the infinite regress, it seems, is to demand some sort of “reflective equilibrium” in our moral judgments, and that’s precisely what eigenmorality tries to capture.  On the other hand, no such idea can ever make moral debate obsolete—if for no other reason than that we still need to decide which specific eigenmorality metric to use, and that choice is itself a moral judgment.

Scooped by Plato

Which brings me, finally, to the second new thing that’s happened this year on the eigenmorality front.  Namely, Rebecca Newberger Goldstein—who’s far and away my favorite contemporary novelist—published a charming new book entitled Plato at the Googleplex: Why Philosophy Won’t Go Away.  Here she imagines that Plato has reappeared in present-day America (she doesn’t bother to explain how), where he’s taught himself English and the basics of modern science, learned how to use the Internet, and otherwise gotten himself up to speed.  The book recounts Plato’s dialogues with various modern interlocutors, as he volunteers to have his brain scanned, guest-writes a relationship advice column, participates in a panel discussion on child-rearing, and gets interviewed on cable news by “Roy McCoy” (a thinly veiled Bill O’Reilly).  Often, Goldstein has Plato answer the moderns’ questions using direct quotes from the Timaeus, the Gorgias, the Meno, etc., which makes her Plato into a very intelligent sort of chatbot.  This is a genre that’s not often seriously attempted, and that I’d love to read more of (possible subjects: Shakespeare, Galileo, Jefferson, Lincoln, Einstein, Turing…).

Anyway, my favorite episode in the book is the first, eponymous one, where Plato visits the Googleplex in Mountain View.  While eating lunch in one of the many free cafeterias, Plato is cornered by a somewhat self-important, dreadlocked coder named Marcus, who tries to convince Plato that Google PageRank has finally solved the problem agonized over in the Republic, of how to define justice.  By using the Internet, we can simply crowd-source the answer, Marcus declares: get millions of people to render moral judgments on every conceivable question, and also moral judgments on each other’s judgments.  Then declare as most morally reliable those judgments that are judged most reliable by the people who are themselves the most morally reliable.  The circularity, as usual, is broken by taking the principal eigenvector of the graph of moral judgments (Goldstein doesn’t have Marcus put it that way, but it’s what she means).

Not surprisingly, Plato is skeptical.  Through Socratic questioning—the method he learned from the horse’s mouth—Plato manages to make Marcus realize that, in the very act of choosing which of several variants of PageRank to use in our crowd-sourced justice engine, we’ll implicitly be making moral choices already.  And therefore, we can’t use PageRank, or anything like it, as the ultimate ground of morality.

Whereas I imagined that the raw data for an “eigenmorality” metric would consist of numerical measures of how nice people had been to each other, Goldstein imagines the raw data to consist of abstract moral judgments, and of judgments about judgments.  Also, whereas the output of my kind of metric would be a measure of the “goodness” of each individual person, the outputs of hers would presumably be verdicts about general moral and political questions.  But, much like with CLEVER versus PageRank, it’s obvious that the ideas are similar—and that I should credit Goldstein with independently discovering my nerdy 16-year-old vision, in order to put it in the mouth of a nerdy character in her story.

As I said before, I agree with Goldstein’s Plato that eigenmorality can’t serve as the ultimate ground of morality.  But that’s a bit like saying that Google rank can’t serve as the ultimate ground of importance, because even just to design and evaluate their ranking algorithms, Google’s engineers must have some prior notion of “importance” to serve as a standard.  That’s true, of course, but it omits to mention that Google rank is still useful—useful enough to have changed civilization in the space of a few years.  Goldstein’s book has the wonderful property that even the ideas she gives to her secondary characters, the ones who serve as foils to Plato, are sometimes interesting enough to deserve book-length treatments of their own, and crowd-sourced morality strikes me as a perfect example.

In the two previous comment threads, we got into a discussion of anthropogenic climate change, and of my own preferred way to address it and related threats to our civilization’s survival, which is simply to tax every economic activity at a rate commensurate with the environmental damage that it does, and use the funds collected for cleanup, mitigation, and research into alternatives.  (Obviously, such ideas are nonstarters in the current political climate of the US, but I’m not talking here about what’s feasible, only about what’s necessary.)  As several commenters pointed out, my view raises an obvious question: who is to decide how much “damage” each activity causes, and thus how much it should be taxed?  Of course, this is merely a special case of the more general question: who is to decide on any question of public policy whatsoever?

For the past few centuries, our main method for answering such questions—in those parts of the world where a king or dictator or Politburo doesn’t decree the answer—has been representative democracy.  Democracy is, arguably, the best decision-making method that our sorry species has ever managed to put into practice, at least outside the hard sciences.  But in my view, representative democracy is now failing spectacularly at possibly the single most important problem it’s ever faced: namely, that of leaving our descendants a livable planet.  Even though, by and large, reasonable people mostly agree about what needs to be done—weaning ourselves off fossil fuels (especially the dirtier ones), switching to solar, wind, and nuclear, planting forests and stopping deforestation, etc.—after decades of debate we’re still taking only limping, token steps toward those goals, and in many cases we’re moving rapidly in the opposite direction.  Those who, for financial, theological, or ideological reasons, deny the very existence of a problem have proved that, despite being a minority, they can push hard enough on the levers of democracy to prevent anything meaningful from happening.

So what’s the solution?  To put the world under the thumb of an environmentalist dictator?  Absolutely not.  In all of history, I don’t think any dictatorial system has ever shown itself robust against takeover by murderous tyrants (people who probably aren’t too keen on alternative energy either).  The problem, I think, is epistemological.  Within physics and chemistry and climatology, the people who think anthropogenic climate change exists and is a serious problem have won the argument—but the news of their intellectual victory hasn’t yet spread to the opinion page of the Wall Street Journal, or cable news, or the US Congress, or the minds of enough people to tip the scales of history.  Because our domination of the earth’s climate and biosphere is new and unfamiliar; because the evidence for rapid climate change is complicated and statistical; because the worst effects are still remote from us in time, space, or both; because the sacrifices needed to address the problem are real—for all of these reasons, the deniers have learned that they can subvert the Popperian process by which bad explanations are discarded and good explanations win.  If you just repeat debunked ideas through a loud enough megaphone, it turns out, many onlookers won’t be able to tell the difference between you and the people who have genuine knowledge—or they will eventually, but not until it’s too late.  If you have a few million dollars, you can even set up your own parody of the scientific process: your own phony experts, in their own phony think tanks, with their own phony publications, giving each other legitimacy by citing each other.  (Of course, all this is a problem for many fields, not just climate change.  Climate is special only because there, the future of life on earth might literally hinge on our ability to get epistemology right.)

Yet for all that, I’m an optimist—sort of.  For it seems to me that the Internet has given us new tools with which to try to fix our collective epistemology, without giving up on a democratic society.  Google, Wikipedia, Quora, and so forth have already improved our situation, if only by a little.  We could improve it a lot more.  Consider, for example, the following attempted definitions:

A trustworthy source of information is one that’s considered trustworthy by many sources who are themselves trustworthy (on the same topic or on closely related topics).  The current scientific consensus, on any given issue, is what the trustworthy sources consider to be the consensus.  A good decision-maker is someone who’s considered to be a good decision-maker by many other good decision-makers.

At first glance, the above definitions sound ludicrously circular—even Orwellian—but we now know that all that’s needed to unravel the circularity is a principal eigenvector computation on the matrix of trust.  And the computation of such an eigenvector need be no more “Orwellian” than … well, Google.  If enough people want it, then we have the tools today to put flesh on these definitions, to give them agency: to build a crowd-sourced deliberative democracy, one that “usually just works” in much the same way Google usually just works.
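
To make that more concrete, here is a minimal sketch, in Python with NumPy, of how a PageRank-style power iteration could unravel the circularity on a toy trust matrix.  The agents, the links among them, and the damping factor are all invented for illustration; this is not a claim about which eigentrust metric an actual eigendemocracy should use.

    import numpy as np

    # Toy "who vouches for whom" matrix: T[i, j] = 1 if agent i trusts agent j.
    # The agents and their links are made up purely for illustration.
    agents = ["alice", "bob", "carol", "dora"]
    T = np.array([
        [0, 1, 1, 0],   # alice trusts bob and carol
        [1, 0, 1, 0],   # bob trusts alice and carol
        [1, 1, 0, 0],   # carol trusts alice and bob
        [0, 0, 0, 0],   # dora trusts no one (and no one trusts dora)
    ], dtype=float)

    def eigentrust_scores(T, damping=0.85, iters=100):
        """Power iteration: an agent is trustworthy to the extent that
        trustworthy agents vouch for it."""
        n = T.shape[0]
        # Normalize rows so each agent's outgoing trust sums to 1;
        # agents who trust no one are treated as trusting everyone equally.
        row_sums = T.sum(axis=1, keepdims=True)
        safe = np.where(row_sums == 0, 1.0, row_sums)
        M = np.where(row_sums > 0, T / safe, 1.0 / n)
        scores = np.full(n, 1.0 / n)
        for _ in range(iters):
            scores = (1 - damping) / n + damping * (scores @ M)
        return scores / scores.sum()

    for name, score in zip(agents, eigentrust_scores(T)):
        print(f"{name}: {score:.3f}")

Under these made-up numbers, the three mutually-vouching agents end up with nearly all the trust, while the isolated agent keeps only the small baseline share contributed by the damping term.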

Now, would those with axes to grind try to subvert such a system the instant it went online?  Certainly.  For example, I assume that millions of people would rate Conservapedia as a more trustworthy source than Wikipedia—and would rate other people who had done so as, themselves, trustworthy sources, while rating as untrustworthy anyone who called Conservapedia untrustworthy.  So there would arise a parallel world of trust and consensus and “expertise,” mutually-reinforcing yet nearly disjoint from the world of the real.  But here’s the thing: anyone would be able to see, with the click of a mouse, the extent to which this parallel world had diverged from the real one.  They’d see that there was a huge, central connected component in the trust graph—including almost all of the Nobel laureates, physicists from the US nuclear weapons labs, military planners, actuaries, other hardheaded people—who all accepted the reality of humans warming the planet, and only tiny, isolated tendrils of trust reaching from that component into the component of Rush Limbaugh and James Inhofe.  The deniers and their think-tanks would be exposed to the sun; they’d lose their thin cover of legitimacy.  It should go without saying that the same would happen to various charlatans on the left, and should go without saying that I’d cheer that outcome as well.
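
As a toy illustration of what "seeing it with the click of a mouse" might involve, here is a short sketch that finds the connected components of a made-up mutual-trust graph.  In any real system the edges would be weighted and one would look for sparse cuts (thin tendrils) rather than literal disconnection; the node names here are invented stand-ins, not data about anyone.

    from collections import deque

    # Undirected toy graph: an edge means two sources vouch for each other.
    edges = [
        ("climatologists", "national_academies"),
        ("national_academies", "nuclear_labs"),
        ("nuclear_labs", "actuaries"),
        ("actuaries", "climatologists"),
        ("think_tank_A", "think_tank_B"),
        ("think_tank_B", "talk_radio"),
        ("talk_radio", "think_tank_A"),
    ]

    graph = {}
    for u, v in edges:
        graph.setdefault(u, set()).add(v)
        graph.setdefault(v, set()).add(u)

    def connected_components(graph):
        """Return the components of the trust graph, largest first."""
        seen, components = set(), []
        for start in graph:
            if start in seen:
                continue
            comp, queue = set(), deque([start])
            while queue:
                node = queue.popleft()
                if node in comp:
                    continue
                comp.add(node)
                queue.extend(graph[node] - comp)
            seen |= comp
            components.append(comp)
        return sorted(components, key=len, reverse=True)

    for component in connected_components(graph):
        print(sorted(component))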

Some will object: but people who believe in pseudosciences—whether creationists or anti-vaxxers or climate change deniers—already know they’re in a minority!  And far from being worried about it, they treat it as a badge of honor.  They think they’re Galileo, that their belief in spite of a scientific consensus makes them heroes, while those in the giant central component of the trust graph are merely slavish followers.

I admit all this.  But the point of an eigentrust system wouldn’t be to convince everyone.  As long as I’m fantasizing, the point would be that, once people’s individual decisions did give rise to a giant connected trust component, the recommendations of that component could acquire the force of law.  The formation of the giant component would be the signal that there’s now enough of a consensus to warrant action, despite the continuing existence of a vocal dissenting minority—that the minority has, in effect, withdrawn itself from the main conversation and retreated into a different discourse.  Conversely, it’s essential to note, if there were a dissenting minority, but that minority had strong trunks of topic-relevant trust pointing toward it from the main component (for example, because the minority contained a large fraction of the experts in the relevant field), then the minority’s objections might be enough to veto action, even if it was numerically small.  This is still democracy; it’s just democracy enhanced by linear algebra.
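
To show the shape of that rule (and only the shape), here is a toy sketch of "consensus with an expert veto" layered on top of eigentrust scores.  The thresholds, the agents, their scores, and their stances are all made up; calibrating them is exactly the sort of thing a real eigendemocracy would have to get right.

    def decide(trust_scores, stance, enact_threshold=0.67, veto_threshold=0.25):
        """Toy decision rule on top of topic-relevant eigentrust scores.

        trust_scores: dict mapping agent -> eigentrust score (assumed to sum to 1).
        stance:       dict mapping agent -> "for" or "against" the proposed action.
        Enact if the 'for' side holds enough trust mass, unless the dissenting
        side holds enough expert trust to veto."""
        mass_for = sum(s for a, s in trust_scores.items() if stance.get(a) == "for")
        mass_against = sum(s for a, s in trust_scores.items() if stance.get(a) == "against")
        if mass_against >= veto_threshold:
            return "no action: a trusted minority objects"
        if mass_for >= enact_threshold:
            return "enact"
        return "no consensus yet"

    # Invented example: the experts mostly in favor, a small low-trust dissent.
    scores = {"climatologists": 0.55, "actuaries": 0.25, "think_tank": 0.05, "pundits": 0.15}
    stance = {"climatologists": "for", "actuaries": "for", "think_tank": "against", "pundits": "against"}
    print(decide(scores, stance))   # -> "enact" with these made-up numbers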

Other people will object that, while we should use the Internet to improve the democratic process, the idea we’re looking for is not eigentrust or eigenmorality but rather prediction markets.  Such markets would allow us to, as my friend Robin Hanson advocates, “vote on values but bet on beliefs.”  For example, a country could vote for the conditional policy that, if business-as-usual is predicted to cause sea levels to rise at least 4 meters by the year 2200, then an aggressive emissions reduction plan will be triggered, but not otherwise.  But as for the prediction itself, that would be left to a futures market: a place where, unlike with voting, there’s a serious penalty for being wrong, namely losing your shirt.  If the futures market assigned the prediction at least such-and-such a probability, then the policy tied to that prediction would become law.
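
In miniature, and with every number invented, the "vote on values, bet on beliefs" mechanism might be sketched like this:

    def conditional_policy(market_probability, trigger_probability=0.5):
        """The electorate votes for a conditional rule; a prediction market
        supplies the probability that decides whether the rule fires."""
        if market_probability >= trigger_probability:
            return "aggressive emissions-reduction plan becomes law"
        return "business as usual continues"

    # Suppose the market priced "sea level rises at least 4 meters by 2200
    # under business as usual" at 0.62 (a made-up figure):
    print(conditional_policy(market_probability=0.62))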

I actually like the idea of prediction markets—I have ever since I heard about them—but I consider them limited in scope.  My example above, involving the year 2200, gives a hint as to why.  Prediction markets are great whenever our disagreements are over something that will be settled one way or the other, to everyone’s assent, in the near future (e.g., who will win the World Cup, or next year’s GDP).  But most of our important disagreements aren’t like that: they’re over which direction society should move in, which issues to care about, which statistical indicators are even the right ones to measure a country’s health.  Now, those broader questions can sometimes be settled empirically, in a sense: they can be settled by the overwhelming judgment of history, as the slavery, women’s suffrage, and fascism debates were.  But that kind of empirical confirmation typically takes far too long to arrive for anyone to set up a decent betting market around it.  And for the non-bettable questions, a carefully-crafted eigendemocracy really is the best system I can think of.

Again, I think Rebecca Goldstein’s Plato is completely right that such a system, were it implemented, couldn’t possibly solve the philosophical problem of finding the “ultimate ground of justice,” just like Google can’t provide us with the “ultimate ground of importance.”  If nothing else, we’d still need to decide which of the many possible eigentrust metrics to use, and we couldn’t use eigentrust for that without risking an infinite regress.  But just like Google, whatever its flaws, works well enough for you to use it dozens of times per day, so a crowd-sourced eigendemocracy might—just might—work well enough to save civilization.


Update (6/20): If you haven’t been following, there’s an excellent discussion in the comments, with, as I’d hoped, many commenters raising strong and pertinent objections to the eigenmorality and eigendemocracy concepts, while also proposing possible fixes.  Let me now mention what I think are the most important problems with eigenmorality and eigendemocracy respectively—both of them things that had occurred to me also, but that the commenters have brought out very clearly and explicitly.

With eigenmorality, perhaps the most glaring problem is that, as I mentioned before, there’s no notion of time-ordering, or of “who started it,” in the definition that Tyler and I were using.  As Luca Trevisan aptly points out in the comments, this has the consequence that eigenmorality, as it stands, is completely unable to distinguish between a crime syndicate that’s hated by the majority because of its crimes, and an equally-large ethnic minority that’s hated by the majority solely because it’s different, and that therefore hates the majority.  However, unlike with mathematical theories of consciousness—where I used counterexamples to try to show that no mathematical definition of a certain kind could possibly capture our intuitions about consciousness—here the problem strikes me as much more circumscribed and bounded.  It’s far from obvious to me that we can’t easily improve the definition of eigenmorality so that it does agree with most people’s moral intuition, whenever intuition renders a clear verdict, at least in the limited setting of Iterated Prisoners’ Dilemma tournaments.

Let’s see, in particular, how to solve the problem that Luca stressed.  As a first pass, we could do so as follows:

A moral agent is one who only initiates defection against agents who it has good reason to believe are immoral (where, as usual, linear algebra is used to unravel the definition’s apparent circularity).

Notice that I’ve added two elements to the setup: not only time but also knowledge.  If you shun someone solely because you don’t like how they look, then we’d like to say that reflects poorly on you, even if (unbeknownst to you) it turns out that the person really is an asshole.  Now, several more clauses would need to be added to the above definition to flesh it out: for example, if you’ve initiated defection against an immoral person, but then the person stops being immoral, at what point do you have a moral duty to “forgive and forget”?  Also, just like with the eigenmoses/eigenjesus distinction, do you have a positive duty to initiate defection against an agent who you learn is immoral, or merely no duty not to do so?
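
For concreteness, here is one made-up way to turn the time- and knowledge-sensitive definition into an iterative score for an IPD tournament, treating the previous round's scores as each agent's "good reason to believe" about who is immoral.  The histories, the penalty rule, and the squashing function are arbitrary choices for illustration, not a claim about the right formalization.

    # initiations[(a, b)] = number of times agent a initiated defection against b.
    # All agents and counts below are invented.
    initiations = {
        ("bully", "dove"): 5,    # bully picks on the innocent dove
        ("grim", "bully"): 3,    # grim initiates defection only against bully
        ("dove", "bully"): 0,
    }
    agents = ["dove", "grim", "bully"]

    def eigenmorality(agents, initiations, iters=50):
        """Iteratively: you lose credit for initiating defection against agents
        currently believed to be moral; defecting first against the immoral
        costs you little."""
        score = {a: 1.0 for a in agents}
        for _ in range(iters):
            new = {}
            for a in agents:
                penalty = sum(count * score[b]
                              for (x, b), count in initiations.items() if x == a)
                new[a] = 1.0 / (1.0 + penalty)     # squash into (0, 1]
            total = sum(new.values())
            score = {a: len(agents) * v / total for a, v in new.items()}
        return score

    print(eigenmorality(agents, initiations))
    # With these made-up histories, dove scores highest, grim (who defects
    # first only against bully) scores respectably, and bully scores lowest.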

OK, so after we handle the above issues, will there still be examples that our time-sensitive, knowledge-sensitive eigenmorality definition gets badly, egregiously wrong?  Maybe—I don’t know!  Please let me know in the comments.

Moving on to eigendemocracy, here I think the biggest problem is one pointed out by commenter Rahul.  Namely, an essential aspect of how Google is able to work so well is that people have reasons for linking to webpages other than boosting those pages’ Google rank.  In other words, Google takes a link structure that already exists, independently of its ranking algorithm, and that (as the economists would put it) encodes people’s “revealed preferences,” and exploits that structure for its own purposes.  Of course, now that Google is the main way many of us navigate the web, increasing Google rank has become a major reason for linking to a webpage, and an entire SEO industry has arisen to try to game the rankings.  But Google still isn’t the only reason for linking, so the link structure still contains real information.

By contrast, consider an eigendemocracy, with a giant network encoding who trusts whom on what subject.  If the only reason why this trust network existed was to help make political decisions, then gaming the system would probably be rampant: people could simply decide first which political outcome they wanted, then choose the “experts” such that claiming to “trust” them would do the most for their favored outcome.  It follows that this system can only improve on ordinary democracy if the trust network has some other purpose, so that the participants have an actual incentive to reveal the truth about who they trust.  So, how would an eigendemocracy suss out the truth about who trusts whom on which subject?  I don’t have a very good answer to this, and am open to suggestions.  The best idea so far is to use Facebook for this purpose, but I don’t know exactly how.


Update (6/22): Many commenters, both here and on Hacker News, interpreted me to be saying something obviously stupid: namely, that any belief identified as “the consensus” by an eigenvector analysis is therefore the morally right one. They then energetically knocked down this strawman, with the standard examples (Hitler, slavery, discrimination against gays).

Admittedly, I probably contributed to this confusion by my ill-advised decision to discuss eigenmorality and eigendemocracy in the same blog post—solely because of their mathematical similarity, and the ease with which thinking about one leads to thinking about the other. But the two are different, as are my claims about them. For the record:

  • Eigenmorality: Within the stylized setting of an Iterated Prisoner’s Dilemma tournament, with side-channels allowing agents to learn who is doing what to whom, I believe it ought to be possible, by looking at who initiated rounds of defection and forgiveness, and then doing an eigenvector analysis on the result, to identify the “moral” and “immoral” agents in a way that more-or-less accords with our moral intuitions. Even if true, of course, this wouldn’t have any obvious moral implications for hot-button issues such as abortion, gun control, or climate change, which it’s far from obvious how to encode in terms of IPD tournaments.
  • Eigendemocracy: By doing an eigenvector analysis, to identify who people implicitly acknowledge as the “experts” within each field, I believe that it might be possible to produce results that, on average, in practice, and in contemporary society, are better and more rational than those produced by ordinary majority-voting. Obviously, there’s no guarantee whatsoever that the results of eigendemocracy would be morally acceptable ones: if the public acknowledges as “experts” people who believe evil things (as in Nazi Germany), then eigendemocracy will produce evil results. But democracy itself suffers from a precisely analogous problem. The situation that interests me is one that’s been with us since the time of ancient Athens: one where there is a consensus among the experts about the wisest course of action, and there’s also an implicit consensus among the public that those experts are indeed the experts, but the democratic system is somehow “unable to complete the modus ponens,” because of manipulation by powerful interests and the sway of demagogues. In such cases, it seems possible to me that an eigendemocracy could improve on the results of ordinary democracy—perhaps dramatically so—while still avoiding the evils of dictatorship.

Crucially, in neither of the above bullet points, nor in their combination, is there any hint of a belief that “the will of the majority always defines what’s morally right” (if anything, there’s a belief in the opposite).


Update (7/4): While this isn’t really a surprise—I’d be astonished if it weren’t the case—I’ve now learned that several people, besides me and Rebecca Goldstein, have previously written about the ideas of eigentrust and eigendemocracy. Perhaps more surprising is that one of the earlier groups—consisting of Sep Kamvar, Mario Schlosser, and Hector Garcia-Molina from Stanford—literally called the idea “EigenTrust,” when they published about it in 2003. (Note that Garcia-Molina, in a likely non-coincidence, was Larry Page and Sergey Brin’s PhD adviser.) Kamvar et al.’s intended application for EigenTrust was to determine which nodes are trustworthy in a peer-to-peer file-sharing network, rather than (say) to reinvent democracy, or to address conundrums of epistemology and ethics that have been with us since Plato. But while the scope might be more modest, the core idea is the same. (Hat tip to commenter Babak.)

As for enhancing democracy using linear algebra, it turns out that that too has already been discussed: see for example this presentation by Rob Spekkens of the Perimeter Institute, which Michael Nielsen pointed me to. (In yet another small-world phenomenon, Rob’s main interest is in quantum foundations, and in that context I’ve known him for a decade! But his interest in eigendemocracy was news to me.)

If you’re wondering whether anything in this post was original … well, so far, I haven’t learned of prior work specifically about eigenmorality (e.g., in Iterated Prisoner’s Dilemma tournaments), much less about eigenmoses and eigenjesus.

I was right: Congress’s attack on the NSF widens

Thursday, April 25th, 2013

Last month, I blogged about Sen. Tom Coburn (R-Oklahoma) passing an amendment blocking the National Science Foundation from funding most political science research.  I wrote:

This sort of political interference with the peer-review process, of course, sets a chilling precedent for all academic research, regardless of discipline.  (What’s next, an amendment banning computer science research, unless it has applications to scheduling baseball games or slicing apple pies?)

In the comments section of that post, I was pilloried by critics, who ridiculed my delusional fears about an anti-science witch hunt.  Obviously, they said, Congressional Republicans only wanted to slash dubious social science research: not computer science or the other hard sciences that people reading this blog really care about, and that everyone agrees are worthy.  Well, today I write to inform you that I was right, and my critics were wrong.  For the benefit of readers who might have missed it the first time, let me repeat that:

I was right, and my critics were wrong.

In this case, like in countless others, my “paranoid fears” about what could happen turned out to be preternaturally well-attuned to what would happen.

According to an article in Science, Lamar Smith (R-Texas), the new chair of the ironically-named House Science Committee, held two hearings in which he “floated the idea of having every NSF grant application [in every field] include a statement of how the research, if funded, ‘would directly benefit the American people.’ ”  Connoisseurs of NSF proposals will know that every proposal already includes a “Broader Impacts” section, and that that section often borders on comic farce.  (“We expect further progress on the μ-approximate shortest vector problem to enthrall middle-school students and other members of the local community, especially if they happen to belong to underrepresented groups.”)  Now progress on the μ-approximate shortest vector problem also has to directly—directly—“benefit the American people.”  It’s not enough for such research to benefit science—arguably the least bad, least wasteful enterprise our sorry species has ever managed—and for science, in turn, to be a principal engine of the country’s economic and military strength, something that generally can’t be privatized because of a tragedy-of-the-commons problem, and something that economists say has repaid public investments many, many times over.  No, the benefit now needs to be “direct.”

The truth is, I find myself strangely indifferent to whether Smith gets his way or not.  On the negative side, sure, a pessimist might worry that this could spell the beginning of the end for American science.  But on the positive side, I would have been proven so massively right that, even as I held up my “Will Prove Quantum Complexity Theorems For Food” sign on a street corner or whatever, I’d have something to crow about until the end of my life.

The $10 billion voter

Monday, November 5th, 2012

Update (Nov. 8): Slate’s pundit scoreboard.


Update (Nov. 6): In crucial election news, a Florida woman wearing an MIT T-shirt was barred from voting, because the election supervisor thought her shirt was advertising Mitt Romney.


At the time of writing, Nate Silver is giving Obama an 86.3% chance.  I accept his estimate, while vividly remembering various admittedly-cruder forecasts the night of November 5, 2000, which gave Gore an 80% chance.  (Of course, those forecasts need not have been “wrong”; an event with 20% probability really does happen 20% of the time.)  For me, the main uncertainties concern turnout and the effects of various voter-suppression tactics.

In the meantime, I wanted to call the attention of any American citizens reading this blog to the wonderful Election FAQ of Peter Norvig, director of research at Google and a person well-known for being right about pretty much everything.  The following passage in particular is worth quoting.

Is it rational to vote?

Yes. Voting for president is one of the most cost-effective actions any patriotic American can take.

Let me explain what the question means. For your vote to have an effect on the outcome of the election, you would have to live in a decisive state, meaning a state that would give one candidate or the other the required 270th electoral vote. More importantly, your vote would have to break an exact tie in your state (or, more likely, shift the way that the lawyers and judges will sort out how to count and recount the votes). With 100 million voters nationwide, what are the chances of that? If the chance is so small, why bother voting at all?

Historically, most voters either didn’t worry about this problem, or figured they would vote despite the fact that they weren’t likely to change the outcome, or vote because they want to register the degree of support for their candidate (even a vote that is not decisive is a vote that helps establish whether or not the winner has a “mandate”). But then the 2000 Florida election changed all that, with its slim 537 vote (0.009%) margin.

What is the probability that there will be a decisive state with a very close vote total, where a single vote could make a difference? Statistician Andrew Gelman of Columbia University says about one in 10 million.

That’s a small chance, but what is the value of getting to break the tie? We can estimate the total monetary value by noting that President George W. Bush presided over a $3 trillion war and at least a $1 trillion economic melt-down. Senator Sheldon Whitehouse (D-RI) estimated the cost of the Bush presidency at $7.7 trillion. Let’s compromise and call it $6 trillion, and assume that the other candidate would have been revenue neutral, so the net difference of the presidential choice is $6 trillion.

The value of not voting is that you save, say, an hour of your time. If you’re an average American wage-earner, that’s about $20. In contrast, the value of voting is the probability that your vote will decide the election (1 in 10 million if you live in a swing state) times the cost difference (potentially $6 trillion). That means the expected value of your vote (in that election) was $600,000. What else have you ever done in your life with an expected value of $600,000 per hour? Not even Warren Buffett makes that much. (One caveat: you need to be certain that your contribution is positive, not negative. If you vote for a candidate who makes things worse, then you have a negative expected value. So do your homework before voting. If you haven’t already done that, then you’ll need to add maybe 100 hours to the cost of voting, and the expected value goes down to $6,000 per hour.)

I’d like to embellish Norvig’s analysis with one further thought experiment.  While I favor a higher figure, for argument’s sake let’s accept Norvig’s estimate that the cost George W. Bush inflicted on the country was something like $6 trillion.  Now, imagine that a delegation of concerned citizens from 2012 were able to go back in time to November 5, 2000, round up 538 lazy Gore supporters in Florida who otherwise would have stayed home, and bribe them to go to the polls.  Set aside the illegality of the time-travelers’ action: they’re already violating the laws of space, time, and causality, which are well-known to be considerably more reliable than Florida state election law!  Set aside all the other interventions that also would’ve swayed the 2000 election outcome, and the 20/20 nature of hindsight, and the insanity of Florida’s recount process.  Instead, let’s simply ask: how much should each of those 538 lazy Floridian Gore supporters have been paid, in order for the delegation from the future to have gotten its money’s worth?

The answer is a mind-boggling ~$10 billion per voter.  Think about that: just for peeling their backsides off the couch, heading to the local library or school gymnasium, and punching a few chads (all the way through, hopefully), each of those 538 voters would have instantly received the sort of wealth normally associated with Saudi princes or founders of Google or Facebook.  And the country and the world would have benefited from that bargain.
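
For anyone who wants to check the arithmetic, here is the back-of-the-envelope version, using the same (admittedly rough) inputs quoted above:

    # Rough inputs taken from the discussion above.
    cost_difference = 6e12        # ~$6 trillion difference between the presidencies
    p_decisive = 1 / 10_000_000   # Gelman's ~1-in-10-million chance for a swing-state voter
    florida_margin = 538          # lazy Gore supporters the time-travelers must rouse

    expected_value_of_vote = p_decisive * cost_difference
    per_voter_payment = cost_difference / florida_margin

    print(f"Expected value of a swing-state vote: ${expected_value_of_vote:,.0f}")   # $600,000
    print(f"Worthwhile payment per Floridian:     ${per_voter_payment:,.0f}")        # ~$11 billion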

No, this isn’t really a decisive argument for anything (I’ll leave it to the commenters to point out the many possible objections).  All it is, is an image worth keeping in mind the next time someone knowingly explains to you why voting is a waste of time.

Silver lining

Monday, October 29th, 2012

Update (10/31): While I continue to engage in surreal arguments in the comments section—Scott, I’m profoundly disappointed that a scientist like you, who surely knows better, would be so sloppy as to assert without any real proof that just because it has tusks and a trunk, and looks and sounds like an elephant, and is the size of an elephant, that it therefore is an elephant, completely ignoring the blah blah blah blah blah—while I do that, there are a few glimmerings that the rest of the world is finally starting to get it.  A new story from The Onion, which I regard as almost the only real newspaper left:

Nation Suddenly Realizes This Just Going To Be A Thing That Happens From Now On

Update (11/1): OK, and this morning from Nicholas Kristof, who’s long been one of the rare non-Onion practitioners of journalism: Will Climate Get Some Respect Now?


I’m writing from the abstract, hypothetical future that climate-change alarmists talk about—the one where huge tropical storms batter the northeastern US, coastal cities are flooded, hundreds of thousands are evacuated from their homes, etc.  I always imagined that, when this future finally showed up, at least I’d have the satisfaction of seeing the deniers admit they were grievously wrong, and that I and those who think similarly were right.  Which, for an academic, is a satisfaction that has to be balanced carefully against the possible destruction of the world.  I don’t think I had the imagination to foresee that the prophesied future would actually arrive, and that climate change would simultaneously disappear as a political issue—with the forces of know-nothingism bolder than ever, pressing their advantage into questions like whether or not raped women can get pregnant, as the President weakly pleads that he too favors more oil drilling.  I should have known from years of blogging that, if you hope for the consolation of seeing those who are wrong admit to being wrong, you hope for a form of happiness all but unattainable in this world.

Yet, if the transformation of the eastern seaboard into something out of the Jurassic hasn’t brought me that satisfaction, it has brought a different, completely unanticipated benefit.  Trapped in my apartment, with the campus closed and all meetings cancelled, I’ve found, for the first time in months, that I actually have some time to write papers.  (And, well, blog posts.)  Because of this, part of me wishes that the hurricane would continue all week, even a month or two (minus, of course, the power outages, evacuations, and other nasty side effects).  I could learn to like this future.

At this point in the post, I was going to transition cleverly into an almost (but not completely) unrelated question about the nature of causality.  But I now realize that the mention of hurricanes and (especially) climate change will overshadow anything I have to say about more abstract matters.  So I’ll save the causality stuff for tomorrow or Wednesday.  Hopefully the hurricane will still be here, and I’ll have time to write.

Enough with Bell’s Theorem. New topic: Psychopathic killer robots!

Friday, May 25th, 2012
A few days ago, a writer named John Rico emailed me the following question, which he’s kindly given me permission to share.

If a computer, or robot, was able to achieve true Artificial Intelligence, but it did not have a parallel programming or capacity for empathy, would that then necessarily make the computer psychopathic?  And if so, would it then follow the rule devised by forensic psychologists that it would necessarily then become predatory?  This then moves us into territory covered by science-fiction films like “The Terminator.”  Would this psychopathic computer decide to kill us?  (Or would that merely be a rational logical decision that wouldn’t require psychopathy?)

See, now this is precisely why I became a CS professor: so that if anyone asked, I could give not merely my opinion, but my professional, expert opinion, on the question of whether psychopathic Terminators will kill us all.

My response (slightly edited) is below.

Dear John,

I fear that your question presupposes way too much anthropomorphizing of an AI machine—that is, imagining that it would even be understandable in terms of human categories like “empathetic” versus “psychopathic.”  Sure, an AI might be understandable in those sorts of terms, but only if it had been programmed to act like a human.  In that case, though, I personally find it no easier or harder to imagine an “empathetic” humanoid robot than a “psychopathic” robot!  (If you want a rich imagining of “empathetic robots” in science fiction, of course you need look no further than Isaac Asimov.)

On the other hand, I personally also think it’s possible (even likely) that an AI would pursue its goals (whatever they happened to be) in a way so different from what humans are used to that the AI couldn’t be usefully compared to any particular type of human, even a human psychopath.  To drive home this point, the AI visionary Eliezer Yudkowsky likes to use the example of the “paperclip maximizer.”  This is an AI whose programming would cause it to use its unimaginably-vast intelligence in the service of one goal only: namely, converting as much matter as it possibly can into paperclips!

Now, if such an AI were created, it would indeed likely spell doom for humanity, since the AI would think nothing of destroying the entire Earth to get more iron for paperclips.  But terrible though it was, would you really want to describe such an entity as a “psychopath,” any more than you’d describe (say) a nuclear weapon as a “psychopath”?  The word “psychopath” connotes some sort of deviation from the human norm, but human norms were never applicable to the paperclip maximizer in the first place … all that was ever relevant was the paperclip norm!

Motivated by these sorts of observations, Yudkowsky has thought and written a great deal about the question of how to create a “friendly AI,” by which he means one that would use its vast intelligence to improve human welfare, instead of maximizing some arbitrary other objective (like the total number of paperclips in existence) that might be at odds with our welfare.  While I don’t always agree with him—for example, I don’t think AI has a single “key,” and I certainly don’t think such a key will be discovered anytime soon—I’m sure you’d find his writings at yudkowsky.net, lesswrong.com, and overcomingbias.com to be of interest to you.

I should mention, in passing, that “parallel programming” has nothing at all to do with your other (fun) questions.  You could perfectly well have a murderous robot with parallel programming, or a kind, loving robot with serial programming only.

Hope that helps,
Scott