Hamas is trying to kill as many civilians as it can.
Israel is trying to kill as few civilians as it can.
Neither is succeeding very well.
This post is about an idea I had around 1997, when I was 16 years old and a freshman computer-science major at Cornell. Back then, I was extremely impressed by a research project called CLEVER, which one of my professors, Jon Kleinberg, had led while working at IBM Almaden. The idea was to use the link structure of the web itself to rank which web pages were most important, and therefore which ones should be returned first in a search query. Specifically, Kleinberg defined “hubs” as pages that linked to lots of “authorities,” and “authorities” as pages that were linked to by lots of “hubs.” At first glance, this definition seems hopelessly circular, but Kleinberg observed that one can break the circularity by just treating the World Wide Web as a giant directed graph, and doing some linear algebra on its adjacency matrix. Equivalently, you can imagine an iterative process where each web page starts out with the same hub/authority “starting credits,” but then in each round, the pages distribute their credits among their neighbors, so that the most popular pages get more credits, which they can then, in turn, distribute to their neighbors by linking to them.
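To make that iterative process concrete, here is a toy sketch of the hub/authority iteration in Python. This is a modern reconstruction for illustration only, not anything from the actual CLEVER project; the function name, the NumPy implementation, and the three-page example are all my choices:

    import numpy as np

    def hits(adjacency, iterations=50):
        """Toy version of Kleinberg's hub/authority iteration.

        adjacency[i][j] = 1 if page i links to page j.
        """
        A = np.asarray(adjacency, dtype=float)
        n = A.shape[0]
        hubs = np.ones(n)           # equal "starting credits"
        authorities = np.ones(n)
        for _ in range(iterations):
            # a page's authority credit comes from the hubs that link to it
            authorities = A.T @ hubs
            # a page's hub credit comes from the authorities it links to
            hubs = A @ authorities
            # renormalize so the credits don't blow up
            authorities /= np.linalg.norm(authorities)
            hubs /= np.linalg.norm(hubs)
        return hubs, authorities

    # Three pages: 0 and 1 both link to page 2, so 2 emerges as the authority.
    hubs, auths = hits([[0, 0, 1],
                        [0, 0, 1],
                        [0, 0, 0]])

The fixed point of this iteration is exactly the eigenvector computation described above: the hub and authority scores converge to the principal eigenvectors of AAᵀ and AᵀA respectively.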
I was also impressed by a similar research project called PageRank, which was proposed later by two guys at Stanford named Sergey Brin and Larry Page. Brin and Page dispensed with Kleinberg’s bipartite hubs-and-authorities structure in favor of a more uniform structure, and made some other changes, but otherwise their idea was very similar. At the time, of course, I didn’t know that CLEVER was going to languish at IBM, while PageRank (the algorithm at the heart of Google) was going to expand to roughly the size of the entire world’s economy.
In any case, the question I asked myself about CLEVER/PageRank was not the one that, maybe in retrospect, I should have asked: namely, “how can I leverage the fact that I know the importance of this idea before most people do, in order to make millions of dollars?”
Instead I asked myself: “what other ‘vicious circles’ in science and philosophy could one unravel using the same linear-algebra trick that CLEVER and PageRank exploit?” After all, CLEVER and PageRank were both founded on what looked like a hopelessly circular intuition: “a web page is important if other important web pages link to it.” Yet they both managed to use math to defeat the circularity. All you had to do was find an “importance equilibrium,” in which your assignment of “importance” to each web page was stable under a certain linear map. And such an equilibrium could be shown to exist—indeed, to exist uniquely, under mild assumptions about the graph’s connectivity.
Searching for other circular notions to elucidate using linear algebra, I hit on morality. Philosophers from Socrates on, I was vaguely aware, had struggled to define what makes a person “moral” or “virtuous,” without tacitly presupposing the answer. Well, it seemed to me that, as a first attempt, one could do a lot worse than the following:
A moral person is someone who cooperates with other moral people, and who refuses to cooperate with immoral people.
Obviously one can quibble with this definition on numerous grounds: for example, what exactly does it mean to “cooperate,” and which other people are relevant here? If you don’t donate money to starving children in Africa, have you implicitly “refused to cooperate” with them? What’s the relative importance of cooperating with good people and withholding cooperation with bad people, of kindness and justice? Is there a duty not to cooperate with bad people, or merely the lack of a duty to cooperate with them? Should we consider intent, or only outcomes? Surely we shouldn’t hold someone accountable for sheltering a burglar, if they didn’t know about the burgling? Also, should we compute your “total morality” by simply summing over your interactions with everyone else in your community? If so, then can a career’s worth of lifesaving surgeries numerically overwhelm the badness of murdering a single child?
For now, I want you to set all of these important questions aside, and just focus on the fact that the definition doesn’t even seem to work on its own terms, because of circularity. How can we possibly know which people are moral (and hence worthy of our cooperation), and which ones immoral (and hence unworthy), without presupposing the very thing that we seek to define?
Ah, I thought—this is precisely where linear algebra can come to the rescue! Just like in CLEVER or PageRank, we can begin by giving everyone in the community an equal number of “morality starting credits.” Then we can apply an iterative update rule, where each person A can gain morality credits by cooperating with each other person B, and A gains more credits the more credits B has already. We apply the rule over and over, until the number of morality credits per person converges to an equilibrium. (Or, of course, we can shortcut the process by simply finding the principal eigenvector of the “cooperation matrix,” using whatever algorithm we like.) We then have our objective measure of morality for each individual, solving a 2400-year-old open problem in philosophy.
The next step, I figured, would be to hack together some code that computed this “eigenmorality” metric, and then see what happened when I ran the code to measure the morality of each participant in a simulated society. What would happen? Would the results conform to my pre-theoretic intuitions about what sort of behavior was moral and what wasn’t? If not, then would watching the simulation give me new ideas about how to improve the morality metric? Or would it be my intuitions themselves that would change?
Unfortunately, I never got around to the “coding it up” part—there’s a reason why I became a theorist! The eigenmorality idea went onto my back burner, where it stayed for the next 16 years: 16 years in which our world descended ever further into darkness, lacking a principled way to quantify morality. But finally, this year, two separate things have happened on the eigenmorality front, and that’s why I’m blogging about it now.
Eigenjesus and Eigenmoses
The first thing that’s happened is that Tyler Singer-Clark, my superb former undergraduate advisee, did code up eigenmorality metrics and test them out on a simulated society, for his MIT senior thesis project. You can read Tyler’s 12-page report here—it’s a fun, thought-provoking first research paper, one that I wholeheartedly recommend. Or, if you’d like to experiment yourself with the Python code, you can download it here from GitHub. (Of course, all opinions expressed in this post are mine alone, not necessarily Tyler’s.)
Briefly, Tyler examined what eigenmorality has to say in the setting of an Iterated Prisoner’s Dilemma (IPD) tournament. The Iterated Prisoner’s Dilemma is the famous game in which two players meet repeatedly, and in each turn can either “Cooperate” or “Defect.” The absolute best thing, from your perspective, is if you defect while your partner cooperates. But you’re also pretty happy if you both cooperate. You’re less happy if you both defect, while the worst (from your standpoint) is if you cooperate while your partner defects. At each turn, when contemplating what to do, you have the entire previous history of your interaction with this partner available to you. And thus, for example, you can decide to “punish” your partner for past defections, “reward” her for past cooperations, or “try to take advantage” by unilaterally defecting and seeing what happens. At each turn, the game has some small constant probability of ending—so you know approximately how many times you’ll meet this partner in the future, but you don’t know exactly when the last turn will be. Your score in the game is then the total of your payoffs over all turns and all partners (where each player meets each other player once).
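In code, one meeting between two players looks something like the sketch below. The payoff numbers are the conventional ones from the IPD literature (any values preserving the ordering just described would do); the function names and interface are my own toy choices, not Tyler’s:

    import random

    # Conventional payoffs: defecting against a cooperator (5) > mutual
    # cooperation (3) > mutual defection (1) > cooperating with a defector (0).
    PAYOFFS = {('C', 'C'): (3, 3),
               ('C', 'D'): (0, 5),
               ('D', 'C'): (5, 0),
               ('D', 'D'): (1, 1)}

    def play_match(strategy_a, strategy_b, end_probability=0.01):
        """Play turns until a random coin ends the match, so neither
        player knows exactly when the last turn will come."""
        history_a, history_b = [], []
        score_a = score_b = 0
        while True:
            move_a = strategy_a(history_a, history_b)  # full history available
            move_b = strategy_b(history_b, history_a)
            pay_a, pay_b = PAYOFFS[(move_a, move_b)]
            score_a += pay_a
            score_b += pay_b
            history_a.append(move_a)
            history_b.append(move_b)
            if random.random() < end_probability:
                return score_a, score_b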
In the late 1970s, as recounted in his classic work The Evolution of Cooperation, Robert Axelrod invited people all over the world to submit computer programs for playing this game, which were then pitted against each other in the world’s first serious IPD tournament. And, in a tale that’s been retold in hundreds of popular books, while many people submitted complicated programs that used machine learning, etc. to try to suss out their opponents, the program that won—hands-down, repeatedly—was TIT_FOR_TAT, a few lines of code submitted by the psychologist Anatol Rapoport to implement an ancient moral maxim. TIT_FOR_TAT starts out by cooperating; thereafter, it simply does whatever its opponent did in the last move, swiftly rewarding every cooperation and punishing every defection, and ignoring all the history before that move. In the decades since Axelrod, running Iterated Prisoner’s Dilemma tournaments has become a minor industry, with countless variations explored (for example, “evolutionary” versions, and versions allowing side-communication between the players), countless new strategies invented, and countless papers published. To make a long story short, TIT_FOR_TAT continues to do quite well across a wide range of environments, but depending on the mix of players present, other strategies can sometimes beat TIT_FOR_TAT. (As one example, if there’s a sizable minority of colluding players, who recognize each other by cooperating and defecting in a prearranged sequence, then those players can destroy TIT_FOR_TAT and other “simple” strategies, by cooperating with one another while defecting against everyone else.)
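And indeed, written against the toy interface from the sketch above, TIT_FOR_TAT really is just a couple of lines:

    def tit_for_tat(my_history, their_history):
        """Cooperate on the first turn; thereafter, copy whatever the
        partner did on the previous turn."""
        return 'C' if not their_history else their_history[-1]

    def always_defect(my_history, their_history):
        return 'D'

    # e.g.: play_match(tit_for_tat, always_defect) using the sketch above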
Anyway, Tyler sets up and runs a fairly standard IPD tournament, with a mix of strategies that includes TIT_FOR_TAT, TIT_FOR_TWO_TATS, other TIT_FOR_TAT variations, PAVLOV, FRIEDMAN, EATHERLY, CHAMPION (see the paper for details), and degenerate strategies like always defecting, always cooperating, and playing randomly. However, Tyler then asks an unusual question about the IPD tournament: namely, purely on the basis of the cooperate/defect sequences, which players should we judge to have acted morally toward their partners?
It might be objected that the players didn’t “know” they were going to be graded on morality: as far as they knew, they were just trying to maximize their individual utilities. The trouble with that objection is that the players didn’t “know” they were trying to maximize their utilities either! The players are bots, which do whatever their code tells them to do. So in some sense, utility—no less than morality—is “merely an interpretation” that we impose on the raw cooperate/defect sequences! There’s nothing to stop us from imposing some other interpretation (say, one that explicitly tries to measure morality) and seeing what happens.
In an attempt to measure the players’ morality, Tyler uses the eigenmorality idea from before. The extent to which player A “cooperates” with player B is simply measured by the fraction of turns on which A cooperates with B. (One acknowledged limitation of this work is that, when two players both defect, there’s no attempt to take into account “who started it,” and to judge the aggressor more harshly than the retaliator—or to incorporate time in any other way.) This then gives us a “cooperation matrix,” whose (i,j) entry records the total amount of niceness that player i displayed to player j. Taking that matrix’s principal eigenvector—the eigenvector corresponding to its largest eigenvalue—then gives us our morality scores.
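For concreteness, here’s the whole pipeline as a short Python sketch. This is my simplified reconstruction, not Tyler’s actual code (which is linked above); the power-iteration loop is just one way to get the principal eigenvector, and for a nonnegative matrix like this one it settles on the Perron eigenvector under mild irreducibility assumptions:

    import numpy as np

    def eigenmorality_scores(frac_cooperate, iterations=200):
        """frac_cooperate[i][j] = fraction of turns on which player i
        cooperated with player j (the "cooperation matrix")."""
        C = np.asarray(frac_cooperate, dtype=float)
        scores = np.ones(C.shape[0]) / C.shape[0]  # equal starting credits
        for _ in range(iterations):
            # you earn credits for cooperating with the already-credited
            scores = C @ scores
            scores /= np.linalg.norm(scores)
        return scores  # converges to the principal eigenvector of C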
Now, there’s a very interesting ambiguity in what I said above. Namely, should we define the “niceness scores” to lie in [0,1] (so that the lowest, meanest possible score is 0), or in [-1,1] (so that it’s possible to have negative niceness)? This might sound like a triviality, but in our setting, it’s precisely the mathematical reflection of one of the philosophical conundrums I mentioned earlier. The conundrum can be stated as follows: is your morality a monotone function of your niceness? We all agree, presumably, that it’s better to be nice to Gandhi than to be nice to Hitler. But do you have a positive obligation to be not-nice to Hitler: to make him suffer because he made others suffer? Or, OK, how about not Hitler, but someone who’s somewhat bad? Consider, for example, a woman who falls in love with, and marries, an unrepentant armed robber (with full knowledge of who he is, and with other options available to her). Is the woman morally praiseworthy for loving her husband despite his bad behavior? Or is she blameworthy because, by rewarding his behavior with her love, she helps to enable it?
To capture two possible extremes of opinion about such questions, Tyler and I defined two different morality metrics, which we called … wait for it … eigenmoses and eigenjesus. Eigenmoses has the niceness scores in [-1,1], which means that you’re actively rewarded for punishing evildoers: that is, for defecting against those who defect against many moral players. Eigenjesus, by contrast, has the niceness scores in [0,1], which means that you always do at least as well by “turning the other cheek” and cooperating. (Though note that, even with eigenjesus, you get more morality credits by cooperating with moral players than by cooperating with immoral ones.)
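In code, the entire difference between the two metrics is a one-line rescaling of the niceness matrix before the eigenvector computation. Again, this is my own sketch, and Tyler’s implementation may normalize things differently; I use a full eigensolver here because the rescaled matrix can have negative entries, which power iteration handles badly:

    import numpy as np

    def moses_or_jesus_scores(frac_cooperate, mode='eigenjesus'):
        """frac_cooperate[i][j] = fraction of turns on which player i
        cooperated with player j."""
        N = np.asarray(frac_cooperate, dtype=float)
        if mode == 'eigenmoses':
            N = 2 * N - 1   # rescale niceness from [0,1] to [-1,1]
        # full eigensolver; for toy examples the principal eigenvalue is real
        eigenvalues, eigenvectors = np.linalg.eig(N)
        principal = eigenvectors[:, np.argmax(eigenvalues.real)].real
        # fix the overall sign so that a higher score means more moral
        return principal if principal.sum() >= 0 else -principal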
This is probably a good place to mention a second limitation of Tyler’s current study. Namely, with the current system, there’s no direct way for a player to find out how its partner has been behaving toward third parties. The only information that A gets about the goodness or evilness of player B, comes from A and B’s direct interaction. Ideally, one would like to design bots that take into account, not only the other bots’ behavior toward them, but the other bots’ behavior toward each other. So for example, even if someone is unfailingly nice to you, if that person is an asshole to everyone else, then the eigenmoses moral code would demand that you return the person’s cooperation with icy defection. Conversely, even if Gandhi is mean and hateful to you, you would still be morally obliged (interestingly, on both the eigenmoses and eigenjesus codes) to be nice to him, because of the amount of good he does for everyone else.
Anyway, you can read Tyler’s paper if you want to see the results of computing the eigenmoses and eigenjesus scores for a diverse population of bots. Briefly, the results accord pretty well with intuition. When we look at eigenjesus scores, the all-cooperate bot comes out on top and the all-defect bot on the bottom (as is mathematically necessary), with TIT_FOR_TAT somewhere in the middle, and generous versions of TIT_FOR_TAT higher up. When we look at eigenmoses, by contrast, TIT_FOR_TWO_TATS comes out on top, with TIT_FOR_TAT in sixth place, and the all-cooperate bot scoring below the median. Interestingly, once again, the all-defect bot gets the lowest score (though in this case, it wasn’t mathematically necessary).
Even though the measures acquit themselves well in this particular tournament, it’s admittedly easy to construct scenarios where the prescriptions of eigenjesus and eigenmoses alike violently diverge from most people’s moral intuitions. We’ve already touched on a few such scenarios above (for example, are you really morally obligated to lick the boots of someone who kicks you, just because that person is a saint to everyone other than you?). Another type of scenario involves minorities. Imagine, for instance, that 98% of the players are unfailingly nice to each other, but unfailingly cruel to the remaining 2% (who they can recognize, let’s say, by their long noses or darker skin—some trivial feature like that). Meanwhile, the put-upon 2% return the favor by being nice to each other and mean to the 98%. Who, in this scenario, is moral, and who’s immoral? The mathematical verdict of both eigenmoses and eigenjesus is unequivocal: the 98% are almost perfectly good, while the 2% are almost perfectly evil. After all, the 98% are nice to almost everyone, while the 2% are mean to those who are nice to almost everyone, and nice only to a tiny minority who are mean to almost everyone. Of course, for much of human history, this is precisely how morality worked, in many people’s minds. But I dare say it’s a result that would make moderns uncomfortable.
In summary, it seems clear to me that neither eigenmoses nor eigenjesus correctly captures our intuitions about morality, any more than Φ captures our intuitions about consciousness. But as they say, I think there’s plenty of scope here for further research: for coming up with new mathematical measures that sharpen our intuitive judgments about morality, and (if we like) testing those measures out using IPD tournaments. It also seems to me that there’s something fundamentally right about the eigenvector idea: all else being equal, we’d like to say, being nice to others is good, except that aiding and abetting evildoers is not good, and the way we can recognize the evildoers in our midst is that they’re not nice to others—except that, if the people who someone isn’t nice to are themselves evildoers, then the person might again be good, and so on. The only way to cut off the infinite regress, it seems, is to demand some sort of “reflective equilibrium” in our moral judgments, and that’s precisely what eigenmorality tries to capture. On the other hand, no such idea can ever make moral debate obsolete—if for no other reason than that we still need to decide which specific eigenmorality metric to use, and that choice is itself a moral judgment.
Scooped by Plato
Which brings me, finally, to the second new thing that’s happened this year on the eigenmorality front. Namely, Rebecca Newberger Goldstein—who’s far and away my favorite contemporary novelist—published a charming new book entitled Plato at the Googleplex: Why Philosophy Won’t Go Away. Here she imagines that Plato has reappeared in present-day America (she doesn’t bother to explain how), where he’s taught himself English and the basics of modern science, learned how to use the Internet, and otherwise gotten himself up to speed. The book recounts Plato’s dialogues with various modern interlocutors, as he volunteers to have his brain scanned, guest-writes a relationship advice column, participates in a panel discussion on child-rearing, and gets interviewed on cable news by “Roy McCoy” (a thinly veiled Bill O’Reilly). Often, Goldstein has Plato answer the moderns’ questions using direct quotes from the Timaeus, the Gorgias, the Meno, etc., which makes her Plato into a very intelligent sort of chatbot. This is a genre that’s not often seriously attempted, and that I’d love to read more of (possible subjects: Shakespeare, Galileo, Jefferson, Lincoln, Einstein, Turing…).
Anyway, my favorite episode in the book is the first, eponymous one, where Plato visits the Googleplex in Mountain View. While eating lunch in one of the many free cafeterias, Plato is cornered by a somewhat self-important, dreadlocked coder named Marcus, who tries to convince Plato that Google PageRank has finally solved the problem agonized over in the Republic, of how to define justice. By using the Internet, we can simply crowd-source the answer, Marcus declares: get millions of people to render moral judgments on every conceivable question, and also moral judgments on each other’s judgments. Then declare the most morally reliable judgments to be the ones judged most reliable by the people who are themselves the most morally reliable. The circularity, as usual, is broken by taking the principal eigenvector of the graph of moral judgments (Goldstein doesn’t have Marcus put it that way, but it’s what she means).
Not surprisingly, Plato is skeptical. Through Socratic questioning—the method he learned from the horse’s mouth—Plato manages to make Marcus realize that, in the very act of choosing which of several variants of PageRank to use in our crowd-sourced justice engine, we’ll implicitly be making moral choices already. And therefore, we can’t use PageRank, or anything like it, as the ultimate ground of morality.
Whereas I imagined that the raw data for an “eigenmorality” metric would consist of numerical measures of how nice people had been to each other, Goldstein imagines the raw data to consist of abstract moral judgments, and of judgments about judgments. Also, whereas the output of my kind of metric would be a measure of the “goodness” of each individual person, the outputs of hers would presumably be verdicts about general moral and political questions. But, much like with CLEVER versus PageRank, it’s obvious that the ideas are similar—and that I should credit Goldstein with independently discovering my nerdy 16-year-old vision, in order to put it in the mouth of a nerdy character in her story.
As I said before, I agree with Goldstein’s Plato that eigenmorality can’t serve as the ultimate ground of morality. But that’s a bit like saying that Google rank can’t serve as the ultimate ground of importance, because even just to design and evaluate their ranking algorithms, Google’s engineers must have some prior notion of “importance” to serve as a standard. That’s true, of course, but it omits to mention that Google rank is still useful—useful enough to have changed civilization in the space of a few years. Goldstein’s book has the wonderful property that even the ideas she gives to her secondary characters, the ones who serve as foils to Plato, are sometimes interesting enough to deserve book-length treatments of their own, and crowd-sourced morality strikes me as a perfect example.
In the two previous comment threads, we got into a discussion of anthropogenic climate change, and of my own preferred way to address it and related threats to our civilization’s survival, which is simply to tax every economic activity at a rate commensurate with the environmental damage that it does, and use the funds collected for cleanup, mitigation, and research into alternatives. (Obviously, such ideas are nonstarters in the current political climate of the US, but I’m not talking here about what’s feasible, only about what’s necessary.) As several commenters pointed out, my view raises an obvious question: who is to decide how much “damage” each activity causes, and thus how much it should be taxed? Of course, this is merely a special case of the more general question: who is to decide on any question of public policy whatsoever?
For the past few centuries, our main method for answering such questions—in those parts of the world where a king or dictator or Politburo doesn’t decree the answer—has been representative democracy. Democracy is, arguably, the best decision-making method that our sorry species has ever managed to put into practice, at least outside the hard sciences. But in my view, representative democracy is now failing spectacularly at possibly the single most important problem it’s ever faced: namely, that of leaving our descendants a livable planet. Even though, by and large, reasonable people mostly agree about what needs to be done—weaning ourselves off fossil fuels (especially the dirtier ones), switching to solar, wind, and nuclear, planting forests and stopping deforestation, etc.—after decades of debate we’re still taking only limping, token steps toward those goals, and in many cases we’re moving rapidly in the opposite direction. Those who, for financial, theological, or ideological reasons, deny the very existence of a problem, have proved that despite being a minority, they can push hard enough on the levers of democracy to prevent anything meaningful from happening.
So what’s the solution? To put the world under the thumb of an environmentalist dictator? Absolutely not. In all of history, I don’t think any dictatorial system has ever shown itself robust against takeover by murderous tyrants (people who probably aren’t too keen on alternative energy either). The problem, I think, is epistemological. Within physics and chemistry and climatology, the people who think anthropogenic climate change exists and is a serious problem have won the argument—but the news of their intellectual victory hasn’t yet spread to the opinion page of the Wall Street Journal, or cable news, or the US Congress, or the minds of enough people to tip the scales of history. Because our domination of the earth’s climate and biosphere is new and unfamiliar; because the evidence for rapid climate change is complicated and statistical; because the worst effects are still remote from us in time, space, or both; because the sacrifices needed to address the problem are real—for all of these reasons, the deniers have learned that they can subvert the Popperian process by which bad explanations are discarded and good explanations win. If you just repeat debunked ideas through a loud enough megaphone, it turns out, many onlookers won’t be able to tell the difference between you and the people who have genuine knowledge—or they will eventually, but not until it’s too late. If you have a few million dollars, you can even set up your own parody of the scientific process: your own phony experts, in their own phony think tanks, with their own phony publications, giving each other legitimacy by citing each other. (Of course, all this is a problem for many fields, not just climate change. Climate is special only because there, the future of life on earth might literally hinge on our ability to get epistemology right.)
Yet for all that, I’m an optimist—sort of. For it seems to me that the Internet has given us new tools with which to try to fix our collective epistemology, without giving up on a democratic society. Google, Wikipedia, Quora, and so forth have already improved our situation, if only by a little. We could improve it a lot more. Consider, for example, the following attempted definitions:
A trustworthy source of information is one that’s considered trustworthy by many sources who are themselves trustworthy (on the same topic or on closely related topics).

The current scientific consensus, on any given issue, is what the trustworthy sources consider to be the consensus.

A good decision-maker is someone who’s considered to be a good decision-maker by many other good decision-makers.
At first glance, the above definitions sound ludicrously circular—even Orwellian—but we now know that all that’s needed to unravel the circularity is a principal eigenvector computation on the matrix of trust. And the computation of such an eigenvector need be no more “Orwellian” than … well, Google. If enough people want it, then we have the tools today to put flesh on these definitions, to give them agency: to build a crowd-sourced deliberative democracy, one that “usually just works” in much the same way Google usually just works.
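In the same toy Python as before, the computation on the “matrix of trust” might look like the sketch below. This is an illustration only: a deployable system, like the EigenTrust algorithm discussed further down, would need damping and defenses against manipulation, and the assumption here that everyone reports honest numerical trust levels is doing a lot of work:

    import numpy as np

    def trust_scores(trust, iterations=100):
        """trust[i][j] = how much person i says they trust source j
        (nonnegative; every row needs at least one positive entry)."""
        T = np.asarray(trust, dtype=float)
        T = T / T.sum(axis=1, keepdims=True)  # each person's trust sums to 1
        scores = np.ones(T.shape[0]) / T.shape[0]
        for _ in range(iterations):
            scores = T.T @ scores  # trust flows toward the trusted
        return scores  # the principal left eigenvector of T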
Now, would those with axes to grind try to subvert such a system the instant it went online? Certainly. For example, I assume that millions of people would rate Conservapedia as a more trustworthy source than Wikipedia—and would rate other people who had done so as, themselves, trustworthy sources, while rating as untrustworthy anyone who called Conservapedia untrustworthy. So there would arise a parallel world of trust and consensus and “expertise,” mutually-reinforcing yet nearly disjoint from the world of the real. But here’s the thing: anyone would be able to see, with the click of a mouse, the extent to which this parallel world had diverged from the real one. They’d see that there was a huge, central connected component in the trust graph—including almost all of the Nobel laureates, physicists from the US nuclear weapons labs, military planners, actuaries, other hardheaded people—who all accepted the reality of humans warming the planet, and only tiny, isolated tendrils of trust reaching from that component into the component of Rush Limbaugh and James Inhofe. The deniers and their think-tanks would be exposed to the sun; they’d lose their thin cover of legitimacy. It should go without saying that the same would happen to various charlatans on the left, and should go without saying that I’d cheer that outcome as well.
Some will object: but people who believe in pseudosciences—whether creationists or anti-vaxxers or climate change deniers—already know they’re in a minority! And far from being worried about it, they treat it as a badge of honor. They think they’re Galileo, that their belief in spite of a scientific consensus makes them heroes, while those in the giant central component of the trust graph are merely slavish followers.
I admit all this. But the point of an eigentrust system wouldn’t be to convince everyone. As long as I’m fantasizing, the point would be that, once people’s individual decisions did give rise to a giant connected trust component, the recommendations of that component could acquire the force of law. The formation of the giant component would be the signal that there’s now enough of a consensus to warrant action, despite the continuing existence of a vocal dissenting minority—that the minority has, in effect, withdrawn itself from the main conversation and retreated into a different discourse. Conversely, it’s essential to note, if there were a dissenting minority, but that minority had strong trunks of topic-relevant trust pointing toward it from the main component (for example, because the minority contained a large fraction of the experts in the relevant field), then the minority’s objections might be enough to veto action, even if it was numerically small. This is still democracy; it’s just democracy enhanced by linear algebra.
Other people will object that, while we should use the Internet to improve the democratic process, the idea we’re looking for is not eigentrust or eigenmorality but rather prediction markets. Such markets would allow us to, as my friend Robin Hanson advocates, “vote on values but bet on beliefs.” For example, a country could vote for the conditional policy that, if business-as-usual is predicted to cause sea levels to rise at least 4 meters by the year 2200, then an aggressive emissions reduction plan will be triggered, but not otherwise. But as for the prediction itself, that would be left to a futures market: a place where, unlike with voting, there’s a serious penalty for being wrong, namely losing your shirt. If the futures market assigned the prediction at least such-and-such a probability, then the policy tied to that prediction would become law.
I actually like the idea of prediction markets—I have ever since I heard about them—but I consider them limited in scope. My example above, involving the year 2200, gives a hint as to why. Prediction markets are great whenever our disagreements are over something that will be settled one way or the other, to everyone’s assent, in the near future (e.g., who will win the World Cup, or next year’s GDP). But most of our important disagreements aren’t like that: they’re over which direction society should move in, which issues to care about, which statistical indicators are even the right ones to measure a country’s health. Now, those broader questions can sometimes be settled empirically, in a sense: they can be settled by the overwhelming judgment of history, as the slavery, women’s suffrage, and fascism debates were. But that kind of empirical confirmation typically takes far too long for anyone to set up a decent betting market around it. And for the non-bettable questions, a carefully-crafted eigendemocracy really is the best system I can think of.
Again, I think Rebecca Goldstein’s Plato is completely right that such a system, were it implemented, couldn’t possibly solve the philosophical problem of finding the “ultimate ground of justice,” just like Google can’t provide us with the “ultimate ground of importance.” If nothing else, we’d still need to decide which of the many possible eigentrust metrics to use, and we couldn’t use eigentrust for that without risking an infinite regress. But just like Google, whatever its flaws, works well enough for you to use it dozens of times per day, so a crowd-sourced eigendemocracy might—just might—work well enough to save civilization.
Update (6/20): If you haven’t been following, there’s an excellent discussion in the comments, with, as I’d hoped, many commenters raising strong and pertinent objections to the eigenmorality and eigendemocracy concepts, while also proposing possible fixes. Let me now mention what I think are the most important problems with eigenmorality and eigendemocracy respectively—both of them things that had occurred to me also, but that the commenters have brought out very clearly and explicitly.
With eigenmorality, perhaps the most glaring problem is that, as I mentioned before, there’s no notion of time-ordering, or of “who started it,” in the definition that Tyler and I were using. As Luca Trevisan aptly points out in the comments, this has the consequence that eigenmorality, as it stands, is completely unable to distinguish between a crime syndicate that’s hated by the majority because of its crimes, and an equally-large ethnic minority that’s hated by the majority solely because it’s different, and that therefore hates the majority. However, unlike with mathematical theories of consciousness—where I used counterexamples to try to show that no mathematical definition of a certain kind could possibly capture our intuitions about consciousness—here the problem strikes me as much more circumscribed and bounded. It’s far from obvious to me that we can’t easily improve the definition of eigenmorality so that it does agree with most people’s moral intuition, whenever intuition renders a clear verdict, at least in the limited setting of Iterated Prisoners’ Dilemma tournaments.
Let’s see, in particular, how to solve the problem that Luca stressed. As a first pass, we could do so as follows:
A moral agent is one who only initiates defection against agents who it has good reason to believe are immoral (where, as usual, linear algebra is used to unravel the definition’s apparent circularity).
Notice that I’ve added two elements to the setup: not only time but also knowledge. If you shun someone solely because you don’t like how they look, then we’d like to say that reflects poorly on you, even if (unbeknownst to you) it turns out that the person really is an asshole. Now, several more clauses would need to be added to the above definition to flesh it out: for example, if you’ve initiated defection against an immoral person, but then the person stops being immoral, at what point do you have a moral duty to “forgive and forget”? Also, just like with the eigenmoses/eigenjesus distinction, do you have a positive duty to initiate defection against an agent who you learn is immoral, or merely no duty not to do so?
OK, so after we handle the above issues, will there still be examples that our time-sensitive, knowledge-sensitive eigenmorality definition gets badly, egregiously wrong? Maybe—I don’t know! Please let me know in the comments.
Moving on to eigendemocracy, here I think the biggest problem is one pointed out by commenter Rahul. Namely, an essential aspect of how Google is able to work so well is that people have reasons for linking to webpages other than boosting those pages’ Google rank. In other words, Google takes a link structure that already exists, independently of its ranking algorithm, and that (as the economists would put it) encodes people’s “revealed preferences,” and exploits that structure for its own purposes. Of course, now that Google is the main way many of us navigate the web, increasing Google rank has become a major reason for linking to a webpage, and an entire SEO industry has arisen to try to game the rankings. But Google still isn’t the only reason for linking, so the link structure still contains real information.
By contrast, consider an eigendemocracy, with a giant network encoding who trusts whom on what subject. If the only reason why this trust network existed was to help make political decisions, then gaming the system would probably be rampant: people could simply decide first which political outcome they wanted, then choose the “experts” such that claiming to “trust” them would do the most for their favored outcome. It follows that this system can only improve on ordinary democracy if the trust network has some other purpose, so that the participants have an actual incentive to reveal the truth about who they trust. So, how would an eigendemocracy suss out the truth about who trusts whom on which subject? I don’t have a very good answer to this, and am open to suggestions. The best idea so far is to use Facebook for this purpose, but I don’t know exactly how.
Update (6/22): Many commenters, both here and on Hacker News, interpreted me to be saying something obviously stupid: namely, that any belief identified as “the consensus” by an eigenvector analysis is therefore the morally right one. They then energetically knocked down this strawman, with the standard examples (Hitler, slavery, discrimination against gays).
Admittedly, I probably contributed to this confusion by my ill-advised decision to discuss eigenmorality and eigendemocracy in the same blog post—solely because of their mathematical similarity, and the ease with which thinking about one leads to thinking about the other. But the two are different, as are my claims about them. For the record: my claim about eigenmorality was only that it’s a mathematically interesting first stab at capturing some of our moral intuitions, one to be tested against those intuitions (for example, in IPD tournaments) and revised wherever it fails them. My claim about eigendemocracy was only that it might help a democratic society act on empirical questions, like the reality of anthropogenic climate change, where a genuine expert consensus already exists.
Crucially, in neither of those claims, nor in their combination, is there any hint of a belief that “the will of the majority always defines what’s morally right” (if anything, there’s a belief in the opposite).
Update (7/4): While this isn’t really a surprise—I’d be astonished if it weren’t the case—I’ve now learned that several people, besides me and Rebecca Goldstein, have previously written about the ideas of eigentrust and eigendemocracy. Perhaps more surprising is that one of the earlier groups—consisting of Sep Kamvar, Mario Schlosser, and Hector Garcia-Molina from Stanford—literally called the idea “EigenTrust,” when they published about it in 2003. (Note that Garcia-Molina, in a likely non-coincidence, was Larry Page and Sergey Brin’s PhD adviser.) Kamvar et al.’s intended application for EigenTrust was to determine which nodes are trustworthy in a peer-to-peer file-sharing network, rather than (say) to reinvent democracy, or to address conundrums of epistemology and ethics that have been with us since Plato. But while the scope might be more modest, the core idea is the same. (Hat tip to commenter Babak.)
As for enhancing democracy using linear algebra, it turns out that that too has already been discussed: see for example this presentation by Rob Spekkens of the Perimeter Institute, which Michael Nielsen pointed me to. (In yet another small-world phenomenon, Rob’s main interest is in quantum foundations, and in that context I’ve known him for a decade! But his interest in eigendemocracy was news to me.)
If you’re wondering whether anything in this post was original … well, so far, I haven’t learned of prior work specifically about eigenmorality (e.g., in Iterated Prisoners Dilemma tournaments), much less about eigenmoses and eigenjesus.
Last month, I blogged about Sen. Tom Coburn (R-Oklahoma) passing an amendment blocking the National Science Foundation from funding most political science research. I wrote:
This sort of political interference with the peer-review process, of course, sets a chilling precedent for all academic research, regardless of discipline. (What’s next, an amendment banning computer science research, unless it has applications to scheduling baseball games or slicing apple pies?)
In the comments section of that post, I was pilloried by critics, who ridiculed my delusional fears about an anti-science witch hunt. Obviously, they said, Congressional Republicans only wanted to slash dubious social science research: not computer science or the other hard sciences that people reading this blog really care about, and that everyone agrees are worthy. Well, today I write to inform you that I was right, and my critics were wrong. For the benefit of readers who might have missed it the first time, let me repeat that:
I was right, and my critics were wrong.
In this case, like in countless others, my “paranoid fears” about what could happen turned out to be preternaturally well-attuned to what would happen.
According to an article in Science, Lamar Smith (R-Texas), the new chair of the ironically-named House Science Committee, held two hearings in which he “floated the idea of having every NSF grant application [in every field] include a statement of how the research, if funded, ‘would directly benefit the American people.’ ” Connoisseurs of NSF proposals will know that every proposal already includes a “Broader Impacts” section, and that that section often borders on comic farce. (“We expect further progress on the μ-approximate shortest vector problem to enthrall middle-school students and other members of the local community, especially if they happen to belong to underrepresented groups.”) Now progress on the μ-approximate shortest vector problem also has to directly—directly—”benefit the American people.” It’s not enough for such research to benefit science—arguably the least bad, least wasteful enterprise our sorry species has ever managed—and for science, in turn, to be a principal engine of the country’s economic and military strength, something that generally can’t be privatized because of a tragedy-of-the-commons problem, and something that economists say has repaid public investments many, many times over. No, the benefit now needs to be “direct.”
The truth is, I find myself strangely indifferent to whether Smith gets his way or not. On the negative side, sure, a pessimist might worry that this could spell the beginning of the end for American science. But on the positive side, I would have been proven so massively right that, even as I held up my “Will Prove Quantum Complexity Theorems For Food” sign on a street corner or whatever, I’d have something to crow about until the end of my life.
Update (Nov. 8): Slate’s pundit scoreboard.
Update (Nov. 6): In crucial election news, a Florida woman wearing an MIT T-shirt was barred from voting, because the election supervisor thought her shirt was advertising Mitt Romney.
At the time of writing, Nate Silver is giving Obama an 86.3% chance. I accept his estimate, while vividly remembering various admittedly-cruder forecasts from the night of November 7, 2000, which gave Gore an 80% chance. (Of course, those forecasts need not have been “wrong”; an event with 20% probability really does happen 20% of the time.) For me, the main uncertainties concern turnout and the effects of various voter-suppression tactics.
In the meantime, I wanted to call the attention of any American citizens reading this blog to the wonderful Election FAQ of Peter Norvig, director of research at Google and a person well-known for being right about pretty much everything. The following passage in particular is worth quoting.
Yes. Voting for president is one of the most cost-effective actions any patriotic American can take.
Let me explain what the question means. For your vote to have an effect on the outcome of the election, you would have to live in a decisive state, meaning a state that would give one candidate or the other the required 270th electoral vote. More importantly, your vote would have to break an exact tie in your state (or, more likely, shift the way that the lawyers and judges will sort out how to count and recount the votes). With 100 million voters nationwide, what are the chances of that? If the chance is so small, why bother voting at all?
Historically, most voters either didn’t worry about this problem, or figured they would vote despite the fact that they weren’t likely to change the outcome, or vote because they want to register the degree of support for their candidate (even a vote that is not decisive is a vote that helps establish whether or not the winner has a “mandate”). But then the 2000 Florida election changed all that, with its slim 537 vote (0.009%) margin.
What is the probability that there will be a decisive state with a very close vote total, where a single vote could make a difference? Statistician Andrew Gelman of Columbia University says about one in 10 million.
That’s a small chance, but what is the value of getting to break the tie? We can estimate the total monetary value by noting that President George W. Bush presided over a $3 trillion war and at least a $1 trillion economic melt-down. Senator Sheldon Whitehouse (D-RI) estimated the cost of the Bush presidency at $7.7 trillion. Let’s compromise and call it $6 trillion, and assume that the other candidate would have been revenue neutral, so the net difference of the presidential choice is $6 trillion.
The value of not voting is that you save, say, an hour of your time. If you’re an average American wage-earner, that’s about $20. In contrast, the value of voting is the probability that your vote will decide the election (1 in 10 million if you live in a swing state) times the cost difference (potentially $6 trillion). That means the expected value of your vote (in that election) was $600,000. What else have you ever done in your life with an expected value of $600,000 per hour? Not even Warren Buffett makes that much. (One caveat: you need to be certain that your contribution is positive, not negative. If you vote for a candidate who makes things worse, then you have a negative expected value. So do your homework before voting. If you haven’t already done that, then you’ll need to add maybe 100 hours to the cost of voting, and the expected value goes down to $6,000 per hour.)
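Spelling out the arithmetic in that last step:

    \[
      \underbrace{\tfrac{1}{10^{7}}}_{\Pr[\text{your vote decides}]}
      \times \underbrace{\$6\times 10^{12}}_{\text{cost difference}}
      \;=\; \$6\times 10^{5} \;=\; \$600{,}000,
      \qquad
      \frac{\$600{,}000}{100\ \text{hours}} \;=\; \$6{,}000 \text{ per hour}.
    \]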
I’d like to embellish Norvig’s analysis with one further thought experiment. While I favor a higher figure, for argument’s sake let’s accept Norvig’s estimate that the cost George W. Bush inflicted on the country was something like $6 trillion. Now, imagine that a delegation of concerned citizens from 2012 were able to go back in time to November 5, 2000, round up 538 lazy Gore supporters in Florida who otherwise would have stayed home, and bribe them to go to the polls. Set aside the illegality of the time-travelers’ action: they’re already violating the laws of space, time, and causality, which are well-known to be considerably more reliable than Florida state election law! Set aside all the other interventions that also would’ve swayed the 2000 election outcome, and the 20/20 nature of hindsight, and the insanity of Florida’s recount process. Instead, let’s simply ask: how much should each of those 538 lazy Floridian Gore supporters have been paid, in order for the delegation from the future to have gotten its money’s worth?
The answer is a mind-boggling ~$10 billion per voter. Think about that: just for peeling their backsides off the couch, heading to the local library or school gymnasium, and punching a few chads (all the way through, hopefully), each of those 538 voters would have instantly received the sort of wealth normally associated with Saudi princes or founders of Google or Facebook. And the country and the world would have benefited from that bargain.
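For the record, the division behind that mind-boggling figure:

    \[
      \frac{\$6\times 10^{12}}{538\ \text{voters}} \;\approx\; \$1.1\times 10^{10}
      \text{ per voter}.
    \]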
No, this isn’t really a decisive argument for anything (I’ll leave it to the commenters to point out the many possible objections). All it is, is an image worth keeping in mind the next time someone knowingly explains to you why voting is a waste of time.
Update (10/31): While I continue to engage in surreal arguments in the comments section—Scott, I’m profoundly disappointed that a scientist like you, who surely knows better, would be so sloppy as to assert without any real proof that just because it has tusks and a trunk, and looks and sounds like an elephant, and is the size of the elephant, that it therefore is an elephant, completely ignoring the blah blah blah blah blah—while I do that, there are a few glimmerings that the rest of the world is finally starting to get it. A new story from The Onion, which I regard as almost the only real newspaper left:
Update (11/1): OK, and this morning from Nicholas Kristof, who’s long been one of the rare non-Onion practitioners of journalism: Will Climate Get Some Respect Now?
I’m writing from the abstract, hypothetical future that climate-change alarmists talk about—the one where huge tropical storms batter the northeastern US, coastal cities are flooded, hundreds of thousands are evacuated from their homes, etc. I always imagined that, when this future finally showed up, at least I’d have the satisfaction of seeing the deniers admit they were grievously wrong, and that I and those who think similarly were right. Which, for an academic, is a satisfaction that has to be balanced carefully against the possible destruction of the world. I don’t think I had the imagination to foresee that the prophesied future would actually arrive, and that climate change would simultaneously disappear as a political issue—with the forces of know-nothingism bolder than ever, pressing their advantage into questions like whether or not raped women can get pregnant, as the President weakly pleads that he too favors more oil drilling. I should have known from years of blogging that, if you hope for the consolation of seeing those who are wrong admit to being wrong, you hope for a form of happiness all but unattainable in this world.
Yet, if the transformation of the eastern seaboard into something out of the Jurassic hasn’t brought me that satisfaction, it has brought a different, completely unanticipated benefit. Trapped in my apartment, with the campus closed and all meetings cancelled, I’ve found, for the first time in months, that I actually have some time to write papers. (And, well, blog posts.) Because of this, part of me wishes that the hurricane would continue all week, even a month or two (minus, of course, the power outages, evacuations, and other nasty side effects). I could learn to like this future.
At this point in the post, I was going to transition cleverly into an almost (but not completely) unrelated question about the nature of causality. But I now realize that the mention of hurricanes and (especially) climate change will overshadow anything I have to say about more abstract matters. So I’ll save the causality stuff for tomorrow or Wednesday. Hopefully the hurricane will still be here, and I’ll have time to write.
See, now this is precisely why I became a CS professor: so that if anyone asked, I could give not merely my opinion, but my professional, expert opinion, on the question of whether psychopathic Terminators will kill us all.
My response (slightly edited) is below.
I fear that your question presupposes way too much anthropomorphizing of an AI machine—that is, imagining that it would even be understandable in terms of human categories like “empathetic” versus “psychopathic.” Sure, an AI might be understandable in those sorts of terms, but only if it had been programmed to act like a human. In that case, though, I personally find it no easier or harder to imagine an “empathetic” humanoid robot than a “psychopathic” robot! (If you want a rich imagining of “empathetic robots” in science fiction, of course you need look no further than Isaac Asimov.)
On the other hand, I personally also think it’s possible—even likely—that an AI would pursue its goals (whatever they happened to be) in a way so different from what humans are used to that the AI couldn’t be usefully compared to any particular type of human, even a human psychopath. To drive home this point, the AI visionary Eliezer Yudkowsky likes to use the example of the “paperclip maximizer.” This is an AI whose programming would cause it to use its unimaginably-vast intelligence in the service of one goal only: namely, converting as much matter as it possibly can into paperclips!
Now, if such an AI were created, it would indeed likely spell doom for humanity, since the AI would think nothing of destroying the entire Earth to get more iron for paperclips. But terrible though it would be, would you really want to describe such an entity as a “psychopath,” any more than you’d describe (say) a nuclear weapon as a “psychopath”? The word “psychopath” connotes some sort of deviation from the human norm, but human norms were never applicable to the paperclip maximizer in the first place … all that was ever relevant was the paperclip norm!
Motivated by these sorts of observations, Yudkowsky has thought and written a great deal about the question of how to create a “friendly AI,” by which he means one that would use its vast intelligence to improve human welfare, instead of maximizing some other arbitrary objective (like the total number of paperclips in existence) that might be at odds with our welfare. While I don’t always agree with him—for example, I don’t think AI has a single “key,” and I certainly don’t think such a key will be discovered anytime soon—I’m sure you’d find his writings at yudkowsky.net, lesswrong.com, and overcomingbias.com to be of interest to you.
I should mention, in passing, that “parallel programming” has nothing at all to do with your other (fun) questions. You could perfectly well have a murderous robot with parallel programming, or a kind, loving robot with serial programming only.
Hope that helps,
Alex Halderman, University of Michigan computer security professor and my best friend from childhood (see previous Shtetl-Optimized coverage here and here), has been in the news again, with a new Internet anti-censorship system called Telex that he co-developed with Ian Goldberg, Eric Wustrow, and Scott Wolchok (see, e.g., here, here, here for more info). Basically, Telex would let interested governments or ISPs help the citizens of (say) China or Iran access content that their governments are trying to block. Having gotten hold of the Telex software (say, from a friend), the Chinese or Iranian websurfer would access an innocuous-looking website, but insert cryptographic tags into its HTTPS requests to alert an ISP along the way (not an ISP inside China or Iran) that it wanted to activate the anti-censorship service.
If you happen to be a high-level official at the State Department or a three-letter agency, or a wealthy philanthropist, I can think of few smarter things you could do than to support this kind of effort. The system that Alex and his collaborators envision wouldn’t be trivial to deploy, but it’s certainly cheaper than aircraft carriers.
Meanwhile, in other Al?x news, my cousin Alix Genter was splashed across the cover of the Philadelphia Daily News this morning (you can read the accompanying article here). What happened is that the owner of a bridal store in New Jersey called “Here Comes the Bride” refused to sell Alix a wedding dress, after finding out that Alix plans to marry another woman in New York State. So now supporters of gay rights are having a field day with Here Comes the Bride’s Yelp page.
I wish both of these Al?xes the best, as they work toward a better world in their different ways.
Spoiler: Actual change of opinion below! You’ll need to read to the end, though.
I’ve learned that the only way to find out who reads this blog is to criticize famous people. For example, when I criticized Ayn Rand’s Atlas Shrugged, legions of Objectivist readers appeared out of nowhere to hammer me in the comments section, while the left-wing readers were silent. Now that I criticize Chomsky (or originally, mainly just quoted him), I’m getting firebombed in the comments section by Chomsky fans, with only a few brave souls showing up from the right flank to offer reinforcements. One would imagine that, on at least one of these topics, more readers must agree with me than are making themselves heard in the comments—but maybe I just have the rare gift of writing in a way that enrages everyone!
Yesterday, I found myself trying to be extra-nice to people I met, as if to reassure myself that I wasn’t the monster some of the Chomskyan commenters portrayed me as. I told myself that, if agreeing with President Obama’s decision to target bin Laden made me a barbarian unworthy of civilization, then at least I’d have the likes of Salman Rushdie, Christopher Hitchens, and Jon Stewart with me in hell—better company than Sean Hannity and Rush Limbaugh.
In my view, one of the reasons the discussion was so heated is that two extremely different questions got conflated (leaving aside the third question of whether al Qaeda was “really” responsible for 9/11, which I find unworthy of discussion).
The first question is whether, as Chomsky suggests, the US government is “uncontroversially” a “vastly” worse terrorist organization than al Qaeda, since it’s caused many more civilian deaths. On this, my opinion is unchanged: the answer is a flat-out no. There is a fundamental reason, having nothing to do with nationalist prejudices, why Osama bin Laden was much more evil than Henry Kissinger, Donald Rumsfeld, Dick Cheney, and George W. Bush combined. The reason is one that Chomsky and his supporters find easy to elide, since—like many other facts about the actual world—it requires considering hypothetical scenarios.
Give Kissinger, Rumsfeld, Cheney, and Bush magic dials that let them adjust the number of civilian casualties they inflict, consistent with achieving their (partly justified and largely foolish) military goals. As odious as those men are, who can doubt that they’d turn their dials to zero? By contrast, give bin Laden a dial that lets him adjust the number of Jews, Americans, and apostates he kills, and what do you think the chances are that he’d turn it from 3,000 up to 300 million, or “infinity”? But if, implausibly (in my view), one maintains that bin Laden would have preferred not to kill any civilians, provided that he could magically attain his goal of imposing Sharia law on the world, then the crux of the matter is simply that I don’t want to live under Sharia law: I even prefer living in George W. Bush’s America. (One obvious reason these hypotheticals matter is that, once the Jihadists get access to nuclear weapons, the dial is no longer particularly hypothetical at all.)
So much for the first question. The second, and to me much more worthwhile, question is whether the US should have made a more strenuous effort to capture bin Laden alive and try him, rather than executing him on the spot. (Of course, part of the problem is that we don’t really know how strenuous an effort the SEAL team actually made. However, let’s suppose, for the sake of an interesting question, that it wasn’t very strenuous.) It’s on this second question that my views have changed.
My original reasoning was as follows: the purpose of a trial is to bring facts to light, but this is an unusual case in which the entire world has known the facts for a decade (and the “defendant” agrees to the facts, having openly declared war on the West). It’s almost impossible to conceive of someone who would be convinced of bin Laden’s guilt by a trial, but who wasn’t already convinced of it beforehand. The people who need convincing—such as Jihadists and 9/11 conspiracy theorists—are people who can never be convinced, for fundamental reasons. Therefore, while a trial would have been fine—if bin Laden had come out with his hands up, or (let’s suppose) turned himself in, at any point during the last decade—a bullet to the head was fine as well.
To put it differently: trials struck me as merely a means to the end of justice, just as college courses are merely a means to the end of learnin’. Now personally, I always favor letting a student skip a course if it’s obvious that the student already knows the material—even if that means bending university rules. It stands to reason, then, that I should similarly favor letting a government skip a trial if the verdict is already obvious to the entire sane world.
Many commenters made arguments against this viewpoint—often phrased in terms of bin Laden’s “rights”—that did nothing to persuade me. The one argument that did ultimately persuade me was that, at least for some people, trials are not just a means to an end: they’re an end in themselves, a moving demonstration of the superiority of our system to the Nazis’ and the Jihadists’. Here’s how a reader named Steve E put it, in a personal email that he’s kindly allowed me to quote:
I wonder what you think of the proposition that the Jews of Norwich [targets of the first blood libel, in 1144, whose community was massacred by a mob in 1190] would have preferred a show trial to the mob justice they received. I’m not sure of this proposition, because I could also see a show trial being somehow worse, but on the other hand wouldn’t we all prefer a real trial to a show trial and a show trial to no trial when our lives hang in the balance? Trials perform a nontrivial service even if they don’t convince anyone who is not already convinced, just as human babies perform a nontrivial service even if they have no use, and particle colliders perform a nontrivial service even if they don’t defend our nation. Trials make our nation worth defending; they, like human babies, have intrinsic value not just for their potential. In this case, it may be true that giving bin Laden a trial would have been a bonus rather than a requirement, but wouldn’t you agree that it’d have been a bonus? Trying Osama bin Laden would have shown our moral high ground, maybe not to some who can’t be convinced of America’s goodness, but it would have done so for me! (I’m very proud that Israel tried Eichmann, not just because it showed the world about the Holocaust, but also because it showed me about Israel’s character. Let people react to the trial as they may. That trial had meaning to me.)
And so I’ve decided that, while assassinating bin Laden was vastly better than leaving him at large, and I applaud the success of the operation, it would’ve been even better if he’d been captured alive and tried—even if that’s not what bin Laden himself wanted. For the sake of people like Steve E.
For the record, here is the heart of what Chomsky wrote about the operation:

It’s increasingly clear that the operation was a planned assassination, multiply violating elementary norms of international law. There appears to have been no attempt to apprehend the unarmed victim, as presumably could have been done by 80 commandos facing virtually no opposition—except, they claim, from his wife, who lunged towards them. In societies that profess some respect for law, suspects are apprehended and brought to fair trial. I stress “suspects.” In April 2002, the head of the FBI, Robert Mueller, informed the press that after the most intensive investigation in history, the FBI could say no more than that it “believed” that the plot was hatched in Afghanistan, though implemented in the UAE and Germany. What they only believed in April 2002, they obviously didn’t know 8 months earlier, when Washington dismissed tentative offers by the Taliban (how serious, we do not know, because they were instantly dismissed) to extradite bin Laden if they were presented with evidence—which, as we soon learned, Washington didn’t have. Thus Obama was simply lying when he said, in his White House statement, that “we quickly learned that the 9/11 attacks were carried out by al Qaeda.”
Nothing serious has been provided since. There is much talk of bin Laden’s “confession,” but that is rather like my confession that I won the Boston Marathon. He boasted of what he regarded as a great achievement.
There is also much media discussion of Washington’s anger that Pakistan didn’t turn over bin Laden, though surely elements of the military and security forces were aware of his presence in Abbottabad. Less is said about Pakistani anger that the U.S. invaded their territory to carry out a political assassination…
We might ask ourselves how we would be reacting if Iraqi commandos landed at George W. Bush’s compound, assassinated him, and dumped his body in the Atlantic. Uncontroversially, his crimes vastly exceed bin Laden’s, and he is not a “suspect” but uncontroversially the “decider” who gave the orders to commit the “supreme international crime differing only from other war crimes in that it contains within itself the accumulated evil of the whole” (quoting the Nuremberg Tribunal) for which Nazi criminals were hanged…
There is much more to say, but even the most obvious and elementary facts should provide us with a good deal to think about.
And here, for contrast, is Obama’s own account of how the operation was planned:

Shortly after I got into office, I brought [CIA director] Leon Panetta privately into the Oval Office and I said to him, “We need to redouble our efforts in hunting bin Laden down. And I want us to start putting more resources, more focus, and more urgency into that mission” …
We had multiple meetings in the Situation Room in which we would map out — and we would actually have a model of the compound and discuss how this operation might proceed, and what various options there were because there was more than one way in which we might go about this.
And in some ways sending in choppers and actually puttin’ our guys on the ground entailed some greater risks than some other options. I thought it was important, though, for us to be able to say that we’d definitely got the guy. We thought that it was important for us to be able to exploit potential information that was on the ground in the compound if it did turn out to be him.
We thought that it was important for us not only to protect the lives of our guys, but also to try to minimize collateral damage in the region because this was in a residential neighborhood …
Last night, the MIT Egyptian Club hosted a “What’s Going On In Egypt?” event, which included a lecture, a Q&A session with Egyptian students, Egyptian music, and free falafel and baklava. I went, not least because of the falafel.
The announcement that Mubarak was leaving came just a few hours before the event, which was planned as a somber discussion but hastily reconfigured as a celebration. As you’d imagine, the mood was ecstatic: some people came draped in Egyptian flags, and there was shouting, embracing, and even blowing of vuvuzelas. Building E51 wasn’t quite Tahrir Square, but it was as close as I was going to get.
About 300 people showed up. I’d expected an even bigger turnout—but then again, this was MIT, where the democratic awakening of the Arab world might have to wait if there’s a pset due next week. Many of the people who came were speaking Arabic, greeting each other with “salaam aleykum” (“peace be upon you”). But only a minority were Egyptians: I met jubilant Syrians and Saudi Arabians, and pan-Arab pride was a major theme of the evening.
At one point, I overheard two guys speaking something that sounded like Arabic but wasn’t: “yesh khasa? eyn?” It was Hebrew (“Is there lettuce? There isn’t any?”), which I’m proud to say I now speak at almost the level of a 3-year-old. The Israelis were debating whether there was lettuce in the falafel (there wasn’t). Joining their conversation, I confirmed that we had come for basically the same reasons: first, to “witness” (insofar as one could without leaving campus) one of the great revolutions of our time; secondly, the falafel.
Two socialist organizations were selling newspapers, with headlines trumpeting the events in Egypt as the dawn of a long-awaited global workers’ revolt against capitalism. Buying a $1 newspaper (and politely turning down a subscription), I thought to myself that one has to admire these folks’ persistence, if not their powers of analysis.
Finally the main event started. An Egyptian student from Harvard presented a slideshow, which summarized both the events of the last three weeks and the outrages of the last 30 years that led to them (poverty, torture, suppression of opposition parties, indefinite detention without charges, arrests for things like having long hair). He said that this uprising wasn’t anything like Iran’s 30 years ago, that it was non-Islamic and led by the pro-democracy Facebook generation. Then there was half an hour for Q&A.
Someone asked about the protesters’ economic goals. One student panelist started to answer, but then another interjected: “Look, the people in Tahrir Square just overthrew the government. I don’t think they’ve had much time yet to think through their economic plan.”
Someone else asked about the role of the US. A student answered that it was “complicated, to say the least,” and that the Obama administration seemed internally divided.
Perhaps the most interesting question was whether the students themselves planned to return to Egypt, to help build the new democratic society. After a long silence, two students said yes.
No one asked about the future of Egypt/Israel relations, and the subject never came up. But it seemed obvious that, if the students I saw were running Egypt, they’d be too busy modernizing their country’s economy to spend much time denouncing Zionist iniquities.
In general, I agree with Natan Sharansky that, for the US and Israel, it would be incredibly shortsighted to see only danger and “instability” in the Great Egyptian Twitter Revolt of 2011. The variance of possible outcomes is enormous, which makes the expectation almost impossible to estimate, but a substantial share of the probability mass clearly sits on the positive side.
So, to my Egyptian readers: congratulations, best wishes, mazel tov, and mabrouk from the entire executive staff of Shtetl-Optimized. May your revolution be remembered with those of 1776 and 1989 and not with those of 1917 and 1979.
Five years ago, not long after the founding of Shtetl-Optimized, I blogged about Alex Halderman: my best friend since seventh grade at Newtown Junior High School, now a famous security researcher and a computer science professor at the University of Michigan, and someone whose exploits seem to be worrying at least one government as much as Julian Assange’s.
In the past, Alex has demonstrated the futility of copy-protection schemes for music CDs, helped force the state of California to change its standards for electronic voting machines, and led a spectacular attack against an Internet voting pilot in Washington DC. But Alex’s latest project is probably his most important and politically-riskiest yet. Alex, Hari Prasad of India, and Rop Gonggrijp of the Netherlands demonstrated massive security problems with electronic voting machines in India (which are used by about 400 million people in each election, making them the most widely-used voting system on earth). As a result of this work, Hari was arrested in his home and jailed by the Indian authorities, who threatened not to release him until he revealed the source of the voting machine that he, Alex, and Rop had analyzed. After finally being released by a sympathetic judge, Hari flew to the United States, where he received the Electronic Frontier Foundation’s 2010 Pioneer Award. I had the honor of meeting Hari at MIT during his and Alex’s subsequent US lecture tour.
But the story continues. Earlier this week, after flying into India to give a talk at the International Conference on Information Systems Security (ICISS’2010) in Gandhinagar, Alex and Rop were detained at the New Delhi airport and threatened with deportation from India. No explanation was given, even though the story became front-page news in India. Finally, after refusing to board planes out of New Delhi without being given a reason in writing for their deportation, Alex and Rop were allowed to enter India, but only on the condition that they did so as “tourists.” In particular, they were banned from presenting their research on electronic voting machines, and the relevant conference session was cancelled.
To those in the Indian government responsible for the harassment of Alex Halderman and Rop Gonggrijp and (more seriously) the imprisonment of Hari Prasad: shame on you! And to Alex, Hari, and Rop: let the well-wishes of this blog be like a small, nerdy wind beneath your wings.