Eigenmorality

This post is about an idea I had around 1997, when I was 16 years old and a freshman computer-science major at Cornell.  Back then, I was extremely impressed by a research project called CLEVER, which one of my professors, Jon Kleinberg, had led while working at IBM Almaden.  The idea was to use the link structure of the web itself to rank which web pages were most important, and therefore which ones should be returned first in a search query.  Specifically, Kleinberg defined “hubs” as pages that linked to lots of “authorities,” and “authorities” as pages that were linked to by lots of “hubs.”  At first glance, this definition seems hopelessly circular, but Kleinberg observed that one can break the circularity by just treating the World Wide Web as a giant directed graph, and doing some linear algebra on its adjacency matrix.  Equivalently, you can imagine an iterative process where each web page starts out with the same hub/authority “starting credits,” but then in each round, the pages distribute their credits among their neighbors, so that the most popular pages get more credits, which they can then, in turn, distribute to their neighbors by linking to them.

I was also impressed by a similar research project called PageRank, which was proposed later by two guys at Stanford named Sergey Brin and Larry Page.  Brin and Page dispensed with Kleinberg’s bipartite hubs-and-authorities structure in favor of a more uniform structure, and made some other changes, but otherwise their idea was very similar.  At the time, of course, I didn’t know that CLEVER was going to languish at IBM, while PageRank (renamed Google) was going to expand to roughly the size of the entire world’s economy.

In any case, the question I asked myself about CLEVER/PageRank was not the one that, maybe in retrospect, I should have asked: namely, “how can I leverage the fact that I know the importance of this idea before most people do, in order to make millions of dollars?”

Instead I asked myself: “what other ‘vicious circles’ in science and philosophy could one unravel using the same linear-algebra trick that CLEVER and PageRank exploit?”  After all, CLEVER and PageRank were both founded on what looked like a hopelessly circular intuition: “a web page is important if other important web pages link to it.”  Yet they both managed to use math to defeat the circularity.  All you had to do was find an “importance equilibrium,” in which your assignment of “importance” to each web page was stable under a certain linear map.  And such an equilibrium could be shown to exist—and, under mild conditions on the link graph, to exist uniquely.

Searching for other circular notions to elucidate using linear algebra, I hit on morality.  Philosophers from Socrates on, I was vaguely aware, had struggled to define what makes a person “moral” or “virtuous,” without tacitly presupposing the answer.  Well, it seemed to me that, as a first attempt, one could do a lot worse than the following:

A moral person is someone who cooperates with other moral people, and who refuses to cooperate with immoral people.

Obviously one can quibble with this definition on numerous grounds: for example, what exactly does it mean to “cooperate,” and which other people are relevant here?  If you don’t donate money to starving children in Africa, have you implicitly “refused to cooperate” with them?  What’s the relative importance of cooperating with good people and withholding cooperation with bad people, of kindness and justice?  Is there a duty not to cooperate with bad people, or merely the lack of a duty to cooperate with them?  Should we consider intent, or only outcomes?  Surely we shouldn’t hold someone accountable for sheltering a burglar, if they didn’t know about the burgling?  Also, should we compute your “total morality” by simply summing over your interactions with everyone else in your community?  If so, then can a career’s worth of lifesaving surgeries numerically overwhelm the badness of murdering a single child?

For now, I want you to set all of these important questions aside, and just focus on the fact that the definition doesn’t even seem to work on its own terms, because of circularity.  How can we possibly know which people are moral (and hence worthy of our cooperation), and which ones immoral (and hence unworthy), without presupposing the very thing that we seek to define?

Ah, I thought—this is precisely where linear algebra can come to the rescue!  Just like in CLEVER or PageRank, we can begin by giving everyone in the community an equal number of “morality starting credits.”  Then we can apply an iterative update rule, where each person A can gain morality credits by cooperating with each other person B, and A gains more credits the more credits B has already.  We apply the rule over and over, until the number of morality credits per person converges to an equilibrium.  (Or, of course, we can shortcut the process by simply finding the principal eigenvector of the “cooperation matrix,” using whatever algorithm we like.)  We then have our objective measure of morality for each individual, solving a 2400-year-old open problem in philosophy.
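
For concreteness (and with the benefit of 2014 hindsight), here’s roughly what that iteration looks like in Python.  Everything in it is illustrative: the toy cooperation matrix is made up, and renormalizing the credits to sum to 1 each round is just one convenient choice.

```python
import numpy as np

def eigenmorality(coop, iters=1000, tol=1e-12):
    """Redistribute 'morality credits' until they converge.

    coop[i, j] = how much person i cooperates with person j (nonnegative).
    Returns one morality score per person, normalized to sum to 1.
    """
    n = coop.shape[0]
    credits = np.ones(n) / n      # equal starting credits
    for _ in range(iters):
        new = coop @ credits      # credit for cooperating with the credited
        new /= new.sum()          # keep the total amount of credit fixed
        if np.abs(new - credits).max() < tol:
            break
        credits = new
    return credits

# A made-up 4-person community: person 0 cooperates with everyone,
# person 3 cooperates with no one.
coop = np.array([[0., 1., 1., 1.],
                 [1., 0., 1., 0.],
                 [1., 1., 0., 0.],
                 [0., 0., 0., 0.]])
print(eigenmorality(coop))
```

On this toy matrix, persons 0, 1, and 2 end up with equal credit and person 3 with none.  Notice that person 0’s kindness toward the discredited person 3 earns nothing extra; that, in miniature, is the “eigen” feature of the definition.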

The next step, I figured, would be to hack together some code that computed this “eigenmorality” metric, and then see what happened when I ran the code to measure the morality of each participant in a simulated society.  What would happen?  Would the results conform to my pre-theoretic intuitions about what sort of behavior was moral and what wasn’t?  If not, then would watching the simulation give me new ideas about how to improve the morality metric?  Or would it be my intuitions themselves that would change?

Unfortunately, I never got around to the “coding it up” part—there’s a reason why I became a theorist!  The eigenmorality idea went onto my back burner, where it stayed for the next 16 years: 16 years in which our world descended ever further into darkness, lacking a principled way to quantify morality.  But finally, this year, two separate things have happened on the eigenmorality front, and that’s why I’m blogging about it now.

Eigenjesus and Eigenmoses

The first thing that’s happened is that Tyler Singer-Clark, my superb former undergraduate advisee, did code up eigenmorality metrics and test them out on a simulated society, for his MIT senior thesis project.  You can read Tyler’s 12-page report here—it’s a fun, enjoyable, thought-provoking first research paper, one that I wholeheartedly recommend.  Or, if you’d like to experiment yourself with the Python code, you can download it here from github.  (Of course, all opinions expressed in this post are mine alone, not necessarily Tyler’s.)

Briefly, Tyler examined what eigenmorality has to say in the setting of an Iterated Prisoner’s Dilemma (IPD) tournament.  The Iterated Prisoner’s Dilemma is the famous game in which two players meet repeatedly, and in each turn can either “Cooperate” or “Defect.”  The absolute best thing, from your perspective, is if you defect while your partner cooperates.  But you’re also pretty happy if you both cooperate.  You’re less happy if you both defect, while the worst (from your standpoint) is if you cooperate while your partner defects.  At each turn, when contemplating what to do, you have the entire previous history of your interaction with this partner available to you.  And thus, for example, you can decide to “punish” your partner for past defections, “reward” her for past cooperations, or “try to take advantage” by unilaterally defecting and seeing what happens.  At each turn, the game has some small constant probability of ending—so you know approximately how many times you’ll meet this partner in the future, but you don’t know exactly when the last turn will be.  Your score in the game is then the sum total of your payoffs over all turns and all partners (where each player meets each other player once).
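
(To fix conventions for the code sketches in this post: the payoff numbers below are the standard Axelrod values, assumed purely for concreteness, since the game only requires the ordering just described; the 1% stopping probability is likewise an arbitrary choice of mine.)

```python
import random

# Conventional payoffs (an assumption; the game only fixes the ordering):
PAYOFF = {('D', 'C'): 5,   # you defect, partner cooperates: best for you
          ('C', 'C'): 3,   # mutual cooperation
          ('D', 'D'): 1,   # mutual defection
          ('C', 'D'): 0}   # you cooperate, partner defects: worst for you

def play_match(strategy_a, strategy_b, p_end=0.01):
    """Play one match between two strategies.

    A strategy is a function (my_history, their_history) -> 'C' or 'D'.
    After each turn the match ends with small constant probability p_end,
    so neither player knows exactly when the last turn will come.
    """
    hist_a, hist_b = [], []
    score_a = score_b = 0
    while True:
        move_a = strategy_a(hist_a, hist_b)
        move_b = strategy_b(hist_b, hist_a)
        score_a += PAYOFF[(move_a, move_b)]
        score_b += PAYOFF[(move_b, move_a)]
        hist_a.append(move_a)
        hist_b.append(move_b)
        if random.random() < p_end:
            return score_a, score_b, hist_a, hist_b
```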

In the late 1970s, as recounted in his classic work The Evolution of Cooperation, Robert Axelrod invited people all over the world to submit computer programs for playing this game, which were then pitted against each other in the world’s first serious IPD tournament.  And, in a tale that’s been retold in hundreds of popular books, while many people submitted complicated programs that used machine learning, etc. to try to suss out their opponents, the program that won—hands-down, repeatedly—was TIT_FOR_TAT, a few lines of code submitted by the psychologist Anatol Rapoport to implement an ancient moral maxim.  TIT_FOR_TAT starts out by cooperating; thereafter, it simply does whatever its opponent did in the last move, swiftly rewarding every cooperation and punishing every defection, and ignoring everything that came before.  In the decades since Axelrod, running Iterated Prisoners’ Dilemma tournaments has become a minor industry, with countless variations explored (for example, “evolutionary” versions, and versions allowing side-communication between the players), countless new strategies invented, and countless papers published.  To make a long story short, TIT_FOR_TAT continues to do quite well across a wide range of environments, but depending on the mix of players present, other strategies can sometimes beat TIT_FOR_TAT.  (As one example, if there’s a sizable minority of colluding players, who recognize each other by cooperating and defecting in a prearranged sequence, then those players can destroy TIT_FOR_TAT and other “simple” strategies, by cooperating with one another while defecting against everyone else.)
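
For reference, here’s the strategy in the conventions of the sketch above (the textbook description, not Rapoport’s original submission):

```python
def tit_for_tat(my_history, their_history):
    """Cooperate on the first turn; thereafter copy the partner's previous move."""
    return 'C' if not their_history else their_history[-1]

def always_defect(my_history, their_history):
    return 'D'

# Example (using play_match from the earlier sketch): head-to-head,
# TIT_FOR_TAT loses to ALWAYS_DEFECT only by the single sucker payoff
# it concedes on the first turn.
# print(play_match(tit_for_tat, always_defect))
```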

Anyway, Tyler sets up and runs a fairly standard IPD tournament, with a mix of strategies that includes TIT_FOR_TAT, TIT_FOR_TWO_TATS, other TIT_FOR_TAT variations, PAVLOV, FRIEDMAN, EATHERLY, CHAMPION (see the paper for details), and degenerate strategies like always defecting, always cooperating, and playing randomly.  However, Tyler then asks an unusual question about the IPD tournament: namely, purely on the basis of the cooperate/defect sequences, which players should we judge to have acted morally toward their partners?

It might be objected that the players didn’t “know” they were going to be graded on morality: as far as they knew, they were just trying to maximize their individual utilities.  The trouble with that objection is that the players didn’t “know” they were trying to maximize their utilities either!  The players are bots, which do whatever their code tells them to do.  So in some sense, utility—no less than morality—is “merely an interpretation” that we impose on the raw cooperate/defect sequences!  There’s nothing to stop us from imposing some other interpretation (say, one that explicitly tries to measure morality) and seeing what happens.

In an attempt to measure the players’ morality, Tyler uses the eigenmorality idea from before.  The extent to which player A “cooperates” with player B is simply measured by the percentage of turns on which A cooperates with B.  (One acknowledged limitation of this work is that, when two players both defect, there’s no attempt to take into account “who started it,” and to judge the aggressor more harshly than the retaliator—or to incorporate time in any other way.)  This then gives us a “cooperation matrix,” whose (i,j) entry records how much niceness player i displayed to player j.  Taking that matrix’s principal eigenvector (the eigenvector corresponding to its largest eigenvalue) then gives us our morality scores.
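
In code, building that cooperation matrix from the recorded move sequences might look something like the following sketch of mine (Tyler’s actual implementation may differ in its details):

```python
import numpy as np

def cooperation_matrix(histories, n_players):
    """histories[(i, j)] = the list of moves ('C'/'D') that player i made against j.

    Returns the matrix whose (i, j) entry is the fraction of turns on which
    player i cooperated with player j.
    """
    M = np.zeros((n_players, n_players))
    for (i, j), moves in histories.items():
        M[i, j] = sum(m == 'C' for m in moves) / len(moves)
    return M
```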

Now, there’s a very interesting ambiguity in what I said above.  Namely, should we define the “niceness scores” to lie in [0,1] (so that the lowest, meanest possible score is 0), or in [-1,1] (so that it’s possible to have negative niceness)?  This might sound like a triviality, but in our setting, it’s precisely the mathematical reflection of one of the philosophical conundrums I mentioned earlier.  The conundrum can be stated as follows: is your morality a monotone function of your niceness?  We all agree, presumably, that it’s better to be nice to Gandhi than to be nice to Hitler.  But do you have a positive obligation to be not-nice to Hitler: to make him suffer because he made others suffer?  Or, OK, how about not Hitler, but someone who’s somewhat bad?  Consider, for example, a woman who falls in love with, and marries, an unrepentant armed robber (with full knowledge of who he is, and with other options available to her).  Is the woman morally praiseworthy for loving her husband despite his bad behavior?  Or is she blameworthy because, by rewarding his behavior with her love, she helps to enable it?

To capture two possible extremes of opinion about such questions, Tyler and I defined two different morality metrics, which we called … wait for it … eigenmoses and eigenjesus.  Eigenmoses has the niceness scores in [-1,1], which means that you’re actively rewarded for punishing evildoers: that is, for defecting against those who defect against many moral players.  Eigenjesus, by contrast, has the niceness scores in [0,1], which means that you always do at least as well by “turning the other cheek” and cooperating.  (Though note that, even with eigenjesus, you get more morality credits by cooperating with moral players than by cooperating with immoral ones.)
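
Concretely, here’s one way to compute the two metrics from the cooperation-fraction matrix above.  This is my own reading of the definitions, not necessarily the exact normalizations in Tyler’s code.  Note also that Perron–Frobenius guarantees a real, nonnegative principal eigenvector for the eigenjesus matrix, but not for the eigenmoses matrix, whose entries can be negative; the sketch simply assumes the leading eigenvalue is real.

```python
import numpy as np

def leading_eigenvector(M):
    """Eigenvector for the eigenvalue of largest magnitude, sign-normalized."""
    vals, vecs = np.linalg.eig(M)
    v = vecs[:, np.argmax(np.abs(vals))].real
    return v if v.sum() >= 0 else -v

def eigenjesus_scores(coop_frac):
    """Niceness in [0, 1]: cooperating can only ever help your morality score."""
    return leading_eigenvector(coop_frac)

def eigenmoses_scores(coop_frac):
    """Niceness rescaled to [-1, 1]: defecting against the immoral is rewarded."""
    return leading_eigenvector(2.0 * coop_frac - 1.0)
```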

This is probably a good place to mention a second limitation of Tyler’s current study.  Namely, with the current system, there’s no direct way for a player to find out how its partner has been behaving toward third parties.  The only information that A gets about the goodness or evilness of player B comes from A and B’s direct interaction.  Ideally, one would like to design bots that take into account, not only the other bots’ behavior toward them, but the other bots’ behavior toward each other.  So for example, even if someone is unfailingly nice to you, if that person is an asshole to everyone else, then the eigenmoses moral code would demand that you return the person’s cooperation with icy defection.  Conversely, even if Gandhi is mean and hateful to you, you would still be morally obliged (interestingly, on both the eigenmoses and eigenjesus codes) to be nice to him, because of the amount of good he does for everyone else.

Anyway, you can read Tyler’s paper if you want to see the results of computing the eigenmoses and eigenjesus scores for a diverse population of bots.  Briefly, the results accord pretty well with intuition.  When we look at eigenjesus scores, the all-cooperate bot comes out on top and the all-defect bot on the bottom (as is mathematically necessary), with TIT_FOR_TAT somewhere in the middle, and generous versions of TIT_FOR_TAT higher up.  When we look at eigenmoses, by contrast, TIT_FOR_TWO_TATS comes out on top, with TIT_FOR_TAT in sixth place, and the all-cooperate bot scoring below the median.  Interestingly, once again, the all-defect bot gets the lowest score (though in this case, it wasn’t mathematically necessary).

Even though the measures acquit themselves well in this particular tournament, it’s admittedly easy to construct scenarios where the prescriptions of eigenjesus and eigenmoses alike violently diverge from most people’s moral intuitions.  We’ve already touched on a few such scenarios above (for example, are you really morally obligated to lick the boots of someone who kicks you, just because that person is a saint to everyone other than you?).  Another type of scenario involves minorities.  Imagine, for instance, that 98% of the players are unfailingly nice to each other, but unfailingly cruel to the remaining 2% (who they can recognize, let’s say, by their long noses or darker skin—some trivial feature like that).  Meanwhile, the put-upon 2% return the favor by being nice to each other and mean to the 98%.  Who, in this scenario, is moral, and who’s immoral?  The mathematical verdict of both eigenmoses and eigenjesus is unequivocal: the 98% are almost perfectly good, while the 2% are almost perfectly evil.  After all, the 98% are nice to almost everyone, while the 2% are mean to those who are nice to almost everyone, and nice only to a tiny minority who are mean to almost everyone.  Of course, for much of human history, this is precisely how morality worked, in many people’s minds.  But I dare say it’s a result that would make moderns uncomfortable.

In summary, it seems clear to me that neither eigenmoses nor eigenjesus correctly captures our intuitions about morality, any more than Φ captures our intuitions about consciousness.  But as they say, I think there’s plenty of scope here for further research: for coming up with new mathematical measures that sharpen our intuitive judgments about morality, and (if we like) testing those measures out using IPD tournaments.  It also seems to me that there’s something fundamentally right about the eigenvector idea: all else being equal, we’d like to say, being nice to others is good, except that aiding and abetting evildoers is not good, and the way we can recognize the evildoers in our midst is that they’re not nice to others—except that, if the people who someone isn’t nice to are themselves evildoers, then the person might again be good, and so on.  The only way to cut off the infinite regress, it seems, is to demand some sort of “reflective equilibrium” in our moral judgments, and that’s precisely what eigenmorality tries to capture.  On the other hand, no such idea can ever make moral debate obsolete—if for no other reason than that we still need to decide which specific eigenmorality metric to use, and that choice is itself a moral judgment.

Scooped by Plato

Which brings me, finally, to the second new thing that’s happened this year on the eigenmorality front.  Namely, Rebecca Newberger Goldstein—who’s far and away my favorite contemporary novelist—published a charming new book entitled Plato at the Googleplex: Why Philosophy Won’t Go Away.  Here she imagines that Plato has reappeared in present-day America (she doesn’t bother to explain how), where he’s taught himself English and the basics of modern science, learned how to use the Internet, and otherwise gotten himself up to speed.  The book recounts Plato’s dialogues with various modern interlocutors, as he volunteers to have his brain scanned, guest-writes a relationship advice column, participates in a panel discussion on child-rearing, and gets interviewed on cable news by “Roy McCoy” (a thinly veiled Bill O’Reilly).  Often, Goldstein has Plato answer the moderns’ questions using direct quotes from the Timaeus, the Gorgias, the Meno, etc., which makes her Plato into a very intelligent sort of chatbot.  This is a genre that’s not often seriously attempted, and that I’d love to read more of (possible subjects: Shakespeare, Galileo, Jefferson, Lincoln, Einstein, Turing…).

Anyway, my favorite episode in the book is the first, eponymous one, where Plato visits the Googleplex in Mountain View.  While eating lunch in one of the many free cafeterias, Plato is cornered by a somewhat self-important, dreadlocked coder named Marcus, who tries to convince Plato that Google PageRank has finally solved the problem agonized over in the Republic, of how to define justice.  By using the Internet, we can simply crowd-source the answer, Marcus declares: get millions of people to render moral judgments on every conceivable question, and also moral judgments on each other’s judgments.  Then declare the most morally reliable judgments to be the ones judged most reliable by the people who are themselves the most morally reliable.  The circularity, as usual, is broken by taking the principal eigenvector of the graph of moral judgments (Goldstein doesn’t have Marcus put it that way, but it’s what she means).

Not surprisingly, Plato is skeptical.  Through Socratic questioning—the method he learned from the horse’s mouth—Plato manages to make Marcus realize that, in the very act of choosing which of several variants of PageRank to use in our crowd-sourced justice engine, we’ll implicitly be making moral choices already.  And therefore, we can’t use PageRank, or anything like it, as the ultimate ground of morality.

Whereas I imagined that the raw data for an “eigenmorality” metric would consist of numerical measures of how nice people had been to each other, Goldstein imagines the raw data to consist of abstract moral judgments, and of judgments about judgments.  Also, whereas the output of my kind of metric would be a measure of the “goodness” of each individual person, the outputs of hers would presumably be verdicts about general moral and political questions.  But, much like with CLEVER versus PageRank, it’s obvious that the ideas are similar—and that I should credit Goldstein with independently discovering my nerdy 16-year-old vision, in order to put it in the mouth of a nerdy character in her story.

As I said before, I agree with Goldstein’s Plato that eigenmorality can’t serve as the ultimate ground of morality.  But that’s a bit like saying that Google rank can’t serve as the ultimate ground of importance, because even just to design and evaluate their ranking algorithms, Google’s engineers must have some prior notion of “importance” to serve as a standard.  That’s true, of course, but it omits to mention that Google rank is still useful—useful enough to have changed civilization in the space of a few years.  Goldstein’s book has the wonderful property that even the ideas she gives to her secondary characters, the ones who serve as foils to Plato, are sometimes interesting enough to deserve book-length treatments of their own, and crowd-sourced morality strikes me as a perfect example.

In the two previous comment threads, we got into a discussion of anthropogenic climate change, and of my own preferred way to address it and related threats to our civilization’s survival, which is simply to tax every economic activity at a rate commensurate with the environmental damage that it does, and use the funds collected for cleanup, mitigation, and research into alternatives.  (Obviously, such ideas are nonstarters in the current political climate of the US, but I’m not talking here about what’s feasible, only about what’s necessary.)  As several commenters pointed out, my view raises an obvious question: who is to decide how much “damage” each activity causes, and thus how much it should be taxed?  Of course, this is merely a special case of the more general question: who is to decide on any question of public policy whatsoever?

For the past few centuries, our main method for answering such questions—in those parts of the world where a king or dictator or Politburo doesn’t decree the answer—has been representative democracy.  Democracy is, arguably, the best decision-making method that our sorry species has ever managed to put into practice, at least outside the hard sciences.  But in my view, representative democracy is now failing spectacularly at possibly the single most important problem it’s ever faced: namely, that of leaving our descendants a livable planet.  Even though, by and large, reasonable people mostly agree about what needs to be done—weaning ourselves off fossil fuels (especially the dirtier ones), switching to solar, wind, and nuclear, planting forests and stopping deforestation, etc.—after decades of debate we’re still taking only limping, token steps toward those goals, and in many cases we’re moving rapidly in the opposite direction.  Those who, for financial, theological, or ideological reasons, deny the very existence of a problem, have proved that despite being a minority, they can push hard enough on the levers of democracy to prevent anything meaningful from happening.

So what’s the solution?  To put the world under the thumb of an environmentalist dictator?  Absolutely not.  In all of history, I don’t think any dictatorial system has ever shown itself robust against takeover by murderous tyrants (people who probably aren’t too keen on alternative energy either).  The problem, I think, is epistemological.  Within physics and chemistry and climatology, the people who think anthropogenic climate change exists and is a serious problem have won the argument—but the news of their intellectual victory hasn’t yet spread to the opinion page of the Wall Street Journal, or cable news, or the US Congress, or the minds of enough people to tip the scales of history.  Because our domination of the earth’s climate and biosphere is new and unfamiliar; because the evidence for rapid climate change is complicated and statistical; because the worst effects are still remote from us in time, space, or both; because the sacrifices needed to address the problem are real—for all of these reasons, the deniers have learned that they can subvert the Popperian process by which bad explanations are discarded and good explanations win.  If you just repeat debunked ideas through a loud enough megaphone, it turns out, many onlookers won’t be able to tell the difference between you and the people who have genuine knowledge—or they will eventually, but not until it’s too late.  If you have a few million dollars, you can even set up your own parody of the scientific process: your own phony experts, in their own phony think tanks, with their own phony publications, giving each other legitimacy by citing each other.  (Of course, all this is a problem for many fields, not just climate change.  Climate is special only because there, the future of life on earth might literally hinge on our ability to get epistemology right.)

Yet for all that, I’m an optimist—sort of.  For it seems to me that the Internet has given us new tools with which to try to fix our collective epistemology, without giving up on a democratic society.  Google, Wikipedia, Quora, and so forth have already improved our situation, if only by a little.  We could improve it a lot more.  Consider, for example, the following attempted definitions:

  • A trustworthy source of information is one that’s considered trustworthy by many sources who are themselves trustworthy (on the same topic or on closely related topics).
  • The current scientific consensus, on any given issue, is what the trustworthy sources consider to be the consensus.
  • A good decision-maker is someone who’s considered to be a good decision-maker by many other good decision-makers.

At first glance, the above definitions sound ludicrously circular—even Orwellian—but we now know that all that’s needed to unravel the circularity is a principal eigenvector computation on the matrix of trust.  And the computation of such an eigenvector need be no more “Orwellian” than … well, Google.  If enough people want it, then we have the tools today to put flesh on these definitions, to give them agency: to build a crowd-sourced deliberative democracy, one that “usually just works” in much the same way Google usually just works.

Now, would those with axes to grind try to subvert such a system the instant it went online?  Certainly.  For example, I assume that millions of people would rate Conservapedia as a more trustworthy source than Wikipedia—and would rate other people who had done so as, themselves, trustworthy sources, while rating as untrustworthy anyone who called Conservapedia untrustworthy.  So there would arise a parallel world of trust and consensus and “expertise,” mutually-reinforcing yet nearly disjoint from the world of the real.  But here’s the thing: anyone would be able to see, with the click of a mouse, the extent to which this parallel world had diverged from the real one.  They’d see that there was a huge, central connected component in the trust graph—including almost all of the Nobel laureates, physicists from the US nuclear weapons labs, military planners, actuaries, other hardheaded people—who all accepted the reality of humans warming the planet, and only tiny, isolated tendrils of trust reaching from that component into the component of Rush Limbaugh and James Inhofe.  The deniers and their think-tanks would be exposed to the sun; they’d lose their thin cover of legitimacy.  It should go without saying that the same would happen to various charlatans on the left, and should go without saying that I’d cheer that outcome as well.

Some will object: but people who believe in pseudosciences—whether creationists or anti-vaxxers or climate change deniers—already know they’re in a minority!  And far from being worried about it, they treat it as a badge of honor.  They think they’re Galileo, that their belief in spite of a scientific consensus makes them heroes, while those in the giant central component of the trust graph are merely slavish followers.

I admit all this.  But the point of an eigentrust system wouldn’t be to convince everyone.  As long as I’m fantasizing, the point would be that, once people’s individual decisions did give rise to a giant connected trust component, the recommendations of that component could acquire the force of law.  The formation of the giant component would be the signal that there’s now enough of a consensus to warrant action, despite the continuing existence of a vocal dissenting minority—that the minority has, in effect, withdrawn itself from the main conversation and retreated into a different discourse.  Conversely, it’s essential to note, if there were a dissenting minority, but that minority had strong trunks of topic-relevant trust pointing toward it from the main component (for example, because the minority contained a large fraction of the experts in the relevant field), then the minority’s objections might be enough to veto action, even if it was numerically small.  This is still democracy; it’s just democracy enhanced by linear algebra.
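
To make the “giant component” trigger a little more concrete, here’s a toy sketch.  It’s entirely my own: the 75% threshold is arbitrary, and for simplicity it treats trust as an undirected relation, whereas a real eigendemocracy would presumably use weighted, directed, topic-specific trust.

```python
import networkx as nx

def consensus_reached(trust_edges, n_people, threshold=0.75):
    """trust_edges: pairs (a, b) meaning a and b trust each other on some topic.

    Returns True if the largest connected component of the trust graph contains
    at least `threshold` of the population -- the (arbitrary) signal that there
    is enough consensus to warrant action despite a dissenting minority.
    """
    G = nx.Graph()
    G.add_nodes_from(range(n_people))
    G.add_edges_from(trust_edges)
    giant = max(nx.connected_components(G), key=len)
    return len(giant) / n_people >= threshold
```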

Other people will object that, while we should use the Internet to improve the democratic process, the idea we’re looking for is not eigentrust or eigenmorality but rather prediction markets.  Such markets would allow us to, as my friend Robin Hanson advocates, “vote on values but bet on beliefs.”  For example, a country could vote for the conditional policy that, if business-as-usual is predicted to cause sea levels to rise at least 4 meters by the year 2200, then an aggressive emissions reduction plan will be triggered, but not otherwise.  But as for the prediction itself, that would be left to a futures market: a place where, unlike with voting, there’s a serious penalty for being wrong, namely losing your shirt.  If the futures market assigned the prediction at least such-and-such a probability, then the policy tied to that prediction would become law.

I actually like the idea of prediction markets—I have ever since I heard about them—but I consider them limited in scope.  My example above, involving the year 2200, gives a hint as to why.  Prediction markets are great whenever our disagreements are over something that will be settled one way or the other, to everyone’s assent, in the near future (e.g., who will win the World Cup, or next year’s GDP).  But most of our important disagreements aren’t like that: they’re over which direction society should move in, which issues to care about, which statistical indicators are even the right ones to measure a country’s health.  Now, those broader questions can sometimes be settled empirically, in a sense: they can be settled by the overwhelming judgment of history, as the slavery, women’s suffrage, and fascism debates were.  But that kind of empirical confirmation typically takes far too long for anyone to set up a decent betting market around it.  And for the non-bettable questions, a carefully-crafted eigendemocracy really is the best system I can think of.

Again, I think Rebecca Goldstein’s Plato is completely right that such a system, were it implemented, couldn’t possibly solve the philosophical problem of finding the “ultimate ground of justice,” just like Google can’t provide us with the “ultimate ground of importance.”  If nothing else, we’d still need to decide which of the many possible eigentrust metrics to use, and we couldn’t use eigentrust for that without risking an infinite regress.  But just like Google, whatever its flaws, works well enough for you to use it dozens of times per day, so a crowd-sourced eigendemocracy might—just might—work well enough to save civilization.


Update (6/20): If you haven’t been following, there’s an excellent discussion in the comments, with, as I’d hoped, many commenters raising strong and pertinent objections to the eigenmorality and eigendemocracy concepts, while also proposing possible fixes.  Let me now mention what I think are the most important problems with eigenmorality and eigendemocracy respectively—both of them things that had occurred to me also, but that the commenters have brought out very clearly and explicitly.

With eigenmorality, perhaps the most glaring problem is that, as I mentioned before, there’s no notion of time-ordering, or of “who started it,” in the definition that Tyler and I were using.  As Luca Trevisan aptly points out in the comments, this has the consequence that eigenmorality, as it stands, is completely unable to distinguish between a crime syndicate that’s hated by the majority because of its crimes, and an equally-large ethnic minority that’s hated by the majority solely because it’s different, and that therefore hates the majority.  However, unlike with mathematical theories of consciousness—where I used counterexamples to try to show that no mathematical definition of a certain kind could possibly capture our intuitions about consciousness—here the problem strikes me as much more circumscribed and bounded.  It’s far from obvious to me that we can’t easily improve the definition of eigenmorality so that it does agree with most people’s moral intuition, whenever intuition renders a clear verdict, at least in the limited setting of Iterated Prisoners’ Dilemma tournaments.

Let’s see, in particular, how to solve the problem that Luca stressed.  As a first pass, we could do so as follows:

A moral agent is one who only initiates defection against agents who it has good reason to believe are immoral (where, as usual, linear algebra is used to unravel the definition’s apparent circularity).

Notice that I’ve added two elements to the setup: not only time but also knowledge.  If you shun someone solely because you don’t like how they look, then we’d like to say that reflects poorly on you, even if (unbeknownst to you) it turns out that the person really is an asshole.  Now, several more clauses would need to be added to the above definition to flesh it out: for example, if you’ve initiated defection against an immoral person, but then the person stops being immoral, at what point do you have a moral duty to “forgive and forget”?  Also, just like with the eigenmoses/eigenjesus distinction, do you have a positive duty to initiate defection against an agent who you learn is immoral, or merely no duty not to do so?
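
As one crude way of operationalizing “initiating defection” in an IPD history (my own guess at a formalization; the “good reason to believe the target is immoral” clause is exactly the part that still needs the eigenvector machinery, and is left out here):

```python
def initiated_defections(my_moves, their_moves):
    """Return the turns on which I defected even though my partner had
    cooperated on every previous turn -- i.e., the turns on which I 'started it'.
    Whether I had good reason, at that point, to believe my partner was immoral
    (from third-party information, say) is the part this sketch leaves out.
    """
    return [t for t, move in enumerate(my_moves)
            if move == 'D' and all(m == 'C' for m in their_moves[:t])]
```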

OK, so after we handle the above issues, will there still be examples that our time-sensitive, knowledge-sensitive eigenmorality definition gets badly, egregiously wrong?  Maybe—I don’t know!  Please let me know in the comments.

Moving on to eigendemocracy, here I think the biggest problem is one pointed out by commenter Rahul.  Namely, an essential aspect of how Google is able to work so well is that people have reasons for linking to webpages other than boosting those pages’ Google rank.  In other words, Google takes a link structure that already exists, independently of its ranking algorithm, and that (as the economists would put it) encodes people’s “revealed preferences,” and exploits that structure for its own purposes.  Of course, now that Google is the main way many of us navigate the web, increasing Google rank has become a major reason for linking to a webpage, and an entire SEO industry has arisen to try to game the rankings.  But Google still isn’t the only reason for linking, so the link structure still contains real information.

By contrast, consider an eigendemocracy, with a giant network encoding who trusts whom on what subject.  If the only reason why this trust network existed was to help make political decisions, then gaming the system would probably be rampant: people could simply decide first which political outcome they wanted, then choose the “experts” such that claiming to “trust” them would do the most for their favored outcome.  It follows that this system can only improve on ordinary democracy if the trust network has some other purpose, so that the participants have an actual incentive to reveal the truth about who they trust.  So, how would an eigendemocracy suss out the truth about who trusts whom on which subject?  I don’t have a very good answer to this, and am open to suggestions.  The best idea so far is to use Facebook for this purpose, but I don’t know exactly how.


Update (6/22): Many commenters, both here and on Hacker News, interpreted me to be saying something obviously stupid: namely, that any belief identified as “the consensus” by an eigenvector analysis is therefore the morally right one. They then energetically knocked down this strawman, with the standard examples (Hitler, slavery, discrimination against gays).

Admittedly, I probably contributed to this confusion by my ill-advised decision to discuss eigenmorality and eigendemocracy in the same blog post—solely because of their mathematical similarity, and the ease with which thinking about one leads to thinking about the other. But the two are different, as are my claims about them. For the record:

  • Eigenmorality: Within the stylized setting of an Iterated Prisoner’s Dilemma tournament, with side-channels allowing agents to learn who is doing what to whom, I believe it ought to be possible, by looking at who initiated rounds of defection and forgiveness, and then doing an eigenvector analysis on the result, to identify the “moral” and “immoral” agents in a way that more-or-less accords with our moral intuitions. Even if true, of course, this wouldn’t have any obvious moral implications for hot-button issues such as abortion, gun control, or climate change, which it’s far from obvious how to encode in terms of IPD tournaments.
  • Eigendemocracy: By doing an eigenvector analysis, to identify who people implicitly acknowledge as the “experts” within each field, I believe that it might be possible to produce results that, on average, in practice, and in contemporary society, are better and more rational than those produced by ordinary majority-voting. Obviously, there’s no guarantee whatsoever that the results of eigendemocracy would be morally acceptable ones: if the public acknowledges as “experts” people who believe evil things (as in Nazi Germany), then eigendemocracy will produce evil results. But democracy itself suffers from a precisely analogous problem. The situation that interests me is one that’s been with us since the time of ancient Athens: one where there is a consensus among the experts about the wisest course of action, and there’s also an implicit consensus among the public that those experts are indeed the experts, but the democratic system is somehow “unable to complete the modus ponens,” because of manipulation by powerful interests and the sway of demagogues. In such cases, it seems possible to me that an eigendemocracy could improve on the results of ordinary democracy—perhaps dramatically so—while still avoiding the evils of dictatorship.

Crucially, in neither of the above bullet points, nor in their combination, is there any hint of a belief that “the will of the majority always defines what’s morally right” (if anything, there’s a belief in the opposite).


Update (7/4): While this isn’t really a surprise—I’d be astonished if it weren’t the case—I’ve now learned that several people, besides me and Rebecca Goldstein, have previously written about the ideas of eigentrust and eigendemocracy. Perhaps more surprising is that one of the earlier groups—consisting of Sep Kamvar, Mario Schlosser, and Hector Garcia-Molina from Stanford—literally called the idea “EigenTrust,” when they published about it in 2003. (Note that Garcia-Molina, in a likely non-coincidence, was Larry Page and Sergey Brin’s PhD adviser.) Kamvar et al.’s intended application for EigenTrust was to determine which nodes are trustworthy in a peer-to-peer file-sharing network, rather than (say) to reinvent democracy, or to address conundrums of epistemology and ethics that have been with us since Plato. But while the scope might be more modest, the core idea is the same. (Hat tip to commenter Babak.)

As for enhancing democracy using linear algebra, it turns out that that too has already been discussed: see for example this presentation by Rob Spekkens of the Perimeter Institute, which Michael Nielsen pointed me to. (In yet another small-world phenomenon, Rob’s main interest is in quantum foundations, and in that context I’ve known him for a decade! But his interest in eigendemocracy was news to me.)

If you’re wondering whether anything in this post was original … well, so far, I haven’t learned of prior work specifically about eigenmorality (e.g., in Iterated Prisoners Dilemma tournaments), much less about eigenmoses and eigenjesus.

209 Responses to “Eigenmorality”

  1. JeanHuguesRobert Says:

    What about liquid democracy? It’s like direct democracy but with propositions sometimes voted on behalf of you by people you trust on some specified topics.

    I am implementing a prototype (#kudocracy, see link) and would be glad to study the possibility of eigendemocracy inside that network of trust. I am not a mathematician however. Directions? Thanks.

  2. Scott Says:

    Jean #1: “Liquid democracy” sounds like a very nice idea, and not at all unrelated to eigendemocracy! Unfortunately, I don’t know of anything written on the subject of eigendemocracy outside of this blog post, or maybe the Googleplex chapter of Rebecca Goldstein’s book. (I’m sure there’s something that I’m missing.) If you want to know the specifics of how to compute the eigenvectors of a trust graph, how to interpret them, etc., you could look at Tyler Singer-Clark’s paper (linked to in the post), or the original papers on CLEVER and PageRank.

  3. MadRocketSci Says:

    You’ve pointed out that it’s a little ludicrous to award “morality” to whatever group of in-group cooperators/out-group defectors happens to be largest. That seems to be a general feature of this sort of algorithm, as stated.

    If two groups are internally running the same sort of program, why should one be awarded the badge of “moral”? Given a bunch of tribes with conflicting interests, or at least interests where there is a potential for destructive conflict if certain strategies are pursued, why elevate any of their interests to being somehow globally “right”?

  4. MadRocketSci Says:

    How does this algorithm end up measuring expertise wrt actual accuracy of their information? Wouldn’t it simply find the largest group of people that believe each other?

  5. MadRocketSci Says:

    Suppose some group of programmers decided, for (and here I make *my* personal evaluation) nefarious purposes, to code up a bunch of bots that launch endless websites, all referencing each other in a caricature of the hub-authority structure and linking to their own “authorities,” such as Azathothopedia, etc.  All of these bot-written sites are entirely contentless.  Maybe, to fool people for long enough, these bots go grinding up human-generated websites from the real internet.

    A PageRank-type algorithm, without using some sort of human judgement to initialize the “prior authority,” would end up assigning a high score to the larger body of the “fake internet,” and ignoring the smaller human-generated internet of content that real people care about.

    IIRC, Google spends a lot of human attention just going over websites to suppress this sort of thing.

    Also, unlike on your blog, these evil robots make up the vast majority of my own blog and forum’s patrons. :/

  6. Lawrence D'Anna Says:

    Is eigendemocracy easier to game than normal democracy? Or harder? How hard is it for google to fight off the SEOs? Who would have the authority to design countermeasures to fight the SEO-Lobbyists?

  7. Jeremy Kun Says:

    Do you think that an eigenvector-based trustworthiness measure would slow the rate of progress that removes bad norms from our society? For example, if this system existed 60 years ago, would people still think smoking is healthy? Or, for something with historically more pseudoscience than actual science behind it: would we still have slavery today, on the grounds that the “most trustworthy people” thought non-white races were inferior? How would one avoid the minority situation you describe in the morality IPD?

  8. Neal Kelly Says:

    When it comes to judging the morality of conflicting groups with size disparities, would there be a way to cluster the groups of people who do cooperate, and then judge how cooperative each group is with the rest of the world, or perhaps with other groups?

    I’m just imagining that if 98% of the world gets along fine, and 2% of the world gets along fine, you might be able to look at the 2% and say, “well THEY get along well with the majority, but the majority doesn’t seem to like the minority very much, so on ‘average’ the minority seems much more moral.”

  9. Scott Says:

    MadRocketSci #3 (and others): In practice, and despite the obvious ingroup-vs.-outgroup problem, going with the majority is what we already do in democracies, basically for lack of a better alternative. My point, in this post, was that eigendemocracy does provide an alternative, and that it’s plausibly at least somewhat better. So for example, suppose our society has a put-upon minority, hated by most members of the majority. Even then, if at least some members of the majority take a more tolerant view, and if enough of those members are recognized by their fellow majority-members as being unusually enlightened, rational people, then the minority’s voice will effectively be amplified. Compared to ordinary democracy, the main difference is that social change can plausibly happen faster: rather than having to wait for enough people to “complete the modus ponens,” and then translate the completed modus ponens into actual election victories (which typically takes decades), the system could simply detect automatically, by examining the continuously-evolving trust network, who the most respected experts in the relevant field are right now and what those experts currently think should be done.

    Is this system capable of producing stupid or evil decisions? Of course it is! Just like direct democracy, representative democracy, dictatorship, and every other system. The question is just whether it can do interestingly better.

  10. Jay Says:

    This idea seems so rich… eigenquale, eigensemantic, eigenknowledge, eigenconscious, eigenyounameit! This smells interesting! Wow!

  11. Ben Mahala Says:

    The main problem I can think of with this is technical. This system is very illegible to the vast majority of people. Only a small minority would be able to understand it. I mean, how many people know the technical details behind Google search?

    This creates a weak point in the system: someone on the inside could easily hack the system to warp the outcomes, and only someone on the inside could even recognize that something was wrong. This isn’t a new problem, but voting is pretty simple by comparison, so it’s worse here. Especially if the outcomes are supposed to be fast and counter-intuitive, as they are supposed to be in this system.

    My gut reaction to this is to try and find some kind of open source solution, where experts are prevented from manipulation by other experts. Since the group of experts is highly fractured, it isn’t possible to monopolize all of them.

  12. Jay Says:

    On second thought, this might also be a weakness.

    Suppose one says: well, we’re all human, and what dominates who we cooperate with is not agreement over ideas, it’s sexual attractiveness. So what you thought was eigenmorality is actually eigensexappeal. Or eigencleverness. Or eigenhighsocialclassness.

    How would we tell these interpretations apart?

    BTW, this is definitely one of your most exciting ideas ever. Congrats to Singer-Clark for the wonderful implementation!

  13. Daniel Says:

    You’ve got a colleague at MIT in the philosophy department who’s explored some very similar ideas, arguing that apparent worries about circularity in definitions of desert can be dispelled using strategies similar to the ones you discuss here. Here’s a paper:
    http://web.mit.edu/bskow/www/research/desert.pdf

  14. Pete Says:

    This post just makes me feel bad for Jon Kleinberg.

  15. Rahul Says:

    So, to clarify, would we need one eigentrust network for each issue? Or just one network?

  16. Jason Says:

    Scott – I think to your credit – your definition of “moral” is very close to Rawls’ definition of “reasonable”.

    He defines “reasonable” by saying, “Persons are reasonable in one basic aspect when, among equals say, they are ready to propose principles and standards as fair terms of cooperation and to abide by them willingly, given the assurance that others will likewise do so.” He spends much of his book Political Liberalism spelling out the implications of this definition.

    Perhaps it’s reassuring given your very heterogeneous backgrounds that your reflective equilibria would settle on such similar notions.

  17. Sandro Says:

    I think it’s a mistake to say that representative democracies are failing to address real problems, mainly because we no longer have such governments. A recent MIT study showed that the US has a much stronger resemblance to an oligarchy, which matches precisely the symptoms you describe about some people having stronger voices than others, despite evidence contradicting their views. This isn’t merely a question of information dissemination.

    Recent studies have also shown that increased information dissemination can actually *reduce* the optimality of democratic choices. Normally the wisdom of the crowd is a powerful means of making good choices, but its effectiveness is actually reduced in the presence of increased communication between individual voting elements. Communication actually tends to concentrate people around more extreme opinions, rather than making people take more moderate stances between the available extremes.

    In fact, moderation is itself simply viewed by people as merely another type of opinion, so the way to spread moderation is to actually have advocates for moderation. Analogously, the way to spread scientific understanding is to advocate for it in the same way. Neil deGrasse Tyson and other public figures like him who try to spread scientific literacy have one of the most important jobs in science.

    In any case, I replied to the above points simply because I don’t think any analytical moral framework will have any impact on decision-making processes by itself. There seems to be no data, and no theory, which is self-evidently good, right, or better than the status quo in the average mind. Every inch must be fought for and held with constant evangelizing, despite how distasteful that might seem. That just seems to be how we’re wired on the whole.

    As for the actual meat of your article, I think eigenmorality is interesting, and certainly worth pursuing in and of itself. I’ll have to read the available material in more detail. Perhaps one day it or something like it will be worthy of evangelizing!

  18. lmm Says:

    “Look Who’s Back” is in something like the same genre. I found it amusing.

  19. Rahul Says:

      Even though, by and large, reasonable people mostly agree about what needs to be done—weaning ourselves off fossil fuels (especially the dirtier ones), switching to solar, wind, and nuclear, planting forests and stopping deforestation, etc.—after decades of debate we’re still taking only limping, token steps toward those goals,

    That’s right, but I think in a slightly different sense: It’s like I have always agreed I need to wake up early, go to the gym & eat healthy etc., but after years of knowing what needs to be done I am limping towards those goals.

    It may be true that a part of the problem is corporates, vested lobbies, climate deniers, etc., but I think the key reason we don’t do much about solving AGW isn’t that.

    Basically humans are quite bad at forsaking current pleasures for future rewards. The sacrifices involved aren’t small. We are distrustful of measures making us relatively worse off than another group. We are distrustful of taxation because it is often a guise for other purposes. Et cetera.

    So I don’t think climate change deniers (although stupid IMO) are the main part of the problem. The bigger part (not as much discussed) is that we as a race are fairly apathetic to big, long term problems that need complex solutions, which will bring pain and results that only our descendants might enjoy.

    You are not fighting misinformation as much as apathy, in my opinion.

  20. Alejandro Says:

    Great, thought-provoking post. The “liquid democracy” idea above seems related to the ideas discussed under the heading “Be a sheep” in this recent Less Wrong post.

  21. Martin Wolf Says:

    Once you introduce the concept of “expert in a relevant field”, it may require almost human-level intelligence (in fact, maybe even smarter-than-average-human level intelligence) to figure out what that means in a given situation.

    E.g. if most legitimate climate scientists are saying that there will be a global catastrophe if we don’t take drastic actions, but a lot of high-profile economists are saying that even if we take the climate scientists’ predictions at face value, the actions they propose would do more harm than good to global welfare, whose recommendations should we follow?

    Or if the majority of people who call themselves parapsychologists say that ESP is real, while scientists in other fields such as physics and psychology mostly think it’s bunk, who do we trust? Should we believe the parapsychologists since they have the most directly relevant expertise, while those other guys are talking outside their field?

    I’d be very impressed if the eigenvalue approach could come up with the right answer in such cases (in so far as we have an objective reference for what the “right answer” is, of course — that’s kind of the problem you’re trying to solve in the first place), without the people who program the system having to do so much micromanagement that we might as well let them hardcode the “correct” answers directly.

  22. Peter Erwin Says:

    MadRocketSci @ 3:
    You’ve put your finger on it: the problem is that Scott’s algorithm doesn’t measure anything like morality at all; it really measures something like the product of in-group solidarity and group size. As originally formulated (i.e., the “eigenmoses” version), it also explicitly rewards out-group intolerance. Scott’s original intuition, I gather, is that this could be an acceptable proxy for “niceness”, which in turn is supposed to stand in for “morality” as a whole.

    Scott gets around to recognizing this with his “98% vs 2%” example, although the way it’s described makes it sound like an unlikely extreme failure mode instead of a much more generic problem.

    Among the obvious inferences are that well-disciplined armies are among the most moral of all institutions, and that in any dispute between two states, the most moral side will be the one with the larger, more disciplined population and military.

    This makes it very easy to work out the morality of inter-state conflicts, although said morality can be highly time-variable. Thus in the Nazi-Soviet invasion of Poland in 1939, the most moral side was the Nazi-Soviet one, not the Poles. This also suggests that after the Fall of France, the Axis became even more moral (and the Allies less so), since the populations of Belgium, the Netherlands, and France — along with many of France’s colonies — were now cooperating more with Germany and Italy than they were with the UK.

    So among other things it ends up being a formalization of Voltaire’s famous remark that “God is always on the side of the big battalions”. (From Wikiquote I learn that Voltaire went on to suggest that “God is not on the side of the big battalions, but on the side of those who shoot best.” So that would be one way of modifying the algorithm…)

  23. Scott Says:

    Pete #14:

      This post just makes me feel bad for Jon Kleinberg.

    You mean, because of CLEVER’s not becoming Google, or because of how my post mutilated his ideas? 🙂

  24. wolfgang Says:

    @Scott,

    >> finding the principal eigenvector

    In general there will be more than one eigenvector; what meaning do you assign to the other eigenvectors?
    Are they associated with competing “moralities” (comparable perhaps to competing religions)?

  25. MadRocketSci Says:

    @wolfgang:

    I was reading some presentation about Markov processes the other day. Markov process: the probability of the state you end up in is a function only of the state you start in, with no memory of anything earlier.

    Each eigenvector with eigenvalue 1 is associated with a “stationary distribution” – if you started with that prior distribution, you would remain there.

    However, some processes, such as p1 = [0, 1; 1, 0] p0, have neutrally stable eigenvectors which the process will not settle down to if it is not already there. (Classical Hamiltonian deterministic systems, IIRC, always look like this, and never settle down to the stationary distribution that statistical mechanics prescribes. :-P)

    Unstable eigenvectors might be precluded by the rules governing admissible Markov process matrices, but I would need to look.
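
    As a concrete toy illustration of that two-state “swap” chain (a minimal sketch of my own, assuming numpy; the numbers aren’t from the thread):

      import numpy as np

      # Transition matrix of the two-state "swap" chain: state 1 -> state 2
      # and state 2 -> state 1 with probability 1.  Eigenvalues are +1 and -1.
      P = np.array([[0.0, 1.0],
                    [1.0, 0.0]])

      # The eigenvalue-1 eigenvector, normalized to sum to 1, is the
      # stationary distribution (0.5, 0.5).
      vals, vecs = np.linalg.eig(P)
      stationary = vecs[:, np.isclose(vals, 1.0)].ravel().real
      stationary /= stationary.sum()
      print(stationary)          # [0.5 0.5]

      # Started at the stationary distribution, the chain stays there...
      print(P @ stationary)      # [0.5 0.5]

      # ...but started anywhere else it oscillates forever and never settles.
      p = np.array([0.9, 0.1])
      for _ in range(5):
          p = P @ p
          print(p)               # alternates between [0.1 0.9] and [0.9 0.1]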

  26. fred Says:

    Seems to me that the metric is more about “social” than “moral”. There are high degrees of cooperation in organized crime (mafia, yakuza), where it is often recognized as a “code of honor” (and the whole society goes along with it).

    What’s also interesting is the idea of coming up with absolute attributes starting from circular definitions – it is at the crux of human experience/consciousness (all symbols in our mind are defined in relation to one another, i.e. the same circularity you find in dictionaries, etc.).
    This also goes back to the post about the mathematical universe, e.g. mathematical objects (graphs in this case) having emergent properties that arise from seemingly circular definitions (consciousness itself being strongly self-referential).

  27. Joshua Zelinsky Says:

    I’d be curious to see how this interacts with something like the modal combat framework presented in Eliezer’s talk at MIT (does anyone know if there’s a formal write-up of that anywhere?). Maybe moral combat?

  28. Scott Says:

    Lawrence #6:

      Is eigendemocracy easier to game than normal democracy? Or harder? How hard is it for google to fight off the SEOs? Who would have the authority to design countermeasures to fight the SEO-Lobbyists?

    Good questions! There are some kinds of SEO that an eigendemocracy wouldn’t want to fight: namely, any kind that involves getting other people to trust you by convincing them of the rightness of your views.

    But yes, it’s true that an eigendemocracy would be vulnerable to gaming. Whether it would be better or worse than the massive manipulation we know to be possible with “regular” democracy is hard to say, and would probably depend on the details of the eigendemocracy’s design. I’ll have an update to the post soon addressing the manipulation issue.

  29. wolfgang Says:

    @MadRocketSci, #25

    Yes, this is how it is.
    But my question is how you interpret this in terms of morality. Is there more than one morality?

    What might be moral for an Islamic jihadist (and within their group there is probably a stable equilibrium) may not be moral for other people, but something more like insanity.

    So I am wondering if Scott’s eigenmorality is perhaps misnamed …

  30. Martin Wolf Says:

    > what meaning do you assign to the other eigenvectors?

    Hmm, I wonder if Google already does something like that? Figure out from your browsing behavior which subculture(s) you belong to, and then show you the set of search results which best matches your interests/prejudices.

  31. Scott Says:

    Jeremy Kun #7:

      Do you think that an eigenvector-based trustworthiness measure would slow the rate of progress that removes bad norms from our society? For example, if this system existed 60 years ago, would people still think smoking is healthy?

    On the contrary, the hope is precisely that an eigendemocracy would speed up progress, by removing the decades-or-longer time lag that we currently suffer between when a country’s experts on some topic (recognized as such by their countrymen) figure out that something is true, and when the expert consensus gets reflected in policy. In the case of smoking in the US, this time lag lasted very roughly from the 1950s until the 1990s (and policy still hasn’t completely caught up), during which time millions of Americans died unnecessarily. See also my comment #9.

  32. Ernie Davis Says:

    Scott — It’s a cute idea, but it seems to me hopelessly susceptible to Google-bombs, as MadRocketSci says in comment #5. Google has to wage unrelenting war, constantly updating its algorithm to try to prevent people from spamming the rankings. Since it’s a private entity and since, at the end of the day, PageRank is just advice from Google to the user, it’s OK to cede Google the authority to do that, and to decide what’s a bomb and what’s a legitimate vote for “importance”. But if large consequential decisions are being based on the outcome of the calculation, then there will be huge efforts to game it, which will require correspondingly huge efforts to combat them, by someone who has to be trusted to make somewhat arbitrary decisions. There are all kinds of technical issues here that very much change the outcome. What are the entities over which you are computing eigentrust — people, web pages, articles? Are you computing overall trust or trust with respect to a particular issue (e.g. climate change); and either way, how do you compute it? What will happen is that people will first game the system, by creating alternative authorities, and then game the evaluator, by creating a new algorithm for eigentrust and tuning it to return the values they want.

    The advantage of democracy is that it is almost always very well defined what counts as a single person, and it is comparatively expensive to create new people.

    You say that anyone can see “with a click of a mouse” that there’s a large component of respected people (Nobel prize winners etc.) who trust one another, but I don’t see exactly how you’re envisioning that working, or what the relation would be between that and the eigenvector.

  33. fred Says:

    Worries about global warming, the dangers of smoking, wars, … are all well and good, but the real issue is that at some point humanity as a whole will have to tackle the harder question of what our main goal as a species is.
    Just letting nature take its course isn’t sustainable – at this point humanity has about as much long-term planning as a virus, i.e. consume and reproduce as much as possible.
    The problem is that the world economy (capitalism) is based on the premise of never ending economic growth, itself relying on never ending population growth.
    Even with a 1% rate of growth, and assuming we colonize the whole galaxy, we would reach saturation within a few thousand years.

  34. Scott Says:

    Ben Mahala #11:

      This creates a weak point in the system: someone on the inside could easily hack the system to warp the outcomes and only someone on the inside could even recognize that something was wrong. This isn’t a new problem, but voting is pretty simple by comparison so it’s worse. Especially if the outcomes are supposed to be fast and counter-intuitive, as it is supposed to in this system.

      My gut reaction to this is to try and find some kind of open source solution, where experts are prevented from manipulation by other experts. Since the group of experts is highly fractured, it isn’t possible to monopolize all of them.

    Yes, these are extremely legitimate worries. The hope would be that, once you’d gotten the system started, changes to the eigentrust system would be governed by a consensus of the eigentrust experts, acknowledged as such according to the current system—just like the US Constitution contains a provision (namely amendments) for its own modification, and in principle, there could even be an amendment to the process by which amendments are ratified. (Indeed, probably there should be such an amendment, since it’s no longer realistic to expect 3/4 of the states to ratify that water is wet 🙂 ) On the other hand, just like with the Constitution, we might include a provision that changes to the eigentrust system require a stronger consensus than other proposed changes before they can become policy.

  35. Sandro Hawke Says:

    (I’m a different Sandro from comment #17; small world)

    In Comment #11, Ben Mahala writes:

    This creates a weak point in the system: someone on the inside could easily hack the system to warp the outcomes and only someone on the inside could even recognize that something was wrong. This isn’t a new problem, but voting is pretty simple by comparison so it’s worse. Especially if the outcomes are supposed to be fast and counter-intuitive, as it is supposed to in this system.

    I agree that in practice this is the greatest danger to moving in this direction of technology-assisted societal decision making. In fact it was a talk by Jon Kleinberg a few weeks ago, presenting some results of studying the Facebook social graph, that made me think about how much potential influence Facebook’s “Top Stories” algorithm already has, in making people feel like society is trending in some direction. If people perceive that their social group thinks something (even if that perception is false), they are much more likely to think it too.

    In any case, the solution is decentralized systems, where people control their own data. Then a variety of analyses, transparent and/or opaque, with or without special permissions and privacy agreements, can be run on that data. Much like with opinion polls, it will be the quality and history of the analysis that matters.

  36. Scott Says:

    Daniel #13: Thanks so much for the reference to Bradford Skow’s paper! What he discusses there is indeed extremely similar to eigenmorality, and I’ll be sure to get in touch with him.

  37. Scott Says:

    Rahul #15:

      So, to clarify, would we need one eigentrust network for each issue? Or just one network?

    There could be one network, but in my opinion, it would need to contain some provision for separating out trust by expertise, in order to make statements like the following:

      I recognize Roger Penrose as a trusted expert on general relativity, but not on neurobiology or mathematical logic.

      I recognize Leonid Levin as an expert on mathematical logic and classical computation, but not on quantum mechanics.

      I recognize Noam Chomsky as an expert on formal linguistic theory, but not on foreign policy.

      I recognize Senator Barbara Boxer as an expert on foreign policy, but not on formal linguistic theory.

      I recognize Scott Aaronson as an expert on quantum complexity classes, but not on political philosophy. 😉

    Now, the way I’d try to design such a system, trust in one field would “spill over” into trust in related fields, but the degree of spillover would decrease rapidly with the distance between the fields, in order to protect against the ever-present danger of extralusionary intelligence. So for example, Noam Chomsky’s acknowledged expertise in formal linguistic theory would give some automatic weight to his statements about experimental linguistics, correct English usage, or the philosophy of mind. But, while Chomsky would still be welcome to say anything he wanted about foreign policy, he would have to build up his eigentrust score in that field essentially independently, without its being parasitic on his stratospheric formal linguistics score.

    Of course, how to decide on the division between fields, and on the degree of relatedness between fields, is a very interesting problem in its own right. Maybe those decisions would be made by the most eigentrusted experts in the field of intellectual taxonomy. 🙂
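
    As a rough sketch of the “spillover” idea (my own toy construction, assuming numpy; the field names, relatedness numbers, and trust scores are all hypothetical, not part of any proposal in the post):

      import numpy as np

      # Hypothetical fields and a made-up "relatedness" kernel in [0, 1]:
      # 1.0 on the diagonal, smaller for more distant fields.
      fields = ["formal linguistics", "philosophy of mind", "foreign policy"]
      relatedness = np.array([[1.00, 0.40, 0.05],
                              [0.40, 1.00, 0.05],
                              [0.05, 0.05, 1.00]])

      # Directly earned eigentrust per field for one person:
      # high in linguistics, essentially none elsewhere.
      earned = np.array([0.9, 0.0, 0.0])

      # Spilled-over trust: each field borrows a discounted amount of trust
      # from related fields, so acknowledged expertise decays quickly with
      # the distance between fields.
      effective = relatedness @ earned
      for field, trust in zip(fields, effective):
          print(f"{field:20s} {trust:.3f}")
      # formal linguistics 0.900, philosophy of mind 0.360, foreign policy 0.045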

  38. fred Says:

    That “trust” network discussion reminds me of the bitcoin network somehow… interestingly, the assumption that it would be extremely difficult to subvert the network was recently invalidated (a mining pool got over 51% of the total computing power for extended periods of time).

  39. Rahul Says:

    Scott #37:

    Thanks for clarifying. That’s exactly the problem I had in mind. Too much overlap and you risk the phenomenon of an expert in one area making crackpot assessments of another area (e.g. the physics Nobel laureate who wrote papers praising homeopathy, etc.).

    Too little overlap and you waste useful prior information and need too much duplication of effort.

    It’d be an interesting tuning exercise to allow the right amount of spillover. Your last bit is deliciously recursive. 🙂

  40. Nisan Says:

    Joshua Zelinsky #27: The Modal Combat paper is here: arxiv link.

    I think the Modal Combat concept gets us partway to the conception of morality Gary Drescher tried to point at in their book Good and Real.

  41. Mark Probst Says:

    I think it’s important to note that PageRank does not define the importance of a web page; it’s an estimate/heuristic for the importance of a page.

    Similarly, one wouldn’t want to define morality by a linear algebra metric.

  42. Scott Says:

    Rahul #19:

      So I don’t think climate change deniers (although stupid IMO) are the main part of the problem. The bigger part (not as much discussed) is that we as a race are fairly apathetic to big, long term problems that need complex solutions, which will bring pain and results that only our descendants might enjoy.

      You are not fighting misinformation as much as apathy, in my opinion.

    I understand what you’re saying, but here’s the problem with it. How much effort does it really take to pull the lever in the voting booth for the environmentalist candidate, rather than the anti-environmentalist one? Yet many people don’t even make that completely-nonexistent “effort.”

    The background for this is that, as I said in the previous thread, I take it as obvious from the beginning that problems like climate change will never, ever be solved by individuals simply willing themselves to be good Samaritans, by biking to work, going without air-conditioning, etc. Given the reality of human apathy that you correctly pointed out, the only possible solution is for polluting activities to be heavily taxed—or cap-and-trade legislation, or some other proposal that achieves a similar effect.

    So I don’t even think about the question of “how to motivate people to pollute less”—as I said, I regard that as a complete nonstarter. The question, for me, immediately shifts to a different one: why is our democratic system unable even to pass the legislation that would reduce these environmental problems to straightforward problems of economics? Even if people are too lazy/apathetic to decide unilaterally to bike to work—which I completely understand—can’t they see that it would be a good idea to pass legislation that created economic incentives for everyone to bike to work, and that therefore induced more people to do it?

    So, the eigendemocracy idea is targeted directly at the latter question. The point of it is to try to close the terrible gap, in our current system, between

    (1) what quite possibly a majority of the public, and certainly the people publicly recognized as experts, can see is rational, and

    (2) what actually gets enacted into law.

  43. Scott Says:

    Alejandro #20: I love John Maxwell’s “Be a sheep” proposal! (Though it could use a better name for marketing purposes. 🙂 ) In practice, Maxwell’s proposal is probably much more politically feasible (and related to that, understandable by the electorate) than a full-blown eigendemocracy. I would readily accept a “sheepocracy” as a substitute for what I’m advocating in this post.

    My only addition to the idea is that I think it’s crucial that individuals be allowed to divvy up their ceded votes by topic. For example, maybe I want to cede all my votes about environmental issues to the Sierra Club—except for votes related to nuclear power, which I cede to a pro-nuclear organization—while ceding my votes about education reform to another organization, etc., always reserving the option to override individual votes if I feel strongly enough about something.

    With such a system in place, it seems to me that direct democracy could work, if not perfectly then better than what we have now, even in a country as large as the US.
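
    To make the divvying-up concrete, here is a minimal sketch (my own, in Python; the topics, organizations, and ballot items are all hypothetical) of how one voter’s effective vote might be resolved under topic-wise delegation with a personal override:

      # Resolution order: personal override > topic-specific delegate > abstain.
      delegations = {
          "environment":      "Sierra Club",
          "nuclear power":    "Pro-Nuclear League",   # carve-out from "environment"
          "education reform": "Education Trust",
      }
      delegate_votes = {
          ("Sierra Club", "carbon tax"):         "yes",
          ("Pro-Nuclear League", "new reactor"): "yes",
          ("Education Trust", "school bond"):    "yes",
      }
      personal_overrides = {"new reactor": "no"}   # the voter feels strongly here

      def resolve_vote(issue, topic):
          """Return this voter's effective vote on one ballot item."""
          if issue in personal_overrides:
              return personal_overrides[issue]
          delegate = delegations.get(topic)
          return delegate_votes.get((delegate, issue), "abstain")

      print(resolve_vote("carbon tax", "environment"))        # yes (via Sierra Club)
      print(resolve_vote("new reactor", "nuclear power"))     # no (personal override)
      print(resolve_vote("school bond", "education reform"))  # yes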

  44. asdf Says:

    I’d say discrimination is immoral, and can best be understood in terms of the hawk-dove game (aka Chicken) rather than the prisoners’ dilemma. See this blog post and the paper it connects to:

    http://yanisvaroufakis.eu/2014/03/21/how-do-the-powerful-get-the-idea-that-they-deserve-more-lessons-from-a-laboratory/

    The paper is more restrained than the blog post and doesn’t (can’t) speak as much to psychological motivations, but the outcome of the experiment itself really tells us something.

    Could there be an Eigenfallwell in the game, who is nice to members of one religion/ethnic group but not others? Or maybe nice to all groups except for a certain specific one?

  45. Scott Says:

    Martin Wolf #21:

      E.g. if most legitimate climate scientists are saying that there will be a global catastrophe if we don’t take drastic actions, but a lot of high-profile economists are saying that even if we take the climate scientists’ predictions at face value, the actions they propose would do more harm than good to global welfare, whose recommendations should we follow?

      Or if the majority of people who call themselves parapsychologists say that ESP is real, while scientists in other fields such as physics and psychology mostly think it’s bunk, who do we trust?

    I agree that these are difficult questions, though your first example is harder than the second.

    I take it as obvious that climate scientists and economists both have expertise relevant to what economic policies we should adopt to deal with climate change. So, if they firmly disagree with each other, then there’s indeed an impasse—though there are two excellent ways to move forward.

    (1) Divide into subproblems based on expertise. The climatologists figure out which parts of our world will be destroyed for a given amount of CO2 increase. The economists figure out how best to achieve a given emissions reduction target, and how much it will cost to do so. And as for the “monetary value” of those parts of the world at risk of being destroyed—and therefore, what the emissions reduction targets should actually be—that question is opened up to a much broader electorate, since it’s fundamentally a moral question, on which neither the climatologists nor the economists can claim a monopoly.

    (2) Any time the economists and climatologists do agree about something—let’s say, about the benefits of modest cap-and-trade legislation—we can certainly implement that! It’s extremely noteworthy that not even that—i.e., not even the things that essentially all legitimate experts agree about—is being implemented in the current US political climate.

    Now, regarding your ESP example, that one can be resolved simply by observing that the parapsychologists form an isolated cluster in the trust graph of science. I.e., in order for your scientific expertise to be accepted as legitimate, we can say that it’s necessary, first of all, that the “giant connected component” of science (physics, chemistry, biology, math, etc.) contain links that recognize your field itself as a legitimate part of science. Then, secondly, there should be links within your field that recognize you as an expert practitioner.

    Anyway, I admit that treating your specific examples is a far cry from what we really want, which is to give an abstract, general way to crowd-source the meta-question of what kinds of expertise are relevant to a given question. I might have some thoughts about that in an update to the post.
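
    To illustrate the “isolated cluster” test on a toy example (a sketch of my own, in plain Python; the fields and recognition links are made up, not data from anywhere):

      # Directed "X recognizes Y as legitimate science" links.
      recognizes = {
          "physics":        ["chemistry", "math"],
          "chemistry":      ["physics", "biology"],
          "biology":        ["chemistry", "psychology"],
          "psychology":     ["biology"],
          "math":           ["physics"],
          "parapsychology": ["parapsychology"],   # recognized only by itself
      }

      def reachable(start, graph):
          """Fields reachable from `start` by following recognition links."""
          seen, stack = set(), [start]
          while stack:
              node = stack.pop()
              if node not in seen:
                  seen.add(node)
                  stack.extend(graph.get(node, []))
          return seen

      giant_component = reachable("physics", recognizes)
      for field in ["psychology", "parapsychology"]:
          print(field, "recognized by the giant component?", field in giant_component)
      # psychology True, parapsychology False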

  46. RubeRad Says:

    “what other ‘vicious circles’ in science and philosophy could one unravel using the same linear-algebra trick that CLEVER and PageRank exploit?”

    I have always wanted to measure language complexity by parsing a dictionary to see which word definitions use other words, and then computing and inspecting the sink connected component — i.e. the minimum set of words from which you can bootstrap into a language (i.e. fully use a dictionary). How much variation is there in sink components between various English dictionaries? How do different languages compare?
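
    A minimal sketch of that computation (mine, assuming networkx and a tiny made-up dictionary — not real data): point each word at the words used in its definition, then take the sink strongly connected components of the condensation, since definitions eventually bottom out there.

      import networkx as nx

      toy_dictionary = {
          "big":   ["large"],
          "large": ["big"],
          "huge":  ["very", "big"],
          "very":  ["large"],          # crude, but keeps the example tiny
          "giant": ["huge", "thing"],
          "thing": ["thing"],          # self-referential base word
      }

      # Edge word -> used means "word's definition uses `used`".
      G = nx.DiGraph([(word, used) for word, defn in toy_dictionary.items()
                                   for used in defn])

      C = nx.condensation(G)          # DAG of strongly connected components
      sinks = [C.nodes[n]["members"] for n in C.nodes if C.out_degree(n) == 0]
      print(sinks)                    # e.g. [{'big', 'large'}, {'thing'}]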

  47. Scott Says:

    Peter Erwin #22:

      This makes it very easy to work out the morality of inter-state conflicts, although said morality can be highly time-variable. Thus in the Nazi-Soviet invasion of Poland in 1939, the most moral side was the Nazi-Soviet one … So among other things it ends up being a formalization of Voltaire’s famous remark that “God is always on the side of the big battalions”.

    There’s a crucial observation that I took for granted in the post but shouldn’t have, so let me now make it explicit. The observation is this:

    No system for aggregating preferences whatsoever—neither direct democracy, nor representative democracy, nor eigendemocracy, nor anything else—can possibly deal with the “Nazi Germany problem,” wherein basically an entire society’s value system becomes inverted to the point where evil is good and good evil.

    Whenever such an inversion happens, the only solution I know of is for the rest of the world to defeat the morally-inverted society in war—the sooner the better. (Of course, a better preference-aggregating process might have prevented the rise of Hitler in the first place, but that’s a separate question.)

    Thus, the most we can possibly hope for, from any political system, is that as long as there remains a “core of reasonableness” within the population—by which I mean, for each issue, a core of subject-matter experts who more-or-less know the right things to do on that issue, and who most of the population implicitly acknowledges as being experts on the issue—any consensus that forms among those experts should quickly get translated into policy, and should be safe from being overridden by demagogues, charlatans, bloviators, and cynical profiteers.

    Then one simply needs to observe that representative democracy, certainly in the US and probably in other countries, is failing miserably even at that right now. The core of reasonableness still exists and functions in the society—and is even, by and large, recognized as such by the majority—but somehow the mechanisms for translating expert consensus into policy have broken down. This then motivates the search for improvements to our political system that could solve this problem.

  48. Rahul Says:

    The failures and slow pace of representative democracy might indeed be a problem, but sometimes I think this inefficiency might in itself be a virtue overall?

    Direct democracy scares me; tyranny of the masses and all that. The risk lies in its ability to execute even evil in a highly efficient manner. Scott rightly complains about a minority successfully pushing its own agenda in the AGW context in the face of a majority that perhaps has a different opinion. But this same flaw becomes a virtue when it lets minorities protect their rights even when a misguided majority wants to “democratically” trample on such rights.

    I wonder what operational characteristics an eigendemocracy would have in this context.

    Does it creep a bit too near the scary areas of a direct democracy framework?

  49. Scott Says:

    wolfgang #24:

      In general there will be more than one eigenvector, what meaning do you assign to the other eigenvectors?
      Are they associated with competing “moralities” (comparable perhaps to competing religions) ?

    That’s an extremely good question! The short answer is that, yes, the other eigenvectors (which aren’t nonnegative) can be used to identify “clusters” in the graph other than the main cluster. For example, in my illustration with the 98% majority and 2% minority who hated each other, looking at the first and second eigenvectors together would let you identify the minority. For more, see this Q&A on Math StackExchange.
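
    For anyone who wants to see that concretely, here is a small numpy sketch (my own toy construction of the 98%-vs-2% example, not code from the post): both groups cooperate internally and defect across the divide, the principal eigenvector is supported on the majority, and the second eigenvector picks out the minority.

      import numpy as np

      n_major, n_minor = 98, 2
      n = n_major + n_minor
      A = np.zeros((n, n))
      A[:n_major, :n_major] = 1.0      # majority cooperates internally
      A[n_major:, n_major:] = 1.0      # minority cooperates internally
      np.fill_diagonal(A, 0.0)

      vals, vecs = np.linalg.eigh(A)   # A is symmetric, so eigh is fine
      order = np.argsort(vals)[::-1]
      principal, second = vecs[:, order[0]], vecs[:, order[1]]

      # Indices where each eigenvector has (numerically) nonzero weight:
      print(np.flatnonzero(np.abs(principal) > 1e-8))   # 0 .. 97  (majority)
      print(np.flatnonzero(np.abs(second) > 1e-8))      # 98, 99   (minority)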

  50. Scott Says:

    Rahul #48: You raise an excellent question, which could be phrased as:

    Why should anyone believe that an eigendemocracy would overrule the minorities who deserve to be overruled—for example, the climate-change deniers—even while upholding the rights of the minorities whose rights deserve to be upheld?

    The only way I can explain my answer to this question, is via the notion of a “core of reasonableness” within any decently-functioning society (cf. comment #46). By the “core of reasonableness,” I mean statements like the following:

      “Even if the anti-vaxxer Michele Bachmann won an election, there are people who most of the voters who elected her would recognize as medical experts—namely, their doctors—and those doctors are essentially unanimously pro-vaccine.”

      “Even if a majority in such-and-such a district are closet racists, and might even act on their racism in the privacy of the ballot box, they nevertheless usually have enough class not to express racist sentiments when the microphones are on. As a result, if we look for the most prestigious, respected 10% of people in the district—i.e., the ones most respected by those of their friends and neighbors who are themselves respected, etc.—they’re much less likely to be racist than the population as a whole.”

    Now, we could imagine a society where not even the above was true—where even the doctors, or other people who were acknowledged as the experts in their field by the public, experts in related fields, etc., had been corrupted and propagated evil absurdities; or where even the most iteratively respected figures were racists and villains. Nazi Germany was an example of such a society, as was the Soviet Union in pretty much all domains other than physics and math.

    But it seems to me that contemporary Western democracies are not examples of this. That is, they have cores of recognized expertise that continue to function, and that regularly reach moral consensus based on real knowledge. The problem is just that, in many cases, the political system now fails abysmally at translating those cores’ knowledge into policy. Eigendemocracy is my (admittedly flawed, incomplete) attempt to come up with a system that might do a better job than either direct democracy or representative democracy at “automatically picking out” the cores of expertise and exploiting what they know.

  51. Patrick Says:

    I look forward to a cottage industry: The SEO of immorality & the Google bombing of truth.

  52. patrioticduo Says:

    “We all agree, presumably, that it’s better to be nice to Gandhi than to be nice to Hitler.” No, I most certainly do not agree. And the reason is simple. Being nice (or not) to Gandhi probably isn’t going to negatively affect you. Being nice to Hitler might save your life – a distinct positive. And so the question of morality is always about utility within the environment. Few of us experience a broad enough range of environments to know how punishing it can be when you go from one to another and make choices that produce little, no, or even negative benefit; choices that were perfect in one environment but are absolutely horrendous in another. So if you consider other cultures, nations, locations, and geopolitical places in the world, utility is driven by the environment in which you find yourself. How do you define (and then measure) “environment” in mathematical terms across the entire human existence?

  53. patrioticduo Says:

    On another completely different point. You forget to include the fact that ALL of the western democracies are, for the most part, democratic republics. The basic premise of your article’s thesis seems to be that a) “democracy” is a perfect system, b) the democratic process can be defined, and c) an eigenvalue can be reached for all output of a democratic system. But such a claim makes the mistake of trying to define all problems as democratic games. The idea that the majority is right is just plain flat wrong. It could also be the reason why you seem willing to throw around the idea that minorities are wrong. Give yourself a few decades of experience and you will discover that majorities are almost always wrong. The beauty of democratic republics is that they get to do things wrong slowly but can correct them quickly. Now for homework, you should do some serious research into AGW. You seem to be suffering from symptoms of tribalism in the words you’re using on that particular topic. Go read!

  54. Scott Says:

    patrioticduo #52: In order to eliminate extraneous factors (like whether being nice to Hitler will save your life), which would have to be included as separate terms in the model, let’s focus on a simplified scenario. A mad scientist forces you to choose which one of Gandhi and Hitler will be tortured and killed, while the other is confined for the rest of his life to a very pleasant tropical island. No one else will ever know which choice you made, and indeed, it will have no effect on anything else.

    In this scenario, do you agree that it’s better, more moral, for Hitler to be tortured and killed than Gandhi? If so, then that’s all I needed in the passage in question.

  55. Frank Wilhoit Says:

    “A moral person is someone who cooperates with other moral people, and who refuses to cooperate with immoral people.”

    By abstraction, “A Foovian is a person who cooperates with other Foovians and who refuses to cooperate with non-Foovians.”

    So: morality == tribalism. Who knew? (Actually, everybody knew.)

    This doesn’t seem to have a very high utility, as the expression-in-action of “refusal to cooperate” decays to genocide with an observed half-life (in any given context) of approximately seventy years.

  56. Ilya Shpitser Says:

    “there’s now enough of a consensus to warrant action, despite the continuing existence of a vocal dissenting minority”

    I can see no way this could possibly go wrong :).

    Graphs are pretty simple structures; they have a hard time representing fine distinctions like “we should do something about global warming” vs “we should kill all the Jews in Germany.”

  57. AdamT Says:

    I’d like to know what would happen if you had an EigenBodhisattva who would score not just individual acts of cooperation or defection, but also any individual who helped to increase the total amount of cooperation or defection.

    In other words, to a bodhisattva the moral question is what effect an individual bot’s actions have on the overall level of cooperation. It is conceivable that some bots cooperate all the time, but that other bots, sensing or observing this, would defect.

    To an EigenBodhisattva the question is: can a bot be designed whose mere presence *always* has an overall positive impact on the total amount of cooperation?

  58. Ron Fagin Says:

    Scott, it is not true that CLEVER and PageRank are doing essentially the same thing. The hub and authority scores in Clever are topic-oriented, while the PageRank score is not. Specifically, in CLEVER, you pick a topic (say, giraffes), and find a set S of pages about giraffes (using some other search engine – say, you take the top k pages from some other search engine with the query “giraffes”). You then expand this set S by taking forward and backward pointers, I think by two levels (so, for example, you are including in this expansion of S those pages pointed to by pages pointed to by the original set S). You then find hubs and authorities for the expanded set S. So these are hubs and authorities specifically about giraffes. By contrast, the PageRank score is absolute, not topic-oriented – every page has a PageRank, which is topic-independent. If the user asks about a specific topic (like giraffes), Google (at least originally, to first order) essentially returns the pages with the highest PageRank scores that also happen to have something to do with giraffes.

    Furthermore, CLEVER and PageRank were not obtained at the same time. CLEVER came first, and indeed the Google guys acknowledged that CLEVER had shown the importance of using links in deciding the importance of a page, and the Google guys made use of links in a quite different way.
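
    For readers who want to see the hub/authority step in isolation, here is a minimal sketch of the iteration at the heart of CLEVER (HITS), run on a hypothetical, already-assembled topic subgraph — the root-set expansion described above is assumed to have happened, and the adjacency matrix is made up:

      import numpy as np

      # A[i, j] = 1 means page i links to page j, within the expanded topic set S.
      A = np.array([[0, 1, 1, 0],
                    [0, 0, 1, 0],
                    [0, 0, 0, 1],
                    [1, 0, 0, 0]], dtype=float)

      hubs = np.ones(A.shape[0])
      auths = np.ones(A.shape[0])
      for _ in range(100):
          auths = A.T @ hubs               # authorities are pointed to by good hubs
          auths /= np.linalg.norm(auths)
          hubs = A @ auths                 # hubs point to good authorities
          hubs /= np.linalg.norm(hubs)

      print(np.round(hubs, 3))
      print(np.round(auths, 3))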

  59. AdamT Says:

    One obvious example of ‘vicious circularity’ might be the following:

    “A good commenter in an internet thread is one who posts good comments as judged by other good commenters.”

    I would love for someone to design a commenting system using this linear algebra trick to improve the overall quality of internet comments on blogs like this 🙂

  60. AdamT Says:

    Or how about:

    “A good citizen capable of representing others in a society’s political system is one who is judged so by other good citizens capable of representing others in a society’s political system.”

    How about a voting system using this same algorithm?

  61. AdamT Says:

    Leave it to me to only read half the article before replying. I now see you are advocating exactly that: applying this technique to political systems.

    Funnily enough I had thought of something along these lines when Vox media launched and I was pondering how we could make a better internet commenting system.

    I think the future of political science should be a good commenting system. Figure that out, demonstrate that it works, and people will see how it could be used to make good group decisions.

  62. fred Says:

    Scott #54

    I recommend watching the documentary “The Act of Killing” (on Netflix).

  63. Cody Says:

    This reminds me of DW-NOMINATE.

    And I also wonder, do we lose something with a one dimensional ranking? It seems like morality might sometimes contain a property akin to non-transitive dice.

  64. Vitruvius Says:

    Interesting essay, Scott, thanks. I love how you’ve abstracted out and expanded on some of the other recent discussions here at Shtetl-Optimized. There’s lots of good food for thought there, some of it delightfully anti-democratic, which I’m not always completely opposed to, at least in principle (only in practice). It reminds me a bit of the time when as a young engineer I showed my Burt™ ski bindings to a wise old engineer and he said, “it’s a very clever design, but it is too complicated and is going to break down on the hill” (and it often did).

    At some point it can become one of those “the cure is worse than the disease” problems. Since you’ve provided some other interesting false dichotomies, here’s another: would you rather our species be wiped out by the weather, or be forever enslaved? It also occurs to me that it might not be a great idea to overdo the everyone-bike-to-work idea, for example, in the case of the wheelchair-bound when the temperature is -40° 😉

    More seriously, perhaps, I think that your “a moral person is someone who cooperates with other moral people, and who refuses to cooperate with immoral people” conjecture has interesting parallels to Daniel Dennett’s observation in his Is Free Will an Illusion? talk at the Santa Fe Institute last month, which spends most of the latter part talking about morality, where he notes that morality is a club, the “Moral Agent Club”, and he takes a look at the rules for membership in said club.

    Lastly (for this comment), note that any algorithmatized decision-making system still has to watch out for local minima, even among so-called experts. Just consider the many examples in Charles Mackay’s 1852 classic, Memoirs of Extraordinary Popular Delusions and the Madness of Crowds (volume I) and (volume II). This may be a recursively unsolvable problem that can only be, at best, mitigated.

  65. Ray Says:

    Once a nerd, always a nerd.

  66. Matthew Says:

    I think Rebecca Newberger Goldstein’s objection runs deeper than you imply in this article. For example, your system seems to allow only for a consequentialist view of morality, and many people – like me! – would object to such an assumption. Personally I adhere to virtue ethics, in which one is judged (in my conception of virtue) by the virtuousness of the person one wants to become and one’s sincerity in trying to get there. It is therefore possible for someone to be a hopeless alcoholic, with all of the concomitant bad behaviour, and still be morally upright through his sincere attempts to become sober. How could you possibly measure something so subjective?

    Then there are other measurability problems. How, for example, would your algorithm deal with someone like St Therese of Lisieux, who lived a perfectly middle-class life until entering an enclosed convent at age 15 and then died at 27? And yet her writings have had a profound effect on countless people. Or again, there is Jean Vanier, who founded the L’Arche communities. Almost all of the people he has helped are profoundly disabled and are unable to take part in a study except as largely passive subjects. How could his moral acts be taken into account?

    Then there is the black swan problem. There are cases throughout history of people living a perfectly normal life until they are given an extreme moral choice and then choosing to make a genuinely heroic supererogatory act. There are all sorts of philosophical questions there about whether the person was extremely good before the act or not, but your system seems to mandate one particular view over the other (that they were ordinary until the extraordinary act).

    Also, shouldn’t a single heroic act be given a much greater weight than many small acts of kindness? Can your system account for such a view of moral value?

    Finally, your system doesn’t seem to take account of the possibility of a sudden, or even gradual, transformation of the individual. If St Paul organised the execution of many Christians, then would the number and severity of those acts outweigh everything else he did after his conversion? If an individual doesn’t have a change of heart then it would be right to include immoral acts from long ago, but if a change of heart comes with a change in behaviour then surely it would be wrong to hold those things against him forever. Your system needs to include atonement in some way, but it is far from clear to me how you might try to do that.

  67. fred Says:

    I’m not sure whether Godwin’s law applies here, but all that talk about coming up with “mathematical metrics” to “measure” human elements such as consciousness, morality, expertise… is quite disturbing at many levels.

    – The last guys who tried to bring science and the human element together to support their doctrines were the Nazis. At the time genetics was cutting-edge science and naturally led to eugenics, which seemed to many like a freaking great way to “improve” humanity.

    – Some of the previous discussions in this blog have devolved into the average level of internet discussion, if you see what I mean… many academics tend to fall in love with their own theories, but that doesn’t mean they’re any wiser than the guy next door.

    – You can’t be flip-flopping between mathematical rigor and sudden appeals to emotion (“ok, say you had to choose between Mother Theresa and Kadaffi…” and other nonsense).
    As far as modern science goes, everything is either causal or random… so, if you’re going to blame anyone for their own actions, you might as well blame the whole universe (e.g. WW2 could be explained just as well, or as poorly, by looking at the various lumps of atoms and the fundamental forces involved, moving around the surface of the earth between 1939 and 1945).

  68. Scott Says:

    Ilya #56:

      Graphs are pretty simple structures, they have a hard time representing fine distinctions like “we should do something about global warming” vs “we should kill all the Jews in Germany.”

    (sigh) See comment #47, which already went over this territory.

    Again, no system whatsoever for aggregating people’s preferences—not direct democracy, not representative democracy, not eigendemocracy—is going to deliver just outcomes if you employ it in Nazi Germany circa 1941, any more than the world’s best supercomputer will give you the right answer if you feed it garbage input. I would’ve thought that was obvious. Your comment could just as well be directed against democracy itself, as against anything I was exploring in this post.

    In my view, the most we can hope for from any democratic system is the following conditional guarantee: that if it’s possible, just by examining the data of who believes what and who trusts whom, to identify a clear consensus for doing the right thing among those acknowledged to have the most relevant expertise, then the right thing gets done. Nazi Germany isn’t particularly relevant here, since the condition wasn’t satisfied. What interests me is that, in the modern US, it seems clear to me that the condition is satisfied while the consequence is not. In other words, this seems to me like a genuine failure of the democratic process, rather than (as in Nazi Germany) a failure of the underlying society. So it invites the obvious question of whether a different process would do better.

    However, let me go a step further: while it’s true that no preference-aggregation mechanism could possibly have helped once Hitler had brainwashed / won over most of Germany, it seems very likely that a better democratic process in 1933 would’ve prevented him from coming to power in the first place.

    Finally, let me note the irony that, while you consider “eigendemocracy” so absurdly democratic that it would bless even the Nazis if a big enough majority were behind them, others criticize the idea for being anti-democratic and intellectually elitist. Maybe that means there’s probably something right about it? 😉

  69. Scott Says:

    Matthew #66: OK then, let me offer, as research challenges for the commenters, generalizing the eigenmorality idea to incorporate non-consequentialist ethics, black swans, heroic acts, individual transformations, atonement, and the examples of St. Therese of Lisieux and Jean Vanier. As they say, “numerous open problems remain.” 😉

  70. Scott Says:

    fred #67:

      The last guys who tried to bring science and the human element together to support their doctrines were the Nazis.

    That’s an absurd and even offensive statement. Other people who “tried to bring science and the human element together to support their doctrines” were … uh, let me see here … Niels Bohr, Carl Sagan, Neil deGrasse Tyson, Steven Pinker, Margaret Mead, Nate Silver, Freudians, Marxists, behaviorists, evolutionary psychologists, sociologists, political scientists, analytic philosophers, environmentalists, transhumanists, libertarian economists, diet and exercise gurus, and numerous high-school science teachers. Your argument is like saying that the cautionary lesson we should draw from the Nazis is not to support anything whose name contains the letter “a.” Not only is it silly, but it trivializes their evil.

  71. Ilya Shpitser Says:

    “Your comment could just as well be directed against democracy itself, as against anything I was exploring in this post.”

    I guess I am just confused by the title: “Eigenmorality” vs “Eigen-aggregating-preferences” — there is nothing in your language that makes moral distinctions at all, of any kind.

    I can imagine a hypothetical planet (Ferengi Prime?) where the global warming sides are completely flipped, but the graph looks the same. You would consult your internal “moral algorithm” and conclude the Ferengi aren’t being moral — there is a “failure of underlying morality”, as you put it.

    A lot of people view the “morality problem” as writing down what that algorithm is that you consult.

  72. Ilya Shpitser Says:

    “failure of underlying society” not “morality,” sorry.

  73. Rahul Says:

    One reason why PageRank works so well, IMO, is that it uses an inherent characteristic (i.e. hyperlinks / content keywords) for a secondary ranking purpose. Sure, links can be gamed and dummy keywords stuffed into pages, but those are second-order effects.

    So also when using eigenvectors on, say, citation networks etc.

    I speculate that with systems like EigenDemocracy, which use a network / digraph whose explicit and only purpose is the rating / authority-validation effect itself, these gaming problems are going to be amplified.

    The strength in Google’s algorithm was that it was using structure already inherent in the data and not created for rating purposes.

    I could be wrong.

  74. David Chudzicki Says:

    Scott #43 (“it’s crucial that individuals be allowed to divvy up their ceded votes by topic”):

    I think it’d be needlessly complicated for the legal system to explicitly allow you to divvy up ceded votes by topic:

    In a Simple Sheepocracy, perhaps you’d cede your vote to me and send a machine-readable message to my vote-divvying service that instructs me how to vote on your behalf. I could implement a topic-identification system (with maybe a mix of human and machine involvement) that you trust to determine whose vote you follow, but that topic system may not even be necessary — you could specify that you vote with the Sierra Club except where your chosen pro-nuclear group disagrees with them, and trust the pro-nuclear group to abstain on issues that aren’t specifically about its topic (and switch to a different pro-nuclear group if they ever break that trust).

    There could be many independent services claiming to implement the same vote-allocation procedure, checking each other’s work. (You’d be officially giving your vote to one but will be notified very quickly if it veers off course.) If at any point you stop trusting me, you take your votes elsewhere.

    Your eigendemocracy could even take root in this system, through people (eventually a majority) allocating their votes to the Scott Aaronson Eigendemocracy (SAE). For robustness, we would assign our votes among a large set of independently operating SAE implementations sharing the same algorithm and data sources (they’d recognize each other’s members for the purposes of the computation).

    Also worth noting: Deal-making (which we probably want to allow for) breaks any sharp divisions between topics. The flexibility to handle complications like this in a variety of ways (chosen individually by voters) is an advantage of this system.

  75. Luca Says:

    I don’t know if I am repeating something already said above, but I find the failure of the definition in the case of oppressed minorities to be very interesting.

    It is also unavoidable because, if the only input to the definition is the graph of who is nice and who is not nice to whom, then an oppressed minority (whose members are nice to each other, but not nice to their oppressors) is indistinguishable from a crime syndicate, whose members are bound by an honor code among themselves, but are definitely not nice to everybody else.

    A definition able to distinguish the two cases needs some additional input, and it seems that temporal information is important. For example, imagine that a man gropes a woman and then the woman punches him, and the two of them are the only agents in the world. Then they have not been nice to each other, yet their moral scores shouldn’t be the same.

    Indeed, in kindergarten, where one has pre-theoretical notions of morality, “he started it” is a common argument. But there is nothing infantile about it. In almost any situation where a group is oppressed, they will say, “let’s look at the root causes of the situation,” and the majority will say “we need to look forward, not backward,” or “these are the facts on the ground.” That is, the oppressed want to look at the time sequence of events, the oppressors at the *current* graph. (Note also that the people of the crime syndicate don’t want to explore the past either.) For an excellent example, take Chief Justice Roberts: “The way to stop discrimination on the basis of race is to stop discriminating on the basis of race” and consider Sotomayor’s outraged response.

    http://www.msnbc.com/msnbc/sonia-sotomayor-slams-supreme-court-right-wing-race-matters

  76. asdf Says:

    Hey Scott, I posted earlier about discrimination and the Hawk-Dove game. Did my post just get stuck in a moderation queue or was something actually wrong with it? I thought it was ok but I can rewrite it if required.

  77. Darrell Burgan Says:

    I wonder where EigenBuddha would land in terms of niceness scores … 🙂

  78. Darrell Burgan Says:

    For what it’s worth, the ideas of prediction markets and eigendemocracy are great models, and one of them may indeed be as good as we can do. But I need only remind myself of the instability of the allegedly perfectly self-regulating economic market to recover my skepticism.

    Any system involving people will involve people making honest mistakes, outright dumb decisions, obstinately wrong decisions (just because they can), decisions based on lies or misinformation or false perception, decisions based upon mysticism, and a host of other unmodelable human behavior. Put simply, humans are not rational creatures. On our best days we approximate rationality fairly well. On our worst, we can’t be reasoned with. Most days are somewhere in between. This is why I’m skeptical of any economic or game theory approach to managing any complex human system. At some point, humans will push it far enough out of balance that it will fall off the rails and end up somewhere perverse.

    I think something with a lot of counter-balancing positive feedback loops is what is required. Unfortunately in such systems we gain stability but lose any chance at progress. The system is gridlocked in a stable, immovable state from which no one can dislodge it. Something like the mess we have today?

  79. Scott Says:

    asdf #76: Sorry about that; I just missed it! Your comment is now up (and, alas, has screwed up the numbering).

    (OK, numbering should now be fixed.)

  80. fred Says:

    Scott #70

    You guys are all talking here about precise, mathematical measures of the human element. Not psychology and nutrition diets.
    I can’t think of anything more trivializing than the claim that one can come up with mathematical formulas to quantify the amount of “consciousness” in a being, the amount of “morality” in a brain or society, and “rank” who should and shouldn’t be allowed to make decisions for the good of everyone.
    If you don’t see the dangers in that from looking at the past history, I don’t know what to say…

  81. James Gallagher Says:

    I like this project more than the mathematical attempts to analyse consciousness. Morality seems quite amenable to quantitative analysis: e.g. consider that the notion of moral behaviour has arisen over centuries as the result of zillions of piecemeal interactions and observation of which ones caused the least/most “upset”. That’s not precise, but it’s easier to translate into mathematical modelling than ideas about consciousness.

    Maybe one day we’ll have a moral rating, just like a credit rating.

    re your failure to make millions from the CLEVER project, have you thought about writing a (fictional) script for a Hollywood movie a la The Social Network, where you play a role similar to the Winklevoss twins (without the CGI twin) and sue Google for stealing your idea?

  82. David Brown Says:

    “Even though, by and large, reasonable people agree on what needs to be done …”
    I think that one thing that needs to be done is to understand the truth about human volition. My guess is that the concept of free will is a quasi-theological concept that is very misleading. Fundamentally, I think that the truth about volition is either semi-random will or superdeterministic will.
    http://arxiv.org/abs/1308.1007 “The Fate of the Quantum” by Gerard ‘t Hooft, 2013
    Through understanding molecular psychology, there might emerge a reasonable notion about the real possibilities for the future. Lawyers and investment bankers probably won’t agree about what needs to be done, if the doing damages them financially. Financial sharks and other animals probably are merely doing what comes naturally.

  83. AdamT Says:

    Luca #75, Scott original post,

    “I don’t know if I am repeating something already said above, but I find the failure of the definition in the case of oppressed minorities to be very interesting.”

    Actually, I found this corner case to be *supportive* of the method and a failure of society’s moral intuition. And I think highly advanced, morally sophisticated people like HH the Dalai Lama and Jesus and Gandhi would agree.

    Take Tibet as an example. The Dalai Lama does not believe the oppressed Tibetan minority should act non-morally towards the larger majority. That is because the Dalai Lama does not view the very act of being oppressed as important for distinguishing moral identity. Indeed, the Dalai Lama would regard the happiness of the oppressors as equal in importance to the happiness of the oppressed.

    Therefore, if one finds oneself belonging to the two percent of the oppressed and uses that as a justification for violence toward the 98%, I believe this would make for VERY immoral behavior, and I think HHDL, Jesus and Gandhi would agree.

  84. AdamT Says:

    Scott, what do you think of the EigenBodhisattva, which would score not individual acts so much as the total effect of the bot’s mere presence in the system with regard to total levels of cooperation/defection? Assuming one could program such a bot capable of reliably having said effect?

  85. fred Says:

    AdamT #83
    That’s an interesting point of view.
    Also, the question of oppression is tricky once you start to look at things from a “natural selection” perspective, at the cultural level.
    Societies don’t just compete through overt wars and active oppression, but also by simply being more thriving (more babies, more popular culture, etc.), leading to the subtle extinction of the others.

  86. Scott Says:

    fred #80:

      I can’t think of anything more trivializing than the claim that one can come up with mathematical formulas to quantify the amount of “consciousness” in a being, the amount of “morality” in a brain or society, and “rank” who should and shouldn’t be allowed to make decisions for the good of everyone.

    Uh, you do realize, don’t you, that my recent posts about consciousness were sharply criticizing Tononi et al.’s Φ measure, and the claim that any measure of its type can capture the amount of consciousness in a system? And also that, in this post (and the comment thread), I and others have largely been discussing the many ways in which the proposed measures of eigenmorality and eigentrust fall short, and what would need to be done to improve them? Do you pay any attention to the content of what’s written? 🙂

    Also, did you reflect that, whether we’re deciding how many votes it takes to override a President’s veto, how many signatures it takes to get a proposition onto the ballot, what the tax rate should be as a function of income, how many years in prison an attempted murderer should get, at what age a person should be allowed to vote, at which trimester (if any) abortion should be restricted or illegal, etc., our civilization is constantly reducing complicated human problems to “mathematical formulas” … not because we’re closet Nazis but because there’s no alternative? If so, did this cause you to reflect that maybe, just maybe, the problem is not with the entire idea of “mathematical formulas” applied to human problems, but with the actual content of the formulas? So, for example:

    “abortion should be legal until the 6-month mark, but heavily restricted thereafter”: not a perfect mathematical rule, but not trivial to come up with a better one

    “anyone with at least 1/8 Jewish ancestry gets sent to the death camps, while those with less can be suffered to live”: extremely bad mathematical rule

  87. Scott Says:

    Ilya #71: Sorry, my bad. I think the confusion arose simply because I was trying to cover too much ground in one post. I started out with the eigenmorality idea—which is purely about a population of agents cooperating or defecting with each other, a framework that (as it stands) isn’t even obviously able to express a debate about climate change—and then later segued to discussing eigendemocracy, which is mathematically related but philosophically different. My comments about climate change were meant as part of the eigendemocracy discussion. Of course eigenmorality is also open to strong objections—Luca’s comment #75 provides an excellent example—but the obvious fact that a majority can believe some statement of fact and still be wrong about it is not an argument against eigenmorality.

  88. Scott Says:

    AdamT #84:

      Scott, what do you think of the EigenBodhisattva, which would score not individual acts so much as the total effect of the bot’s mere presence in the system with regard to total levels of cooperation/defection?

    So, uh, let me see if I understand correctly. Suppose that a terrorist, by terrorizing everyone, caused millions of people to get closer to their loved ones, and thereby increased the total amount of cooperation in society. Would we then have to say that the terrorist was morally good?

  89. Scott Says:

    James #81:

      re your failure to make millions from CLEVER project, have you thought about writing a (fictional) script for a hollywood movie a la The Social Network, where you play a role similar to the Winklevoss twins (without the CGI twin) and sue Google for stealing your idea?

    But it wasn’t my idea, and I wasn’t involved in the CLEVER project in any way. As I said in the post, I just heard about it early on, e.g. from Kleinberg’s talks. If such a movie were made, then Kleinberg should be the star. But I’m not sure how much material there would be; I haven’t heard anyone level an accusation that the Google guys behaved inappropriately.

  90. fred Says:

    Scott #86

    Scott, I know you’ve been skeptical of all such claims, and that your interest is in the spirit of scientific curiosity and debate.
    But my point is that it’s at least worth considering a little bit what the moral implications would be.
    Maybe we can come up with a morality measure for the idea of measuring morality? 🙂

    About your examples of “mathematical formulas” you’re quoting:
    It’s one thing to measure a human life in years, and then use X years as a cut-off metric for various complicated moral questions, like voting age, age to go to war, age to have sex, etc.
    Or to measure a person’s income in dollars, and then come up with cut-off numbers for taxes, success, etc.
    The metrics are obvious here; it’s their applications that are tricky.

    But all of that is quite different from claiming that it’s possible to reduce human consciousness and morality (i.e., how good a person or society is compared to another) to one objective number.
    The closest example I can think of is IQ, but even that seems a notch below. If I’m not mistaken, IQ is used in the USA to decide whether or not someone can be sent to the electric chair, and there’s plenty of debate surrounding that. It seems to me that quantifying “morality/consciousness” as a sort of black-box measurement would make things even trickier.

  91. Scott Says:

    Ron Fagin #58: Thanks very much for the historical corrections! I’ve edited the post slightly to reflect what you said.

  92. Scott Says:

    Rahul #73:

      The strength in Google’s algorithm was that it was using structure already inherent in the data and not created for rating purposes.

    I think you’re absolutely right, and it’s an extremely important point—in fact, that’s exactly what I was going to talk about in the update to the post that I still haven’t written! 🙂

    On reflection, I believe that an eigentrust system would only work if, rather than asking people who they trust, with the understanding that the information would only be used for rating purposes, we were somehow able to collect data about who people do in fact trust as they go about their lives. And I’m not sure how that would be done (open to suggestions).

  93. John Baker Says:

    PageRank, Plato, and eigenvalues are not the bedfellows you expect in a blog post. You have made my morning. Judging morality, peer consensus, and other issues on the basis of network component structure will certainly add perspective, but as many have already noted, this way of looking at things will be rapidly gamed. I suggest applying the same network approach to the gamers. Criminals, both thought criminals and real ones, travel in cabals, and uncovering the cabals will be of great value. They’ll try to hide, but that will leave signals in the network as well. Thanks for a great post.

  94. AdamT Says:

    Scott #88,

    Strictly speaking, in terms of the mathematical algorithm, then yes, an EigenBodhisattva-valued system would judge the total amount of moral goodness of the system and a particular bot’s contribution to that. In your original formulation the total amount of moral goodness would be equal to the total amount of cooperation vs. defection. So in this sense, if a “terrorist bot” could be written that would always cause the total amount of cooperation in the system to be higher by its mere presence, then in spite of its own acts of defection it would be judged morally good. I think this would be the equivalent moral value metric to model the morality system of a Bodhisattva in your formulation.

    In the real world, if this terrorist’s acts of terror were specifically intended to increase the total amount of happiness in the world and decrease the total amount of suffering in the world, AND they were effective in achieving this, then from a Buddhist/Bodhisattva’s perspective these would be deemed moral acts! Note that in the real world we – speaking for Mahayana Buddhists here – believe both the intention and the outcome are important to the calculus of determining virtuous behavior.

    However, just as you noted in your original post, ‘cooperation’ can be quibbled over. Not all actions can be reduced to ‘cooperative’ vs. ‘defecting’, any more than we can say that the act of killing has the same moral value as stealing.

    In the Buddhist world view it is not about cooperation VS defection. The Buddhist system judges virtuous behavior to be those acts that have the intent and outcome of increasing the total amount of happiness among the entire population of sentient beings. Conversely, we judge non-virtuous behavior to be those acts that have the intent and outcome of increasing the total amount of suffering among the entire population of sentient beings. Actions that have the intent, but not the outcome OR the outcome, but not the intent are more subtly defined.

    Does this surprise you?

  95. Martin Wolf Says:

    > So, uh, let me see if I understand correctly. Suppose that
    > a terrorist, by terrorizing everyone, caused millions of
    > people to get closer to their loved ones, and thereby
    > increased the total amount of cooperation in society.
    > Would we then have to say that the terrorist was morally
    > good?

    I think that was the plot of Watchmen.

  96. rrtucci Says:

    The structure of “who do you trust” is already there in Facebook. (Ouch, not only did you miss the Google train, but also the Facebook train.)

  97. Scott Says:

    rrtucci #96: But is it actually there? I’ve had an FB profile since 2004, and while it shows who my “friends” are (meaning, people who I allow to see my profile), I’m not sure how you’d extract from FB a list of the people who I trust to have true beliefs about vaccines or climate change or any other particular issue. (Maybe you could do it, but it certainly doesn’t seem trivial.)

    Incidentally, did you not “miss the boat” on Google in the same way I did? Were you an early investor or something? Like many others, I was an “early importance-realizer,” but not someone who managed to monetize that realization.

  98. rrtucci Says:

    LOL “Missing the boat” is the story of my life.

  99. jack morava Says:

    Re comments 8, 24, 49, 75 — i.e., about eigenvectors other than the principal one: perhaps these ideas will be useful in understanding political and social polarization, class formation, etc.?

  100. Ilya Shpitser Says:

    Scott #87: Thanks for the clarification. It sort of seems like Luca and I have the same objection: the core problem of ethics is classification, and the info as stated is insufficient to distinguish good from bad (in his case a crime syndicate from an oppressed minority, and in my case whether the majority in a global warming debate is like foresighted cooperating humans, or the shortsighted money-grasping Ferengi).

    I agree with Luca that we need a much more powerful language to make progress on classification, but I am perhaps even more pessimistic than he is. That is, I think even adding temporal information will be insufficient, as long as we stay in the relatively simple language that describes PD rounds, say. It just seems like any language that simple is vulnerable to the same kind of attack where very different moral stories agree on the description in the simple language.

  101. johnstricker Says:

    @Scott: Well thanks a lot, I prefer my mind a little less blown.

  102. Vitruvius Says:

    Mahayana Buddhists aren’t the only ones who believe that both intention and outcome are important to the calculus of determining virtuous behavior, AdamT (not that you were saying that they were), there’s also mens rea and actus reus.

    In the global warming case, Ilya, part of the debate remains not just about your humans and Ferengi, but about whether or not the pro- and anti-AGW camps of humans are themselves foresighted cooperating humans, or shortsighted money-grasping humans, and the case can be made either way in all of the cells of the corresponding Karnaugh map.

    On the matter of “missing the boat”, that’s not always a bad thing: think of the people who missed the Titanic. One of the folks whose opinion I most value once said, “the most interesting thing about analogies often isn’t where they apply, it’s where they break down”.

    Which brings us to the who-do-you-trust question. Like most people, probably, when I hear someone I disagree with I usually start explaining why they’re wrong. As John Kenneth Galbraith once said, “faced with the choice between changing one’s mind and proving that there is no need to do so, almost everyone gets busy on the proof”. Yet there are a few people who, when they disagree with me, I think: hmm, am I making a mistake here? The fellow in the previous paragraph is one of those rare folks, for me. Perhaps we should rate the person who one names as the executor of their will as highly valued for these eigentrust purposes.

  103. Ilya Shpitser Says:

    I should state the obvious, namely that my thinking about this is influenced/biased by another concept of crucial philosophical importance with a long history of mathematical reductions that turned out to be incorrect for subtle reasons. 🙂

  104. Rahul Says:

    Scott #92:

      I think you’re absolutely right, and it’s an extremely important point—in fact, that’s exactly what I was going to talk about in the update to the post that I still haven’t written!

    Ha! We seem to be agreeing about a lot of things lately. :p

    I’m surprised (pleasantly). 🙂

      I believe that an eigentrust system would only work if, rather than asking people who they trust, with the understanding that the information would only be used for rating purposes, we were somehow able to collect data about who people do in fact trust as they go about their lives. And I’m not sure how that would be done

    I’ve no good answer either. But one way to bootstrap the system might be piggybacking on the existing social networks (say, LinkedIn or Facebook). Each is essentially a large graph. I imagine an app that’d allow us to assign trust scores to our contacts or to other people on the network. And then the trust cascades.

    I’m not exactly sure how to incentivize adoption, but I can see a few potential uses. Say I want to install an Android app or some new software package. There might be utility in knowing who on my trust network is using it. Or if, say, Linus Torvalds is using it and I have assigned him a high trust score, then I’m relatively safe using it. Or if none of my immediate contacts use it, yet someone Linus rates as high-trust does, then I myself may feel safer using it.

    That might be a start?
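
    To make the trust-cascade idea concrete, here is a minimal sketch of how trust ratings harvested from such a network could be propagated by power iteration. The names and numbers are invented purely for illustration; this is just one way the cascade could be set up, not a worked-out system.

      import numpy as np

      people = ["alice", "bob", "carol", "linus"]
      # trust[i][j] = how much person i says they trust person j (0 = no rating);
      # placeholder values, not real data
      trust = np.array([
          [0.0, 0.5, 0.0, 1.0],
          [0.2, 0.0, 0.3, 1.0],
          [0.0, 0.4, 0.0, 0.8],
          [0.0, 0.6, 0.4, 0.0],
      ])

      # Normalize each person's outgoing trust so it sums to 1 (empty rows stay zero)
      row_sums = trust.sum(axis=1, keepdims=True)
      M = np.divide(trust, row_sums, out=np.zeros_like(trust), where=row_sums > 0)

      # Power iteration: everyone starts equal, then trust cascades along the edges,
      # so ratings from highly-trusted people count for more
      scores = np.ones(len(people)) / len(people)
      for _ in range(100):
          scores = scores @ M
          scores /= scores.sum()

      for name, s in sorted(zip(people, scores), key=lambda x: -x[1]):
          print(f"{name}: {s:.3f}")

    The fixed point of that iteration is essentially the principal-eigenvector computation from the post, just seeded with declared trust ratings instead of hyperlinks.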

  105. Scott Says:

    Vitruvius #102:

      Mahayana Buddhists aren’t the only ones who believe that both intention and outcome are important to the calculus of determining virtuous behavior

    What I want to know is, who in practice doesn’t think that? Is there any country, anywhere in the world, that has no penalty for attempted murder, or that doesn’t have a lesser penalty for attempted murder than for completed murder?

  106. Scott Says:

    fred #90: Have you ever tried to read academic moral philosophy? It’s full of thought experiments involving whether you should kill 5 innocent people, if you’re certain that your failure to do so would indirectly cause 10 innocent people to die … and much more complicated and macabre scenarios than that. And I can easily imagine someone who’s not used to such things thinking:

    “Eww, yuck! These people are questioning moral precepts that most people go through life without questioning, and trying to come up with so-called ‘rational theories’ of how to behave that can override our gut instincts. And you know who else did that? The Nazis!!”

    When put that way, I hope the fallacy is obvious. The above description might fit the Nazis, but it also fits Spinoza, John Stuart Mill, and others whom we credit with much of the progress of the human species. From which it follows that the essence of Nazism can’t just be asking the questions; it also has to involve coming up with really terrible answers to the questions.

    Though actually, I don’t think even that is enough. I doubt there’s been anyone, in the history of the world, who ever came to the conclusion that they should commit genocide as the result of an intellectual discussion about moral philosophy. The sorts of things that we know can lead to genocide are a Nietzschean belief in “Will” or “Power” trumping mere reason, and a delusional belief in the villainy and inhumanity of a despised minority. Neither of which are much in evidence in any of the commenters on this thread.

  107. Shmi Nux Says:

    My one other favourite blogger with the name starting with Scott A (there are 3 in total) suggested applying DW-NOMINATE to “calculating morality”:

      In the same way that ten thousand Congressional votes, suitably analyzed, naturally group people into two categories that look to our trained eyes like Left and Right, would ten thousand little life decisions, suitably analyzed, naturally group people into two categories that look to our trained eyes like Good and Bad?

    Not sure how it compares with eigenmorality or Modal Combat, but maybe looking at all three together would be illuminating.
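
    For what it’s worth, the mechanics of that suggestion are easy to sketch: build a people-by-decisions matrix and look at its leading principal component, the same kind of scaling DW-NOMINATE applies to roll-call votes. The data below is random placeholder data, so no Good/Bad axis will actually emerge; it only shows the shape of the computation.

      import numpy as np

      rng = np.random.default_rng(0)
      n_people, n_decisions = 200, 10_000
      # 0/1 matrix of "little life decisions" (random placeholder data)
      decisions = rng.integers(0, 2, size=(n_people, n_decisions)).astype(float)

      # Center each decision, then take the leading principal component via SVD
      centered = decisions - decisions.mean(axis=0)
      U, S, Vt = np.linalg.svd(centered, full_matrices=False)
      axis_scores = U[:, 0] * S[0]   # each person's position along the leading axis

      # The empirical question: with real data, would a histogram of axis_scores
      # split into two lumps that our eyes would label "Good" and "Bad"?
      print(axis_scores[:5])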

  108. fred Says:

    Scott #106

    Again, I’m not questioning making moral decisions based on various “obvious” metrics (number of human lives, cost, etc).

    You suggested a thought experiment involving a mad scientist forcing you to choose whom to kill between a world hero and a world villain. What would be the moral choice?

    I was going to suggest that a real moral choice would be more like:
    “I erase your memories and send you back to July 1945, and ask you to make a choice between nuking two Japanese cities, killing 200,000 men, women, and children… or proceeding with a conventional invasion and possibly losing a million US soldiers”

    But that’s not my point.
    I’m questioning the idea that morality/consciousness ought to be measurable just like the size of a foot.
    There’s nothing “academic” about using an IQ number to decide to send someone to the electric chair or not.

  109. fred Says:

      There’s nothing “academic” about using an IQ number to decide to send someone to the electric chair or not.

    Scientists may not be the ones making the decision about what IQ cutoff number is the right one for this or that situation, etc.
    But it’s their responsibility to make sure that the metrics are correct, valid, make sense, have names reflecting their meaning and not creating confusion.
    It’s very easy for such things to get misinterpreted and hijacked.

  110. Jay Says:

    Scott #update,

    There are some effects in social psychology you might use that seem robust to manipulation.

    If you learn something shameful about one group (say the Democrats) and you identify yourself as a Democrat, then you’ll tend to propagate the news to Democrats only. If you’re a Republican, or if the news makes Democrats proud, then you’ll tend to propagate the same news to everyone. This could help identify who shares your moral values.

    Another effect is that if you are, say, a CT theorist, then you’ll score other CT theorists lower when they are below average, and higher when they are above average. This could help identify with whom you share a field of expertise.

    Of course individuals vary, but if you’re systematically overestimated and underestimated by the same set of persons that over- and under-estimate Vazirani, then we could conclude that you and Vazirani share a field of expertise, irrespective of what others think about your level of expertise in that field.

    That said, the drawback I talked about in #12 seems to remain.

  111. John Baez Says:

    When I first started reading your post I thought you were going to tell us if a bunch of programs playing an iterated prisoner’s dilemma, mutating, and being selected according to their score tended to become better or worse as measured by one of your measures. This would be nice to know.

    And then you could write a book called The Evolution of Morality.

  112. Robert Says:

    I know I’m late to the party, but I had a few scattered thoughts. First, there is another area where this kind of thinking has been applied, though admittedly with a lot of shoddier mathematical attempts. Still, there are examples of similar linear-algebraic formulations: the concept of ranking “a good sports team as one that beats other good sports teams” is at the heart of, say, the Colley Matrix.

    Second, I’m not actually sure that PageRank is or was that important to Google at all. If anything, being a fast and efficient regex operator for the entire Internet seems to be an enormously important factor for a search engine. This is an assumption people are so blind to (regex matching being computationally easy of course makes it seem obvious) that conceiving of alternatives feels far-fetched, though one can give examples, and competitors to Google back in the day arguably really weren’t good at this in an end-user-friendly way.

    A regex algorithm site with roughly the other capabilities of present day Google, but zero attempt whatsoever to implement any sorting in the sense of PageRank or any similar recursion, would have vastly outcompeted competitors at the time Google got started. Likewise, a search website offering a hypothetical PageRankesque sorting algorithm that users would deem far superior to Google and whatever competitors like Bing have right now in every way would have almost zero market share without regex capabilities.

    It’s not clear at all what the importance of regex functionality is right now nor if that has varied significantly over time. Superiority to competitors in this manner along with other functions like advertising, and first mover advantages or network effects might fully explain Google’s (search engine) success and PageRank might not really matter at all.

    Third, I think it’s worth noting that large numbers of people interested in, and attempting expert study of, “existential risks” consider things such as asteroid strikes, nanotechnology, etc. to be threats as severe as or far more severe than global warming. If we cluster people by the trustworthiness of their views, then relative to the mainstream science community the end result would almost certainly be to isolate these people as nutcases. Do you implicitly accept this would be the “correct” result, or had you not thought about it?

  113. Scott Says:

    John #111: That’s an excellent question! What I can tell you right now is that, in past evolutionary IPD tournaments, TIT_FOR_TAT and variants of it have usually come to dominate the landscape. And I believe TIT_FOR_TAT would be ranked as very moral according to any eigenmorality metric of the sort described in my update (i.e., one that properly accounts for “who started it”).

    It would be nice to run an experiment that tests this directly, and that’s something we could absolutely do. But I think the first step would be to design an eigenmorality metric that I actually believe in (in particular, one that properly takes account of temporal order).
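
    For anyone who wants to try it, here is a rough sketch of the kind of experiment in question (toy payoffs, only three strategies, and replicator dynamics standing in for explicit mutation; not the post’s actual tournament code):

      ROUNDS = 50
      # The usual toy prisoner's-dilemma payoffs: (my payoff, their payoff)
      PAYOFF = {("C", "C"): (3, 3), ("C", "D"): (0, 5),
                ("D", "C"): (5, 0), ("D", "D"): (1, 1)}

      def tit_for_tat(my_hist, their_hist):
          return "C" if not their_hist else their_hist[-1]

      def always_defect(my_hist, their_hist):
          return "D"

      def always_cooperate(my_hist, their_hist):
          return "C"

      STRATEGIES = {"TIT_FOR_TAT": tit_for_tat,
                    "ALWAYS_DEFECT": always_defect,
                    "ALWAYS_COOPERATE": always_cooperate}

      def avg_payoff(f, g):
          """Average per-round payoff to strategy f in an iterated PD against g."""
          hist_f, hist_g, score_f = [], [], 0
          for _ in range(ROUNDS):
              a, b = f(hist_f, hist_g), g(hist_g, hist_f)
              score_f += PAYOFF[(a, b)][0]
              hist_f.append(a)
              hist_g.append(b)
          return score_f / ROUNDS

      # Replicator dynamics: strategies that score well against the current
      # population mix grow as a share of the next generation
      shares = {name: 1 / len(STRATEGIES) for name in STRATEGIES}
      for generation in range(30):
          fitness = {name: sum(shares[other] * avg_payoff(f, STRATEGIES[other])
                               for other in STRATEGIES)
                     for name, f in STRATEGIES.items()}
          total = sum(shares[n] * fitness[n] for n in STRATEGIES)
          shares = {n: shares[n] * fitness[n] / total for n in STRATEGIES}

      print(shares)  # TIT_FOR_TAT typically ends up with by far the largest share

    The remaining step, as John suggests, would be to feed the final population’s niceness/meanness graph into an eigenmorality metric and check whether the strategies that survive also score as moral.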

  114. Scott Says:

    Robert #112:

      large numbers of people interested in, and attempting expert study of, “existential risks” consider things such as asteroid strikes, nanotechnology, etc. to be threats as severe as or far more severe than global warming. If we cluster people by the trustworthiness of their views, then relative to the mainstream science community the end result would almost certainly be to isolate these people as nutcases. Do you implicitly accept this would be the “correct” result, or had you not thought about it?

    I explicitly accept this as the “correct” result. 😀

  115. fred Says:

    By including time, does that mean that the equilibrium can now be dynamic? E.g., could oscillations or vortices happen?

  116. Scott Says:

    fred #115: Well, the setup I had in mind was that you let agents interact for a while, and then, “at the end of time,” you try to judge how moral the agents were to each other (like God passing final judgment… 🙂 ). On the other hand, if you wanted to compute eigenmorality dynamically, then of course it could change over time as agents change their behavior. You could certainly get oscillations, though I don’t know what you mean by “vortices” in this context.

  117. wolfgang Says:

    @Scott #116

    >> how moral the agents were to each other

    I think you will not get around the problem that you have to parse some content (somebody mentioned regex above already).

    Take a simple example: you want to find “good physicists” on the graph of all blogs. As we said before, there are many eigenvectors of this graph, and if you want to distinguish physicists from, e.g., Nazis, you will have to parse their blogs. This could be quite simple; looking for Higgs, relativity, quantum, etc. should be enough.

    But now you want to find just the “good people” on this graph.
    What words are you now looking for?
    I think this is the true problem for your approach.

  118. Scott Says:

    wolfgang #117: No, like Ilya Shpitser (#56, #68), I think you’ve conflated eigenmorality with eigendemocracy (which is partly my bad, for discussing them both in the same post). With eigenmorality, I was imagining that you’re simply given as input a graph encoding how nice or mean each person has been to each other (ideally also with the temporal information of “who started it”). I don’t know exactly how you’d get such a graph (outside of idealized settings like an Iterated Prisoners’ Dilemma tournament), but I was interested in the problem of how you’d identify the moral agents even if given it.

  119. wolfgang Says:

    >> I don’t know exactly how you’d get such a graph

    OK, but I think this is very much related to the problem I mentioned.
    I think the graph of blog links (or twitter if you prefer) is a concrete starting point, so I’ll use it once more.

    The search for Higgs, relativity etc., mentioned above, is easy only because a lot of knowledge about physics goes into it.

    But how would you parse for “how nice or mean” ?

    You could ask people to tag their links as “nice” or “mean” but this would not help you, because of imposters, spam etc.
    You would have to parse the blog posts or tweets – but what exactly would you be looking for?

    But I have one positive input to Eigenmorality also …
    Perhaps truly “good” people are “good” to everybody, independent of what group they belong to. So perhaps you want to search for cooperating people, who are cooperating “across the board” – i.e. those who are *not* associated with eigenvectors of your graph.

    In other words, people like you, who are willing to engage and discuss with pretty much anybody (except L.M.) …

  120. Scott Says:

    wolfgang #119:

      Perhaps truly “good” people are “good” to everybody, independent of what group they belong to. So perhaps you want to search for cooperating people, who are cooperating “across the board” – i.e. those who are *not* associated with eigenvectors of your graph.

    Well, that’s basically eigenjesus morality, except that it goes even further, if you want to say that how “good” someone is doesn’t even matter at all in deciding how important it is to cooperate with that person (i.e., that cooperating with Charles Manson is just as good as cooperating with Gandhi).

      In other words, people like you, who are willing to engage and discuss with pretty much anybody (except L.M.) …

    Thanks! But I’ve gotta tell you, these days, when I hear about yet another person who’s had a spat with L.M., even before learning anything else I generally increase my moral assessment of the other person… 🙂

  121. wolfgang Says:

    @Scott

    >> cooperating with Charles Manson

    “cooperation” is the wrong term – you are not cooperating with D-Wave, but you are willing to have a discussion with them.

    In a practical example of my proposal, one would look for “good blogs”, i.e. blogs with a high PageRank (i.e. they are experts in something) whose comment sections are, at the same time, diverse and drawn from many different eigenvectors.
    We would still need some parsing imho to discern if there is some sort of “honest engagement” in the comments, but perhaps this would be doable …

  122. Ad Nausica Says:

    This is very interesting, but several things occurred to me:

    (1) This seems very similar to the block chain innovation used in bitcoin.

    (2) One potential flaw is that the metrics (including eigenmoses and eigenjesus) seem to be external to the participants, if I understand. That is, they are absolute evaluations. But we humans are internal to the moral judgment space. It might be interesting to modify the evaluation of moral behaviour when viewed from each individual’s point of view, starting from the point that they judge themselves to be moral and hence anyone who cooperates with them is moral.

    (3) With the modification of #2, this might better describe eigentribalism.

    (4) It’s not clear to me that this method takes into account complex layering as we see socially. We don’t treat everybody simply based on cooperative behaviour but also on credentials; a climate scientist tends to hold more weight than a university dropout talk show host, not because of their own behaviours but because of their credentials and experience. But credentials require a trust in those certifying them (e.g., universities, accrediting organizations) and in process (e.g., scientific process, peer review), and trust in process is different from trust in individual. I also don’t see that reason fits into this mix, that we may trust or distrust somebody solely based on their ability to reason their propositions or lack thereof (e.g., obvious rhetoric and fallacies), completely independent from knowing anything about their trust relationships to other people.

    As far as I can tell, the eigenmorality is based solely on relationship information about who cooperates with whom and has no independent inputs from which to evaluate an individual without such relationship information. Or am I wrong about that?

  123. Mark Neyer Says:

    My friend and I had started working on an implementation of a system along the lines of what Ad Nausica describes: “It might be interesting to modify the evaluation of moral behaviour when viewed from each individual’s point of view, starting from the point that they judge themselves to be moral and hence anyone who cooperates with them is moral.”

    So the network gives each person their own view of ‘who is trustworthy’ and who is not. We coupled that with the concept of ‘I trust this person in that field’.

    We are calling it ‘dewdrop’ in reference to http://en.wikipedia.org/wiki/Indra's_net – the idea that each drop (identity) holds a reflection of all the other drops (statements about those other identities) is what runs through it.

  124. Mark Neyer Says:

    Here is the implementation we started. We could really use some help on this project – so far, we have the ability to make statements about ‘which user I trust’. The backend lets you submit any string as a statement.

    We have a Chrome plugin that lets you make statements about people on Facebook, but that’s it.

    We’d like to extend it so you can make arbitrary statements about users (that part is there) and have those statements connected by a grammar.

    Users could query the network to find out who it thinks _they_ would trust, based upon the people they already trust.

    https://github.com/neyer/dewDrop

  125. Mark Neyer Says:

    Ad Nausica: We planned to add the ability for users to say things like:

    – I trust people with a Ph.D. from an accredited college
    – I trust people with n papers published
    – I do not trust college X,Y,Z

    _Everything_ can be a node on this network. A university might publish a statement ‘we grant a PhD to public key x’, and an accrediting institution might say ‘we don’t recognize university Y as being legitimate’.

    You start with a node’s statements about what it trusts. Queries submitted by that node are all evaluated by traveling outwards from trusted nodes and combining statements together.

    The great thing about this is that even the method for combining statements (transitive trust algorithms) can be identities which are trusted or untrusted!

    If Mark Neyer publishes a transitive trust algorithm and lots of people say ‘I trust Mark Neyer to publish transitive trust algorithms’, then people might switch. If my algorithm screws some people over, then they’ll distrust me, and my algorithm loses credibility in the eyes of people who trust those who don’t trust me.
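
    As a toy illustration of the kind of transitive-trust query described above (not the actual dewDrop code; the identities, the weights, and the max-of-products combining rule are all made up, and as noted the combining rule itself could be swapped out):

      def trust_in(asker, target, trust_edges, depth=3):
          """Best multiplicative trust in `target`, travelling outward from `asker`."""
          if asker == target:
              return 1.0
          if depth == 0:
              return 0.0
          best = 0.0
          for (src, dst), weight in trust_edges.items():
              if src == asker:
                  best = max(best, weight * trust_in(dst, target, trust_edges, depth - 1))
          return best

      # Made-up identities and weights, just to show the shape of a query
      edges = {("me", "alice"): 0.9, ("alice", "bob"): 0.8,
               ("me", "carol"): 0.4, ("carol", "bob"): 0.9}
      print(trust_in("me", "bob", edges))   # 0.72 via alice, beating 0.36 via carol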

  126. Freddy Says:

    My Rabbi has spent the last year or so emphasizing:
    1) Intuitive morality is necessary.
    2) But intuitive morality is not enough. Obedience to God is right.
    3) There are several kinds of Jewish laws.
    4) Some, like the prohibition of murder, are intelligible to humans.
    5) Others, like the prohibition on mixing wool and linen, are intelligible only to God.

    My real beef with Aaronson is that his whole discussion of “Morality” leaves God out entirely. I guess that might be utilitarianism?

    No, I don’t really expect the readers of this blog to run out and convert to Judaism. But I might expect a blog named “Shtetl-Optimized” to have some concept of Jewish morality.

  127. Simplify Says:

    It seems that this ends up simply computing something like a majority vote when used on things like trustworthiness, unless you choose a starting point that’s biased in some direction.

  128. Scott Says:

    Freddy #126: A major issue is that the moral strictures that you call “intelligible only to God” are extremely variable across religions and cultures. So for example, mixing wool and linen is forbidden to (Orthodox) Jews, but even according to Judaism is not forbidden to Gentiles, who are bound only by the seven Noahide Laws (no murdering, no eating part of an animal while the animal is still alive (!), and five others). Another example of a culturally-specific rule, obeyed by most Orthodox Jews but not others—one that, interestingly, you didn’t follow in your comment—is to write “G-d” rather than God to avoid taking His name in vain. 😉

    Now, if a rule is culturally specific and “intelligible only to God,” then by definition it’s not going to be open to rational examination, which is the only tool I have available in a blog post that’s trying to offer arguments and ideas that could in principle recommend themselves to all readers, rather than only those of a particular religion. So even supposing that I thought such culturally-specific rules had moral force (rather than just cultural importance), they still wouldn’t be things that I’d see how to address within the scope of this post.

  129. TheOtherHobbes Says:

    I think you need to close the loop, and include a measure of proven predictive power within the concept of trustworthiness.

    If someone says ‘Don’t allow sex education because it leads to teenage pregnancy’, when independent reality-based research proves that it actually lowers rates of teenage pregnancy, it should be clear that that person’s opinions and policies – and quite possibly the person themselves – are not trustworthy.

    Likewise for almost any hot-button issue you care to name, including climate change.

    If there’s ambiguity, you can assemble a sliding trustworthiness index that shows trends that develop as more data comes in.

    The big political issue of the day is that politics is about rhetoric and persuasion, not about selecting wise policy based on evidence collection and pattern recognition.

    As long as this continues we will continue to have bad policy.

    Of course, makers of bad policy understand cause and effect, at least to some extent. They simply lie about their real aims to the public.

    This becomes much harder to do if aims are stated explicitly, and moves towards them are evidence-tested.

  130. Tyler Says:

    fred #67:

    Avoiding assignment of mathematical formulas to human elements isn’t going to allow us to avoid evils. Similarly, trying to mathematically define these things is not what causes evils. I think Scott said it well in #106 when he said that it’s not asking the questions that is the problem, it’s how you answer them. Even if someone tries to justify their evil actions by appealing to math/science, that doesn’t mean thinking about human elements as mathematical/scientific formulas is the problem (especially if the math/science was only an external justification and they would have done the evil actions regardless of the math/science supposedly behind them). Basically, don’t blame math for humans’ inability to apply it well.

    I think having mathematically explicit views of morality does not lead to evil, but rather it *highlights what about a view is flawed* when a view leads to something evil (or when it logically violates the paradigm cases one takes as assumptions). When a view is found to violate such a case, you can follow the math right back to the source and identify why such a view led to such an outcome. That’s what I find exciting about this way of thinking.

    It seems to me people worry about defining human elements mathematically because of the strictness. As in, if a mathematical formula seemed really good for most cases, but then on some edge case someone’s intuition told them one thing while their mathematical formula told them another, then it is scary, because they might forgo intuition (the “correct” answer) in favor of this unmerciful, inhuman math/science. Math doesn’t have human compassion, so how could it know that in *this* situation it is dictating something terrible? Well, I say the solution is not to be scared of giving mathematical formulas to human elements, but instead to *figure out why the formula led to something incorrect* and *modify the formula* to include this apparently tricky case. The alternative is to reconsider why you think the correct answer is what you think it is, and I guess that’s the solution that scares you, because people could end up slaves to a mathematical formula that makes them change their minds about obviously terrible things. But as long as one is willing to go in both directions, changing their mind OR changing the formula, I think we’re safe, or at least as safe as we were without the formula at all.

  131. Scott Says:

    Tyler #130: That’s incredibly well-said! Wish I said it myself.

  132. EigenNoah Says:

    I foresee a great flood coming in the future, perhaps from melting polar ice caps. While the others debate I will build (by build I mean tell a grad student to build) a great boat and take 2 of every animal to preserve life, and because they can’t build their own boats. People are capable of building boats, thus I won’t take any of them (I won’t be the only one left, surely others will be wise enough to build boats in the face of obvious imminent danger). When the flood ends (our boats just float & don’t produce much CO2 so the ice caps come back), our civilization will be left only with boat builders and flood foreseers, no eigendemocracy or linear algebra necessary.

    😉

  133. Peter Donis Says:

    Scott:
      reasonable people mostly agree about what needs to be done—weaning ourselves off fossil fuels (especially the dirtier ones), switching to solar, wind, and nuclear, planting forests and stopping deforestation, etc.

    Yes, but the problem is that reasonable people do *not* agree about *why* these things need to be done. For example, you think they need to be done to fix climate change; I think they need to be done for national security, because we can’t afford to have our foreign policy corrupted by the need for oil. (As for deforestation, I think we need to stop it because it reduces the complexity of the overall biosphere, which I think is a bad thing regardless of whether or not it contributes to climate change.)

    And because of the way debates get framed in a representative democracy like ours, what’s important is not what policy you advocate, but *why* you advocate it. To climate change alarmists, the fact that I favor weaning ourselves off fossil fuels for any other reason than to fix climate change makes me evil. And because they have framed the debate that way, I can’t *afford* to agree with them, because it will be taken, not as agreement on the one particular point that we ought to wean ourselves off of fossil fuels, but on their whole public policy agenda. So reasonable people have no choice but to proclaim disagreement, even if they happen to agree on a particular policy choice, because the people controlling the debate won’t let it be about just that one particular policy choice.

  134. fred Says:

    The idea of morality ought to be intimately linked to the forces of natural selection though.

    At the species level, could morality be the expression of the complex tensions at play when various subgroups within a same species are about to take divergent evolutionary paths?
    On one hand, there is a strong urge to stick together (persistence of co-operation), i.e. to keep cross-breeding to improve the chances of survival through mixing of the gene pool.
    On the other hand, there is often a strong urge to create independent subgroups (end of co-operation), taking separate evolutionary paths, eventually reaching a point of no return where cross-breeding is no longer an option, effectively creating two separate species (at which point the other species becomes a resource).
    At the clan/family/individual level, morality is expressed through similar tensions: wolves have more to gain by not eating each other (they have strong internal inhibitions, primordial forms of morality). A mother lion doesn’t eat her own babies, etc. Cells in my body don’t feed on each other. Etc.

  135. Tyler Says:

    AdamT #84:

    So to morally judge an IPD bot B you could perform 2 simulations: one of the environment with B and one of the environment without B. And you could then measure the overall cooperation rate of each tournament and compare the two, and assign a value to B based on the ratio. I see two variations: one where B’s actions are included and one where they are not. The first is basically B’s effect on the morality of the world (which includes himself), and the second is B’s effect on the morality of others around him. These would be simple enough to code up, so I think I might!
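
    Concretely, here is a self-contained sketch of that first variation (B’s own moves included), with placeholder strategies, placeholder function names, and an arbitrary round count:

      def cooperation_rate(bots, rounds=100):
          """Fraction of all moves that are cooperations, over a round-robin IPD."""
          coop = total = 0
          for i in range(len(bots)):
              for j in range(i + 1, len(bots)):
                  hist_i, hist_j = [], []
                  for _ in range(rounds):
                      a = bots[i](hist_i, hist_j)
                      b = bots[j](hist_j, hist_i)
                      hist_i.append(a)
                      hist_j.append(b)
                      coop += (a == "C") + (b == "C")
                      total += 2
          return coop / total

      def presence_score(b, environment, rounds=100):
          """Ratio > 1 means B's presence raises the system's overall cooperation."""
          return (cooperation_rate(environment + [b], rounds)
                  / cooperation_rate(environment, rounds))

      # Placeholder strategies (functions of the two histories), just to exercise the code
      nice = lambda my, theirs: "C"
      grim = lambda my, theirs: "D" if "D" in theirs else "C"
      defector = lambda my, theirs: "D"
      print(presence_score(nice, [grim, grim, defector]))

    The second variation would simply exclude B’s own moves from the with-B tally; and to capture the indirect effects discussed here, the bots would need memories that persist across opponents rather than independent pairwise histories.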

    Scott #88:

    If B is taken to be the terrorist in your situation, are you imagining the variation where B’s actions do not count, and only his effect on others’ actions does? Then I agree that the terrorist might be given a good value, and thus it seems silly.

    But if you *do* include B’s actions in the calculation, then the proposed morality metric takes into account the terrorist’s awful direct effect on people as well as the terrorist’s indirect effect on people’s actions with one another. I imagine that the indirect effects would NOT end up outweighing the direct terror, and thus the terrorist would appropriately be given a bad value. The only way the terrorist could be given a good value is if the indirect effects on the people he terrorizes end up outweighing the terror he inflicts, and I don’t see that as such an unreasonable metric.

  136. Scott Says:

    Peter #133:

      To climate change alarmists, the fact that I favor weaning ourselves off fossil fuels for any other reason than to fix climate change makes me evil.

    Uh, which “alarmists” are you talking about, exactly? I guess I’m not an alarmist, since I’ll gladly take your agreement about what needs to be done and your disagreement about why, any hour of the day and any day of the week. Indeed, I agree that foreign policy considerations also provide compelling reasons for weaning ourselves off fossil fuels. Even more compelling than not sending the entire planet off a cliff of runaway warming, when the melting glaciers release their methane? I don’t feel that way, but if you do, you can still be my ally, no problem at all…

    So I’m left wondering whether the people who agree with you about the bottom line, but consider you “evil” because you disagree about the relative importance of the arguments in favor of it, exist mostly in your imagination.

  137. fred Says:

    Tyler #130
    Well said indeed. Thanks.

  138. Tyler Says:

    AdamT #84:

    I actually just realized an interesting distinction. There is the effect that B has on the total *morality* of the environment (sum of the morality scores of all the bots, or the cooperation rates, or those might be the same thing) and the total *utility* of the environment (sum of the objective scores of all the bots). These metrics would probably be more interesting when run on an environment where the pair interactions were not independent (i.e. to model situations like the one where you are mean to me, so I take that as a lesson not to trust anyone ever again and that of course affects my interactions with other people even if you and I never meet again).

    And then, if we use this effect that B has on the entire environment as the actual morality score being summed in the description above, we get something recursive that I find very confusing and will let others ponder for now (and that I will think about later 🙂 ).

  139. Scott Says:

    Tyler #135: Yes, the implicit assumption in that thought experiment is that the indirect positive effect of the terrorist attack, in making hundreds of millions of parents hug their children more, outweighs—massively outweighs, let’s suppose—the direct negative effect of the attack. That’s where you get an interesting moral conundrum. Suppose it could be argued persuasively that terrorist attacks did have such an overall positive effect. Would you really then want to give a green light to terrorism?

    I wouldn’t. And if you asked me why, I’d say something like: society as a whole will be unable to function if we open the door to murdering people “for the greater good,” even if those murders really are “for the greater good” in this or that particular case. Sometimes upholding a general rule takes precedence over a straightforward application of utilitarianism.

  140. fred Says:

    Scott #139

    A curious point is that if you replace the terrorist with something external to the species, like a pack of wolves or a natural disaster (e.g. the end of a lot of bad blood between Greece and Turkey over Macedonia after the great earthquake that hit Turkey), you get a similar big boost to the fabric of society.

    Thinking about war vs. terrorism (both involving a species hurting itself, at different levels), it seems that when it comes to war, morality at level N+1 is invoked to break morality at level N (when fighting for your country, you’re given a free pass to kill other individuals in your species).
    When it comes to terrorism, it seems that morality at level N (the agenda of a minority) is seen as valid grounds to break morality at level N+1. Like an individual taking it into his own hands to declare war on a nation as a political statement (e.g. the Norway 2011 massacre).
    War is top-down, terrorism is bottom-up.

  141. Tyler Says:

    Scott #139

    I agree with that.
    I should clarify that when I say “I don’t see that as such an unreasonable metric,” I mean that, while I wouldn’t hold it as my own view or base my own actions on it, I find it reasonable/interesting enough to simulate in an IPD tournament.

  142. AdamT Says:

    Scott #139,

    That is exactly what wars usually are. The people that launch them and take part in them almost always justify their actions as “for the greater good.” Look at the historical arguments for Hiroshima and Nagasaki and you will see exactly that. The problem with the real world is that we have no way to test the counterfactual.

    I suspect it will be very difficult for someone to program a bot that will *reliably* demonstrate an altruistic terrorist. This is why we should be very wary of such violence as a means to good in the real world. Good luck, Tyler!

  143. Travis Hance Says:

    Having had the same idea to compute morality using eigenvectors a few years ago, I was really happy to read this post.

  144. Peter Donis Says:

    @Scott #136:
      which “alarmists” are you talking about, exactly?

    Um, everybody who is worried about climate change but never even mentions the foreign policy reasons for reducing our fossil fuel consumption? You say you agree that those are also good reasons, but would you have mentioned them if I hadn’t brought it up?

    My point is that the whole debate is being framed as “We should reduce our fossil fuel consumption because otherwise we will have a climate catastrophe“, which immediately alienates all the people who don’t agree with the qualifier, instead of just “Who agrees that we should reduce our fossil fuel consumption? Oh, just about everybody? Good, then let’s talk about how best to do that.”

      I’ll gladly take your agreement about what needs to be done and your disagreement about why, any hour of the day and any day of the week.

    We may disagree at the next stage of “what needs to be done”, since I don’t think cap and trade is a good idea. But you did mention nuclear, which is good; a lot of people who are concerned about climate change refuse to even think about that.

  145. Elliot Temple Says:

    Scott #47:

    > No system for aggregating preferences whatsoever—neither direct democracy, nor representative democracy, nor eigendemocracy, nor anything else—can possibly deal with the “Nazi Germany problem,” wherein basically an entire society’s value system becomes inverted to the point where evil is good and good evil.

    In _The Beginning of Infinity_, DD explains:

    HERMES: Imagine a specific case, for the sake of argument. Suppose that they were somehow firmly persuaded that thieving is a high virtue from which many practical benefits flow, and that they abolished all laws forbidding it. What would happen?
    SOCRATES: Everyone would start thieving. Very soon those who were best at thieving (and at living among thieves) would become the wealthiest citizens. But most people would no longer be secure in their property (even most thieves), and all the farmers and artisans and traders would soon find it impossible to continue to produce anything worth stealing. So disaster and starvation would follow, while the promised benefits would not, and they would all realize that they had been mistaken.
    HERMES: Would they? Let me remind you again of the fallibility of human nature, Socrates. Given that they were firmly persuaded that thievery was beneficial, wouldn’t their first reaction to those setbacks be that there was not enough thievery going on? Wouldn’t they enact laws to encourage it still further?
    SOCRATES: Alas, yes – at first. Yet, no matter how firmly they were persuaded, these setbacks would be problems in their lives, which they would want to solve. A few among them would eventually begin to suspect that increased thievery might not be the solution after all. So they would think about it more. They would have been convinced of the benefits of thievery by some explanation or other. Now they would try to explain why the supposed solution didn’t seem to be working. Eventually they would find an explanation that seemed better. So gradually they would persuade others of that – and so on until a majority again opposed thievery.
    HERMES: Aha! So salvation would come about through persuasion.
    SOCRATES: If you like. Thought, explanation and persuasion. And now they would understand better why thievery is harmful, through their new explanations.
    HERMES: By the way, the little story we have just imagined is exactly how Athens really does look, from my point of view.

    Society already is massively wrong about many very very important thievery-equivalent things. Morally inverted, or whatever you want to call it. Good systems of organizing people, dealing with ideas, or whatever else have to be able to deal with massive entrenched error and irrationality. When you give up on that specific case – which is the real world – you invent dangerous systems which don’t worry enough about error correction.

    What you specifically wrote about was a “system for aggregating preferences”. You may be right that a system *of that type* can’t solve the problem, I haven’t considered that carefully. But there are other things to be considered instead, rather than accepting this unacceptable weakness.

  146. aviti Says:

    I have to read this (can’t wait). I want to make head or tail of the eigenmoses/eigenjesus things. Also, this delays my starting to read the QCSD book!

  147. qwertyuio Says:

    Wasn’t Gandhi a pedophile?

  148. Scott Says:

    qwertyuio #147: I don’t know about pedophile, but he was certainly a weirdo, who asked teenage girls to sleep naked with him in order to test his commitment to celibacy. So, OK then, replace Gandhi with any other favorite mega-altruist in my example if you prefer. 🙂

  149. Michael Nielsen Says:

    Rob Spekkens’ viewgraphs on using PageRank in the electoral process:

    http://wici.ca/new/2010/10/on-ranking-merit-applying-the-page-rank-algorithm-to-the-electoral-process-robert-spekkens/

  150. András Says:

    Scott, as you and Rahul both mention, an eigenvector-based analysis of a network simply summarises what is already in the network, and if the network is “wrong” to begin with, then the summary simply reflects that. I agree that the eigenvector approach is definitely worthwhile, for studying morality, democracy, or any other phenomenon where a network approach can be applied. Given the proliferation of network science over the last 15 years, this is not an unusual position to take. However, I’d like to emphasize Elliot Temple in #145 and Rahul’s comment in #48, both related to “fixing” the network when it is “wrong”.

    Doris Lessing in Prisons We Choose To Live Inside strongly argued against complacency in believing that our present eigenmorality network is “right”. I therefore cannot agree with your apparent contention that somehow we can “know” that the state of a particular network is not “wrong”, or that a “core of reasonableness” can be identified reliably. Decisions have been made in accordance with the networks in existence at various times of history, and yet these decisions are reprehensible to us today, so those networks seem likely to have been “wrong”. How can we therefore say that our particular network, right now, is “right”? The morality network in the Anglo-Saxon part of the world has within living memory changed to (predominantly) no longer abhor women having the vote, to abhor treating people with dark skin colour worse than people with light skin colour, and to no longer abhor marriage between two people of the same sex. It seems entirely possible this eigenmorality network will shift in future to abhor the rearing and killing of animals with significant amounts of consciousness, to no longer abhor long term relationships involving more than two people, to condone ill-treatment of people who cannot speak fluent Mandarin, or to abhor refusing to formally pledge loyalty on a daily basis to the clique currently in power.

    As Rahul stated in #48, having a stark and easily-available summary of the current state of the relevant network is not necessarily a good thing. For instance, some inefficiency in determining the eigenmorality provides the time for seeds of change to germinate. Intelligence agencies of various countries have been documented to be investigating network approaches to nearly every form of data (for instance, check the jobs advertised for intelligence analysts or academic research projects funded by these agencies). I have no doubt that these projects have eigenvector approaches in mind. Yet efficiently locating the core of reasonableness (that we should perhaps use to guide our own moral compass) seems to be no different from identifying the people who are likely to resist majority opinion that is unreasonable (and therefore are natural targets to be neutralised before they pose a threat to those in power).

  151. Scott Says:

    Michael #149: Thanks so much for the link to Rob Spekkens’s slides! This does indeed look like an idea that many, many people have had versions of independently, and I hope my post will do a small part to increase the world’s eigenawareness of it.

  152. pschwede Says:

    I would like to see the *morals* that evolved in the IPD tournaments tested against ground-truth *ethics*. Maybe you could do that with a model of “cost of persuasion”, where denying the ground truth costs more than agreeing with it. I bet EATHERLY wouldn’t win that either.

    I’m not sure, but I think IPD tournaments – as you ran them – only show what strategy could be best for a turncoat. A turncoat profits by joining the most established party in a population, ignoring ethics entirely (often with a fatal destiny). What I’d really like to see shown instead is that ethical behaviour is best both in goodness and in persuasiveness.

  153. Harrison Ainsworth Says:

    On the question of how to ‘anchor’ such a system — the way PageRank is based on “using structure already inherent in the data and not created for rating purposes” (Rahul #73, Scott #92).

    I have been thinking for a while about doing it as an economic system: the anchoring is the giving/receiving of goods and services. Two immediate points: 1, these are real activities with solid value, covering a great range, that people do anyway; 2, the fact that every transaction is verified by *both* parties seems to create a basic natural resistance to deception (etc.).

    Is this like eigen-morality/democracy? Yes: an economic system is a network/graph: activities/products have multiple inputs and outputs, and all feeding-back. And the aim is broadly to allocate most resources to those who themselves give most back to the community. It is a cooperative system.

    But one realises it is different to a normal market system: you do not need money, and each ‘trade’ is one-way, i.e. no payment is needed in exchange for anything — all that needs to happen is the network as a whole is in some kind of equilibrium. Which is what these eigen-… thought experiments are about: you derive a rank-measure from the graph which is used to steer or direct uses of it.

    My very casual (and otherwise questionable) notes are here: http://www.hxa.name/notes/note-hxa7241-20140119T1109Z.html

  154. Michael Nielsen Says:

    Scott:

    It’s worth chatting to Rob about. I don’t know if he’s written anything up, but those slides were for an introductory talk to a non-technical audience, and I know he’s taken the idea much further than you can see in the slides. Some of his thinking is in similar directions to your post; much of it is orthogonal.

  155. JimV Says:

    This post had me laughing and thinking. Sweet eigenjesus, what a great post.

  156. Philip White Says:

    I didn’t bother to read the entire article or many of the comments, but I think it’s important to point out that this concept is flawed.

    PageRank doesn’t work as a morality metric; PageRank is a measure of popularity.

    In particular, it’s important to think about what a “link” is in this proposed eigenmorality scheme. If I, a moral entity, choose to cooperate with you, that is not something that makes you more moral; rather, it makes *me* more moral.

    In PageRank, a vote/link from one node to another indicates the voting node is sharing some of its rank with the node it votes for, not the other way around. Thus, if I cooperate with you or acknowledge that I owe you a favor, I am merely adding to your power, not to my own morality.

    Surely you would not suggest that merely having many moral people cooperate with you makes you more moral.

    Further, the notion that morality is about whom you cooperate with is absurd. Morality is largely about behavior–as Batman would say, “it’s not who you…[cooperate with], but what you do that defines you[r morality].”

  157. John Merryman Says:

    Or we can approach it bottom up, as it actually functions: good is attraction to the beneficial, and bad is repulsion of the detrimental. For instance, what is good for the fox is bad for the chicken. But then consider that what is good on a local level, such as ‘go forth and multiply,’ is bad on a global level, such as when the bacteria hit the edge of the petri dish. And you can have the opposite effect, where serious bad in the immediate term provides a longer-term benefit, such as war or disease reducing the population, leaving room for further expansion.
    This creates feedback loops of expansion and contraction. Nature manifests this as individuals growing up/expanding and dying as the species propagates.
    Now in society we have situations where people do understand that the long term is bad, but continue with it because it is necessary or beneficial in the short term. Such as the current stock market bubble, where sitting on the sidelines means you lose relative value, or climate emissions, where many of those who do realize the long-term dangers continue to perpetuate them, since doing otherwise would be seriously detrimental to their participation in society.
    So what you have are bubbles that expand and contract. The alternative is just a flatline, with no positive or negative.
    If there is a primary unnecessary bad operating in the world today, it is that our monetary medium has been configured as a value-extraction device and not just a medium of exchange serving as a public utility. This serves to burn through resources at a much faster rate than would occur otherwise. Yet it is highly beneficial to those riding this wave, and so they will do everything to perpetuate it.

  158. Brennan Says:

    Thanks for starting a neat discussion.

    I wonder if an important issue missing from the solution-space discussion is stability. I think Tyler(?) back in ~comment 71 noted it, but this gets to some core objections. Is the solution vector/morality stable over time in its implied decisions? That is the core of this sort of utilitarian calculus.

    On a different side topic, don’t you already have a ‘trust network’ prototyped in scientific citations? You can answer many questions about exclusivity, fairness, groupthink, and flexibility by looking at the evolution of these sort of citation networks.

    Probably similar and more interesting for blogrolls or twitter feeds. There, you get the complex layering of different interests. Hmm. Is the twitter graph accessible? That would make a great student project.

  159. Tyler Says:

    Philip #156

    “If I, a moral entity, choose to cooperate with you, that is not something that makes you more moral; rather, it makes *me* more moral”

    Then perhaps you should have read the entire article/paper, because what you suggest is already precisely the case.

    In PageRank, A linking to B increases B’s PageRank. And in eigenmorality, A cooperating with B increases A’s eigenmorality.
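    To make the direction concrete, here is a toy power-iteration sketch (my own simplification here, not the code from my paper: nonnegative 0/1 entries only, no damping, no defections). The two notions differ only in whether you iterate with the matrix or with its transpose:

      import numpy as np

      # Toy matrix: A[i, j] = 1 means "i links to j" (PageRank reading)
      # or "i cooperated with j" (eigenmorality reading).
      A = np.array([[0., 1., 1.],
                    [1., 0., 0.],
                    [0., 1., 0.]])

      def principal_eigvec(M, iters=100):
          # Plain power iteration; returns a nonnegative vector summing to 1.
          v = np.ones(M.shape[0]) / M.shape[0]
          for _ in range(iters):
              v = M @ v
              v /= v.sum()
          return v

      # PageRank-style: your score flows in from whoever points AT you,
      # so we iterate with the transpose (incoming edges).
      pagerank_like = principal_eigvec(A.T)

      # Eigenmorality-style: your score depends on whom YOU cooperate with,
      # so we iterate with the matrix as-is (outgoing edges).
      eigenmorality_like = principal_eigvec(A)

      print(pagerank_like, eigenmorality_like)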
    ——-

    “Surely you would not suggest that merely having many moral people cooperate with you makes you more moral.”

    Correct. Neither Scott nor I would suggest this, though I’ll let Scott speak for himself.
    ——-

    ‘Further, the notion that morality is about whom you cooperate with is absurd. Morality is largely about behavior–as Batman would say, “it’s not who you…[cooperate with], but what you do that defines you[r morality].”’

    The word “cooperate” is an IPD word, and abstracts out what the actual interactions between players are. “Cooperating” or “defecting” IS “what you do” in this model. Those words could be replaced by “be nice” and “be mean”.
    ——-

    There may be many flaws in eigenmorality and even modeling humans with Iterated Prisoner’s Dilemmas in general, but they are not the ones you suggest.

  160. Ilya Shpitser Says:

    Scott, sorry, still confused by the last update.

    Are you claiming that for any (most?) stories that could be made to fit into the iterated PD setting, you will be able to correctly classify who the good guy was just based on analyzing the iterated PD history, and nothing else? Is this a fair assessment of your claim?

    If so, what would falsify this, a class of stories where this does not work?

  161. Vít Tuček Says:

    A great place for experiments would be stackexchange network. That is assuming they actually save the id of each upvote/downvote.

  162. Philip White Says:

    Tyler #159: Ok, I understand the point you’ve made, although I still haven’t read the whole paper (I skimmed the intro though).

    It still doesn’t make sense.

    I will accept (for the sake of argument, even though it is unusual) the claim that in this version, a vote/link from one person to another improves the rank/morality of the voter, not the person being voted for.

    My concern is with the use of the term “morality” to describe the notion that “being mean” to a mean person is morally helpful. I am not sure if it is meant that “being mean” means “withholding kindness from” or “actively hurting” the other person; however, this again sounds like a metric of popularity or power rather than morality.

    I think a better term than “moral” would be “politically savvy.” Refusal to cooperate with a person who is not moral has nothing to do with morality; in fact, often cooperation with an immoral person is a difficult but immoral task that one must approach with a certain sense of gravitas. For example, the Allied powers in WW2 cooperated with Stalin to defeat Hitler; Stalin wasn’t moral, but this is how we won WW2. Failure to work with Stalin would have led to a greater loss of life and possible defeat, and would have been extremely stupid/immoral, as it would have left France, Britain, and others without the help they needed to defeat the Nazis.

    Returning to my point that a proper term is “politically savvy”: Consider a “very politically savvy person” to be someone who cooperates with other politically savvy people and avoids cooperation with the foolish. This Machiavellian sense of morality might lead one to appear moral and rise to the top of some political organization, but it has nothing to do with the sort of morality that any Christian, Jewish, Greek, or secular philosopher would talk about.

    It’s a nice idea for a paper–it actually reminds me a lot of a startup idea I had in 2011 pertaining to a gift culture/IOU-driven website–but the author (and Scott) are not correct to state that PageRank has anything whatsoever to do with morality.

  163. Philip White Says:

    (I meant to say, “difficult but moral” task above, not “difficult but immoral.”)

  164. grickm Says:

    In reading the analysis of morality and cooperation, I did not see references to the notion that we are all imperfect, therefore may be moral in some situations but not others. There is also the idea that an otherwise moral person may act based on misinformation or disinformation, thus making a choice that would be interpreted as immoral. Perhaps over a large set of acts the good vs bad would balance out.

    Then, since you got on a soapbox about climate change: as one who is certain that the climate does in fact change, but skeptical of the evidence presented so far that human activity has made a significant contribution to such change, let me suggest you talk to a known, trustworthy authority on climate such as Richard Lindzen, and let us know if your soapbox can still support you strictly on the science. It’s easy for politicians to tell us the sky is falling (or warming), but they don’t necessarily want to use the scientific method, or the results of applying that method, if it interferes with their agenda. I have reviewed enough of the available information to conclude that the matter of AGW is far from settled. I have also noted that there is a huge amount of cronyism and money being moved about due to this debate. In my view, science has been pushed aside by the politics and the potential for monetary gain.

  165. Scott Says:

    Vit #161:

      A great place for experiments would be stackexchange network. That is assuming they actually save the id of each upvote/downvote.

    That’s a great idea! In retrospect, the incredible speed of the StackExchange sites (and MathOverflow) not only at converging on truth, but also at converging on consensus that truth has been reached, was probably one of the factors that inspired me to revive this old idea.

  166. sdde Says:

    It seems that somehow PageRank is useful as you say because there is an “ultimate” authority on what is “important” … the Google users … so in that sense, it works because importance can be adjusted …

    But for morality, the “ultimate authority” is arguably non-existent (ignoring an ultimate authority on morality, like a God), and so picking a metric, as you said, would itself be a moral decision, and would therefore render the notion trivial and not useful.

    These are just some thoughts that came to mind when I read the paragraph about PageRank “still being useful”.

  167. Tyler Says:

    Philip White #162

    There is nothing weird or backward about it. It still seems backward to you because you are saying that A gives some amount of “vote” for B by cooperating with B, but in my paper I say that the analogy is that when A cooperates with B, A receives some amount of “vote” from B. The unusual/backward nature you are seeing is from your own use of the word “vote” in the opposite direction of how I used it and how you yourself suggest makes more sense.

    Re the Stalin/Hitler situation: your intuition is saying that it was not immoral to cooperate with Stalin because it was done with the goal of defeating Hitler. That is precisely the kind of phenomenon captured by eigenmorality: Stalin would be seen as moral in this situation since he was against Hitler, who was immoral. Perhaps a flaw in the simple model used in my paper is the missing element of time, and of distinctions between certain cooperations and others. Your intuition says that cooperating with Stalin was good *this time*, but otherwise not, and that is not covered by the paper’s model. That would be a valid criticism, but so far you’ve just stated that things don’t make sense when they are actually in line with your own suggestions.

    I am ok with the fact that eigenmoses and eigenjesus are not the moral metrics to which you would appeal when making decisions. I am not even asserting that those are the metrics *I* would consider the best. But I think *someone*, even many, could see them as valid moral metrics (for example, Machiavelli?) and that’s why they are interesting to me, or why I would take issue with someone saying that they “don’t make sense.” I’m certainly open to hearing clarifications or suggestions for more appropriate vocabulary to describe this stuff, but it seems like you are throwing away the whole concept simply because a few of the words are not to your taste.

  168. John Merryman Says:

    Scott,
    How about the observation that good and bad are ultimately the biological binary code of attraction to the beneficial and repulsion of the detrimental, rather than a cosmic duel between the forces of righteousness and evil, as I argued above?
    It may not be politically/socially correct, but it might help to clarify our actions/goals, means/ends.

  169. John Merryman Says:

    Make that addressed to Tyler#167

  170. Jair Says:

    Maybe someone has brought this up, but a problem I see with this model is that refusing to cooperate with an “evil” person does not increase the happiness levels of anyone else. It only serves as punishment for the wrongdoer. I suppose depending on the algorithms involved, this might “teach it a lesson” and make the bad guy more altruistic in the future, but this is pretty indirect.

    I think I would change the rules so that the payoffs are proportional to wealth/happiness of the two parties, so that a richer player gets a better reward in all cases. That way, defecting could actually help the community by hurting the ability of the evil players to cause harm. Then it’s more morally meaningful. You could also give more “morality points” to someone who cooperates with a weaker player, thus distinguishing the bullies from the knights in shining armor. I think in this case higher morality would correlate better with overall success for the community.

  171. Jair Says:

    I should also say that the sucker’s reward should be worse when interacting with a rich defector, so that powerful evil players hurt the community more.
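    Something along these lines, say, where the scaling rules and numbers are purely my own illustration (the base payoffs being the usual T=5, R=3, P=1, S=0):

      def payoff(my_move, your_move, my_wealth, your_wealth):
          # Base IPD payoffs for (my_move, your_move); "C" = cooperate, "D" = defect.
          base = {("C", "C"): 3, ("C", "D"): 0,
                  ("D", "C"): 5, ("D", "D"): 1}[(my_move, your_move)]
          reward = base * my_wealth          # richer players get more out of every round
          if my_move == "C" and your_move == "D":
              reward -= your_wealth          # being suckered by a rich defector hurts extra
          return reward

      def morality_credit(my_move, my_wealth, your_wealth):
          # Extra credit for cooperating "downward" with a weaker player.
          if my_move == "C":
              return 2.0 if your_wealth < my_wealth else 1.0
          return 0.0

      # E.g. a poor cooperator suckered by a rich defector, vs. the rich defector's haul:
      print(payoff("C", "D", 1.0, 10.0), payoff("D", "C", 10.0, 1.0))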

  172. Jr Says:

    Nice post. A couple of disconnected comments

    1) I am a little unclear on how this eigen-democracy is supposed to work. Do I understand correctly that it is the good decision-makers, as recognized by other good decision-makers, who get the most weight? How do the trustworthy sources of information enter into the political process?

    I realize that the details have not been decided but some more clarification would be interesting.

    2) I have thought before that membership of ethnic groups is a similar circular definition. I, as an outsider, should accept that you are a Sioux (if you claim to be) if you are accepted as a Sioux by other Sioux.

  173. Philip Thrift Says:

    I noticed the code for eigenmoses, eigenjesus is written in Python (the snake from the Garden of Eden?). Coincidence?

  174. Peter D Jones Says:

    Liquid Democracy sounds OK if everyone is equal and all transactions are voluntary, but those are naive assumptions. I can envisage situations where, e.g., some Victorian-style paterfamilias browbeats his family into handing over their votes. Perhaps listening to well-informed commentators and then casting your own vote is the low-hanging fruit here.

  175. Scott Says:

    Philip #173: LOL!

    (The answer to your question is yes: coincidence.)

  176. Scott Says:

    Jr #172: Your suspicion that the details haven’t really been thought through yet is entirely correct. Indeed, you could see this post largely as an invitation for you to start thinking about the details!

    (Rest assured that if I ever make progress on the problem, or have an eigendemocracy trial system to try, I’ll write a followup post about it…)

  177. AdamT Says:

    Scott in update,

    “It follows that this system can only improve on ordinary democracy if the trust network has some other purpose, so that the participants have an actual incentive to reveal the truth about who they trust.”

    Actually, I don’t think this is true at all. No doubt such a trust network would be more vulnerable to gaming than a revealed trust network would be. However, it does not automatically follow that this gaming would be terribly effective. Certainly it doesn’t follow that it would be any more effective than the current *myriad* ways in which our democratic system is *already* being gamed.

    I wouldn’t concentrate on trying to build a ‘revealed preferences’ eigendemocracy at first, nor try to repurpose facebook. This strikes me as premature optimization. Why not set up a network and see how robust it is to various forms of gaming first?

    To be clear, I will admit that it is possible that eigendemocracy through a non-revealed network might be subject to gaming, but it is not at all clear to me that this would be more subject to gaming than our current electoral democracies.

  178. Rahul Says:

    Scott #165:

      In retrospect, the incredible speed of the StackExchange sites (and MathOverflow) not only at converging on truth, but also at converging on consensus that truth has been reached, was probably one of the factors that inspired me to revive this old idea.

    I heard murmurs lately that all’s not well at StackExchange. There seem to be a lot of Meta threads discussing how the system has been driving away many good commentators etc.

    Not sure if all of it is true, but there might be lessons to be learnt. Not all the short-term optimism about these things may be lasting.

  179. Rahul Says:

    Scott writes:

      It might be objected that the players didn’t “know” they were going to be graded on morality: as far as they knew, they were just trying to maximize their individual utilities. The trouble with that objection is that the players didn’t “know” they were trying to maximize their utilities either! The players are bots, which do whatever their code tells them to do.

    I’m not sure I get this argument. Yes, the players were bots.

    But the humans who coded the bots did know that the bots were going to be graded on total scores. So maximizing utility was natural and preintended.

    OTOH, no one was imagining the bots to be graded for morality when the strategies were being coded in.

    On those grounds morality is different from utility, at least in this specific context.

  180. Scott Says:

    Rahul #179: Yeah, I thought about that too; it’s a fair objection. In our particular case, however, at least half of the bots we were testing (ALL_COOPERATE, ALL_DEFECT, RANDOM, almost anything dissimilar from TIT_FOR_TAT) weren’t particularly designed to maximize their scores either! 🙂 So then it seems to make just as much sense to ask about the morality of those bots as to ask how they score.

  181. Rahul Says:

    I’m just trying to understand the eigendemocracy idea a bit better: So to strip down the concept to a bare minimum, say there were only three voters in a nation. On some issue two vote Yes and one No.

    So in a conventional democracy the outcome is a “Yes” decision. The interesting situations are only those where an eigentrust system will lead to a different result than the conventional system.

    Now imagine a situation where both Voter-1 and Voter-2 vote Yes but point to Voter-3 in terms of trust. And Voter-3 votes No. Might this be the analog where the eigentrust system concludes “No” even when the majority vote was a “Yes”?

    But if so, then adopting an eigentrust system is a conscious decision by the voter to intentionally dilute some of his own voting equity. Would voters do that?

    Further, why / how is this different than if we just allowed voters to transfer fractional parts of their voting rights to different people? Wouldn’t that be an identical system but without the complication of having to solve an eigenvalue problem?
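    To check my own intuition, here is a rough back-of-the-envelope sketch of that three-voter scenario (the assumptions are purely mine, for illustration: a PageRank-style eigentrust with damping factor 0.85, and the weight of the voter who trusts nobody spread uniformly):

      import numpy as np

      # A directed edge i -> j means "voter i trusts voter j".
      # Voters 0 and 1 both point at voter 2; voter 2 points at nobody.
      trusts = {0: [2], 1: [2], 2: []}
      votes = {0: "Yes", 1: "Yes", 2: "No"}

      n, d = 3, 0.85                   # d is the damping factor, as in PageRank
      w = np.ones(n) / n
      for _ in range(200):
          new = np.full(n, (1 - d) / n)
          for i, out in trusts.items():
              if out:                  # spread voter i's weight over whom i trusts
                  for j in out:
                      new[j] += d * w[i] / len(out)
              else:                    # a voter who trusts nobody: spread uniformly
                  new += d * w[i] / n
          w = new

      yes = sum(w[i] for i, v in votes.items() if v == "Yes")
      no = sum(w[i] for i, v in votes.items() if v == "No")
      print(np.round(w, 3), round(yes, 3), round(no, 3))

    At least under those toy assumptions, the lone “No” voter ends up holding the majority of the trust weight, so the eigentrust outcome does flip relative to the raw majority vote.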

  182. lylebot Says:

    Whether PageRank is still useful—and by “still useful” I mean “continuing to be useful for ranking to this day”, not “useful despite not being ground truth”—may be an open question. It’s only one of hundreds (maybe thousands) of features Google uses in its ranking algorithm, after all. A number of IR people I talk to seem to suspect that it may not be used at all anymore, and that its main use these days is for public relations. If that’s the case, before applying the idea to anything else we should ask when and why it stopped being useful.

  183. Timothy Johnson Says:

    First, I really enjoyed this! You earned a new follower for your blog. 🙂

    But mathematically, I’m not quite satisfied. In your update on 6/20, you changed the definition of a moral person to include both time and knowledge. I think that’s a strong improvement, but it also seems to make the problem much more difficult computationally. Your claim that “linear algebra is used to unravel the definition’s apparent circularity” seems to gloss over some important problems.

    For one thing, we won’t have just a single matrix to describe each person’s interactions with every other person. Instead, we’ll have a separate matrix for every participant at every time step, describing what they know at that point in time about each of these interactions.

    Each participant can use linear algebra to compute their own belief at any given time about how moral each other person is. But is there really a simple way to compute the global solution describing how moral each person actually is? I haven’t tried working it out for myself on paper yet, but I don’t quite see it.

    The conditions for PageRank are that our matrix is irreducible (strongly connected) and aperiodic. But it seems like we would need these conditions on each participant’s matrix as well. And then, what if they meet someone new and know absolutely nothing about them? It seems like any action could be justified by some prior belief.

    Or maybe we could add a new condition that you should always assume someone is moral until proven otherwise. Is that enough to fix the problem?

  184. David Kagan Says:

    I haven’t had time to sit down and carefully think this through, but I’d be interested to hear which criteria of Arrow’s Theorem eigendemocracy satisfies and which fail. Have you thought about this at all Scott and/or Tyler?

  185. GLGR Says:

    I was under the impression that mathematicians use a similar algorithm for judging whether something “is mathematics” (or the mathematical value of a problem); namely, its implications for and interconnections with other branches of mathematics.

    Which naturally leads to the question: is P != NP mathematics in the above sense?

  186. Jim Says:

    As a workaday university mathematician, I usually think of applications of pagerank whenever the dean comes round: to citation metrics (good papers are those that are cited by other good papers) and to weightings of GPA (hard classes are the ones that only students who do well in other hard classes can do well in).

    Compared to eigenmorality, or for that matter links on the worldwide web, this stuff should be peanuts. All the data is already collected. Someone should do it! I’d do it myself, but I haven’t written a computer program since 1994, and anyway the machine I have now doesn’t run Borland C++.

    Of course, some might be afraid of a fred-like dystopia: once non-laughable citation metrics exist, evil and simple-minded deans will care only about metrics, ultimately to the detriment of mathematical progress.

  187. Casey Detrio Says:

    An incentive scheme which could “suss out the truth” is TruthCoin, a bitcoin-like incentive scheme that rewards coins to those whose “votes” align with the “truth”/consensus. TruthCoin is an ongoing project: https://github.com/psztorc/Truthcoin

    The coin is probably the most important incentive scheme in history. The State itself shares its origin with the invention of coins. The problem in organizing a state was, how do you supply your army with food and weapons, without conscripting an additional military arm of farmers and craftsmen? The solution was to stamp coins and require subjects to pay taxes in those coins. Then by distributing the coins to the army, the citizens were incentivized to supply the army with goods (clothes, armor, weapons) and food, in exchange for coins.

    If history is a precedent, perhaps a p2p incentive scheme like “DemocracyCoin” or “FutarchyCoin” would be a stepping-stone to Hansonian utopia.

  188. Joshua Cook Says:

    Is the implication that the incidence matrix for the entire internet is singular?

  189. sf Says:

    Jim #186

    you might find this interesting; it’s close to what you suggested:

    http://en.wikipedia.org/wiki/Eigenfactor

  190. KimKardashian Says:

    Wait, am I eigenfamous?

  191. Jan Paul Posma Says:

    Wow, what a great article! Last year I worked on a system very similar to your eigendemocracy, intended to uncover the credibility of statements. A (simplified) writeup here: https://factlink.com/blog/factlinks-fact-graph

    Like you say yourself, it’s very hard to get such a system off the ground, if you can’t use an existing structure such as PageRank did with links. In our case we tried to give people motivation by awarding “authority points” if others thought the evidence they added to statements was relevant, but alas, that was not enough.

  192. Jim Says:

    sf #189

    Thanks! They even have a whole web site: eigenfactor.org

    The EF score apparently scales linearly with the number of articles, so it’s a bit silly to measure a paper’s quality by the EF score of the journal it’s in, but the AI (Article Influence) score seems quite accurate, at least when it comes to journals in pure mathematics.

  193. Oleg S. Says:

    A very thought-provoking post! A couple of remarks:

    1. The Turing test must be an example of a circular definition of what it is to be a human (eigenhuman?). The similarity is even more striking given that Turing’s argument was motivated by the moral implications of AI.
    2. I wonder whether extortion IPD strategies would be judged as moral in bot competitions.

  194. Tyler Says:

    John Merryman #168

    (Sorry for late reply, been very busy during the week)

    I think you’re right about that being a crucial distinction to make here. If I understand correctly, it’s related to this question: can morality metrics only “matter” if objective scores “matter”, or could some morality metric “matter” as a standalone concept? (the first situation being the one you describe where “good” and “bad” are really just related to preference of benefit over detriment, and the second being the one involving some cosmic righteousness/evil battle that “just is” important)

    I think that is perhaps one of the biggest questions out there (where a “big” question is one that many people ask and there is no proven or accepted answer). You say that you “observe” the answer to be the first one. What do you mean by this?

  195. Rob Silverton Says:

    Thanks for an interesting read! Regarding the problems of measuring ‘goodness’ as a means of affecting positive social change, I thought you may be interested in this website I stumbled across yesterday: http://www.goodcountry.org/.
    Obviously it’s fraught with all the sorts of issues you raise in your post, but I’m hopeful that such efforts (when they expose their inner workings for review and criticism) might provide ‘good enough’ systems to allow us to make reasonable comparisons. Would love to hear your thoughts on it!

  196. Babak Says:

    I didn’t see it referenced in the post or in the comments (but I haven’t read all the comments), but EigenTrust has already been proposed in the Computational Trust community:

    S. D. Kamvar, M. T. Schlosser, and H. Garcia-Molina, The EigenTrust Algorithm for Reputation Management in P2P Networks, In Proceedings of the Twelfth International World Wide Web Conference, 2003.

    It’s perhaps not surprising that Garcia-Molina was Sergey Brin’s advisor at Stanford!

    The issue of “labeling” trust according to the agent’s competence has also been discussed in the Trust literature, e.g., by Marsh. Issues of gaming/attacking the trust system have been studied among others by Josang.

    Apologies if this was obvious or that I missed it. I’m new to this great blog!

  197. Scott Says:

    Babak #196: Thanks so much for bringing EigenTrust to my attention! I just added an update to the post about it (as well as about Rob Spekkens’ earlier musings on eigendemocracy).

  198. Elliot Temple Says:

    Criticism of my comment above not being answered and Scott’s lack of openness to debate about climate change and other issues: http://www.curi.us/1633-people-are-wrong-then-ignore-criticism

  199. Terrymac Says:

    Democracy is far from the best way to make decisions; there is a large body of wisdom about its flaws, including Arrow’s Impossibility Theorem, Caplan’s Myth of the Rational Voter, the insights of Public Choice Economics, and so forth. Nor is democracy the way scientific discovery works. Science is best viewed as a sort of intellectual trial by combat – you trot out your hypotheses, and opponents try to knock them down. Rinse and repeat.

  200. JT Says:

    I was surprised that searching this page for “fixed point” had no results, because the “vicious circles” Scott talks about sound an awful lot like fixed-point calculations to me.

    So I am here to remedy this situation. First post | fixed point, etc.

  201. Michael P Says:

    Hi Scott,

    While appeasing my 2-year-old I recalled your blog entry about eigenmorality and asked myself: “what would eigenmoses do?”

    Treatment of kids, if they were to be taken as subjects in eigenmorality, seems to be the case where eigenmoses (and to a lesser extent eigenjesus too) would recommend actions that most people would disagree with. Doesn’t that imply that in the eyes of eigenmorality, in order for it to resemble human judgement, not all subjects should be created equal from the beginning? If so, wouldn’t the problem of assigning pre-judgement to the subject cause a circular reference problem similar to the problem that eigenmorality aims to resolve?

  202. D E Says:

    Great post ! Thanks !

    But we can take it further and say it is a classical Data Mining problem: instead of coming up with a model ourselves, trust Big Data to derive it for us.

    One source of data for Google could be Gmail.
    I remember reading about email-conversation analysis to identify hubs, experts, etc.

    There are also all these companies identifying the emotional charge and direction of fb/twitter posts for better sales/advertisement – one could use those tools to see how this affects relationships – the “who started it” questions.

    I.e., my point is that since all our communications are digitized and documented, one can derive the model using data-mining tricks rather than debating what’s right about eigenmoses e1 vs eigenjesus e2. Maybe the collective mind will say log(e1^2 * e2^2) or a1*e1 + a2*e2, or maybe it will come up with totally different e3, e4, …, eN.

  203. D E Says:

    There is also the issue of morality not being static, or global.
    I mean, the same problem presented in different language, or at a different time/place, will elicit a different moral response, even from the same person.

    An extra reason for the Big Data approach: it can deal with ambiguity like this.

  204. Philip White Says:

    This was a very thought-provoking piece; three months later I am still thinking about it, and I have revised my conviction about eigenmetrics for people based on some difficult-to-cite sources (I don’t think these ideas were originally all my own).

    I still strenuously object to the notion that “cooperating with someone who is immoral is immoral,” and the claim that “cooperating with a moral person makes you moral.”

    Instead, I make the following two claims:

    1 > A *powerful* person is someone who is cooperated with by many other powerful people. They have the capacity to be very helpful to whomever they lend their power to.
    2 > A *[politically] skilled* person is someone who cooperates with many other skillful people (and avoids cooperation/collaboration with the unskilled). These people are intelligently selective.

    Even if my idea about this is something that the contributors to the paper do not agree with, I think it would be interesting to think about “the other direction” of eigen-whatever-it-is. I.e., you consider “moral” to solve the informal equation: “An X person is one who cooperates with other X people.” But what is the solution to the informal equation: “An X person is one who IS COOPERATED WITH BY other X people”?
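    For what it’s worth, in the simplest nonnegative toy version (my own assumption, not anything from the paper), the “other direction” is just the principal eigenvector of the transposed cooperation matrix:

      import numpy as np

      # C[i, j] = 1 if person i cooperates with person j (toy example).
      C = np.array([[0., 1., 0.],
                    [1., 0., 1.],
                    [1., 1., 0.]])

      def principal(M, iters=200):
          # Power iteration for the principal eigenvector, normalised to sum to 1.
          v = np.ones(M.shape[0]) / M.shape[0]
          for _ in range(iters):
              v = M @ v
              v /= v.sum()
          return v

      cooperates = principal(C)     # "an X person cooperates with other X people"
      cooperated = principal(C.T)   # "an X person is cooperated with by other X people"
      print(np.round(cooperates, 3), np.round(cooperated, 3))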

  205. Michael P Says:

    Hi Scott,

    I found the ideas behind Eigenmorality and Eigendemocracy very clever (pardon the pun), and wonder what would happen if one were to develop a CLEVER-style algorithm to measure the quality of Arts: Eigenaesthetics, if you will.

    Sometimes one gets drawn into those seemingly futile conversations about the relative quality of this or that writer or artist or musician. Given the variety of tastes, one doesn’t really expect a conclusion. However, I find these chats occasionally useful, because once in a while somebody whose opinion I have learned to respect mentions a book, which I then read on that person’s recommendation and find interesting as well.

    It seems that finding the people with agreeable taste in Arts falls squarely into the Eigen- realm…

  206. Eigenmorality | Prediction Markets Says:

    […] and so on. Or you can get the principal eigenvector of the ‘cooperation matrix’. Scott Aaronson extrapolates that as a method for determining who the moral participants are in a  group, i.e. […]

  207. Defining empathy, sympathy, and compassion | Theory, Evolution, and Games Group Says:

    […] to a feedback loop that is typical of game-theoretic reasoning and recursive theories of mind. Scott Aaronson toyed with this in the more general case of ethics, and playfully called his solution eigenmorality after the idea of eigenvalues and eigenvectors […]

  208. Stack Ranking - BiCentenniel Man Says:

    […] wonderful meritocratic setup. It uses relative comparison with peers(not unlike pagerank algorithm/ eigen morality. . On the face of it is a very brilliant idea or a good idea that works well, when measuring […]

  209. Eigenmorality | Scott Ahronson | Note To Self Says:

    […] Ahronson; Eigenmorality; In His Blog entitled Shtetl-Optimized; […]