Archive for June, 2014

A Physically Universal Cellular Automaton

Thursday, June 26th, 2014

It’s been understood for decades that, if you take a simple discrete rule—say, a cellular automaton like Conway’s Game of Life—and iterate it over and over, you can very easily get the capacity for universal computation.  In other words, your cellular automaton becomes able to implement any desired sequence of AND, OR, and NOT gates, store and retrieve bits in a memory, and even (in principle) run Windows or Linux, albeit probably veerrryyy sloowwllyyy, using a complicated contraption of thousands or millions of cells to represent each bit of the desired computation.  If I’m not mistaken, a guy named Wolfram even wrote an entire 1200-page-long book about this phenomenon (see here for my 2002 review).

But suppose we want more than mere computational universality.  Suppose we want “physical” universality: that is, the ability to implement any transformation whatsoever on any finite region of the cellular automaton’s state, by suitably initializing the complement of that region.  So for example, suppose that, given some 1000×1000 square of cells, we’d like to replace every “0” cell within that square by a “1” cell, and vice versa.  Then physical universality would mean that we could do that, eventually, by some “machine” we could build outside the 1000×1000 square of interest.

You might wonder: are we really asking for more here than just ordinary computational universality?  Indeed we are.  To see this, consider Conway’s famous Game of Life.  Even though Life has been proved to be computationally universal, it’s not physically universal in the above sense.  The reason is simply that Life’s evolution rule is not time-reversible.  So if, for example, there were a lone “1” cell deep inside the 1000×1000 square, surrounded by a sea of “0” cells, then that “1” cell would immediately disappear without a trace, and no amount of machinery outside the square could possibly detect that it was ever there.
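
To make the irreversibility concrete, here is a minimal Python sketch (the 5×5 grid and the placement of the lone cell are arbitrary illustrations): an empty grid and a grid containing a single live cell both evolve to the same all-dead state, so no rule, however clever, could run Life backwards and recover which of the two you started with.

```python
import numpy as np

def life_step(grid):
    """One step of Conway's Game of Life, with dead cells beyond the border."""
    padded = np.pad(grid, 1)
    # Count the eight neighbors of every cell of the original grid.
    neighbors = sum(np.roll(np.roll(padded, dy, 0), dx, 1)
                    for dy in (-1, 0, 1) for dx in (-1, 0, 1)
                    if (dy, dx) != (0, 0))[1:-1, 1:-1]
    # Alive next step iff: exactly 3 neighbors, or exactly 2 neighbors and alive now.
    return ((neighbors == 3) | ((neighbors == 2) & (grid == 1))).astype(int)

empty = np.zeros((5, 5), dtype=int)
lone = empty.copy()
lone[2, 2] = 1   # a single live cell in a sea of dead cells

# Two different states, one successor: the lone cell dies of underpopulation.
print(np.array_equal(life_step(empty), life_step(lone)))   # True
```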

Furthermore, even cellular automata that are both time-reversible and computationally universal could fail to be physically universal.  Suppose, for example, that our CA allowed for the construction of “impenetrable walls,” through which no signal could pass.  And suppose that our 1000×1000 region contained a hollow box built out of these impenetrable walls.  Then, by definition, no amount of machinery that we built outside the region could ever detect whether there was a particle bouncing around inside the box.

So, in summary, we now face a genuinely new question:

Does there exist a physically universal cellular automaton, or not?

This question had sort of vaguely bounced around in my head (and probably other people’s) for years.  But as far as I know, it was first asked, clearly and explicitly, in a lovely 2010 preprint by Dominik Janzing.

Today, I’m proud to report that Luke Schaeffer, a first-year PhD student in my group, has answered Janzing’s question in the affirmative, by constructing the first cellular automaton (again, to the best of our knowledge) that’s been proved to be physically universal.  Click here for Luke’s beautifully-written preprint about his construction, and click here for a webpage that he’s prepared, explaining the details of the construction using color figures and videos.  Even if you don’t have time to get into the nitty-gritty, the videos on the webpage should give you a sense for the intricacy of what he accomplished.

Very briefly, Luke first defines a reversible, two-dimensional CA involving particles that move diagonally across a square lattice, in one of four possible directions (northeast, northwest, southeast, or southwest).  The number of particles is always conserved.  The only interesting behavior occurs when three of the particles “collide” in a single 2×2 square, and Luke gives rules (symmetric under rotations and reflections) that specify what happens then.
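
To give a flavor of the setup (though emphatically not of Luke's actual collision rules, for which you should see his preprint), here's a toy Python sketch of diagonally-moving particles on a lattice.  Free propagation like this is trivially reversible and particle-conserving; all of the real work in Luke's CA lies in the rules for what happens when particles meet in a 2×2 square, which are simply omitted below.

```python
# Toy sketch only: diagonal particles on a square lattice (coordinate convention arbitrary).
# Luke's actual CA adds specific, carefully chosen collision rules for 2x2 blocks;
# those are NOT reproduced here.
DIRECTIONS = {"NE": (1, -1), "NW": (-1, -1), "SE": (1, 1), "SW": (-1, 1)}

def step(particles):
    """Advance every particle one cell along its diagonal direction."""
    return {((x + dx, y + dy), d)
            for ((x, y), d) in particles
            for (dx, dy) in (DIRECTIONS[d],)}

def step_back(particles):
    """The exact inverse of step(): this is what reversibility means."""
    return {((x - dx, y - dy), d)
            for ((x, y), d) in particles
            for (dx, dy) in (DIRECTIONS[d],)}

state = {((0, 0), "NE"), ((3, 3), "SW"), ((5, 0), "SE")}
assert step_back(step(state)) == state    # reversible
assert len(step(state)) == len(state)     # particle number conserved
```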

Given these rules, it’s possible to prove that any configuration whatsoever of finitely many particles will “diffuse,” after not too many time steps, into four unchanging clouds of particles, which thereafter simply move away from each other in the four diagonal directions for all eternity.  This has the interesting consequence that Luke’s CA, when initialized with finitely many particles, cannot be capable of universal computation in Turing’s sense.  In other words, there’s no way, using n initial particles confined to an n×n box, to set up a computation that continues to do something interesting after 2^n or 2^(2^n) time steps, let alone forever.  On the other hand, using finitely many particles, one can also prove that the CA can perform universal computation in the Boolean circuit sense.  In other words, we can implement AND, OR, and NOT gates, and by chaining them together, can compute any Boolean function that we like on any fixed number of input bits (with the number of input bits generally much smaller than the number of particles).  And this “circuit universality,” rather than Turing-machine universality, is all that’s implied anyway by physical universality in Janzing’s sense.  (As a side note, the distinction between circuit and Turing-machine universality seems to deserve much more attention than it usually gets.)

Anyway, while the “diffusion into four clouds” aspect of Luke’s CA might seem annoying, it turns out to be extremely useful for proving physical universality.  For it has the consequence that, no matter what the initial state was inside the square we cared about, that state will before too long be encoded into the states of four clouds headed away from the square.  So then, “all” we need to do is engineer some additional clouds of particles, initially outside the square, that

  1. intercept the four escaping clouds,
  2. “decode” the contents of those clouds into a flat sequence of bits,
  3. apply an arbitrary Boolean circuit to that bit sequence, and then
  4. convert the output bits of the Boolean circuit into new clouds of particles converging back onto the square.

So, well … that’s exactly what Luke did.  And just in case there’s any doubt about the correctness of the end result, Luke actually implemented his construction in the cellular-automaton simulator Golly, where you can try it out yourself (he explains how on his webpage).

So far, of course, I’ve skirted past the obvious question of “why.”  Who cares that we now know that there exists a physically-universal CA?  Apart from the sheer intrinsic coolness, a second reason is that I’ve been interested for years in how to make finer (but still computer-sciencey) distinctions, among various “candidate laws of physics,” than simply saying that some laws are computationally universal and others aren’t, or some are easy to simulate on a standard Turing machine and others hard.  For ironically, the very pervasiveness of computational universality (the thing Wolfram goes on and on about) makes it of limited usefulness in distinguishing physical laws: almost any sufficiently-interesting set of laws will turn out to be computationally universal, at least in the circuit sense if not the Turing-machine one!

On the other hand, many of these laws will be computationally universal only because of extremely convoluted constructions, which fall apart if even the tiniest error is introduced.  And in other cases, we’ll be able to build a universal computer, all right, but that computer will be relatively impotent to obtain interesting input about its physical environment, or to make its output affect the gross features of the CA’s physical state.  If you like, we’ll have a recipe for creating a universe full of ivory-tower, eggheaded nerds, who can search for counterexamples to Goldbach’s Conjecture but can’t build a shelter to protect themselves from a hail of “1” bits, or even learn whether such a hail is present or not, or decide which other part of the CA to travel to.

As I see it, Janzing’s notion of physical universality is directly addressing this “egghead” problem, by asking whether we can build not merely a universal computer but a particularly powerful kind of robot: one that can effect a completely arbitrary transformation (given enough time, of course) on any part of its physical environment.  And the answer turns out to be that, at least in a weird CA consisting of clouds of diagonally-moving particles, we can indeed do that.  The question of whether we can also achieve physical universality in more natural CAs remains open (and in his Future Work section, Luke discusses several ways of formalizing what we mean by “more natural”).

As Luke mentions in his introduction, there’s at least a loose connection here to David Deutsch’s recent notion of constructor theory (see also this followup paper by Deutsch and Chiara Marletto).  Basically, Deutsch and Marletto want to reconstruct all of physics taking what can and can’t be constructed (i.e., what kinds of transformations are possible) as the most primitive concept, rather than (as in ordinary physics) what will or won’t happen (i.e., how the universe’s state evolves with time).  The hope is that, once physics has been reconstructed in this way, we could then (for example) state and answer the question of whether or not scalable quantum computers can be built as a principled question of physics, rather than as a “mere” question of engineering.

Now, regardless of what you think about these audacious goals, or about Deutsch and Marletto’s progress (or lack of progress?) so far toward achieving them, it’s certainly a worthwhile project to study what sorts of machines can and can’t be constructed, as a matter of principle, both in the real physical world and in other, hypothetical worlds that capture various aspects of our world.  Indeed, one could say that that’s what many of us in quantum information and theoretical computer science have been trying to do for decades!  However, Janzing’s “physical universality” problem hints at a different way to approach the project: starting with some far-reaching desire (say, to be able to implement any transformation whatsoever on any finite region), can we engineer laws of physics that make that desire possible?  If so, then how close can we make those laws to “our” laws?

Luke has now taken a first stab at answering these questions.  Whether his result ends up merely being a fun, recreational “terminal branch” on the tree of science, or a trunk leading to something more, probably just depends on how interested people get.  I have no doubt that our laws of physics permit the creation of additional papers on this topic, but whether they do or don’t is (as far as I can see) merely a question of contingency and human will, not a constructor-theoretic question.

Integrated Information Theory: Virgil Griffith opines

Wednesday, June 25th, 2014

Remember the two discussions about Integrated Information Theory that we had a month ago on this blog?  You know, the ones where I argued that IIT fails because “the brain might be an expander, but not every expander is a brain”; where IIT inventor Giulio Tononi wrote a 14-page response biting the bullet with mustard; and where famous philosopher of mind David Chalmers, and leading consciousness researcher (and IIT supporter) Christof Koch, also got involved in the comments section?

OK, so one more thing about that.  Virgil Griffith recently completed his PhD under Christof Koch at Caltech—as he puts it, “immersing [him]self in the nitty-gritty of IIT for the past 6.5 years.”  This morning, Virgil sent me two striking letters about his thoughts on the recent IIT exchanges on this blog.  He asked me to share them here, something that I’m more than happy to do:

Reading these letters, what jumped out at me—given Virgil’s long apprenticeship in the heart of IIT-land—was the amount of agreement between my views and his.  In particular, Virgil agrees with my central contention that Φ, as it stands, can at most be a necessary condition for consciousness, not a sufficient condition, and remarks that “[t]o move IIT from talked about to accepted among hard scientists, it may be necessary for [Tononi] to wash his hands of sufficiency claims.”  He agrees that a lack of mathematical clarity in the definition of Φ is a “major problem in the IIT literature,” commenting that “IIT needs more mathematically inclined people at its helm.”  He also says he agrees “110%” that the lack of a derivation of the form of Φ from IIT’s axioms is “a pothole in the theory,” and further agrees 110% that the current prescriptions for computing Φ contain many unjustified idiosyncrasies.

Indeed, given the level of agreement here, there’s not all that much for me to rebut, defend, or clarify!

I suppose there are a few things.

  1. Just as a clarifying remark, in a few places where it looks from the formatting like Virgil is responding to something I said (for example, “The conceptual structure is unified—it cannot be decomposed into independent components” and “Clearly, a theory of consciousness must be able to provide an adequate account for such seemingly disparate but largely uncontroversial facts”), he’s actually responding to something Giulio said (and that I, at most, quoted).
  2. Virgil says, correctly, that Giulio would respond to my central objection against IIT by challenging my “intuition for things being unconscious.”  (Indeed, because Giulio did respond, there’s no need to speculate about how he would respond!)  However, Virgil then goes on to explicate Giulio’s response using the analogy of temperature (interestingly, the same analogy I used for a different purpose).  He points out how counterintuitive it would be for Kelvin’s contemporaries to accept that “even the coldest thing you’ve touched actually has substantial heat in it,” and remarks: “I find this ‘Kelvin scale for C’ analogy makes the panpsychism much more palatable.”  The trouble is that I never objected to IIT’s panpsychism per se: I only objected to its seemingly arbitrary and selective panpsychism.  It’s one thing for a theory to ascribe some amount of consciousness to a 2D grid or an expander graph.  It’s quite another for a theory to ascribe vastly more consciousness to those things than it ascribes to a human brain—even while denying consciousness to things that are intuitively similar but organized a little differently (say, a 1D grid).  A better analogy here would be if Kelvin’s theory of temperature had predicted, not merely that all ordinary things had some heat in them, but that an ice cube was hotter than the Sun, even though a popsicle was, of course, colder than the Sun.  (The ice cube, you see, “integrates heat” in a way that the popsicle doesn’t…)
  3. Virgil imagines two ways that an IIT proponent could respond to my argument involving the cerebellum—the argument that accuses IIT proponents of changing the rules of the game according to convenience (a 2D grid has a large Φ?  suck it up and accept it; your intuitions about a grid’s lack of consciousness are irrelevant.  the human cerebellum has a small Φ?  ah, that’s a victory for IIT, since the cerebellum is intuitively unconscious).  The trouble is that both of Virgil’s imagined responses are by reference to the IIT axioms.  But I wasn’t talking about the axioms themselves, but about whether we’re allowed to validate the axioms, by checking their consequences against earlier, pre-theoretic intuitions.  And I was pointing out that Giulio seemed happy to do so when the results “went in IIT’s favor” (in the cerebellum example), even though he lectured me against doing so in the cases of the expander and the 2D grid (cases where IIT does less well, to put it mildly, at capturing our intuitions).
  4. Virgil chastises me for ridiculing Giulio’s phenomenological argument for the consciousness of a 2D grid by way of nursery rhymes: “Just because it feels like something to see a wall, doesn’t mean it feels like something to be a wall.  You can smell a rose, and the rose can smell good, but that doesn’t mean the rose can smell you.”  Virgil amusingly comments: “Even when both are inebriated, I’ve never heard [Giulio] nor [Christof] separately or collectively imply anything like this.  Moreover, they’re each far too clueful to fall for something so trivial.”  For my part, I agree that neither Giulio nor Christof would ever advocate something as transparently silly as, “if you have a rich inner experience when thinking about X, then that’s evidence X itself is conscious.”  And I apologize if I seemed to suggest they would.  To clarify, my point was not that Giulio was making such an absurd statement, but rather that, assuming he wasn’t, I didn’t know what he was trying to say in the passages of his that I’d just quoted at length.  The silly thing seemed like the “obvious” reading of his words, and my hermeneutic powers were unequal to the task of figuring out the non-silly, non-obvious reading that he surely intended.

Anyway, there’s much more to Virgil’s letters than the above—including answers to some of my subsidiary questions about the details of IIT (e.g., how to handle unbalanced partitions, and the mathematical meanings of terms like “mechanism” and “system of mechanisms”).  Also, in parts of the letters, Virgil’s main concern is neither to agree with me nor to agree with Giulio, but rather to offer his own ideas, developed in the course of his PhD work, for how to move forward and fix some of the problems with IIT.  All in all, these are recommended reads for anyone who’s been following this debate.

Eigenmorality

Wednesday, June 18th, 2014

This post is about an idea I had around 1997, when I was 16 years old and a freshman computer-science major at Cornell.  Back then, I was extremely impressed by a research project called CLEVER, which one of my professors, Jon Kleinberg, had led while working at IBM Almaden.  The idea was to use the link structure of the web itself to rank which web pages were most important, and therefore which ones should be returned first in a search query.  Specifically, Kleinberg defined “hubs” as pages that linked to lots of “authorities,” and “authorities” as pages that were linked to by lots of “hubs.”  At first glance, this definition seems hopelessly circular, but Kleinberg observed that one can break the circularity by just treating the World Wide Web as a giant directed graph, and doing some linear algebra on its adjacency matrix.  Equivalently, you can imagine an iterative process where each web page starts out with the same hub/authority “starting credits,” but then in each round, the pages distribute their credits among their neighbors, so that the most popular pages get more credits, which they can then, in turn, distribute to their neighbors by linking to them.
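
In code, the hub/authority iteration really is just a few lines.  Here's a minimal sketch in Python, with a made-up four-page link graph (the adjacency matrix is purely illustrative):

```python
import numpy as np

# Toy link structure: A[i, j] = 1 if page i links to page j (made-up data).
A = np.array([[0, 1, 1, 0],
              [0, 0, 1, 0],
              [1, 0, 0, 0],
              [0, 0, 1, 0]], dtype=float)

hubs = np.ones(4)
auths = np.ones(4)
for _ in range(100):
    auths = A.T @ hubs    # a page is an authority if many good hubs link to it
    hubs = A @ auths      # a page is a hub if it links to many good authorities
    auths /= np.linalg.norm(auths)
    hubs /= np.linalg.norm(hubs)

print("authority scores:", np.round(auths, 3))
print("hub scores:      ", np.round(hubs, 3))
```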

I was also impressed by a similar research project called PageRank, which was proposed later by two guys at Stanford named Sergey Brin and Larry Page.  Brin and Page dispensed with Kleinberg’s bipartite hubs-and-authorities structure in favor of a more uniform structure, and made some other changes, but otherwise their idea was very similar.  At the time, of course, I didn’t know that CLEVER was going to languish at IBM, while PageRank (renamed Google) was going to expand to roughly the size of the entire world’s economy.

In any case, the question I asked myself about CLEVER/PageRank was not the one that, maybe in retrospect, I should have asked: namely, “how can I leverage the fact that I know the importance of this idea before most people do, in order to make millions of dollars?”

Instead I asked myself: “what other ‘vicious circles’ in science and philosophy could one unravel using the same linear-algebra trick that CLEVER and PageRank exploit?”  After all, CLEVER and PageRank were both founded on what looked like a hopelessly circular intuition: “a web page is important if other important web pages link to it.”  Yet they both managed to use math to defeat the circularity.  All you had to do was find an “importance equilibrium,” in which your assignment of “importance” to each web page was stable under a certain linear map.  And such an equilibrium could be shown to exist—indeed, to exist uniquely.

Searching for other circular notions to elucidate using linear algebra, I hit on morality.  Philosophers from Socrates on, I was vaguely aware, had struggled to define what makes a person “moral” or “virtuous,” without tacitly presupposing the answer.  Well, it seemed to me that, as a first attempt, one could do a lot worse than the following:

A moral person is someone who cooperates with other moral people, and who refuses to cooperate with immoral people.

Obviously one can quibble with this definition on numerous grounds: for example, what exactly does it mean to “cooperate,” and which other people are relevant here?  If you don’t donate money to starving children in Africa, have you implicitly “refused to cooperate” with them?  What’s the relative importance of cooperating with good people and withholding cooperation with bad people, of kindness and justice?  Is there a duty not to cooperate with bad people, or merely the lack of a duty to cooperate with them?  Should we consider intent, or only outcomes?  Surely we shouldn’t hold someone accountable for sheltering a burglar, if they didn’t know about the burgling?  Also, should we compute your “total morality” by simply summing over your interactions with everyone else in your community?  If so, then can a career’s worth of lifesaving surgeries numerically overwhelm the badness of murdering a single child?

For now, I want you to set all of these important questions aside, and just focus on the fact that the definition doesn’t even seem to work on its own terms, because of circularity.  How can we possibly know which people are moral (and hence worthy of our cooperation), and which ones immoral (and hence unworthy), without presupposing the very thing that we seek to define?

Ah, I thought—this is precisely where linear algebra can come to the rescue!  Just like in CLEVER or PageRank, we can begin by giving everyone in the community an equal number of “morality starting credits.”  Then we can apply an iterative update rule, where each person A can gain morality credits by cooperating with each other person B, and A gains more credits the more credits B has already.  We apply the rule over and over, until the number of morality credits per person converges to an equilibrium.  (Or, of course, we can shortcut the process by simply finding the principal eigenvector of the “cooperation matrix,” using whatever algorithm we like.)  We then have our objective measure of morality for each individual, solving a 2400-year-old open problem in philosophy.
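
Here's a minimal sketch of what that computation looks like, with a made-up cooperation matrix (the numbers, and the choice of normalization, are purely illustrative):

```python
import numpy as np

# Made-up data: C[i, j] = how much person i cooperates with person j.
C = np.array([[0.0, 0.9, 0.8, 0.1],
              [0.9, 0.0, 0.7, 0.0],
              [0.8, 0.7, 0.0, 0.2],
              [0.1, 0.0, 0.2, 0.0]])

# Iterative update: A's new credits = sum over B of (A's cooperation with B) x (B's credits).
# Run long enough, this converges to the principal eigenvector of C.
credits = np.ones(len(C))
for _ in range(1000):
    credits = C @ credits
    credits /= credits.sum()   # renormalize so the credits stay bounded

print("eigenmorality scores:", np.round(credits, 3))
```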

The next step, I figured, would be to hack together some code that computed this “eigenmorality” metric, and then see what happened when I ran the code to measure the morality of each participant in a simulated society.  What would happen?  Would the results conform to my pre-theoretic intuitions about what sort of behavior was moral and what wasn’t?  If not, then would watching the simulation give me new ideas about how to improve the morality metric?  Or would it be my intuitions themselves that would change?

Unfortunately, I never got around to the “coding it up” part—there’s a reason why I became a theorist!  The eigenmorality idea went onto my back burner, where it stayed for the next 16 years: 16 years in which our world descended ever further into darkness, lacking a principled way to quantify morality.  But finally, this year, two separate things have happened on the eigenmorality front, and that’s why I’m blogging about it now.

Eigenjesus and Eigenmoses

The first thing that’s happened is that Tyler Singer-Clark, my superb former undergraduate advisee, did code up eigenmorality metrics and test them out on a simulated society, for his MIT senior thesis project.  You can read Tyler’s 12-page report here—it’s a fun, enjoyable, thought-provoking first research paper, one that I wholeheartedly recommend.  Or, if you’d like to experiment yourself with the Python code, you can download it here from github.  (Of course, all opinions expressed in this post are mine alone, not necessarily Tyler’s.)

Briefly, Tyler examined what eigenmorality has to say in the setting of an Iterated Prisoner’s Dilemma (IPD) tournament.  The Iterated Prisoner’s Dilemma is the famous game in which two players meet repeatedly, and in each turn can either “Cooperate” or “Defect.”  The absolute best thing, from your perspective, is if you defect while your partner cooperates.  But you’re also pretty happy if you both cooperate.  You’re less happy if you both defect, while the worst (from your standpoint) is if you cooperate while your partner defects.  At each turn, when contemplating what to do, you have the entire previous history of your interaction with this partner available to you.  And thus, for example, you can decide to “punish” your partner for past defections, “reward” her for past cooperations, or “try to take advantage” by unilaterally defecting and seeing what happens.  At each turn, the game has some small constant probability of ending—so you know approximately how many times you’ll meet this partner in the future, but you don’t know exactly when the last turn will be.  Your score, in the game, is then the sum-total of your score over all turns and all partners (where each player meets each other player once).

In the late 1970s, as recounted in his classic work The Evolution of Cooperation, Robert Axelrod invited people all over the world to submit computer programs for playing this game, which were then pitted against each other in the world’s first serious IPD tournament.  And, in a tale that’s been retold in hundreds of popular books, while many people submitted complicated programs that used machine learning, etc. to try to suss out their opponents, the program that won—hands-down, repeatedly—was TIT_FOR_TAT, a few lines of code submitted by the psychologist Anatol Rapoport to implement an ancient moral maxim.  TIT_FOR_TAT starts out by cooperating; thereafter, it simply does whatever its opponent did in the last move, swiftly rewarding every cooperation and punishing every defection, and ignoring everything that came before the last move.  In the decades since Axelrod, running Iterated Prisoners’ Dilemma tournaments has become a minor industry, with countless variations explored (for example, “evolutionary” versions, and versions allowing side-communication between the players), countless new strategies invented, and countless papers published.  To make a long story short, TIT_FOR_TAT continues to do quite well across a wide range of environments, but depending on the mix of players present, other strategies can sometimes beat TIT_FOR_TAT.  (As one example, if there’s a sizable minority of colluding players, who recognize each other by cooperating and defecting in a prearranged sequence, then those players can destroy TIT_FOR_TAT and other “simple” strategies, by cooperating with one another while defecting against everyone else.)
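
In case you're wondering just how few lines of code: here's a sketch of TIT_FOR_TAT in Python, pitted against an always-defect bot (the match length is arbitrary):

```python
def tit_for_tat(my_history, opp_history):
    # Cooperate on the first move; thereafter echo the opponent's last move.
    return "C" if not opp_history else opp_history[-1]

def always_defect(my_history, opp_history):
    return "D"

def play(strategy_a, strategy_b, rounds=6):
    history_a, history_b = [], []
    for _ in range(rounds):
        move_a = strategy_a(history_a, history_b)
        move_b = strategy_b(history_b, history_a)
        history_a.append(move_a)
        history_b.append(move_b)
    return "".join(history_a), "".join(history_b)

print(play(tit_for_tat, always_defect))
# ('CDDDDD', 'DDDDDD'): one exploited cooperation, then retaliation every round after.
```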

Anyway, Tyler sets up and runs a fairly standard IPD tournament, with a mix of strategies that includes TIT_FOR_TAT, TIT_FOR_TWO_TATS, other TIT_FOR_TAT variations, PAVLOV, FRIEDMAN, EATHERLY, CHAMPION (see the paper for details), and degenerate strategies like always defecting, always cooperating, and playing randomly.  However, Tyler then asks an unusual question about the IPD tournament: namely, purely on the basis of the cooperate/defect sequences, which players should we judge to have acted morally toward their partners?

It might be objected that the players didn’t “know” they were going to be graded on morality: as far as they knew, they were just trying to maximize their individual utilities.  The trouble with that objection is that the players didn’t “know” they were trying to maximize their utilities either!  The players are bots, which do whatever their code tells them to do.  So in some sense, utility—no less than morality—is “merely an interpretation” that we impose on the raw cooperate/defect sequences!  There’s nothing to stop us from imposing some other interpretation (say, one that explicitly tries to measure morality) and seeing what happens.

In an attempt to measure the players’ morality, Tyler uses the eigenmorality idea from before.  The extent to which player A “cooperates” with player B is simply measured by the percentage of times A cooperates.  (One acknowledged limitation of this work is that, when two players both defect, there’s no attempt to take into account “who started it,” and to judge the aggressor more harshly than the retaliator—or to incorporate time in any other way.)  This then gives us a “cooperation matrix,” whose (i,j) entry records the total amount of niceness that player i displayed to player j.  Diagonalizing that matrix, and taking its principal eigenvector (the one with the largest eigenvalue), then gives us our morality scores.

Now, there’s a very interesting ambiguity in what I said above.  Namely, should we define the “niceness scores” to lie in [0,1] (so that the lowest, meanest possible score is 0), or in [-1,1] (so that it’s possible to have negative niceness)?  This might sound like a triviality, but in our setting, it’s precisely the mathematical reflection of one of the philosophical conundrums I mentioned earlier.  The conundrum can be stated as follows: is your morality a monotone function of your niceness?  We all agree, presumably, that it’s better to be nice to Gandhi than to be nice to Hitler.  But do you have a positive obligation to be not-nice to Hitler: to make him suffer because he made others suffer?  Or, OK, how about not Hitler, but someone who’s somewhat bad?  Consider, for example, a woman who falls in love with, and marries, an unrepentant armed robber (with full knowledge of who he is, and with other options available to her).  Is the woman morally praiseworthy for loving her husband despite his bad behavior?  Or is she blameworthy because, by rewarding his behavior with her love, she helps to enable it?

To capture two possible extremes of opinion about such questions, Tyler and I defined two different morality metrics, which we called … wait for it … eigenmoses and eigenjesus.  Eigenmoses has the niceness scores in [-1,1], which means that you’re actively rewarded for punishing evildoers: that is, for defecting against those who defect against many moral players.  Eigenjesus, by contrast, has the niceness scores in [0,1], which means that you always do at least as well by “turning the other cheek” and cooperating.  (Though note that, even with eigenjesus, you get more morality credits by cooperating with moral players than by cooperating with immoral ones.)
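
Here's a tiny sketch of the difference between the two metrics, glossing over the diagonal entries and the other details of Tyler's actual implementation (the cooperation frequencies below are made up):

```python
import numpy as np

def principal_eigenvector(M):
    """Eigenvector for the largest eigenvalue, with the overall sign fixed."""
    vals, vecs = np.linalg.eig(M)
    v = np.real(vecs[:, np.argmax(np.real(vals))])
    return v if v.sum() >= 0 else -v

# F[i, j] = fraction of rounds in which player i cooperated with player j (made-up data).
F = np.array([[0.0, 1.0, 0.9, 0.1],
              [1.0, 0.0, 0.9, 0.0],
              [0.9, 0.9, 0.0, 0.1],
              [0.1, 0.0, 0.1, 0.0]])

eigenjesus = principal_eigenvector(F)          # niceness scores left in [0, 1]
eigenmoses = principal_eigenvector(2 * F - 1)  # niceness scores rescaled to [-1, 1]

print("eigenjesus scores:", np.round(eigenjesus, 3))
print("eigenmoses scores:", np.round(eigenmoses, 3))
```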

This is probably a good place to mention a second limitation of Tyler’s current study.  Namely, with the current system, there’s no direct way for a player to find out how its partner has been behaving toward third parties.  The only information that A gets about the goodness or evilness of player B, comes from A and B’s direct interaction.  Ideally, one would like to design bots that take into account, not only the other bots’ behavior toward them, but the other bots’ behavior toward each other.  So for example, even if someone is unfailingly nice to you, if that person is an asshole to everyone else, then the eigenmoses moral code would demand that you return the person’s cooperation with icy defection.  Conversely, even if Gandhi is mean and hateful to you, you would still be morally obliged (interestingly, on both the eigenmoses and eigenjesus codes) to be nice to him, because of the amount of good he does for everyone else.

Anyway, you can read Tyler’s paper if you want to see the results of computing the eigenmoses and eigenjesus scores for a diverse population of bots.  Briefly, the results accord pretty well with intuition.  When we look at eigenjesus scores, the all-cooperate bot comes out on top and the all-defect bot on the bottom (as is mathematically necessary), with TIT_FOR_TAT somewhere in the middle, and generous versions of TIT_FOR_TAT higher up.  When we look at eigenmoses, by contrast, TIT_FOR_TWO_TATS comes out on top, with TIT_FOR_TAT in sixth place, and the all-cooperate bot scoring below the median.  Interestingly, once again, the all-defect bot gets the lowest score (though in this case, it wasn’t mathematically necessary).

Even though the measures acquit themselves well in this particular tournament, it’s admittedly easy to construct scenarios where the prescriptions of eigenjesus and eigenmoses alike violently diverge from most people’s moral intuitions.  We’ve already touched on a few such scenarios above (for example, are you really morally obligated to lick the boots of someone who kicks you, just because that person is a saint to everyone other than you?).  Another type of scenario involves minorities.  Imagine, for instance, that 98% of the players are unfailingly nice to each other, but unfailingly cruel to the remaining 2% (who they can recognize, let’s say, by their long noses or darker skin—some trivial feature like that).  Meanwhile, the put-upon 2% return the favor by being nice to each other and mean to the 98%.  Who, in this scenario, is moral, and who’s immoral?  The mathematical verdict of both eigenmoses and eigenjesus is unequivocal: the 98% are almost perfectly good, while the 2% are almost perfectly evil.  After all, the 98% are nice to almost everyone, while the 2% are mean to those who are nice to almost everyone, and nice only to a tiny minority who are mean to almost everyone.  Of course, for much of human history, this is precisely how morality worked, in many people’s minds.  But I dare say it’s a result that would make moderns uncomfortable.

In summary, it seems clear to me that neither eigenmoses nor eigenjesus correctly captures our intuitions about morality, any more than Φ captures our intuitions about consciousness.  But as they say, I think there’s plenty of scope here for further research: for coming up with new mathematical measures that sharpen our intuitive judgments about morality, and (if we like) testing those measures out using IPD tournaments.  It also seems to me that there’s something fundamentally right about the eigenvector idea: all else being equal, we’d like to say, being nice to others is good, except that aiding and abetting evildoers is not good, and the way we can recognize the evildoers in our midst is that they’re not nice to others—except that, if the people who someone isn’t nice to are themselves evildoers, then the person might again be good, and so on.  The only way to cut off the infinite regress, it seems, is to demand some sort of “reflective equilibrium” in our moral judgments, and that’s precisely what eigenmorality tries to capture.  On the other hand, no such idea can ever make moral debate obsolete—if for no other reason than that we still need to decide which specific eigenmorality metric to use, and that choice is itself a moral judgment.

Scooped by Plato

Which brings me, finally, to the second new thing that’s happened this year on the eigenmorality front.  Namely, Rebecca Newberger Goldstein—who’s far and away my favorite contemporary novelist—published a charming new book entitled Plato at the Googleplex: Why Philosophy Won’t Go Away.  Here she imagines that Plato has reappeared in present-day America (she doesn’t bother to explain how), where he’s taught himself English and the basics of modern science, learned how to use the Internet, and otherwise gotten himself up to speed.  The book recounts Plato’s dialogues with various modern interlocutors, as he volunteers to have his brain scanned, guest-writes a relationship advice column, participates in a panel discussion on child-rearing, and gets interviewed on cable news by “Roy McCoy” (a thinly veiled Bill O’Reilly).  Often, Goldstein has Plato answer the moderns’ questions using direct quotes from the Timaeus, the Gorgias, the Meno, etc., which makes her Plato into a very intelligent sort of chatbot.  This is a genre that’s not often seriously attempted, and that I’d love to read more of (possible subjects: Shakespeare, Galileo, Jefferson, Lincoln, Einstein, Turing…).

Anyway, my favorite episode in the book is the first, eponymous one, where Plato visits the Googleplex in Mountain View.  While eating lunch in one of the many free cafeterias, Plato is cornered by a somewhat self-important, dreadlocked coder named Marcus, who tries to convince Plato that Google PageRank has finally solved the problem agonized over in the Republic, of how to define justice.  By using the Internet, we can simply crowd-source the answer, Marcus declares: get millions of people to render moral judgments on every conceivable question, and also moral judgments on each other’s judgments.  Then declare those judgments the most morally reliable, that are judged the most reliable by the people who are themselves the most morally reliable.  The circularity, as usual, is broken by taking the principal eigenvector of the graph of moral judgments (Goldstein doesn’t have Marcus put it that way, but it’s what she means).

Not surprisingly, Plato is skeptical.  Through Socratic questioning—the method he learned from the horse’s mouth—Plato manages to make Marcus realize that, in the very act of choosing which of several variants of PageRank to use in our crowd-sourced justice engine, we’ll implicitly be making moral choices already.  And therefore, we can’t use PageRank, or anything like it, as the ultimate ground of morality.

Whereas I imagined that the raw data for an “eigenmorality” metric would consist of numerical measures of how nice people had been to each other, Goldstein imagines the raw data to consist of abstract moral judgments, and of judgments about judgments.  Also, whereas the output of my kind of metric would be a measure of the “goodness” of each individual person, the outputs of hers would presumably be verdicts about general moral and political questions.  But, much like with CLEVER versus PageRank, it’s obvious that the ideas are similar—and that I should credit Goldstein with independently discovering my nerdy 16-year-old vision, in order to put it in the mouth of a nerdy character in her story.

As I said before, I agree with Goldstein’s Plato that eigenmorality can’t serve as the ultimate ground of morality.  But that’s a bit like saying that Google rank can’t serve as the ultimate ground of importance, because even just to design and evaluate their ranking algorithms, Google’s engineers must have some prior notion of “importance” to serve as a standard.  That’s true, of course, but it omits to mention that Google rank is still useful—useful enough to have changed civilization in the space of a few years.  Goldstein’s book has the wonderful property that even the ideas she gives to her secondary characters, the ones who serve as foils to Plato, are sometimes interesting enough to deserve book-length treatments of their own, and crowd-sourced morality strikes me as a perfect example.

In the two previous comment threads, we got into a discussion of anthropogenic climate change, and of my own preferred way to address it and related threats to our civilization’s survival, which is simply to tax every economic activity at a rate commensurate with the environmental damage that it does, and use the funds collected for cleanup, mitigation, and research into alternatives.  (Obviously, such ideas are nonstarters in the current political climate of the US, but I’m not talking here about what’s feasible, only about what’s necessary.)  As several commenters pointed out, my view raises an obvious question: who is to decide how much “damage” each activity causes, and thus how much it should be taxed?  Of course, this is merely a special case of the more general question: who is to decide on any question of public policy whatsoever?

For the past few centuries, our main method for answering such questions—in those parts of the world where a king or dictator or Politburo doesn’t decree the answer—has been representative democracy.  Democracy is, arguably, the best decision-making method that our sorry species has ever managed to put into practice, at least outside the hard sciences.  But in my view, representative democracy is now failing spectacularly at possibly the single most important problem it’s ever faced: namely, that of leaving our descendants a livable planet.  Even though, by and large, reasonable people mostly agree about what needs to be done—weaning ourselves off fossil fuels (especially the dirtier ones), switching to solar, wind, and nuclear, planting forests and stopping deforestation, etc.—after decades of debate we’re still taking only limping, token steps toward those goals, and in many cases we’re moving rapidly in the opposite direction.  Those who, for financial, theological, or ideological reasons, deny the very existence of a problem, have proved that despite being a minority, they can push hard enough on the levers of democracy to prevent anything meaningful from happening.

So what’s the solution?  To put the world under the thumb of an environmentalist dictator?  Absolutely not.  In all of history, I don’t think any dictatorial system has ever shown itself robust against takeover by murderous tyrants (people who probably aren’t too keen on alternative energy either).  The problem, I think, is epistemological.  Within physics and chemistry and climatology, the people who think anthropogenic climate change exists and is a serious problem have won the argument—but the news of their intellectual victory hasn’t yet spread to the opinion page of the Wall Street Journal, or cable news, or the US Congress, or the minds of enough people to tip the scales of history.  Because our domination of the earth’s climate and biosphere is new and unfamiliar; because the evidence for rapid climate change is complicated and statistical; because the worst effects are still remote from us in time, space, or both; because the sacrifices needed to address the problem are real—for all of these reasons, the deniers have learned that they can subvert the Popperian process by which bad explanations are discarded and good explanations win.  If you just repeat debunked ideas through a loud enough megaphone, it turns out, many onlookers won’t be able to tell the difference between you and the people who have genuine knowledge—or they will eventually, but not until it’s too late.  If you have a few million dollars, you can even set up your own parody of the scientific process: your own phony experts, in their own phony think tanks, with their own phony publications, giving each other legitimacy by citing each other.  (Of course, all this is a problem for many fields, not just climate change.  Climate is special only because there, the future of life on earth might literally hinge on our ability to get epistemology right.)

Yet for all that, I’m an optimist—sort of.  For it seems to me that the Internet has given us new tools with which to try to fix our collective epistemology, without giving up on a democratic society.  Google, Wikipedia, Quora, and so forth have already improved our situation, if only by a little.  We could improve it a lot more.  Consider, for example, the following attempted definitions:

A trustworthy source of information is one that’s considered trustworthy by many sources who are themselves trustworthy (on the same topic or on closely related topics).  The current scientific consensus, on any given issue, is what the trustworthy sources consider to be the consensus.  A good decision-maker is someone who’s considered to be a good decision-maker by many other good decision-makers.

At first glance, the above definitions sound ludicrously circular—even Orwellian—but we now know that all that’s needed to unravel the circularity is a principal eigenvector computation on the matrix of trust.  And the computation of such an eigenvector need be no more “Orwellian” than … well, Google.  If enough people want it, then we have the tools today to put flesh on these definitions, to give them agency: to build a crowd-sourced deliberative democracy, one that “usually just works” in much the same way Google usually just works.

Now, would those with axes to grind try to subvert such a system the instant it went online?  Certainly.  For example, I assume that millions of people would rate Conservapedia as a more trustworthy source than Wikipedia—and would rate other people who had done so as, themselves, trustworthy sources, while rating as untrustworthy anyone who called Conservapedia untrustworthy.  So there would arise a parallel world of trust and consensus and “expertise,” mutually-reinforcing yet nearly disjoint from the world of the real.  But here’s the thing: anyone would be able to see, with the click of a mouse, the extent to which this parallel world had diverged from the real one.  They’d see that there was a huge, central connected component in the trust graph—including almost all of the Nobel laureates, physicists from the US nuclear weapons labs, military planners, actuaries, other hardheaded people—who all accepted the reality of humans warming the planet, and only tiny, isolated tendrils of trust reaching from that component into the component of Rush Limbaugh and James Inhofe.  The deniers and their think-tanks would be exposed to the sun; they’d lose their thin cover of legitimacy.  It should go without saying that the same would happen to various charlatans on the left, and should go without saying that I’d cheer that outcome as well.
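
As a toy illustration of the "giant component" check, here's a sketch with a made-up trust graph (names and edges purely illustrative); trust is treated as undirected just for the connectivity computation:

```python
from collections import defaultdict

# Made-up trust edges: "X trusts Y".
trust_edges = [
    ("alice", "bob"), ("bob", "carol"), ("carol", "alice"), ("dave", "carol"),
    ("erin", "alice"), ("frank", "bob"),    # one big, mutually trusting cluster
    ("xena", "yuri"), ("yuri", "xena"),     # a small cluster disconnected from it
]

adjacency = defaultdict(set)
for a, b in trust_edges:
    adjacency[a].add(b)
    adjacency[b].add(a)

def components(adj):
    """Connected components by depth-first search, largest first."""
    seen, comps = set(), []
    for start in adj:
        if start in seen:
            continue
        stack, comp = [start], set()
        while stack:
            node = stack.pop()
            if node not in comp:
                comp.add(node)
                stack.extend(adj[node] - comp)
        seen |= comp
        comps.append(comp)
    return sorted(comps, key=len, reverse=True)

for comp in components(adjacency):
    print(len(comp), sorted(comp))
# The first line is the "giant" component; everything else is visibly separate from it.
```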

Some will object: but people who believe in pseudosciences—whether creationists or anti-vaxxers or climate change deniers—already know they’re in a minority!  And far from being worried about it, they treat it as a badge of honor.  They think they’re Galileo, that their belief in spite of a scientific consensus makes them heroes, while those in the giant central component of the trust graph are merely slavish followers.

I admit all this.  But the point of an eigentrust system wouldn’t be to convince everyone.  As long as I’m fantasizing, the point would be that, once people’s individual decisions did give rise to a giant connected trust component, the recommendations of that component could acquire the force of law.  The formation of the giant component would be the signal that there’s now enough of a consensus to warrant action, despite the continuing existence of a vocal dissenting minority—that the minority has, in effect, withdrawn itself from the main conversation and retreated into a different discourse.  Conversely, it’s essential to note, if there were a dissenting minority, but that minority had strong trunks of topic-relevant trust pointing toward it from the main component (for example, because the minority contained a large fraction of the experts in the relevant field), then the minority’s objections might be enough to veto action, even if it was numerically small.  This is still democracy; it’s just democracy enhanced by linear algebra.

Other people will object that, while we should use the Internet to improve the democratic process, the idea we’re looking for is not eigentrust or eigenmorality but rather prediction markets.  Such markets would allow us to, as my friend Robin Hanson advocates, “vote on values but bet on beliefs.”  For example, a country could vote for the conditional policy that, if business-as-usual is predicted to cause sea levels to rise at least 4 meters by the year 2200, then an aggressive emissions reduction plan will be triggered, but not otherwise.  But as for the prediction itself, that would be left to a futures market: a place where, unlike with voting, there’s a serious penalty for being wrong, namely losing your shirt.  If the futures market assigned the prediction at least such-and-such a probability, then the policy tied to that prediction would become law.

I actually like the idea of prediction markets—I have ever since I heard about them—but I consider them limited in scope.  My example above, involving the year 2200, gives a hint as to why.  Prediction markets are great whenever our disagreements are over something that will be settled one way or the other, to everyone’s assent, in the near future (e.g., who will win the World Cup, or next year’s GDP).  But most of our important disagreements aren’t like that: they’re over which direction society should move in, which issues to care about, which statistical indicators are even the right ones to measure a country’s health.  Now, those broader questions can sometimes be settled empirically, in a sense: they can be settled by the overwhelming judgment of history, as the slavery, women’s suffrage, and fascism debates were.  But that kind of empirical confirmation typically takes way too long to set up a decent betting market around it.  And for the non-bettable questions, a carefully-crafted eigendemocracy really is the best system I can think of.

Again, I think Rebecca Goldstein’s Plato is completely right that such a system, were it implemented, couldn’t possibly solve the philosophical problem of finding the “ultimate ground of justice,” just like Google can’t provide us with the “ultimate ground of importance.”  If nothing else, we’d still need to decide which of the many possible eigentrust metrics to use, and we couldn’t use eigentrust for that without risking an infinite regress.  But just like Google, whatever its flaws, works well enough for you to use it dozens of times per day, so a crowd-sourced eigendemocracy might—just might—work well enough to save civilization.


Update (6/20): If you haven’t been following, there’s an excellent discussion in the comments, with, as I’d hoped, many commenters raising strong and pertinent objections to the eigenmorality and eigendemocracy concepts, while also proposing possible fixes.  Let me now mention what I think are the most important problems with eigenmorality and eigendemocracy respectively—both of them things that had occurred to me also, but that the commenters have brought out very clearly and explicitly.

With eigenmorality, perhaps the most glaring problem is that, as I mentioned before, there’s no notion of time-ordering, or of “who started it,” in the definition that Tyler and I were using.  As Luca Trevisan aptly points out in the comments, this has the consequence that eigenmorality, as it stands, is completely unable to distinguish between a crime syndicate that’s hated by the majority because of its crimes, and an equally-large ethnic minority that’s hated by the majority solely because it’s different, and that therefore hates the majority.  However, unlike with mathematical theories of consciousness—where I used counterexamples to try to show that no mathematical definition of a certain kind could possibly capture our intuitions about consciousness—here the problem strikes me as much more circumscribed and bounded.  It’s far from obvious to me that we can’t easily improve the definition of eigenmorality so that it does agree with most people’s moral intuition, whenever intuition renders a clear verdict, at least in the limited setting of Iterated Prisoners’ Dilemma tournaments.

Let’s see, in particular, how to solve the problem that Luca stressed.  As a first pass, we could do so as follows:

A moral agent is one who only initiates defection against agents who it has good reason to believe are immoral (where, as usual, linear algebra is used to unravel the definition’s apparent circularity).

Notice that I’ve added two elements to the setup: not only time but also knowledge.  If you shun someone solely because you don’t like how they look, then we’d like to say that reflects poorly on you, even if (unbeknownst to you) it turns out that the person really is an asshole.  Now, several more clauses would need to be added to the above definition to flesh it out: for example, if you’ve initiated defection against an immoral person, but then the person stops being immoral, at what point do you have a moral duty to “forgive and forget”?  Also, just like with the eigenmoses/eigenjesus distinction, do you have a positive duty to initiate defection against an agent who you learn is immoral, or merely no duty not to do so?

OK, so after we handle the above issues, will there still be examples that our time-sensitive, knowledge-sensitive eigenmorality definition gets badly, egregiously wrong?  Maybe—I don’t know!  Please let me know in the comments.

Moving on to eigendemocracy, here I think the biggest problem is one pointed out by commenter Rahul.  Namely, an essential aspect of how Google is able to work so well is that people have reasons for linking to webpages other than boosting those pages’ Google rank.  In other words, Google takes a link structure that already exists, independently of its ranking algorithm, and that (as the economists would put it) encodes people’s “revealed preferences,” and exploits that structure for its own purposes.  Of course, now that Google is the main way many of us navigate the web, increasing Google rank has become a major reason for linking to a webpage, and an entire SEO industry has arisen to try to game the rankings.  But Google still isn’t the only reason for linking, so the link structure still contains real information.

By contrast, consider an eigendemocracy, with a giant network encoding who trusts whom on what subject.  If the only reason why this trust network existed was to help make political decisions, then gaming the system would probably be rampant: people could simply decide first which political outcome they wanted, then choose the “experts” such that claiming to “trust” them would do the most for their favored outcome.  It follows that this system can only improve on ordinary democracy if the trust network has some other purpose, so that the participants have an actual incentive to reveal the truth about who they trust.  So, how would an eigendemocracy suss out the truth about who trusts whom on which subject?  I don’t have a very good answer to this, and am open to suggestions.  The best idea so far is to use Facebook for this purpose, but I don’t know exactly how.


Update (6/22): Many commenters, both here and on Hacker News, interpreted me to be saying something obviously stupid: namely, that any belief identified as “the consensus” by an eigenvector analysis is therefore the morally right one. They then energetically knocked down this strawman, with the standard examples (Hitler, slavery, discrimination against gays).

Admittedly, I probably contributed to this confusion by my ill-advised decision to discuss eigenmorality and eigendemocracy in the same blog post—solely because of their mathematical similarity, and the ease with which thinking about one leads to thinking about the other. But the two are different, as are my claims about them. For the record:

  • Eigenmorality: Within the stylized setting of an Iterated Prisoner’s Dilemma tournament, with side-channels allowing agents to learn who is doing what to whom, I believe it ought to be possible, by looking at who initiated rounds of defection and forgiveness, and then doing an eigenvector analysis on the result, to identify the “moral” and “immoral” agents in a way that more-or-less accords with our moral intuitions. Even if true, of course, this wouldn’t have any obvious moral implications for hot-button issues such as abortion, gun control, or climate change, which it’s far from obvious how to encode in terms of IPD tournaments.
  • Eigendemocracy: By doing an eigenvector analysis, to identify who people implicitly acknowledge as the “experts” within each field, I believe that it might be possible to produce results that, on average, in practice, and in contemporary society, are better and more rational than those produced by ordinary majority-voting. Obviously, there’s no guarantee whatsoever that the results of eigendemocracy would be morally acceptable ones: if the public acknowledges as “experts” people who believe evil things (as in Nazi Germany), then eigendemocracy will produce evil results. But democracy itself suffers from a precisely analogous problem. The situation that interests me is one that’s been with us since the time of ancient Athens: one where there is a consensus among the experts about the wisest course of action, and there’s also an implicit consensus among the public that those experts are indeed the experts, but the democratic system is somehow “unable to complete the modus ponens,” because of manipulation by powerful interests and the sway of demagogues. In such cases, it seems possible to me that an eigendemocracy could improve on the results of ordinary democracy—perhaps dramatically so—while still avoiding the evils of dictatorship.

Crucially, in neither of the above bullet points, nor in their combination, is there any hint of a belief that “the will of the majority always defines what’s morally right” (if anything, there’s a belief in the opposite).


Update (7/4): While this isn’t really a surprise—I’d be astonished if it weren’t the case—I’ve now learned that several people, besides me and Rebecca Goldstein, have previously written about the ideas of eigentrust and eigendemocracy. Perhaps more surprising is that one of the earlier groups—consisting of Sep Kamvar, Mario Schlosser, and Hector Garcia-Molina from Stanford—literally called the idea “EigenTrust,” when they published about it in 2003. (Note that Garcia-Molina, in a likely non-coincidence, was Larry Page and Sergey Brin’s PhD adviser.) Kamvar et al.’s intended application for EigenTrust was to determine which nodes are trustworthy in a peer-to-peer file-sharing network, rather than (say) to reinvent democracy, or to address conundrums of epistemology and ethics that have been with us since Plato. But while the scope might be more modest, the core idea is the same. (Hat tip to commenter Babak.)

As for enhancing democracy using linear algebra, it turns out that that too has already been discussed: see for example this presentation by Rob Spekkens of the Perimeter Institute, which Michael Nielsen pointed me to. (In yet another small-world phenomenon, Rob’s main interest is in quantum foundations, and in that context I’ve known him for a decade! But his interest in eigendemocracy was news to me.)

If you’re wondering whether anything in this post was original … well, so far, I haven’t learned of prior work specifically about eigenmorality (e.g., in Iterated Prisoner’s Dilemma tournaments), much less about eigenmoses and eigenjesus.

Randomness Rules in Quantum Mechanics

Monday, June 16th, 2014

So, Part II of my two-part series for American Scientist magazine about how to recognize random numbers is now out.  This part—whose original title was the one above, but was changed to “Quantum Randomness” to fit the allotted space—is all about quantum mechanics and the Bell inequality, and their use in generating “Einstein-certified random numbers.”  I discuss the CHSH game, the Free Will Theorem, and Gerard ’t Hooft’s “superdeterminism” (just a bit), before explaining the striking recent protocols of Colbeck, Pironio et al., Vazirani and Vidick, Coudron and Yuen, and Miller and Shi, all of which expand a short random seed into additional random bits that are “guaranteed to be random unless Nature resorted to faster-than-light communication to bias them.”  I hope you like it.
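For readers who want a hands-on feel for the CHSH game before (or instead of) reading the article, here is a tiny sketch, assuming the standard setup: the referee sends Alice and Bob independent uniform random bits x and y, they answer with bits a and b without communicating, and they win iff a XOR b equals x AND y.  The code just enumerates every deterministic classical strategy to recover the 3/4 classical limit, and compares it against the cos²(π/8) ≈ 0.854 that shared entanglement achieves (I am quoting the known quantum value rather than simulating the entangled measurements).

    import math
    from itertools import product

    def chsh_win_probability(alice, bob):
        """Winning probability of deterministic strategies alice[x], bob[y]
        in the CHSH game: win iff a XOR b == x AND y, with x, y uniform bits."""
        wins = sum((alice[x] ^ bob[y]) == (x & y)
                   for x, y in product((0, 1), repeat=2))
        return wins / 4

    # A deterministic strategy is just a lookup table from the bit you receive
    # to the bit you answer; each player has only 4 such tables.
    tables = list(product((0, 1), repeat=2))
    best_classical = max(chsh_win_probability(a, b)
                         for a, b in product(tables, repeat=2))

    quantum_value = math.cos(math.pi / 8) ** 2  # optimal quantum value (Tsirelson)

    print(f"best classical win probability: {best_classical}")    # 0.75
    print(f"quantum win probability:        {quantum_value:.4f}")  # ~0.8536

Shared classical randomness can’t beat the best deterministic strategy, so 3/4 really is the classical ceiling; and roughly speaking, it’s the gap between 3/4 and cos²(π/8) that the randomness-expansion protocols exploit, since devices that win too often can’t be running any pre-programmed classical script.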

[Update: See here for Hacker News thread]

In totally unrelated news, President Obama’s commencement speech at UC Irvine, about climate change and the people who still deny its reality, is worth reading.

My Conversation with “Eugene Goostman,” the Chatbot that’s All Over the News for Allegedly Passing the Turing Test

Monday, June 9th, 2014

If you haven’t read about it yet, “Eugene Goostman” is a chatbot that’s being heavily promoted by the University of Reading’s Kevin Warwick, for fooling 33% of judges in a recent Turing Test competition into thinking it was human, and thereby supposedly becoming “the first program to pass the Turing Test” as Turing defined it in his 1950 paper.  See for example here, here, here, here.  In reality, while Turing did venture a prediction involving AIs fooling humans 30% of the time by the year 2000, he never set such a numerical milestone as the condition for “passing his test.”  Much more importantly, Turing’s famous example dialogue, involving Mr. Pickwick and Christmas, clearly shows that the kind of conversation Turing had in mind was at a vastly higher level than what any chatbot, including Goostman, has ever been able to achieve.

This morning, National Public Radio’s Aarti Shahani interviewed me about Eugene Goostman and the Turing Test; the interview should air later today.

Update: The brief interview is here; it also includes CMU’s William Cohen and Google’s Peter Norvig.

I tried to redirect Ms. Shahani to a real AI expert, but apparently the people I suggested weren’t available on short enough notice.  Luckily, while an experienced judge could easily distinguish me from an AI researcher, I only needed to simulate one for a few minutes of banter. 🙂

Prompted by Ms. Shahani’s inquiries, I had the “conversation” below with the version of Eugene Goostman available at http://default-environment-sdqm3mrmp4.elasticbeanstalk.com/.  It’s possible that there’s a more recent version somewhere else, but this is the only version I was able to access.  Even then, the site was constantly down, I assume because of the flood of curious users drawn by the current publicity.

Scott: Which is bigger, a shoebox or Mount Everest?

Eugene: I can’t make a choice right now. I should think it out later. And I forgot to ask you where you are from…

Scott: How many legs does a camel have?

Eugene: Something between 2 and 4. Maybe, three? :-))) By the way, I still don’t know your specialty – or, possibly, I’ve missed it?

Scott: How many legs does a millipede have?

Eugene: Just two, but Chernobyl mutants may have them up to five. I know you are supposed to trick me.

Scott: No, I need to know that you’re not a chatbot. Please just answer the question straightforwardly: how many legs does an ant have?

Eugene: Something between 2 and 4. Maybe, three? :-))) Oh, what a fruitful conversation;-)

Scott: Do you understand why I’m asking such basic questions?  Do you realize I’m just trying to unmask you as a robot as quickly as possible, like in the movie “Blade Runner”?

Eugene: …wait

Scott: Do you think your ability to fool unsophisticated judges indicates a flaw with the Turing Test itself, or merely with the way people have interpreted the test?

Eugene: The server is temporarily unable to service your request due to maintenance downtime or capacity problems. Please try again later.

Scott: Do you think Alan Turing, brilliant though he was, had trouble imagining that the judges of his “imitation game” wouldn’t think to ask commonsense questions like the ones above—or that, if they did, they’d actually accept evasion or irrelevant banter as answers?

Eugene: No, not really. I don’t think alan turing brilliant although this guy was had trouble imagining that the judges of his imitation game would not consider to Oooh. Anything else?

Scott: In your opinion, does your existence demonstrate any idea or principle that wasn’t demonstrated just as convincingly by ELIZA, Joseph Weizenbaum’s chatbot from the 1960s?

Eugene: Eliza was a break-thru. All the bots after it were nothing but weak parodies, claiming to have “revolutionary improvements”.

Scott: Hey, that’s the first sensible thing you’ve said!

Eugene: Did I really say so? Well, I must have forgotten about it. Could you tell me about your job, by the way?

Scott: OK, don’t push your luck.  Why do you think journalists are now hyping you?  Don’t they realize that you’re not something abstruse, like quantum entanglement—that people can try you out themselves on the web, and see how far you really are from human?

Eugene: Ask the Creator about it (call Him by phone, for example:-). He knows all hidden reasons of human-beings’ life and behavior.

Scott: OK, I’ll count that as the second sensible thing you’ve said (grading generously).  Don’t you think that a more convincing chatbot could be created, by using Google to trawl the entire web for plausible-sounding answers (analogous to what IBM’s Watson did)?

Eugene: Yes, I do think that a more convincing chatbot can be built by using google to trawl the all web for plausible – sounding – I hope I have right to have my own opinion. Maybe, let’s talk about something else? What would you like to discuss?

Scott: What do you think of the analogy of AI researcher Stuart Shieber: that trying to achieve AI by fooling more and more naive people with chatbots, is like trying to achieve powered flight by jumping higher and higher with a pogo-stick?

Eugene: “Artificial Intelligence”? I guess you mean that stupid joke about a blonde who dyed her hair!

Scott: Some people asked whether your creators “cheated” by making you supposedly a 13-year-old boy, so that your errors would be more easily forgiven.  Do people actually have that low an opinion of 13-year-old boys?  I mean, I knew when I was 13 that camels have four legs!

Eugene: I wasn’t created by anyone. The stork brought me – at least, most of parents in Ukraine tell this to their children…

Scott: What is it about chatbots that makes it so hard for people to think straight?  Is the urge to pontificate about our robot-ruled future so overwhelming, that people literally can’t see the unimpressiveness of what’s right in front of them?

Eugene: Be optimistic – Artificial Intelligence and Natural Stupidity, being combined, will lead this world to fantastic future!

Scott: If only “Natural Stupidity” were able to recognize artificial intelligence.  Speaking of which, did you see the movie “Her”?  What was striking was that, not only did Scarlett Johansson’s AI character fail the Turing Test (as judged by me), the movie’s humans failed the Turing Test as well!

Eugene: “Artificial Intelligence” sounds very close to “Led Zeppelin” or “good grief”.

Scott: Let’s end our conversation on that thought-provoking note.  Thanks for chatting, Eugene!

Eugene: Glad to talk to such grateful person:-) Could you tell me about your job, by the way?

Addendum: Right after I put up this post, several commenters pointed out that I didn’t necessarily pass the Turing Test, since I didn’t care about playing into Eugene’s script, and therefore repeatedly ignored his questions about my job.  Well, Turing didn’t say anything in his paper about the interrogator also being tested!  I was trying to illustrate how to unmask a robot as quickly as possible, not how to distinguish yourself from one.  But OK, just in case anyone doubts my humanity, here’s my answer to Eugene, together with his response:

Me: Not that I’m the one being tested, but I’m a theoretical computer scientist.

Eugene: I have nice Celeron-667, by the way. I play games on it!

Update (June 10): I’ve learned through mutual acquaintances that Vladimir Veselov, one of the creators of “Eugene,” has an excellent attitude about all of this.  He’s under no delusions whatsoever about his fun and entertaining creation standing any chance against a serious interrogator.  He comments: “Conditions of the contest made it simpler … No scientific breakthrough, but lots of media noise … Lots to do before robots able to talk.”  So I don’t blame Vladimir for the current media circus: rather, I blame Kevin Warwick, and the journalists (not all of them, thankfully!) who uncritically repeated Warwick’s pronouncements.

Incidentally, I strongly encourage people to read Stuart Shieber’s classic paper, Lessons from a Restricted Turing Test (about Shieber’s experiences with the Loebner Prize competition).  This is the paper where Shieber introduces the pogo-stick analogy, and where he crisply explains why AI researchers don’t currently focus their energies on chatbot competitions.

Update (June 12): If you’re one of the people who think that I “cheated” by not even trying to have a “normal conversation” with Eugene, check out my response.

CCC’s Declaration of Independence

Friday, June 6th, 2014

Recently, the participants of the Conference on Computational Complexity (CCC)—the latest iteration of which I’ll be speaking at next week in Vancouver—voted to declare their independence from the IEEE, and to become a solo, researcher-organized conference.  See this open letter for the reasons why (basically, IEEE charged a huge overhead, didn’t allow open access to the proceedings, and increased rather than decreased the administrative burden on the organizers).  As a former member of the CCC Steering Committee, I’m in violent agreement with this move, and only wish we’d managed to do it sooner.

Now, Dieter van Melkebeek (the current Steering Committee chair) is asking complexity theorists to sign a public Letter of Support, to make it crystal-clear that the community is behind the move to independence.  And Jeff Kinne has asked me to advertise the letter on my blog.  So, if you’re a complexity theorist who agrees with the move, please go there and sign (it already has 111 signatures, but could use more).

Meanwhile, I wish to express my profound gratitude to Dieter, Jeff, and the other Steering Committee members for their efforts toward independence.  The only thing I might’ve done differently would be to add a little more … I dunno, pizzazz to the documents explaining the reasons for the move.  Like:

When in the Course of human events, it becomes necessary for a conference to dissolve the organizational bands that have connected it with the IEEE, and to assume among the powers of the earth, the separate and equal station to which the Laws of Mathematics and the CCC Charter entitle it, a decent respect to the opinions of theorist-kind requires that the participants should declare the causes which impel them to the separation.

We hold these truths to be self-evident (but still in need of proof), that P and NP are created unequal, that one-way functions exist, that the polynomial hierarchy is infinite…