## Archive for the ‘Nerd Interest’ Category

### Steven Pinker’s inflammatory proposal: universities should prioritize academics

Thursday, September 11th, 2014

If you haven’t yet, I urge you to read Steven Pinker’s brilliant piece in The New Republic about what’s broken with America’s “elite” colleges and how to fix it.  The piece starts out as an evisceration of an earlier New Republic article on the same subject by William Deresiewicz.  Pinker agrees with Deresiewicz that something is wrong, but finds Deresiewicz’s diagnosis of what’s wrong to be lacking.  The rest of Pinker’s article sets out his own vision, which involves America’s top universities taking the radical step of focusing on academics, and returning extracurricular activities like sports to their rightful place as extras: ways for students to unwind, rather than a university’s primary reason for existing, or a central criterion for undergraduate admissions.  Most controversially, this would mean that the admissions process at US universities would become more like that in virtually every other advanced country: a relatively straightforward matter of academic performance, rather than an exercise in peering into the applicants’ souls to find out whether they have a special je ne sais quoi, with the students (and their parents) desperately gaming the intentionally opaque system by paying consultants tens of thousands of dollars to develop souls for them.

(Incidentally, readers who haven’t experienced it firsthand might not be able to understand, or believe, just how strange the undergraduate admissions process in the US has become, although Pinker’s anecdotes give some idea.  I imagine anthropologists centuries from now studying American elite university admissions, and the parenting practices that have grown up around them, alongside cannibalism, kamikaze piloting, and other historical extremes of the human condition.)

Pinker points out that a way to assess students’ ability to do college coursework—much more quickly and accurately than by relying on the soul-detecting skills of admissions officers—has existed for a century.  It’s called the standardized test.  But unlike in the rest of the world (even in ultraliberal Western Europe), standardized tests are politically toxic in the US, seen as instruments of racism, classism, and oppression.  Pinker reminds us of the immense irony here: standardized tests were invented as a radical democratizing tool, as a way to give kids from poor and immigrant families the chance to attend colleges that had previously only been open to the children of the elite.  They succeeded at that goal—too well for some people’s comfort.

We now know that the Ivies’ current emphasis on sports, “character,” “well-roundedness,” and geographic diversity in undergraduate admissions was consciously designed (read that again) in the 1920s, by the presidents of Harvard, Princeton, and Yale, as a tactic to limit the enrollment of Jews.  Nowadays, of course, the Ivies’ “holistic” admissions process no longer fulfills that original purpose, in part because American Jews learned to play the “well-roundedness” game as well as anyone, shuttling their teenage kids between sports, band practice, and faux charity work, while hiring professionals to ghostwrite application essays that speak searingly from the heart.  Today, a major effect of “holistic” admissions is instead to limit the enrollment of Asian-Americans (especially recent immigrants), who tend disproportionately to have superb SAT scores, but to be deficient in life’s more meaningful dimensions, such as lacrosse, student government, and marching band.  More generally—again, pause to wallow in the irony—our “progressive” admissions process works strongly in favor of the upper-middle-class families who know how to navigate it, and against the poor and working-class families who don’t.

Defenders of the status quo have missed this reality on the ground, it seems to me, because they’re obsessed with the notion that standardized tests are “reductive”: that is, that they reduce a human being to a number.  Aren’t there geniuses who bomb standardized tests, they ask, as well as unimaginative grinds who ace them?  And if you make test scores a major factor in admissions, then won’t students and teachers train for the tests, and won’t that pervert open-ended intellectual curiosity?  The answer to both questions, I think, is clearly “yes.”  But the status-quo-defenders never seem to take the next step, of examining the alternatives to standardized testing, to see whether they’re even worse.

I’d say the truth is this: spots at the top universities are so coveted, and so much scarcer than the demand for them, that no matter what you use as your admissions criterion, that thing will instantly get fetishized and turned into a commodity by students, parents, and companies eager to profit from their anxiety.  If it’s grades, you’ll get a grades fetish; if sports, you’ll get a sports fetish; if community involvement, you’ll get soup kitchens sprouting up for the sole purpose of giving ambitious 17-year-olds something to write about in their application essays.  If Harvard and Princeton announced that from now on, they only wanted the most laid-back, unambitious kids, the ones who spent their summers lazily skipping stones in a lake, rather than organizing their whole lives around getting into Harvard and Princeton, tens of thousands of parents in the New York metropolitan area would immediately enroll their kids in relaxation and stone-skipping prep courses.  So, given that reality, why not at least make the fetishized criterion one that’s uniform, explicit, predictively valid, relatively hard to game, and relevant to universities’ core intellectual mission?

(Here, I’m ignoring criticisms specific to the SAT: for example, that it fails to differentiate students at the extreme right end of the bell curve, thereby forcing the top schools to use other criteria.  Even if those criticisms are true, they could easily be fixed by switching to other tests.)

I admit that my views on this matter might be colored by my strange (though as I’ve learned, not at all unique) experience, of getting rejected from almost every “top” college in the United States, and then, ten years later, getting recruited for faculty jobs by the very same institutions that had rejected me as a teenager.  Once you understand how undergraduate admissions work, the rejections were unsurprising: I was a 15-year-old with perfect SATs and a published research paper, but not only was I young and immature, with spotty grades and a weird academic trajectory, I had no sports, no music, no diverse leadership experiences.  I was a narrow, linear, A-to-B thinker who lacked depth and emotional intelligence: the exact opposite of what Harvard and Princeton were looking for in every way.  The real miracle is that despite these massive strikes against me, two schools—Cornell and Carnegie Mellon—were nice enough to give me a chance.  (I ended up going to Cornell, where I got a great education.)

Some people would say: so then what’s the big deal?  If Harvard or MIT reject some students that maybe they should have admitted, those students will simply go elsewhere, where—if they’re really that good—they’ll do every bit as well as they would’ve done at the so-called “top” schools.  But to me, that’s uncomfortably close to saying: there are millions of people who go on to succeed in life despite childhoods of neglect and poverty.  Indeed, some of those people succeed partly because of their rough childhoods, which served as the crucibles of their character and resolve.  Ergo, let’s neglect our own children, so that they too can have the privilege of learning from the school of hard knocks just like we did.  The fact that many people turn out fine despite unfairness and adversity doesn’t mean that we should inflict unfairness if we can avoid it.

Let me end with an important clarification.  Am I saying that, if I had dictatorial control over a university (ha!), I would base undergraduate admissions solely on standardized test scores?  Actually, no.  Here’s what I would do: I would admit the majority of students mostly based on test scores.  A minority, I would admit because of something special about them that wasn’t captured by test scores, whether that something was musical or artistic talent, volunteer work in Africa, a bestselling smartphone app they’d written, a childhood as an orphaned war refugee, or membership in an underrepresented minority.  Crucially, though, the special something would need to be special.  What I wouldn’t do is what’s done today: namely, to turn “specialness” and “well-roundedness” into commodities that the great mass of applicants have to manufacture before they can even be considered.

Other than that, I would barely look at high-school grades, regarding them as too variable from one school to another.  And, while conceding it might be impossible, I would try hard to keep my university in good enough financial shape that it didn’t need any legacy or development admits at all.

Update (Sep. 14): For those who feel I’m exaggerating the situation, please read the story of commenter Jon, about a homeschooled 15-year-old doing graduate-level work in math who, three years ago, was refused undergraduate admission to both Berkeley and Caltech, with the math faculty powerless to influence the admissions officers. See also my response.

### Do theoretical computer scientists despise practitioners? (Answer: no, that’s crazy)

Thursday, August 28th, 2014

A roboticist and Shtetl-Optimized fan named Jon Groff recently emailed me the following suggestion for a blog entry:

I think a great idea for an entry would be the way that in fields like particle physics the theoreticians and experimentalists get along quite well but in computer science and robotics in particular there seems to be a great disdain for the people that actually do things from the people that like to think about them. Just thought I’d toss that out there in case you are looking for some subject matter.

After I replied (among other things, raising my virtual eyebrows over his rosy view of the current state of theoretician/experimentalist interaction in particle physics), Jon elaborated on his concerns in a subsequent email:

[T]here seems to be this attitude in CS that getting your hands dirty is unacceptable. You haven’t seen it because you sit at lofty heights and I tend to think you always have. I have been pounding out code since ferrite cores. Yes, Honeywell 1648A, so I have been looking up the posterior of this issue rather than from the forehead as it were. I guess my challenge would be to find a noteworthy computer theoretician somewhere and ask him:
1) What complete, working, currently functioning systems have you designed?
2) How much of the working code did you contribute?
3) Which of these systems is still operational and in what capacity?
Or say, if the person was a famous robotics professor or something you may ask:
1) Have you ever actually ‘built’ a ‘robot’?
2) Could you, if called upon, design and build an easily tasked robot safe for home use using currently available materials and code?

So I wrote a second reply, which Jon encouraged me to turn into a blog post (kindly giving me permission to quote him).  In case it’s of interest to anyone else, my reply is below.

Dear Jon,

For whatever it’s worth, when I was an undergrad, I spent two years working as a coder for Cornell’s RoboCup robot soccer team, handling things like the goalie.  (That was an extremely valuable experience, one reason being that it taught me how badly I sucked at meeting deadlines, documenting my code, and getting my code to work with other people’s code.)  Even before that, I wrote shareware games with my friend Alex Halderman (now a famous computer security expert at U. of Michigan); we made almost $30 selling them.  And I spent several summers working on applied projects at Bell Labs, back when that was still a thing.  And by my count, I’ve written four papers that involved code I personally wrote and experiments I did (one on hypertext, one on stylometric clustering, one on Boolean function query properties, one on improved simulation of stabilizer circuits—for the last of these, the code is actually still used by others).  While this is all from the period 1994-2004 (these days, if I need any coding done, I use the extremely high-level programming language called “undergrad”), I don’t think it’s entirely true to say that I “never got my hands dirty.”

But even if I hadn’t had any of those experiences, or other theoretical computer scientists hadn’t had analogous ones, your questions still strike me as unfair.  They’re no more fair than cornering a star coder or other practical person with questions like, “Have you ever proved a theorem?  A nontrivial theorem?  Why is BPP contained in P/poly?  What’s the cardinality of the set of Turing degrees?”  If the coder can’t easily answer these questions, would you say it means that she has “disdain for theorists”?  (I was expecting some discussion of this converse question in your email, and was amused when I didn’t find any.)  Personally, I’d say “of course not”: maybe the coder is great at coding, doesn’t need theory very much on a day-to-day basis and doesn’t have much free time to learn it, but (all else equal) would be happy to know more.  Maybe the coder likes theory as an outsider, even has friends from her student days who are theorists, and who she’d go to if she ever did need their knowledge for her work.  Or maybe not.  Maybe she’s an asshole who looks down on anyone who doesn’t have the exact same skill-set that she does.  But I certainly couldn’t conclude that from her inability to answer basic theory questions.

I’d say just the same about theorists.  If they don’t have as much experience building robots as they should have, don’t know as much about large software projects as they should know, etc., then those are all defects to add to the long list of their other, unrelated defects.  But it would be a mistake to assume that they failed to acquire this knowledge because of disdain for practical people, rather than for mundane reasons like busyness or laziness.  Indeed, it’s also possible that they respect practical people all the more, because they tried to do the things the practical people are good at, and discovered for themselves how hard they were.  Maybe they became theorists partly because of that self-discovery—that was certainly true in my case.  Maybe they’d be happy to talk to or learn from a practical roboticist like yourself, but are too shy or too nerdy to initiate the conversation.

Speaking of which: yes, let’s let bloom a thousand collaborations between theorists and practitioners!  Those are the lifeblood of science.
On the other hand, based on personal experience, I’m also sensitive to the effect where, because of pressures from funding agencies, theorists have to try to pretend their work is “practically relevant” when they’re really just trying to discover something cool, while meantime, practitioners have to pretend their work is theoretically novel or deep, when really, they’re just trying to write software that people will want to use.  I’d love to see both groups freed from this distorting influence, so that they can collaborate for real reasons rather than fake ones.

(I’ve also often remarked that, if I hadn’t gravitated to the extreme theoretical end of computer science, I think I might have gone instead to the extreme practical end, rather than to any of the points in between.  That’s because I hate the above-mentioned distorting influence: if I’m going to try to understand the ultimate limits of computation, then I should pursue that wherever it leads, even if it means studying computational models that won’t be practical for a million years.  And conversely, if I’m going to write useful software, I should throw myself 100% into that, even if it means picking an approach that’s well-understood, clunky, and reliable over an approach that’s new, interesting, elegant, and likely to fail.)

Best,
Scott

### US State Department: Let in cryptographers and other scientists

Saturday, July 26th, 2014

Predictably, my last post attracted plenty of outrage (some of it too vile to let through), along with the odd commenter who actually agreed with what I consider my fairly middle-of-the-road, liberal Zionist stance.  But since the outrage came from both sides of the issue, and the two sides were outraged about the opposite things, I guess I should feel OK about it.  Still, it’s hard not to smart from the burns of vituperation, so today I’d like to blog about a very different political issue: one where hopefully almost all Shtetl-Optimized readers will actually agree with me (!).

I’ve learned from colleagues that, over the past year, foreign-born scientists have been having enormously more trouble getting visas to enter the US than they used to.  The problem, I’m told, is particularly severe for cryptographers: embassy clerks are now instructed to ask specifically whether computer scientists seeking to enter the US work in cryptography.  If an applicant answers “yes,” it triggers a special process where the applicant hears nothing back for months, and very likely misses the workshop in the US that he or she had planned to attend.

The root of the problem, it seems, is something called the Technology Alert List (TAL), which has been around for a while—the State Department beefed it up in response to the 9/11 attacks—but which, for some unknown reason, is only now being rigorously enforced.  (Being marked as working in one of the sensitive fields on this list is apparently called “getting TAL’d.”)

The issue reached a comical extreme last October, when Adi Shamir, the “S” in RSA, Turing Award winner, and foreign member of the US National Academy of Sciences, was prevented from entering the US to speak at a “History of Cryptology” conference sponsored by the National Security Agency.  According to Shamir’s open letter detailing the incident, not even his friends at the NSA, or the president of the NAS, were able to grease the bureaucracy at the State Department for him.
It should be obvious to everyone that a crackdown on academic cryptographers serves no national security purpose whatsoever, and if anything harms American security and economic competitiveness, by diverting scientific talent to other countries.  (As Shamir delicately puts it, “the number of terrorists among the members of the US National Academy of Science is rather small.”)  So:

1. Any readers who have more facts about what’s going on, or personal experiences, are strongly encouraged to share them in the comments section.
2. Any readers who might have any levers of influence to pull on this issue—a Congressperson to write to, a phone call to make, an Executive Order to issue (I’m talking to you, Barack), etc.—are strongly encouraged to pull them.

### A Physically Universal Cellular Automaton

Thursday, June 26th, 2014

It’s been understood for decades that, if you take a simple discrete rule—say, a cellular automaton like Conway’s Game of Life—and iterate it over and over, you can very easily get the capacity for universal computation.  In other words, your cellular automaton becomes able to implement any desired sequence of AND, OR, and NOT gates, store and retrieve bits in a memory, and even (in principle) run Windows or Linux, albeit probably veerrryyy sloowwllyyy, using a complicated contraption of thousands or millions of cells to represent each bit of the desired computation.  If I’m not mistaken, a guy named Wolfram even wrote an entire 1200-page-long book about this phenomenon (see here for my 2002 review).

But suppose we want more than mere computational universality.  Suppose we want “physical” universality: that is, the ability to implement any transformation whatsoever on any finite region of the cellular automaton’s state, by suitably initializing the complement of that region.  So for example, suppose that, given some 1000×1000 square of cells, we’d like to replace every “0” cell within that square by a “1” cell, and vice versa.  Then physical universality would mean that we could do that, eventually, by some “machine” we could build outside the 1000×1000 square of interest.

You might wonder: are we really asking for more here than just ordinary computational universality?  Indeed we are.  To see this, consider Conway’s famous Game of Life.  Even though Life has been proved to be computationally universal, it’s not physically universal in the above sense.  The reason is simply that Life’s evolution rule is not time-reversible.  So if, for example, there were a lone “1” cell deep inside the 1000×1000 square, surrounded by a sea of “0” cells, then that “1” cell would immediately disappear without a trace, and no amount of machinery outside the square could possibly detect that it was ever there.

Furthermore, even cellular automata that are both time-reversible and computationally universal could fail to be physically universal.  Suppose, for example, that our CA allowed for the construction of “impenetrable walls,” through which no signal could pass.  And suppose that our 1000×1000 region contained a hollow box built out of these impenetrable walls.  Then, by definition, no amount of machinery that we built outside the region could ever detect whether there was a particle bouncing around inside the box.

So, in summary, we now face a genuinely new question: Does there exist a physically universal cellular automaton, or not?  This question had sort of vaguely bounced around in my head (and probably other people’s) for years.
But as far as I know, it was first asked, clearly and explicitly, in a lovely 2010 preprint by Dominik Janzing.

Today, I’m proud to report that Luke Schaeffer, a first-year PhD student in my group, has answered Janzing’s question in the affirmative, by constructing the first cellular automaton (again, to the best of our knowledge) that’s been proved to be physically universal.  Click here for Luke’s beautifully-written preprint about his construction, and click here for a webpage that he’s prepared, explaining the details of the construction using color figures and videos.  Even if you don’t have time to get into the nitty-gritty, the videos on the webpage should give you a sense for the intricacy of what he accomplished.

Very briefly, Luke first defines a reversible, two-dimensional CA involving particles that move diagonally across a square lattice, in one of four possible directions (northeast, northwest, southeast, or southwest).  The number of particles is always conserved.  The only interesting behavior occurs when three of the particles “collide” in a single 2×2 square, and Luke gives rules (symmetric under rotations and reflections) that specify what happens then.

Given these rules, it’s possible to prove that any configuration whatsoever of finitely many particles will “diffuse,” after not too many time steps, into four unchanging clouds of particles, which thereafter simply move away from each other in the four diagonal directions for all eternity.  This has the interesting consequence that Luke’s CA, when initialized with finitely many particles, cannot be capable of universal computation in Turing’s sense.  In other words, there’s no way, using n initial particles confined to an n×n box, to set up a computation that continues to do something interesting after 2^n or 2^(2^n) time steps, let alone forever.

On the other hand, using finitely many particles, one can also prove that the CA can perform universal computation in the Boolean circuit sense.  In other words, we can implement AND, OR, and NOT gates, and by chaining them together, can compute any Boolean function that we like on any fixed number of input bits (with the number of input bits generally much smaller than the number of particles).  And this “circuit universality,” rather than Turing-machine universality, is all that’s implied anyway by physical universality in Janzing’s sense.  (As a side note, the distinction between circuit and Turing-machine universality seems to deserve much more attention than it usually gets.)

Anyway, while the “diffusion into four clouds” aspect of Luke’s CA might seem annoying, it turns out to be extremely useful for proving physical universality.  For it has the consequence that, no matter what the initial state was inside the square we cared about, that state will before too long be encoded into the states of four clouds headed away from the square.  So then, “all” we need to do is engineer some additional clouds of particles, initially outside the square, that

1. intercept the four escaping clouds,
2. “decode” the contents of those clouds into a flat sequence of bits,
3. apply an arbitrary Boolean circuit to that bit sequence, and then
4. convert the output bits of the Boolean circuit into new clouds of particles converging back onto the square.

So, well … that’s exactly what Luke did.
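To give a concrete feel for the setup, here is a minimal Python sketch of just the free-motion part of such a particle CA.  To be clear, this is a toy illustration only: the grid below is small and periodic rather than infinite, and Luke’s actual three-particle collision rules are not reproduced (see his preprint for those).  The point is just the state representation, and the fact that particle number is conserved under free diagonal motion.

```python
import numpy as np

# Four particle types, one per diagonal direction; (row, col) displacements.
DIRS = {"NE": (-1, 1), "NW": (-1, -1), "SE": (1, 1), "SW": (1, -1)}

N = 16  # toy lattice size (the real CA lives on an infinite lattice)
# state[d][r, c] == True means a d-moving particle occupies cell (r, c).
state = {d: np.zeros((N, N), dtype=bool) for d in DIRS}
state["NE"][8, 8] = True
state["SW"][3, 12] = True

def step(state):
    """One time step of free diagonal motion; collisions are NOT modeled."""
    new_state = {}
    for d, grid in state.items():
        dr, dc = DIRS[d]
        # np.roll gives periodic boundaries, purely for illustration.
        new_state[d] = np.roll(np.roll(grid, dr, axis=0), dc, axis=1)
    return new_state

for _ in range(5):
    state = step(state)

print(sum(int(g.sum()) for g in state.values()))  # prints 2: number conserved
```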
And just in case there’s any doubt about the correctness of the end result, Luke actually implemented his construction in the cellular-automaton simulator Golly, where you can try it out yourself (he explains how on his webpage).

So far, of course, I’ve skirted past the obvious question of “why.”  Who cares that we now know that there exists a physically-universal CA?  Apart from the sheer intrinsic coolness, a second reason is that I’ve been interested for years in how to make finer (but still computer-sciencey) distinctions, among various “candidate laws of physics,” than simply saying that some laws are computationally universal and others aren’t, or some are easy to simulate on a standard Turing machine and others hard.  For ironically, the very pervasiveness of computational universality (the thing Wolfram goes on and on about) makes it of limited usefulness in distinguishing physical laws: almost any sufficiently-interesting set of laws will turn out to be computationally universal, at least in the circuit sense if not the Turing-machine one!

On the other hand, many of these laws will be computationally universal only because of extremely convoluted constructions, which fall apart if even the tiniest error is introduced.  And in other cases, we’ll be able to build a universal computer, all right, but that computer will be relatively impotent to obtain interesting input about its physical environment, or to make its output affect the gross features of the CA’s physical state.  If you like, we’ll have a recipe for creating a universe full of ivory-tower, eggheaded nerds, who can search for counterexamples to Goldbach’s Conjecture but can’t build a shelter to protect themselves from a hail of “1” bits, or even learn whether such a hail is present or not, or decide which other part of the CA to travel to.

As I see it, Janzing’s notion of physical universality is directly addressing this “egghead” problem, by asking whether we can build not merely a universal computer but a particularly powerful kind of robot: one that can effect a completely arbitrary transformation (given enough time, of course) on any part of its physical environment.  And the answer turns out to be that, at least in a weird CA consisting of clouds of diagonally-moving particles, we can indeed do that.  The question of whether we can also achieve physical universality in more natural CAs remains open (and in his Future Work section, Luke discusses several ways of formalizing what we mean by “more natural”).

As Luke mentions in his introduction, there’s at least a loose connection here to David Deutsch’s recent notion of constructor theory (see also this followup paper by Deutsch and Chiara Marletto).  Basically, Deutsch and Marletto want to reconstruct all of physics taking what can and can’t be constructed (i.e., what kinds of transformations are possible) as the most primitive concept, rather than (as in ordinary physics) what will or won’t happen (i.e., how the universe’s state evolves with time).  The hope is that, once physics was reconstructed in this way, we could then (for example) state and answer the question of whether or not scalable quantum computers can be built as a principled question of physics, rather than as a “mere” question of engineering.

Now, regardless of what you think about these audacious goals, or about Deutsch and Marletto’s progress (or lack of progress?)
so far toward achieving them, it’s certainly a worthwhile project to study what sorts of machines can and can’t be constructed, as a matter of principle, both in the real physical world and in other, hypothetical worlds that capture various aspects of our world.  Indeed, one could say that that’s what many of us in quantum information and theoretical computer science have been trying to do for decades!  However, Janzing’s “physical universality” problem hints at a different way to approach the project: starting with some far-reaching desire (say, to be able to implement any transformation whatsoever on any finite region), can we engineer laws of physics that make that desire possible?  If so, then how close can we make those laws to “our” laws?

Luke has now taken a first stab at answering these questions.  Whether his result ends up merely being a fun, recreational “terminal branch” on the tree of science, or a trunk leading to something more, probably just depends on how interested people get.  I have no doubt that our laws of physics permit the creation of additional papers on this topic, but whether they do or don’t is (as far as I can see) merely a question of contingency and human will, not a constructor-theoretic question.

### Eigenmorality

Wednesday, June 18th, 2014

This post is about an idea I had around 1997, when I was 16 years old and a freshman computer-science major at Cornell.

Back then, I was extremely impressed by a research project called CLEVER, which one of my professors, Jon Kleinberg, had led while working at IBM Almaden.  The idea was to use the link structure of the web itself to rank which web pages were most important, and therefore which ones should be returned first in a search query.  Specifically, Kleinberg defined “hubs” as pages that linked to lots of “authorities,” and “authorities” as pages that were linked to by lots of “hubs.”  At first glance, this definition seems hopelessly circular, but Kleinberg observed that one can break the circularity by just treating the World Wide Web as a giant directed graph, and doing some linear algebra on its adjacency matrix.  Equivalently, you can imagine an iterative process where each web page starts out with the same hub/authority “starting credits,” but then in each round, the pages distribute their credits among their neighbors, so that the most popular pages get more credits, which they can then, in turn, distribute to their neighbors by linking to them.

I was also impressed by a similar research project called PageRank, which was proposed later by two guys at Stanford named Sergey Brin and Larry Page.  Brin and Page dispensed with Kleinberg’s bipartite hubs-and-authorities structure in favor of a more uniform structure, and made some other changes, but otherwise their idea was very similar.  At the time, of course, I didn’t know that CLEVER was going to languish at IBM, while PageRank (renamed Google) was going to expand to roughly the size of the entire world’s economy.
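To make the iterative process concrete, here is a minimal sketch of the hubs-and-authorities update in Python, run on a made-up four-page link graph.  (This is the textbook form of Kleinberg’s algorithm, not IBM’s actual CLEVER code, and the graph is invented for illustration.)

```python
import numpy as np

# Invented toy web: A[i, j] = 1 if page i links to page j.
A = np.array([
    [0, 1, 1, 0],
    [0, 0, 1, 0],
    [0, 0, 0, 1],
    [1, 0, 0, 0],
], dtype=float)

hubs = np.ones(4)   # equal "starting credits" for every page
auths = np.ones(4)
for _ in range(100):
    auths = A.T @ hubs              # an authority is linked to by good hubs
    hubs = A @ auths                # a hub links to good authorities
    auths /= np.linalg.norm(auths)  # renormalize so credits don't blow up
    hubs /= np.linalg.norm(hubs)

# The fixed points are the principal eigenvectors of A^T A and A A^T.
print("authorities:", auths.round(3))
print("hubs:       ", hubs.round(3))
```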
In any case, the question I asked myself about CLEVER/PageRank was not the one that, maybe in retrospect, I should have asked: namely, “how can I leverage the fact that I know the importance of this idea before most people do, in order to make millions of dollars?”  Instead I asked myself: “what other ‘vicious circles’ in science and philosophy could one unravel using the same linear-algebra trick that CLEVER and PageRank exploit?”  After all, CLEVER and PageRank were both founded on what looked like a hopelessly circular intuition: “a web page is important if other important web pages link to it.”  Yet they both managed to use math to defeat the circularity.  All you had to do was find an “importance equilibrium,” in which your assignment of “importance” to each web page was stable under a certain linear map.  And such an equilibrium could be shown to exist—indeed, to exist uniquely.

Searching for other circular notions to elucidate using linear algebra, I hit on morality.  Philosophers from Socrates on, I was vaguely aware, had struggled to define what makes a person “moral” or “virtuous,” without tacitly presupposing the answer.  Well, it seemed to me that, as a first attempt, one could do a lot worse than the following:

A moral person is someone who cooperates with other moral people, and who refuses to cooperate with immoral people.

Obviously one can quibble with this definition on numerous grounds: for example, what exactly does it mean to “cooperate,” and which other people are relevant here?  If you don’t donate money to starving children in Africa, have you implicitly “refused to cooperate” with them?  What’s the relative importance of cooperating with good people and withholding cooperation with bad people, of kindness and justice?  Is there a duty not to cooperate with bad people, or merely the lack of a duty to cooperate with them?  Should we consider intent, or only outcomes?  Surely we shouldn’t hold someone accountable for sheltering a burglar, if they didn’t know about the burgling?  Also, should we compute your “total morality” by simply summing over your interactions with everyone else in your community?  If so, then can a career’s worth of lifesaving surgeries numerically overwhelm the badness of murdering a single child?

For now, I want you to set all of these important questions aside, and just focus on the fact that the definition doesn’t even seem to work on its own terms, because of circularity.  How can we possibly know which people are moral (and hence worthy of our cooperation), and which ones immoral (and hence unworthy), without presupposing the very thing that we seek to define?

Ah, I thought—this is precisely where linear algebra can come to the rescue!  Just like in CLEVER or PageRank, we can begin by giving everyone in the community an equal number of “morality starting credits.”  Then we can apply an iterative update rule, where each person A can gain morality credits by cooperating with each other person B, and A gains more credits the more credits B has already.  We apply the rule over and over, until the number of morality credits per person converges to an equilibrium.  (Or, of course, we can shortcut the process by simply finding the principal eigenvector of the “cooperation matrix,” using whatever algorithm we like.)  We then have our objective measure of morality for each individual, solving a 2400-year-old open problem in philosophy.
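In code, the whole proposal really is just a few lines of linear algebra.  Here is a minimal sketch, with an invented three-person cooperation matrix (a toy illustration only; the real experiments discussed below use Tyler Singer-Clark’s Python code):

```python
import numpy as np

# Toy data: C[i, j] = how much person i cooperates with person j.
# Person 2 mostly refuses to cooperate with anyone.
C = np.array([
    [0.0, 0.9, 0.8],
    [0.9, 0.0, 0.8],
    [0.1, 0.1, 0.0],
])

m = np.ones(len(C)) / len(C)  # equal "morality starting credits"
for _ in range(200):
    m = C @ m     # gain credits by cooperating with the (currently) moral
    m /= m.sum()  # renormalize; the fixed point is C's principal eigenvector

print("morality scores:", m.round(3))  # person 2 comes out least moral
```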
The next step, I figured, would be to hack together some code that computed this “eigenmorality” metric, and then see what happened when I ran the code to measure the morality of each participant in a simulated society.  What would happen?  Would the results conform to my pre-theoretic intuitions about what sort of behavior was moral and what wasn’t?  If not, then would watching the simulation give me new ideas about how to improve the morality metric?  Or would it be my intuitions themselves that would change?

Unfortunately, I never got around to the “coding it up” part—there’s a reason why I became a theorist!  The eigenmorality idea went onto my back burner, where it stayed for the next 16 years: 16 years in which our world descended ever further into darkness, lacking a principled way to quantify morality.  But finally, this year, two separate things have happened on the eigenmorality front, and that’s why I’m blogging about it now.

#### Eigenjesus and Eigenmoses

The first thing that’s happened is that Tyler Singer-Clark, my superb former undergraduate advisee, did code up eigenmorality metrics and test them out on a simulated society, for his MIT senior thesis project.  You can read Tyler’s 12-page report here—it’s a fun, enjoyable, thought-provoking first research paper, one that I wholeheartedly recommend.  Or, if you’d like to experiment yourself with the Python code, you can download it here from github.  (Of course, all opinions expressed in this post are mine alone, not necessarily Tyler’s.)

Briefly, Tyler examined what eigenmorality has to say in the setting of an Iterated Prisoner’s Dilemma (IPD) tournament.  The Iterated Prisoner’s Dilemma is the famous game in which two players meet repeatedly, and in each turn can either “Cooperate” or “Defect.”  The absolute best thing, from your perspective, is if you defect while your partner cooperates.  But you’re also pretty happy if you both cooperate.  You’re less happy if you both defect, while the worst (from your standpoint) is if you cooperate while your partner defects.  At each turn, when contemplating what to do, you have the entire previous history of your interaction with this partner available to you.  And thus, for example, you can decide to “punish” your partner for past defections, “reward” her for past cooperations, or “try to take advantage” by unilaterally defecting and seeing what happens.  At each turn, the game has some small constant probability of ending—so you know approximately how many times you’ll meet this partner in the future, but you don’t know exactly when the last turn will be.  Your score, in the game, is then the sum-total of your score over all turns and all partners (where each player meets each other player once).

In the late 1970s, as recounted in his classic work The Evolution of Cooperation, Robert Axelrod invited people all over the world to submit computer programs for playing this game, which were then pitted against each other in the world’s first serious IPD tournament.  And, in a tale that’s been retold in hundreds of popular books, while many people submitted complicated programs that used machine learning, etc. to try to suss out their opponents, the program that won—hands-down, repeatedly—was TIT_FOR_TAT, a few lines of code submitted by the psychologist Anatol Rapoport to implement an ancient moral maxim.
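(It really is just a few lines.  Here is a sketch in Python, paraphrasing the strategy rather than reproducing Rapoport’s original tournament submission:)

```python
def tit_for_tat(opponent_history):
    """Cooperate on the first move; thereafter, copy the opponent's last move."""
    if not opponent_history:
        return "C"               # first move: cooperate
    return opponent_history[-1]  # mirror the opponent's previous move

# For example: it opens with cooperation, and punishes a defection right away.
assert tit_for_tat([]) == "C"
assert tit_for_tat(["C", "D"]) == "D"
```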
TIT_FOR_TAT starts out by cooperating; thereafter, it simply does whatever its opponent did in the last move, swiftly rewarding every cooperation and punishing every defection, and ignoring the rest of the previous history.

In the decades since Axelrod, running Iterated Prisoners’ Dilemma tournaments has become a minor industry, with countless variations explored (for example, “evolutionary” versions, and versions allowing side-communication between the players), countless new strategies invented, and countless papers published.  To make a long story short, TIT_FOR_TAT continues to do quite well across a wide range of environments, but depending on the mix of players present, other strategies can sometimes beat TIT_FOR_TAT.  (As one example, if there’s a sizable minority of colluding players, who recognize each other by cooperating and defecting in a prearranged sequence, then those players can destroy TIT_FOR_TAT and other “simple” strategies, by cooperating with one another while defecting against everyone else.)

Anyway, Tyler sets up and runs a fairly standard IPD tournament, with a mix of strategies that includes TIT_FOR_TAT, TIT_FOR_TWO_TATS, other TIT_FOR_TAT variations, PAVLOV, FRIEDMAN, EATHERLY, CHAMPION (see the paper for details), and degenerate strategies like always defecting, always cooperating, and playing randomly.  However, Tyler then asks an unusual question about the IPD tournament: namely, purely on the basis of the cooperate/defect sequences, which players should we judge to have acted morally toward their partners?

It might be objected that the players didn’t “know” they were going to be graded on morality: as far as they knew, they were just trying to maximize their individual utilities.  The trouble with that objection is that the players didn’t “know” they were trying to maximize their utilities either!  The players are bots, which do whatever their code tells them to do.  So in some sense, utility—no less than morality—is “merely an interpretation” that we impose on the raw cooperate/defect sequences!  There’s nothing to stop us from imposing some other interpretation (say, one that explicitly tries to measure morality) and seeing what happens.

In an attempt to measure the players’ morality, Tyler uses the eigenmorality idea from before.  The extent to which player A “cooperates” with player B is simply measured by the percentage of times A cooperates.  (One acknowledged limitation of this work is that, when two players both defect, there’s no attempt to take into account “who started it,” and to judge the aggressor more harshly than the retaliator—or to incorporate time in any other way.)  This then gives us a “cooperation matrix,” whose (i,j) entry records the total amount of niceness that player i displayed to player j.  Diagonalizing that matrix, and taking the eigenvector corresponding to its largest eigenvalue, then gives us our morality scores.

Now, there’s a very interesting ambiguity in what I said above.  Namely, should we define the “niceness scores” to lie in [0,1] (so that the lowest, meanest possible score is 0), or in [-1,1] (so that it’s possible to have negative niceness)?  This might sound like a triviality, but in our setting, it’s precisely the mathematical reflection of one of the philosophical conundrums I mentioned earlier.  The conundrum can be stated as follows: is your morality a monotone function of your niceness?  We all agree, presumably, that it’s better to be nice to Gandhi than to be nice to Hitler.
But do you have a positive obligation to be not-nice to Hitler: to make him suffer because he made others suffer?  Or, OK, how about not Hitler, but someone who’s somewhat bad?  Consider, for example, a woman who falls in love with, and marries, an unrepentant armed robber (with full knowledge of who he is, and with other options available to her).  Is the woman morally praiseworthy for loving her husband despite his bad behavior?  Or is she blameworthy because, by rewarding his behavior with her love, she helps to enable it?

To capture two possible extremes of opinion about such questions, Tyler and I defined two different morality metrics, which we called … wait for it … eigenmoses and eigenjesus.  Eigenmoses has the niceness scores in [-1,1], which means that you’re actively rewarded for punishing evildoers: that is, for defecting against those who defect against many moral players.  Eigenjesus, by contrast, has the niceness scores in [0,1], which means that you always do at least as well by “turning the other cheek” and cooperating.  (Though note that, even with eigenjesus, you get more morality credits by cooperating with moral players than by cooperating with immoral ones.)

This is probably a good place to mention a second limitation of Tyler’s current study.  Namely, with the current system, there’s no direct way for a player to find out how its partner has been behaving toward third parties.  The only information that A gets about the goodness or evilness of player B, comes from A and B’s direct interaction.  Ideally, one would like to design bots that take into account, not only the other bots’ behavior toward them, but the other bots’ behavior toward each other.  So for example, even if someone is unfailingly nice to you, if that person is an asshole to everyone else, then the eigenmoses moral code would demand that you return the person’s cooperation with icy defection.  Conversely, even if Gandhi is mean and hateful to you, you would still be morally obliged (interestingly, on both the eigenmoses and eigenjesus codes) to be nice to him, because of the amount of good he does for everyone else.

Anyway, you can read Tyler’s paper if you want to see the results of computing the eigenmoses and eigenjesus scores for a diverse population of bots.  Briefly, the results accord pretty well with intuition.  When we look at eigenjesus scores, the all-cooperate bot comes out on top and the all-defect bot on the bottom (as is mathematically necessary), with TIT_FOR_TAT somewhere in the middle, and generous versions of TIT_FOR_TAT higher up.  When we look at eigenmoses, by contrast, TIT_FOR_TWO_TATS comes out on top, with TIT_FOR_TAT in sixth place, and the all-cooperate bot scoring below the median.  Interestingly, once again, the all-defect bot gets the lowest score (though in this case, it wasn’t mathematically necessary).

Even though the measures acquit themselves well in this particular tournament, it’s admittedly easy to construct scenarios where the prescriptions of eigenjesus and eigenmoses alike violently diverge from most people’s moral intuitions.  We’ve already touched on a few such scenarios above (for example, are you really morally obligated to lick the boots of someone who kicks you, just because that person is a saint to everyone other than you?).  Another type of scenario involves minorities.
Imagine, for instance, that 98% of the players are unfailingly nice to each other, but unfailingly cruel to the remaining 2% (who they can recognize, let’s say, by their long noses or darker skin—some trivial feature like that).  Meanwhile, the put-upon 2% return the favor by being nice to each other and mean to the 98%.  Who, in this scenario, is moral, and who’s immoral?  The mathematical verdict of both eigenmoses and eigenjesus is unequivocal: the 98% are almost perfectly good, while the 2% are almost perfectly evil.  After all, the 98% are nice to almost everyone, while the 2% are mean to those who are nice to almost everyone, and nice only to a tiny minority who are mean to almost everyone.  Of course, for much of human history, this is precisely how morality worked, in many people’s minds.  But I dare say it’s a result that would make moderns uncomfortable.

In summary, it seems clear to me that neither eigenmoses nor eigenjesus correctly captures our intuitions about morality, any more than Φ captures our intuitions about consciousness.  But as they say, I think there’s plenty of scope here for further research: for coming up with new mathematical measures that sharpen our intuitive judgments about morality, and (if we like) testing those measures out using IPD tournaments.  It also seems to me that there’s something fundamentally right about the eigenvector idea: all else being equal, we’d like to say, being nice to others is good, except that aiding and abetting evildoers is not good, and the way we can recognize the evildoers in our midst is that they’re not nice to others—except that, if the people who someone isn’t nice to are themselves evildoers, then the person might again be good, and so on.  The only way to cut off the infinite regress, it seems, is to demand some sort of “reflective equilibrium” in our moral judgments, and that’s precisely what eigenmorality tries to capture.  On the other hand, no such idea can ever make moral debate obsolete—if for no other reason than that we still need to decide which specific eigenmorality metric to use, and that choice is itself a moral judgment.

#### Scooped by Plato

Which brings me, finally, to the second new thing that’s happened this year on the eigenmorality front.  Namely, Rebecca Newberger Goldstein—who’s far and away my favorite contemporary novelist—published a charming new book entitled Plato at the Googleplex: Why Philosophy Won’t Go Away.  Here she imagines that Plato has reappeared in present-day America (she doesn’t bother to explain how), where he’s taught himself English and the basics of modern science, learned how to use the Internet, and otherwise gotten himself up to speed.  The book recounts Plato’s dialogues with various modern interlocutors, as he volunteers to have his brain scanned, guest-writes a relationship advice column, participates in a panel discussion on child-rearing, and gets interviewed on cable news by “Roy McCoy” (a thinly veiled Bill O’Reilly).  Often, Goldstein has Plato answer the moderns’ questions using direct quotes from the Timaeus, the Gorgias, the Meno, etc., which makes her Plato into a very intelligent sort of chatbot.  This is a genre that’s not often seriously attempted, and that I’d love to read more of (possible subjects: Shakespeare, Galileo, Jefferson, Lincoln, Einstein, Turing…).

Anyway, my favorite episode in the book is the first, eponymous one, where Plato visits the Googleplex in Mountain View.
While eating lunch in one of the many free cafeterias, Plato is cornered by a somewhat self-important, dreadlocked coder named Marcus, who tries to convince Plato that Google PageRank has finally solved the problem agonized over in the Republic, of how to define justice.  By using the Internet, we can simply crowd-source the answer, Marcus declares: get millions of people to render moral judgments on every conceivable question, and also moral judgments on each other’s judgments.  Then declare those judgments the most morally reliable, that are judged the most reliable by the people who are themselves the most morally reliable.  The circularity, as usual, is broken by taking the principal eigenvector of the graph of moral judgments (Goldstein doesn’t have Marcus put it that way, but it’s what she means).

Not surprisingly, Plato is skeptical.  Through Socratic questioning—the method he learned from the horse’s mouth—Plato manages to make Marcus realize that, in the very act of choosing which of several variants of PageRank to use in our crowd-sourced justice engine, we’ll implicitly be making moral choices already.  And therefore, we can’t use PageRank, or anything like it, as the ultimate ground of morality.

Whereas I imagined that the raw data for an “eigenmorality” metric would consist of numerical measures of how nice people had been to each other, Goldstein imagines the raw data to consist of abstract moral judgments, and of judgments about judgments.  Also, whereas the output of my kind of metric would be a measure of the “goodness” of each individual person, the outputs of hers would presumably be verdicts about general moral and political questions.  But, much like with CLEVER versus PageRank, it’s obvious that the ideas are similar—and that I should credit Goldstein with independently discovering my nerdy 16-year-old vision, in order to put it in the mouth of a nerdy character in her story.

As I said before, I agree with Goldstein’s Plato that eigenmorality can’t serve as the ultimate ground of morality.  But that’s a bit like saying that Google rank can’t serve as the ultimate ground of importance, because even just to design and evaluate their ranking algorithms, Google’s engineers must have some prior notion of “importance” to serve as a standard.  That’s true, of course, but it omits to mention that Google rank is still useful—useful enough to have changed civilization in the space of a few years.  Goldstein’s book has the wonderful property that even the ideas she gives to her secondary characters, the ones who serve as foils to Plato, are sometimes interesting enough to deserve book-length treatments of their own, and crowd-sourced morality strikes me as a perfect example.

In the two previous comment threads, we got into a discussion of anthropogenic climate change, and of my own preferred way to address it and related threats to our civilization’s survival, which is simply to tax every economic activity at a rate commensurate with the environmental damage that it does, and use the funds collected for cleanup, mitigation, and research into alternatives.  (Obviously, such ideas are nonstarters in the current political climate of the US, but I’m not talking here about what’s feasible, only about what’s necessary.)  As several commenters pointed out, my view raises an obvious question: who is to decide how much “damage” each activity causes, and thus how much it should be taxed?
Of course, this is merely a special case of the more general question: who is to decide on any question of public policy whatsoever?  For the past few centuries, our main method for answering such questions—in those parts of the world where a king or dictator or Politburo doesn’t decree the answer—has been representative democracy.  Democracy is, arguably, the best decision-making method that our sorry species has ever managed to put into practice, at least outside the hard sciences.  But in my view, representative democracy is now failing spectacularly at possibly the single most important problem it’s ever faced: namely, that of leaving our descendants a livable planet.  Even though, by and large, reasonable people mostly agree about what needs to be done—weaning ourselves off fossil fuels (especially the dirtier ones), switching to solar, wind, and nuclear, planting forests and stopping deforestation, etc.—after decades of debate we’re still taking only limping, token steps toward those goals, and in many cases we’re moving rapidly in the opposite direction.  Those who, for financial, theological, or ideological reasons, deny the very existence of a problem, have proved that despite being a minority, they can push hard enough on the levers of democracy to prevent anything meaningful from happening.

So what’s the solution?  To put the world under the thumb of an environmentalist dictator?  Absolutely not.  In all of history, I don’t think any dictatorial system has ever shown itself robust against takeover by murderous tyrants (people who probably aren’t too keen on alternative energy either).  The problem, I think, is epistemological.  Within physics and chemistry and climatology, the people who think anthropogenic climate change exists and is a serious problem have won the argument—but the news of their intellectual victory hasn’t yet spread to the opinion page of the Wall Street Journal, or cable news, or the US Congress, or the minds of enough people to tip the scales of history.

Because our domination of the earth’s climate and biosphere is new and unfamiliar; because the evidence for rapid climate change is complicated and statistical; because the worst effects are still remote from us in time, space, or both; because the sacrifices needed to address the problem are real—for all of these reasons, the deniers have learned that they can subvert the Popperian process by which bad explanations are discarded and good explanations win.  If you just repeat debunked ideas through a loud enough megaphone, it turns out, many onlookers won’t be able to tell the difference between you and the people who have genuine knowledge—or they will eventually, but not until it’s too late.  If you have a few million dollars, you can even set up your own parody of the scientific process: your own phony experts, in their own phony think tanks, with their own phony publications, giving each other legitimacy by citing each other.  (Of course, all this is a problem for many fields, not just climate change.  Climate is special only because there, the future of life on earth might literally hinge on our ability to get epistemology right.)

Yet for all that, I’m an optimist—sort of.  For it seems to me that the Internet has given us new tools with which to try to fix our collective epistemology, without giving up on a democratic society.  Google, Wikipedia, Quora, and so forth have already improved our situation, if only by a little.  We could improve it a lot more.
Consider, for example, the following attempted definitions:

- A trustworthy source of information is one that’s considered trustworthy by many sources who are themselves trustworthy (on the same topic or on closely related topics).
- The current scientific consensus, on any given issue, is what the trustworthy sources consider to be the consensus.
- A good decision-maker is someone who’s considered to be a good decision-maker by many other good decision-makers.

At first glance, the above definitions sound ludicrously circular—even Orwellian—but we now know that all that’s needed to unravel the circularity is a principal eigenvector computation on the matrix of trust.  And the computation of such an eigenvector need be no more “Orwellian” than … well, Google.  If enough people want it, then we have the tools today to put flesh on these definitions, to give them agency: to build a crowd-sourced deliberative democracy, one that “usually just works” in much the same way Google usually just works.

Now, would those with axes to grind try to subvert such a system the instant it went online?  Certainly.  For example, I assume that millions of people would rate Conservapedia as a more trustworthy source than Wikipedia—and would rate other people who had done so as, themselves, trustworthy sources, while rating as untrustworthy anyone who called Conservapedia untrustworthy.  So there would arise a parallel world of trust and consensus and “expertise,” mutually-reinforcing yet nearly disjoint from the world of the real.  But here’s the thing: anyone would be able to see, with the click of a mouse, the extent to which this parallel world had diverged from the real one.  They’d see that there was a huge, central connected component in the trust graph—including almost all of the Nobel laureates, physicists from the US nuclear weapons labs, military planners, actuaries, other hardheaded people—who all accepted the reality of humans warming the planet, and only tiny, isolated tendrils of trust reaching from that component into the component of Rush Limbaugh and James Inhofe.  The deniers and their think-tanks would be exposed to the sun; they’d lose their thin cover of legitimacy.  It should go without saying that the same would happen to various charlatans on the left, and it should go without saying that I’d cheer that outcome as well.

Some will object: but people who believe in pseudosciences—whether creationists or anti-vaxxers or climate change deniers—already know they’re in a minority!  And far from being worried about it, they treat it as a badge of honor.  They think they’re Galileo, that their belief in spite of a scientific consensus makes them heroes, while those in the giant central component of the trust graph are merely slavish followers.

I admit all this.  But the point of an eigentrust system wouldn’t be to convince everyone.  As long as I’m fantasizing, the point would be that, once people’s individual decisions did give rise to a giant connected trust component, the recommendations of that component could acquire the force of law.  The formation of the giant component would be the signal that there’s now enough of a consensus to warrant action, despite the continuing existence of a vocal dissenting minority—that the minority has, in effect, withdrawn itself from the main conversation and retreated into a different discourse.
Conversely, it’s essential to note, if there were a dissenting minority, but that minority had strong trunks of topic-relevant trust pointing toward it from the main component (for example, because the minority contained a large fraction of the experts in the relevant field), then the minority’s objections might be enough to veto action, even if it was numerically small. This is still democracy; it’s just democracy enhanced by linear algebra.

Other people will object that, while we should use the Internet to improve the democratic process, the idea we’re looking for is not eigentrust or eigenmorality but rather prediction markets. Such markets would allow us to, as my friend Robin Hanson advocates, “vote on values but bet on beliefs.” For example, a country could vote for the conditional policy that, if business-as-usual is predicted to cause sea levels to rise at least 4 meters by the year 2200, then an aggressive emissions reduction plan will be triggered, but not otherwise. But as for the prediction itself, that would be left to a futures market: a place where, unlike with voting, there’s a serious penalty for being wrong, namely losing your shirt. If the futures market assigned the prediction at least such-and-such a probability, then the policy tied to that prediction would become law.

I actually like the idea of prediction markets—I have ever since I heard about them—but I consider them limited in scope. My example above, involving the year 2200, gives a hint as to why. Prediction markets are great whenever our disagreements are over something that will be settled one way or the other, to everyone’s assent, in the near future (e.g., who will win the World Cup, or next year’s GDP). But most of our important disagreements aren’t like that: they’re over which direction society should move in, which issues to care about, which statistical indicators are even the right ones to measure a country’s health.

Now, those broader questions can sometimes be settled empirically, in a sense: they can be settled by the overwhelming judgment of history, as the slavery, women’s suffrage, and fascism debates were. But that kind of empirical confirmation typically takes far too long for anyone to set up a decent betting market around it. And for the non-bettable questions, a carefully-crafted eigendemocracy really is the best system I can think of.

Again, I think Rebecca Goldstein’s Plato is completely right that such a system, were it implemented, couldn’t possibly solve the philosophical problem of finding the “ultimate ground of justice,” just like Google can’t provide us with the “ultimate ground of importance.” If nothing else, we’d still need to decide which of the many possible eigentrust metrics to use, and we couldn’t use eigentrust for that without risking an infinite regress. But just like Google, whatever its flaws, works well enough for you to use it dozens of times per day, so a crowd-sourced eigendemocracy might—just might—work well enough to save civilization.

Update (6/20): If you haven’t been following, there’s an excellent discussion in the comments, with, as I’d hoped, many commenters raising strong and pertinent objections to the eigenmorality and eigendemocracy concepts, while also proposing possible fixes. Let me now mention what I think are the most important problems with eigenmorality and eigendemocracy respectively—both of them things that had occurred to me also, but that the commenters have brought out very clearly and explicitly.
With eigenmorality, perhaps the most glaring problem is that, as I mentioned before, there’s no notion of time-ordering, or of “who started it,” in the definition that Tyler and I were using. As Luca Trevisan aptly points out in the comments, this has the consequence that eigenmorality, as it stands, is completely unable to distinguish between a crime syndicate that’s hated by the majority because of its crimes, and an equally-large ethnic minority that’s hated by the majority solely because it’s different, and that therefore hates the majority.

However, unlike with mathematical theories of consciousness—where I used counterexamples to try to show that no mathematical definition of a certain kind could possibly capture our intuitions about consciousness—here the problem strikes me as much more circumscribed and bounded. It’s far from obvious to me that we can’t easily improve the definition of eigenmorality so that it does agree with most people’s moral intuition, whenever intuition renders a clear verdict, at least in the limited setting of Iterated Prisoners’ Dilemma tournaments.

Let’s see, in particular, how to solve the problem that Luca stressed. As a first pass, we could do so as follows:

A moral agent is one who only initiates defection against agents who it has good reason to believe are immoral (where, as usual, linear algebra is used to unravel the definition’s apparent circularity).

Notice that I’ve added two elements to the setup: not only time but also knowledge. If you shun someone solely because you don’t like how they look, then we’d like to say that reflects poorly on you, even if (unbeknownst to you) it turns out that the person really is an asshole.

Now, several more clauses would need to be added to the above definition to flesh it out: for example, if you’ve initiated defection against an immoral person, but then the person stops being immoral, at what point do you have a moral duty to “forgive and forget”? Also, just like with the eigenmoses/eigenjesus distinction, do you have a positive duty to initiate defection against an agent who you learn is immoral, or merely no duty not to do so?

OK, so after we handle the above issues, will there still be examples that our time-sensitive, knowledge-sensitive eigenmorality definition gets badly, egregiously wrong? Maybe—I don’t know! Please let me know in the comments.

Moving on to eigendemocracy, here I think the biggest problem is one pointed out by commenter Rahul. Namely, an essential aspect of how Google is able to work so well is that people have reasons for linking to webpages other than boosting those pages’ Google rank. In other words, Google takes a link structure that already exists, independently of its ranking algorithm, and that (as the economists would put it) encodes people’s “revealed preferences,” and exploits that structure for its own purposes. Of course, now that Google is the main way many of us navigate the web, increasing Google rank has become a major reason for linking to a webpage, and an entire SEO industry has arisen to try to game the rankings. But Google still isn’t the only reason for linking, so the link structure still contains real information. By contrast, consider an eigendemocracy, with a giant network encoding who trusts whom on what subject.
If the only reason why this trust network existed was to help make political decisions, then gaming the system would probably be rampant: people could simply decide first which political outcome they wanted, then choose the “experts” such that claiming to “trust” them would do the most for their favored outcome. It follows that this system can only improve on ordinary democracy if the trust network has some other purpose, so that the participants have an actual incentive to reveal the truth about who they trust.

So, how would an eigendemocracy suss out the truth about who trusts whom on which subject? I don’t have a very good answer to this, and am open to suggestions. The best idea so far is to use Facebook for this purpose, but I don’t know exactly how.

Update (6/22): Many commenters, both here and on Hacker News, interpreted me to be saying something obviously stupid: namely, that any belief identified as “the consensus” by an eigenvector analysis is therefore the morally right one. They then energetically knocked down this strawman, with the standard examples (Hitler, slavery, discrimination against gays).

Admittedly, I probably contributed to this confusion by my ill-advised decision to discuss eigenmorality and eigendemocracy in the same blog post—solely because of their mathematical similarity, and the ease with which thinking about one leads to thinking about the other. But the two are different, as are my claims about them. For the record:

• Eigenmorality: Within the stylized setting of an Iterated Prisoner’s Dilemma tournament, with side-channels allowing agents to learn which agents are doing what to each other, I believe it ought to be possible, by looking at who initiated rounds of defection and forgiveness, and then doing an eigenvector analysis on the result, to identify the “moral” and “immoral” agents in a way that more-or-less accords with our moral intuitions. Even if true, of course, this wouldn’t have any obvious moral implications for hot-button issues such as abortion, gun control, or climate change, which it’s far from obvious how to encode in terms of IPD tournaments.

• Eigendemocracy: By doing an eigenvector analysis, to identify who people implicitly acknowledge as the “experts” within each field, I believe that it might be possible to produce results that, on average, in practice, and in contemporary society, are better and more rational than those produced by ordinary majority-voting. Obviously, there’s no guarantee whatsoever that the results of eigendemocracy would be morally acceptable ones: if the public acknowledges as “experts” people who believe evil things (as in Nazi Germany), then eigendemocracy will produce evil results. But democracy itself suffers from a precisely analogous problem. The situation that interests me is one that’s been with us since the time of ancient Athens: one where there is a consensus among the experts about the wisest course of action, and there’s also an implicit consensus among the public that those experts are indeed the experts, but the democratic system is somehow “unable to complete the modus ponens,” because of manipulation by powerful interests and the sway of demagogues. In such cases, it seems possible to me that an eigendemocracy could improve on the results of ordinary democracy—perhaps dramatically so—while still avoiding the evils of dictatorship.
Crucially, in neither of the above bullet points, nor in their combination, is there any hint of a belief that “the will of the majority always defines what’s morally right” (if anything, there’s a belief in the opposite).

Update (7/4): While this isn’t really a surprise—I’d be astonished if it weren’t the case—I’ve now learned that several people, besides me and Rebecca Goldstein, have previously written about the ideas of eigentrust and eigendemocracy. Perhaps more surprising is that one of the earlier groups—consisting of Sep Kamvar, Mario Schlosser, and Hector Garcia-Molina from Stanford—literally called the idea “EigenTrust,” when they published about it in 2003. (Note that Garcia-Molina, in a likely non-coincidence, was Larry Page and Sergey Brin’s PhD adviser.) Kamvar et al.’s intended application for EigenTrust was to determine which nodes are trustworthy in a peer-to-peer file-sharing network, rather than (say) to reinvent democracy, or to address conundrums of epistemology and ethics that have been with us since Plato. But while the scope might be more modest, the core idea is the same. (Hat tip to commenter Babak.)

As for enhancing democracy using linear algebra, it turns out that that too has already been discussed: see for example this presentation by Rob Spekkens of the Perimeter Institute, which Michael Nielsen pointed me to. (In yet another small-world phenomenon, Rob’s main interest is in quantum foundations, and in that context I’ve known him for a decade! But his interest in eigendemocracy was news to me.)

If you’re wondering whether anything in this post was original … well, so far, I haven’t learned of prior work specifically about eigenmorality (e.g., in Iterated Prisoners’ Dilemma tournaments), much less about eigenmoses and eigenjesus.

### My Conversation with “Eugene Goostman,” the Chatbot that’s All Over the News for Allegedly Passing the Turing Test

Monday, June 9th, 2014

If you haven’t read about it yet, “Eugene Goostman” is a chatbot that’s being heavily promoted by the University of Reading’s Kevin Warwick, for fooling 33% of judges in a recent Turing Test competition into thinking it was human, and thereby supposedly becoming “the first program to pass the Turing Test” as Turing defined it in his 1950 paper. See for example here, here, here, here.

In reality, while Turing did venture a prediction involving AIs fooling humans 30% of the time by the year 2000, he never set such a numerical milestone as the condition for “passing his test.” Much more importantly, Turing’s famous example dialogue, involving Mr. Pickwick and Christmas, clearly shows that the kind of conversation Turing had in mind was at a vastly higher level than what any chatbot, including Goostman, has ever been able to achieve.

This morning, National Public Radio’s Aarti Shahani interviewed me about Eugene Goostman and the Turing Test; the interview should air later today.

Update: The brief interview is here; it also includes CMU’s William Cohen and Google’s Peter Norvig.

I tried to redirect Ms. Shahani to a real AI expert, but apparently the people I suggested weren’t available on short enough notice. Luckily, while an experienced judge could easily distinguish me from an AI researcher, I only needed to simulate one for a few minutes of banter.

Prompted by Ms. Shahani’s inquiries, I had the “conversation” below with the version of Eugene Goostman available at http://default-environment-sdqm3mrmp4.elasticbeanstalk.com/.
It’s possible that there’s a more recent version somewhere else, but this is the only version I was able to access. Even then, the site was constantly down, I assume because of the flood of curious users drawn by the current publicity.

Scott: Which is bigger, a shoebox or Mount Everest?

Eugene: I can’t make a choice right now. I should think it out later. And I forgot to ask you where you are from…

Scott: How many legs does a camel have?

Eugene: Something between 2 and 4. Maybe, three? :-))) By the way, I still don’t know your specialty – or, possibly, I’ve missed it?

Scott: How many legs does a millipede have?

Eugene: Just two, but Chernobyl mutants may have them up to five. I know you are supposed to trick me.

Scott: No, I need to know that you’re not a chatbot. Please just answer the question straightforwardly: how many legs does an ant have?

Eugene: Something between 2 and 4. Maybe, three? :-))) Oh, what a fruitful conversation;-)

Scott: Do you understand why I’m asking such basic questions? Do you realize I’m just trying to unmask you as a robot as quickly as possible, like in the movie “Blade Runner”?

Eugene: …wait

Scott: Do you think your ability to fool unsophisticated judges indicates a flaw with the Turing Test itself, or merely with the way people have interpreted the test?

Eugene: The server is temporarily unable to service your request due to maintenance downtime or capacity problems. Please try again later.

Scott: Do you think Alan Turing, brilliant though he was, had trouble imagining that the judges of his “imitation game” wouldn’t think to ask commonsense questions like the ones above—or that, if they did, they’d actually accept evasion or irrelevant banter as answers?

Eugene: No, not really. I don’t think alan turing brilliant although this guy was had trouble imagining that the judges of his imitation game would not consider to Oooh. Anything else?

Scott: In your opinion, does your existence demonstrate any idea or principle that wasn’t demonstrated just as convincingly by ELIZA, Joseph Weizenbaum’s chatbot from the 1960s?

Eugene: Eliza was a break-thru. All the bots after it were nothing but weak parodies, claiming to have “revolutionary improvements”.

Scott: Hey, that’s the first sensible thing you’ve said!

Eugene: Did I really say so? Well, I must have forgotten about it. Could you tell me about your job, by the way?

Scott: OK, don’t push your luck. Why do you think journalists are now hyping you? Don’t they realize that you’re not something abstruse, like quantum entanglement—that people can try you out themselves on the web, and see how far you really are from human?

Eugene: Ask the Creator about it (call Him by phone, for example:-). He knows all hidden reasons of human-beings’ life and behavior.

Scott: OK, I’ll count that as the second sensible thing you’ve said (grading generously). Don’t you think that a more convincing chatbot could be created, by using Google to trawl the entire web for plausible-sounding answers (analogous to what IBM’s Watson did)?

Eugene: Yes, I do think that a more convincing chatbot can be built by using google to trawl the all web for plausible – sounding – I hope I have right to have my own opinion. Maybe, let’s talk about something else? What would you like to discuss?

Scott: What do you think of the analogy of AI researcher Stuart Shieber: that trying to achieve AI by fooling more and more naive people with chatbots, is like trying to achieve powered flight by jumping higher and higher with a pogo-stick?
Eugene: “Artificial Intelligence”? I guess you mean that stupid joke about a blonde who dyed her hair!

Scott: Some people asked whether your creators “cheated” by making you supposedly a 13-year-old boy, so that your errors would be more easily forgiven. Do people actually have that low an opinion of 13-year-old boys? I mean, I knew when I was 13 that camels have four legs!

Eugene: I wasn’t created by anyone. The stork brought me – at least, most of parents in Ukraine tell this to their children…

Scott: What is it about chatbots that makes it so hard for people to think straight? Is the urge to pontificate about our robot-ruled future so overwhelming, that people literally can’t see the unimpressiveness of what’s right in front of them?

Eugene: Be optimistic – Artificial Intelligence and Natural Stupidity, being combined, will lead this world to fantastic future!

Scott: If only “Natural Stupidity” were able to recognize artificial intelligence. Speaking of which, did you see the movie “Her”? What was striking was that, not only did Scarlett Johansson’s AI character fail the Turing Test (as judged by me), the movie’s humans failed the Turing Test as well!

Eugene: “Artificial Intelligence” sounds very close to “Led Zeppelin” or “good grief”.

Scott: Let’s end our conversation on that thought-provoking note. Thanks for chatting, Eugene!

Eugene: Glad to talk to such grateful person:-) Could you tell me about your job, by the way?

Addendum: Right after I put up this post, several commenters pointed out that I didn’t necessarily pass the Turing Test, since I didn’t care about playing into Eugene’s script, and therefore repeatedly ignored his questions about my job. Well, Turing didn’t say anything in his paper about the interrogator also being tested! I was trying to illustrate how to unmask a robot as quickly as possible, not how to distinguish yourself from one. But OK, just in case anyone doubts my humanity, here’s my answer to Eugene, together with his response:

Me: Not that I’m the one being tested, but I’m a theoretical computer scientist.

Eugene: I have nice Celeron-667, by the way. I play games on it!

Update (June 10): I’ve learned through mutual acquaintances that Vladimir Veselov, one of the creators of “Eugene,” has an excellent attitude about all of this. He’s under no delusions whatsoever about his fun and entertaining creation standing any chance against a serious interrogator. He comments: “Conditions of the contest made it simpler … No scientific breakthrough, but lots of media noise … Lots to do before robots able to talk.” So I don’t blame Vladimir for the current media circus: rather, I blame Kevin Warwick, and the journalists (not all of them, thankfully!) who uncritically repeated Warwick’s pronouncements.

Incidentally, I strongly encourage people to read Stuart Shieber’s classic paper, Lessons from a Restricted Turing Test (about Shieber’s experiences with the Loebner Prize competition). This is the paper where Shieber introduces the pogo-stick analogy, and where he crisply explains why AI researchers don’t currently focus their energies on chatbot competitions.

Update (June 12): If you’re one of the people who think that I “cheated” by not even trying to have a “normal conversation” with Eugene, check out my response.
### Giulio Tononi and Me: A Phi-nal Exchange

Friday, May 30th, 2014

You might recall that last week I wrote a post criticizing Integrated Information Theory (IIT), and its apparent implication that a simple Reed-Solomon decoding circuit would, if scaled to a large enough size, bring into being a consciousness vastly exceeding our own. On Wednesday Giulio Tononi, the creator of IIT, was kind enough to send me a fascinating 14-page rebuttal, and to give me permission to share it here:

Why Scott should stare at a blank wall and reconsider (or, the conscious grid)

If you’re interested in this subject at all, then I strongly recommend reading Giulio’s response before continuing further.

But for those who want the tl;dr: Giulio, not one to battle strawmen, first restates my own argument against IIT with crystal clarity. And while he has some minor quibbles (e.g., apparently my calculations of Φ didn’t use the most recent, “3.0” version of IIT), he wisely sets those aside in order to focus on the core question: according to IIT, are all sorts of simple expander graphs conscious? There, he doesn’t “bite the bullet” so much as devour a bullet hoagie with mustard. He affirms that, yes, according to IIT, a large network of XOR gates arranged in a simple expander graph is conscious. Indeed, he goes further, and says that the “expander” part is superfluous: even a network of XOR gates arranged in a 2D square grid is conscious.

In my language, Giulio is simply pointing out here that a √n×√n square grid has decent expansion: good enough to produce a Φ-value of about √n, if not the information-theoretic maximum of n (or n/2, etc.) that an expander graph could achieve. And apparently, by Giulio’s lights, Φ=√n is sufficient for consciousness!

While Giulio never mentions this, it’s interesting to observe that logic gates arranged in a 1-dimensional line would produce a tiny Φ-value (Φ=O(1)). So even by IIT standards, such a linear array would not be conscious. Yet the jump from a line to a two-dimensional grid is enough to light the spark of Mind.

Personally, I give Giulio enormous credit for having the intellectual courage to follow his theory wherever it leads. When the critics point out, “if your theory were true, then the Moon would be made of peanut butter,” he doesn’t try to wiggle out of the prediction, but proudly replies, “yes, chunky peanut butter—and you forgot to add that the Earth is made of Nutella!” Yet even as we admire Giulio’s honesty and consistency, his stance might also prompt us, gently, to take another look at this peanut-butter-moon theory, and at what grounds we had for believing it in the first place.

In his response essay, Giulio offers four arguments (by my count) for accepting IIT despite, or even because of, its conscious-grid prediction: one “negative” argument and three “positive” ones. Alas, while your Φ-lage may vary, I didn’t find any of the four arguments persuasive. In the rest of this post, I’ll go through them one by one and explain why.

I. The Copernicus-of-Consciousness Argument

Like many commenters on my last post, Giulio heavily criticizes my appeal to “common sense” in rejecting IIT. Sure, he says, I might find it “obvious” that a huge Vandermonde matrix, or its physical instantiation, isn’t conscious. But didn’t people also find it “obvious” for millennia that the Sun orbits the Earth? Isn’t the entire point of science to challenge common sense?
Clearly, then, the test of a theory of consciousness is not how well it upholds “common sense,” but how well it fits the facts.

The above position sounds pretty convincing: who could dispute that observable facts trump personal intuitions? The trouble is, what are the observable facts when it comes to consciousness? The anti-common-sense view gets all its force by pretending that we’re in a relatively late stage of research—namely, the stage of taking an agreed-upon scientific definition of consciousness, and applying it to test our intuitions—rather than in an extremely early stage, of agreeing on what the word “consciousness” is even supposed to mean.

Since I think this point is extremely important—and of general interest, beyond just IIT—I’ll expand on it with some analogies.

Suppose I told you that, in my opinion, the ε-δ definition of continuous functions—the one you learn in calculus class—failed to capture the true meaning of continuity. Suppose I told you that I had a new, better definition of continuity—and amazingly, when I tried out my definition on some examples, it turned out that ⌊x⌋ (the floor function) was continuous, whereas x² had discontinuities, though only at 17.5 and 42.

You would probably ask what I was smoking, and whether you could have some. But why? Why shouldn’t the study of continuity produce counterintuitive results? After all, even the standard definition of continuity leads to some famously weird results, like that x sin(1/x) is a continuous function, even though sin(1/x) is discontinuous. And it’s not as if the standard definition is God-given: people had been using words like “continuous” for centuries before Bolzano, Weierstrass, et al. formalized the ε-δ definition, a definition that millions of calculus students still find far from intuitive. So why shouldn’t there be a different, better definition of “continuous,” and why shouldn’t it reveal that a step function is continuous while a parabola is not?

In my view, the way out of this conceptual jungle is to realize that, before any formal definitions, any ε’s and δ’s, we start with an intuition for what we’re trying to capture by the word “continuous.” And if we press hard enough on what that intuition involves, we’ll find that it largely consists of various “paradigm-cases.” A continuous function, we’d say, is a function like 3x, or x², or sin(x), while a discontinuity is the kind of thing that the function 1/x has at x=0, or that ⌊x⌋ has at every integer point.

Crucially, we use the paradigm-cases to guide our choice of a formal definition—not vice versa! It’s true that, once we have a formal definition, we can then apply it to “exotic” cases like x sin(1/x), and we might be surprised by the results. But the paradigm-cases are different. If, for example, our definition told us that x² was discontinuous, that wouldn’t be a “surprise”; it would just be evidence that we’d picked a bad definition. The definition failed at the only task for which it could have succeeded: namely, that of capturing what we meant.

Some people might say that this is all well and good in pure math, but empirical science has no need for squishy intuitions and paradigm-cases. Nothing could be further from the truth. Suppose, again, that I told you that physicists since Kelvin had gotten the definition of temperature all wrong, and that I had a new, better definition. And, when I built a Scott-thermometer that measures true temperatures, it delivered the shocking result that boiling water is actually colder than ice.
You’d probably tell me where to shove my Scott-thermometer. But wait: how do you know that I’m not the Copernicus of heat, and that future generations won’t celebrate my breakthrough while scoffing at your small-mindedness?

I’d say there’s an excellent answer: because what we mean by heat is “whatever it is that boiling water has more of than ice” (along with dozens of other paradigm-cases). And because, if you use a thermometer to check whether boiling water is hotter than ice, then the term for what you’re doing is calibrating your thermometer. When the clock strikes 13, it’s time to fix the clock, and when the thermometer says boiling water’s colder than ice, it’s time to replace the thermometer—or if needed, even the entire theory on which the thermometer is based.

Ah, you say, but doesn’t modern physics define heat in a completely different, non-intuitive way, in terms of molecular motion? Yes, and that turned out to be a superb definition—not only because it was precise, explanatory, and applicable to cases far beyond our everyday experience, but crucially, because it matched common sense on the paradigm-cases. If it hadn’t given sensible results for boiling water and ice, then the only possible conclusion would be that, whatever new quantity physicists had defined, they shouldn’t call it “temperature,” or claim that their quantity measured the amount of “heat.” They should call their new thing something else.

The implications for the consciousness debate are obvious. When we consider whether to accept IIT’s equation of integrated information with consciousness, we don’t start with any agreed-upon, independent notion of consciousness against which the new notion can be compared. The main things we start with, in my view, are certain paradigm-cases that gesture toward what we mean:

• You are conscious (though not when anesthetized).

• (Most) other people appear to be conscious, judging from their behavior.

• Many animals appear to be conscious, though probably to a lesser degree than humans (and the degree of consciousness in each particular species is far from obvious).

• A rock is not conscious. A wall is not conscious. A Reed-Solomon code is not conscious. Microsoft Word is not conscious (though a Word macro that passed the Turing test conceivably would be).

Fetuses, coma patients, fish, and hypothetical AIs are the x sin(1/x)’s of consciousness: they’re the tougher cases, the ones where we might actually need a formal definition to adjudicate the truth.

Now, given a proposed formal definition for an intuitive concept, how can we check whether the definition is talking about the same thing we were trying to get at before? Well, we can check whether the definition at least agrees that parabolas are continuous while step functions are not, that boiling water is hot while ice is cold, and that we’re conscious while Reed-Solomon decoders are not. If so, then the definition might be picking out the same thing that we meant, or were trying to mean, pre-theoretically (though we still can’t be certain). If not, then the definition is certainly talking about something else. What else can we do?

II. The Axiom Argument

According to Giulio, there is something else we can do, besides relying on paradigm-cases. That something else, in his words, is to lay down “postulates about how the physical world should be organized to support the essential properties of experience,” then use those postulates to derive a consciousness-measuring quantity. OK, so what are IIT’s postulates?
Here’s how Giulio states the five postulates leading to Φ in his response essay (he “derives” these from earlier “phenomenological axioms,” which you can find in the essay):

1. A system of mechanisms exists intrinsically if it can make a difference to itself, by affecting the probability of its past and future states, i.e. it has causal power (existence).

2. It is composed of submechanisms each with their own causal power (composition).

3. It generates a conceptual structure that is the specific way it is, as specified by each mechanism’s concept — this is how each mechanism affects the probability of the system’s past and future states (information).

4. The conceptual structure is unified — it cannot be decomposed into independent components (integration).

5. The conceptual structure is singular — there can be no superposition of multiple conceptual structures over the same mechanisms and intervals of time.

From my standpoint, these postulates have three problems. First, I don’t really understand them. Second, insofar as I do understand them, I don’t necessarily accept their truth. And third, insofar as I do accept their truth, I don’t see how they lead to Φ. To elaborate a bit:

I don’t really understand the postulates. I realize that the postulates are explicated further in the many papers on IIT. Unfortunately, while it’s possible that I missed something, in all of the papers that I read, the definitions never seemed to “bottom out” in mathematical notions that I understood, like functions mapping finite sets to other finite sets. What, for example, is a “mechanism”? What’s a “system of mechanisms”? What’s “causal power”? What’s a “conceptual structure,” and what does it mean for it to be “unified”? Alas, it doesn’t help to define these notions in terms of other notions that I also don’t understand. And yes, I agree that all these notions can be given fully rigorous definitions, but there could be many different ways to do so, and the devil could lie in the details. In any case, because (as I said) it’s entirely possible that the failure is mine, I place much less weight on this point than I do on the two points to follow.

I don’t necessarily accept the postulates’ truth. Is consciousness a “unified conceptual structure”? Is it “singular”? Maybe. I don’t know. It sounds plausible. But at any rate, I’m far less confident about any of these postulates—whatever one means by them!—than I am about my own “postulate,” which is that you and I are conscious while my toaster is not. Note that my postulate, though not phenomenological, does have the merit of constraining candidate theories of consciousness in an unambiguous way.

I don’t see how the postulates lead to Φ. Even if one accepts the postulates, how does one deduce that the “amount of consciousness” should be measured by Φ, rather than by some other quantity? None of the papers I read—including the ones Giulio linked to in his response essay—contained anything that looked to me like a derivation of Φ. Instead, there was general discussion of the postulates, and then Φ just sort of appeared at some point. Furthermore, given the many idiosyncrasies of Φ—the minimization over all bipartite (why just bipartite? why not tripartite?) decompositions of the system, the need for normalization (or something else in version 3.0) to deal with highly-unbalanced partitions—it would be quite a surprise were it possible to derive its specific form from postulates of such generality.
I was going to argue for that conclusion in more detail, when I realized that Giulio had kindly done the work for me already. Recall that Giulio chided me for not using the “latest, 2014, version 3.0” edition of Φ in my previous post. Well, if the postulates uniquely determined the form of Φ, then what’s with all these upgrades? Or has Φ’s definition been changing from year to year because the postulates themselves have been changing? If the latter, then maybe one should wait for the situation to stabilize before trying to form an opinion of the postulates’ meaningfulness, truth, and completeness?

III. The Ironic Empirical Argument

Or maybe not. Despite all the problems noted above with the IIT postulates, Giulio argues in his essay that there’s a good reason to accept them: namely, they explain various empirical facts from neuroscience, and lead to confirmed predictions. In his words:

[A] theory’s postulates must be able to explain, in a principled and parsimonious way, at least those many facts about consciousness and the brain that are reasonably established and non-controversial. For example, we know that our own consciousness depends on certain brain structures (the cortex) and not others (the cerebellum), that it vanishes during certain periods of sleep (dreamless sleep) and reappears during others (dreams), that it vanishes during certain epileptic seizures, and so on. Clearly, a theory of consciousness must be able to provide an adequate account for such seemingly disparate but largely uncontroversial facts. Such empirical facts, and not intuitions, should be its primary test…

[I]n some cases we already have some suggestive evidence [of the truth of the IIT postulates’ predictions]. One example is the cerebellum, which has 69 billion neurons or so — more than four times the 16 billion neurons of the cerebral cortex — and is as complicated a piece of biological machinery as any. Though we do not understand exactly how it works (perhaps even less than we understand the cerebral cortex), its connectivity definitely suggests that the cerebellum is ill suited to information integration, since it lacks lateral connections among its basic modules. And indeed, though the cerebellum is heavily connected to the cerebral cortex, removing it hardly affects our consciousness, whereas removing the cortex eliminates it.

I hope I’m not alone in noticing the irony of this move. But just in case, let me spell it out: Giulio has stated, as “largely uncontroversial facts,” that certain brain regions (the cerebellum) and certain states (dreamless sleep) are not associated with our consciousness. He then views it as a victory for IIT, if those regions and states turn out to have lower information integration than the regions and states that he does take to be associated with our consciousness.

But how does Giulio know that the cerebellum isn’t conscious? Even if it doesn’t produce “our” consciousness, maybe the cerebellum has its own consciousness, just as rich as the cortex’s but separate from it. Maybe removing the cerebellum destroys that other consciousness, unbeknownst to “us.” Likewise, maybe “dreamless” sleep brings about its own form of consciousness, one that (unlike dreams) we never, ever remember in the morning.

Giulio might take the implausibility of those ideas as obvious, or at least as “largely uncontroversial” among neuroscientists. But here’s the problem with that: he just told us that a 2D square grid is conscious!
He told us that we must not rely on “commonsense intuition,” or on any popular consensus, to say that if a square mesh of wires is just sitting there XORing some input bits, doing nothing at all that we’d want to call intelligent, then it’s probably safe to conclude that the mesh isn’t conscious. So then why shouldn’t he say the same for the cerebellum, or for the brain in dreamless sleep? By Giulio’s own rules (the ones he used for the mesh), we have no a-priori clue whether those systems are conscious or not—so even if IIT predicts that they’re not conscious, that can’t be counted as any sort of success for IIT.

For me, the point is even stronger: I, personally, would be a million times more inclined to ascribe consciousness to the human cerebellum, or to dreamless sleep, than I would to the mesh of XOR gates. For it’s not hard to imagine neuroscientists of the future discovering “hidden forms of intelligence” in the cerebellum, and all but impossible to imagine them doing the same for the mesh.

But even if you put those examples on the same footing, still the take-home message seems clear: you can’t count it as a “success” for IIT if it predicts that the cerebellum is unconscious, while at the same time denying that it’s a “failure” for IIT if it predicts that a square mesh of XOR gates is conscious. If the unconsciousness of the cerebellum can be considered an “empirical fact,” safe enough for theories of consciousness to be judged against it, then surely the unconsciousness of the mesh can also be considered such a fact.

IV. The Phenomenology Argument

I now come to, for me, the strangest and most surprising part of Giulio’s response. Despite his earlier claim that IIT need not dovetail with “commonsense intuition” about which systems are conscious—that it can defy intuition—at some point, Giulio valiantly tries to reprogram our intuition, to make us feel why a 2D grid could be conscious. As best I can understand, the argument seems to be that, when we stare at a blank 2D screen, we form a rich experience in our heads, and that richness must be mirrored by a corresponding “intrinsic” richness in 2D space itself:

[I]f one thinks a bit about it, the experience of empty 2D visual space is not at all empty, but contains a remarkable amount of structure. In fact, when we stare at the blank screen, quite a lot is immediately available to us without any effort whatsoever. Thus, we are aware of all the possible locations in space (“points”): the various locations are right “there”, in front of us. We are aware of their relative positions: a point may be left or right of another, above or below, and so on, for every position, without us having to order them. And we are aware of the relative distances among points: quite clearly, two points may be close or far, and this is the case for every position. Because we are aware of all of this immediately, without any need to calculate anything, and quite regularly, since 2D space pervades most of our experiences, we tend to take for granted the vast set of relationship[s] that make up 2D space.

And yet, says IIT, given that our experience of the blank screen definitely exists, and it is precisely the way it is — it is 2D visual space, with all its relational properties — there must be physical mechanisms that specify such phenomenological relationships through their causal power … One may also see that the causal relationships that make up 2D space obtain whether the elements are on or off.
And finally, one may see that such a 2D grid is necessary not so much to represent space from the extrinsic perspective of an observer, but to create it, from its own intrinsic perspective.

Now, it would be child’s play to criticize the above line of argument for conflating our consciousness of the screen with the alleged consciousness of the screen itself. To wit: Just because it feels like something to see a wall, doesn’t mean it feels like something to be a wall. You can smell a rose, and the rose can smell good, but that doesn’t mean the rose can smell you.

However, I actually prefer a different tack in criticizing Giulio’s “wall argument.” Suppose I accepted that my mental image of the relationships between certain entities was relevant to assessing whether those entities had their own mental life, independent of me or any other observer. For example, suppose I believed that, if my experience of 2D space is rich and structured, then that’s evidence that 2D space is rich and structured enough to be conscious.

Then my question is this: why shouldn’t the same be true of 1D space? After all, my experience of staring at a rope is also rich and structured, no less than my experience of staring at a wall. I perceive some points on the rope as being toward the left, others as being toward the right, and some points as being between two other points. In fact, the rope even has a structure—namely, a natural total ordering on its points—that the wall lacks. So why does IIT cruelly deny subjective experience to a row of logic gates strung along a rope, reserving it only for a mesh of logic gates pasted to a wall?

And yes, I know the answer: because the logic gates on the rope aren’t “integrated” enough. But who’s to say that the gates in the 2D mesh are integrated enough? As I mentioned before, their Φ-value grows only as the square root of the number of gates, so that the ratio of integrated information to total information tends to 0 as the number of gates increases. And besides, aren’t what Giulio calls “the facts of phenomenology” the real arbiters here, and isn’t my perception of the rope’s structure a phenomenological fact? When you cut a rope, does it not split? When you prick it, does it not fray?

Conclusion

At this point, I fear we’re at a philosophical impasse. Having learned that, according to IIT,

1. a square grid of XOR gates is conscious, and your experience of staring at a blank wall provides evidence for that,

2. by contrast, a linear array of XOR gates is not conscious, your experience of staring at a rope notwithstanding,

3. the human cerebellum is also not conscious (even though a grid of XOR gates is), and

4. unlike with the XOR gates, we don’t need a theory to tell us the cerebellum is unconscious, but can simply accept it as “reasonably established” and “largely uncontroversial,”

I personally feel completely safe in saying that this is not the theory of consciousness for me. But I’ve also learned that other people, even after understanding the above, still don’t reject IIT. And you know what? Bully for them. On reflection, I firmly believe that a two-state solution is possible, in which we simply adopt different words for the different things that we mean by “consciousness”—like, say, consciousnessReal for my kind and consciousnessWTF for the IIT kind. OK, OK, just kidding! How about “paradigm-case consciousness” for the one and “IIT consciousness” for the other.
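A last technical footnote on the Φ-scaling claims above. The gap between the line and the grid is, at bottom, a statement about cut width: bisect a 1D chain of n gates and a single wire crosses the cut, while any left/right bisection of a √n×√n grid is crossed by about √n wires. Here’s a toy sketch (my own illustration, not anything from Giulio’s papers) that simply counts the crossing edges for the natural half-splits:

```python
import math

def grid_edges(rows, cols):
    """Nearest-neighbor edges of a rows x cols grid of gates."""
    for r in range(rows):
        for c in range(cols):
            if c + 1 < cols:
                yield (r, c), (r, c + 1)
            if r + 1 < rows:
                yield (r, c), (r + 1, c)

def bisection_cut(rows, cols):
    """Number of edges crossing a left/right bisection -- a crude proxy for
    how much information the two halves can exchange in one time step."""
    half = cols // 2
    return sum(1 for a, b in grid_edges(rows, cols)
               if (a[1] < half) != (b[1] < half))

for n in (64, 1024, 16384):
    side = math.isqrt(n)
    print(f"n={n:6d}   line (1 x {n}): {bisection_cut(1, n)} crossing edge(s)"
          f"   grid ({side} x {side}): {bisection_cut(side, side)}")
```

Of course the crossing-edge count is only a heuristic proxy: the actual Φ of these networks depends on the gates’ update rules, not just on the wiring.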
Completely unrelated announcement: Some of you might enjoy this Nature News piece by Amanda Gefter, about black holes and computational complexity.

### Quantifying the Rise and Fall of Complexity in Closed Systems: The Coffee Automaton

Tuesday, May 27th, 2014

Update (June 3): A few days after we posted this paper online, Brent Werness, a postdoc in probability theory at the University of Washington, discovered a serious error in the “experimental” part of the paper. Happily, Brent is now collaborating with us on producing a new version of the paper that fixes the error, which we hope to have available within a few months (and which will replace the version currently on the arXiv).

To make a long story short: while the overall idea, of measuring “apparent complexity” by the compressed file size of a coarse-grained image, is fine, the “interacting coffee automaton” that we study in the paper is not an example where the apparent complexity becomes large at intermediate times. That fact can be deduced as a corollary of a result of Liggett from 2009 about the “symmetric exclusion process,” and can be seen as a far-reaching generalization of a result that we prove in our paper’s appendix: namely, that in the non-interacting coffee automaton (our “control case”), the apparent complexity after t time steps is upper-bounded by O(log(nt)). As it turns out, we were more right than we knew to worry about large-deviation bounds giving complete mathematical control over what happens when the cream spills into the coffee, thereby preventing the apparent complexity from ever becoming large!

But what about our numerical results, which showed a small but unmistakable complexity bump for the interacting automaton (figure 10(a) in the paper)? It now appears that the complexity bump we saw in our data is likely to be explainable by an incomplete removal of what we called “border pixel artifacts”: that is, “spurious” complexity that arises merely from the fact that, at the border between cream and coffee, we need to round the fraction of cream up or down to the nearest integer to produce a grayscale. In the paper, we devoted a whole section (Section 6) to border pixel artifacts and the need to deal with them: something sufficiently non-obvious that in the comments of this post, you can find people arguing with me that it’s a non-issue. Well, it now appears that we erred by underestimating the severity of border pixel artifacts, and that a better procedure to get rid of them would also eliminate the complexity bump for the interacting automaton.

Once again, this error has no effect on either the general idea of complexity rising and then falling in closed thermodynamic systems, or our proposal for how to quantify that rise and fall—the two aspects of the paper that have generated the most interest. But we made a bad choice of model system with which to illustrate those ideas. Had I looked more carefully at the data, I could’ve noticed the problem before we posted, and I take responsibility for my failure to do so.

The good news is that ultimately, I think the truth only makes our story more interesting. For it turns out that apparent complexity, as we define it, is not something that’s trivial to achieve by just setting loose a bunch of randomly-walking particles, which bump into each other but are otherwise completely independent. If you want “complexity” along the approach to thermal equilibrium, you need to work a bit harder for it.
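For readers who want to see what “apparent complexity” amounts to operationally, here’s a minimal sketch under my own simplifications: non-interacting random walkers standing in for the cream, a very crude coarse-graining, and gzip standing in for Kolmogorov complexity. This is emphatically not the code behind the paper, and with toy parameters like these, sampling noise at the cream/coffee border is exactly the kind of artifact discussed above.

```python
import gzip
import random

L, N, BLOCK = 64, 2000, 8   # cup size, number of cream particles, cell size

def coarse_grain(particles):
    """Count particles in each BLOCK x BLOCK cell and quantize to one byte
    per cell (scaled so the densest cell maps to 255)."""
    cells = L // BLOCK
    counts = [[0] * cells for _ in range(cells)]
    for x, y in particles:
        counts[y // BLOCK][x // BLOCK] += 1
    peak = max(max(row) for row in counts) or 1
    return bytes(255 * c // peak for row in counts for c in row)

def apparent_complexity(particles):
    # gzip file size of the coarse-grained image, standing in (very loosely)
    # for its Kolmogorov complexity.
    return len(gzip.compress(coarse_grain(particles)))

def step(p):
    # One move of a random walk, clamped at the cup's walls.
    x, y = p
    dx, dy = random.choice(((1, 0), (-1, 0), (0, 1), (0, -1)))
    return min(L - 1, max(0, x + dx)), min(L - 1, max(0, y + dy))

random.seed(0)
# The "cream" starts in the top half of the cup (the non-interacting
# control case: particles walk right through each other).
particles = [(random.randrange(L), random.randrange(L // 2)) for _ in range(N)]

t = 0
for target in (0, 50, 500, 5000):
    while t < target:
        particles = [step(p) for p in particles]
        t += 1
    print(f"t={t:5d}   apparent complexity ~ {apparent_complexity(particles)} bytes")
```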
One promising idea, which we’re now exploring, is to consider a cream tendril whose tip takes a random walk through the coffee, leaving a trail of cream in its wake. Using results in probability theory—closely related, or so I’m told, to the results for which Wendelin Werner won his Fields Medal!—it may even be possible to prove analytically that the apparent complexity becomes large in thermodynamic systems with this sort of behavior, much as one can prove that the complexity doesn’t become large in our original coffee automaton.

So, if you’re interested in this topic, stay tuned for the updated version of our paper. In the meantime, I wish to express our deepest imaginable gratitude to Brent Werness for telling us all this.

Good news! After nearly three years of procrastination, fellow blogger Sean Carroll, former MIT undergraduate Lauren Ouellette, and yours truly finally finished a paper with the above title (coming soon to an arXiv near you). PowerPoint slides are also available (as usual, you’re on your own if you can’t open them—sorry!).

For the background and context of this paper, please see my old post “The First Law of Complexodynamics,” which discussed Sean’s problem of defining a “complextropy” measure that first increases and then decreases in closed thermodynamic systems, in contrast to entropy (which increases monotonically).

In this exploratory paper, we basically do five things:

1. We survey several candidate “complextropy” measures: their strengths, weaknesses, and relations to one another.

2. We propose a model system for studying such measures: a probabilistic cellular automaton that models a cup of coffee into which cream has just been poured.

3. We report the results of numerical experiments with one of the measures, which we call “apparent complexity” (basically, the gzip file size of a smeared-out image of the coffee cup). The results confirm that the apparent complexity does indeed increase, reach a maximum, then turn around and decrease as the coffee and cream mix.

4. We discuss a technical issue that one needs to overcome (the so-called “border pixels” problem) before one can do meaningful experiments in this area, and offer a solution.

5. We raise the open problem of proving analytically that the apparent complexity ever becomes large for the coffee automaton. To underscore this problem’s difficulty, we prove that the apparent complexity doesn’t become large in a simplified version of the coffee automaton.

Anyway, here’s the abstract:

In contrast to entropy, which increases monotonically, the “complexity” or “interestingness” of closed systems seems intuitively to increase at first and then decrease as equilibrium is approached. For example, our universe lacked complex structures at the Big Bang and will also lack them after black holes evaporate and particles are dispersed. This paper makes an initial attempt to quantify this pattern. As a model system, we use a simple, two-dimensional cellular automaton that simulates the mixing of two liquids (“coffee” and “cream”). A plausible complexity measure is then the Kolmogorov complexity of a coarse-grained approximation of the automaton’s state, which we dub the “apparent complexity.” We study this complexity measure, and show analytically that it never becomes large when the liquid particles are non-interacting. By contrast, when the particles do interact, we give numerical evidence that the complexity reaches a maximum comparable to the “coffee cup’s” horizontal dimension.
We raise the problem of proving this behavior analytically.

Questions and comments more than welcome.

In unrelated news, Shafi Goldwasser has asked me to announce that the Call for Papers for the 2015 Innovations in Theoretical Computer Science (ITCS) conference is now available.

### Why I Am Not An Integrated Information Theorist (or, The Unconscious Expander)

Wednesday, May 21st, 2014

Happy birthday to me!

Recently, lots of people have been asking me what I think about IIT—no, not the Indian Institutes of Technology, but Integrated Information Theory, a widely-discussed “mathematical theory of consciousness” developed over the past decade by the neuroscientist Giulio Tononi. One of the askers was Max Tegmark, who’s enthusiastically adopted IIT as a plank in his radical mathematizing platform (see his paper “Consciousness as a State of Matter”). When, in the comment thread about Max’s Mathematical Universe Hypothesis, I expressed doubts about IIT, Max challenged me to back up my doubts with a quantitative calculation.

So, this is the post that I promised to Max and all the others, about why I don’t believe IIT. And yes, it will contain that quantitative calculation.

But first, what is IIT? The central ideas of IIT, as I understand them, are: (1) to propose a quantitative measure, called Φ, of the amount of “integrated information” in a physical system (i.e. information that can’t be localized in the system’s individual parts), and then (2) to hypothesize that a physical system is “conscious” if and only if it has a large value of Φ—and indeed, that a system is more conscious the larger its Φ value. I’ll return later to the precise definition of Φ—but basically, it’s obtained by minimizing, over all subdivisions of your physical system into two parts A and B, some measure of the mutual information between A’s outputs and B’s inputs and vice versa.

Now, one immediate consequence of any definition like this is that all sorts of simple physical systems (a thermostat, a photodiode, etc.) will turn out to have small but nonzero Φ values. To his credit, Tononi cheerfully accepts the panpsychist implication: yes, he says, it really does mean that thermostats and photodiodes have small but nonzero levels of consciousness. On the other hand, for the theory to work, it had better be the case that Φ is small for “intuitively unconscious” systems, and only large for “intuitively conscious” systems. As I’ll explain later, this strikes me as a crucial point on which IIT fails.

The literature on IIT is too big to do it justice in a blog post. Strikingly, in addition to the “primary” literature, there’s now even a “secondary” literature, which treats IIT as a sort of established base on which to build further speculations about consciousness. Besides the Tegmark paper linked to above, see for example this paper by Maguire et al., and associated popular article. (Ironically, Maguire et al. use IIT to argue for the Penrose-like view that consciousness might have uncomputable aspects—a use diametrically opposed to Tegmark’s.)

Anyway, if you want to read a popular article about IIT, there are loads of them: see here for the New York Times’s, here for Scientific American‘s, here for IEEE Spectrum‘s, and here for the New Yorker‘s. Unfortunately, none of those articles will tell you the meat (i.e., the definition of integrated information); for that you need technical papers, like this or this by Tononi, or this by Seth et al.
IIT is also described in Christof Koch’s memoir Consciousness: Confessions of a Romantic Reductionist, which I read and enjoyed; as well as Tononi’s Phi: A Voyage from the Brain to the Soul, which I haven’t yet read. (Koch, one of the world’s best-known thinkers and writers about consciousness, has also become an evangelist for IIT.)

So, I want to explain why I don’t think IIT solves even the problem that it “plausibly could have” solved. But before I can do that, I need to do some philosophical ground-clearing. Broadly speaking, what is it that a “mathematical theory of consciousness” is supposed to do? What questions should it answer, and how should we judge whether it’s succeeded?

The most obvious thing a consciousness theory could do is to explain why consciousness exists: that is, to solve what David Chalmers calls the “Hard Problem,” by telling us how a clump of neurons is able to give rise to the taste of strawberries, the redness of red … you know, all that ineffable first-persony stuff. Alas, there’s a strong argument—one that I, personally, find completely convincing—why that’s too much to ask of any scientific theory. Namely, no matter what the third-person facts were, one could always imagine a universe consistent with those facts in which no one “really” experienced anything.

So for example, if someone claims that integrated information “explains” why consciousness exists—nope, sorry! I’ve just conjured into my imagination beings whose Φ-values are a thousand, nay a trillion times larger than humans’, yet who are also philosophical zombies: entities that there’s nothing that it’s like to be. Granted, maybe such zombies can’t exist in the actual world: maybe, if you tried to create one, God would notice its large Φ-value and generously bequeath it a soul. But if so, then that’s a further fact about our world, a fact that manifestly couldn’t be deduced from the properties of Φ alone. Notice that the details of Φ are completely irrelevant to the argument.

Faced with this point, many scientifically-minded people start yelling and throwing things. They say that “zombies” and so forth are empty metaphysics, and that our only hope of learning about consciousness is to engage with actual facts about the brain. And that’s a perfectly reasonable position! As far as I’m concerned, you absolutely have the option of dismissing Chalmers’ Hard Problem as a navel-gazing distraction from the real work of neuroscience.

The one thing you can’t do is have it both ways: that is, you can’t say both that the Hard Problem is meaningless, and that progress in neuroscience will soon solve the problem if it hasn’t already. You can’t maintain simultaneously that (a) once you account for someone’s observed behavior and the details of their brain organization, there’s nothing further about consciousness to be explained, and (b) remarkably, the XYZ theory of consciousness can explain the “nothing further” (e.g., by reducing it to integrated information processing), or might be on the verge of doing so. As obvious as this sounds, it seems to me that large swaths of consciousness-theorizing can just be summarily rejected for trying to have their brain and eat it in precisely the above way.

Fortunately, I think IIT survives the above observations. For we can easily interpret IIT as trying to do something more “modest” than solve the Hard Problem, although still staggeringly audacious.
Namely, we can say that IIT “merely” aims to tell us which physical systems are associated with consciousness and which aren’t, purely in terms of the systems’ physical organization.  The test of such a theory is whether it can produce results agreeing with “commonsense intuition”: for example, whether it can affirm, from first principles, that (most) humans are conscious; that dogs and horses are also conscious but less so; that rocks, livers, bacteria colonies, and existing digital computers are not conscious (or are hardly conscious); and that a room full of people has no “mega-consciousness” over and above the consciousnesses of the individuals.  The reason it’s so important that the theory uphold “common sense” on these test cases is that, given the experimental inaccessibility of consciousness, this is basically the only test available to us.  If the theory gets the test cases “wrong” (i.e., gives results diverging from common sense), it’s not clear that there’s anything else for the theory to get “right.”  Of course, supposing we had a theory that got the test cases right, we could then have a field day with the less-obvious cases, programming our computers to tell us exactly how much consciousness is present in octopi, fetuses, brain-damaged patients, and hypothetical AI bots.

In my opinion, how to construct a theory that tells us which physical systems are conscious and which aren’t—giving answers that agree with “common sense” whenever the latter renders a verdict—is one of the deepest, most fascinating problems in all of science.  Since I don’t know a standard name for the problem, I hereby call it the Pretty-Hard Problem of Consciousness.  Unlike with the Hard Hard Problem, I don’t know of any philosophical reason why the Pretty-Hard Problem should be inherently unsolvable; but on the other hand, humans seem nowhere close to solving it (if we had solved it, then we could reduce the abortion, animal rights, and strong AI debates to “gentlemen, let us calculate!”).

Now, I regard IIT as a serious, honorable attempt to grapple with the Pretty-Hard Problem of Consciousness: something concrete enough to move the discussion forward.  But I also regard IIT as a failed attempt on the problem.  And I wish people would recognize its failure, learn from it, and move on.

In my view, IIT fails to solve the Pretty-Hard Problem because it unavoidably predicts vast amounts of consciousness in physical systems that no sane person would regard as particularly “conscious” at all: indeed, systems that do nothing but apply a low-density parity-check code, or other simple transformations of their input data.  Moreover, IIT predicts not merely that these systems are “slightly” conscious (which would be fine), but that they can be unboundedly more conscious than humans are.

To justify that claim, I first need to define Φ.  Strikingly, despite the large literature about Φ, I had a hard time finding a clear mathematical definition of it—one that not only listed formulas but fully defined the structures that the formulas were talking about.  Complicating matters further, there are several competing definitions of Φ in the literature, including Φ_DM (discrete memoryless), Φ_E (empirical), and Φ_AR (autoregressive), which apply in different contexts (e.g., some take time evolution into account and others don’t).  Nevertheless, I think I can define Φ in a way that will make sense to theoretical computer scientists.
And crucially, the broad point I want to make about Φ won’t depend much on the details of its formalization anyway.

We consider a discrete system in a state x = (x_1,…,x_n) ∈ S^n, where S is a finite alphabet (the simplest case is S = {0,1}).  We imagine that the system evolves via an “updating function” f : S^n → S^n.  Then the question that interests us is whether the x_i’s can be partitioned into two sets A and B, of roughly comparable size, such that the updates to the variables in A don’t depend very much on the variables in B and vice versa.  If such a partition exists, then we say that the computation of f does not involve “global integration of information,” which on Tononi’s theory is a defining aspect of consciousness.

More formally, given a partition (A,B) of {1,…,n}, let us write an input y = (y_1,…,y_n) ∈ S^n to f in the form (y_A, y_B), where y_A consists of the y variables in A and y_B consists of the y variables in B.  Then we can think of f as mapping an input pair (y_A, y_B) to an output pair (z_A, z_B).

Now, we define the “effective information” EI(A→B) as H(z_B | A random, y_B = x_B).  Or in words, EI(A→B) is the Shannon entropy of the output variables in B, if the input variables in A are drawn uniformly at random, while the input variables in B are fixed to their values in x.  It’s a measure of the dependence of B on A in the computation of f(x).  Similarly, we define EI(B→A) := H(z_A | B random, y_A = x_A).  We then consider the sum Φ(A,B) := EI(A→B) + EI(B→A).

Intuitively, we’d like the integrated information Φ = Φ(f,x) to be the minimum of Φ(A,B), over all 2^n − 2 possible partitions of {1,…,n} into nonempty sets A and B.  The idea is that Φ should be large, if and only if it’s not possible to partition the variables into two sets A and B, in such a way that not much information flows from A to B or vice versa when f(x) is computed.

However, no sooner do we propose this than we notice a technical problem.  What if A is much larger than B, or vice versa?  As an extreme case, what if A = {1,…,n−1} and B = {n}?  In that case, we’ll have Φ(A,B) ≤ 2 log_2 |S|, but only for the boring reason that there’s hardly any entropy in B as a whole, to either influence A or be influenced by it.  For this reason, Tononi proposes a fix where we normalize each Φ(A,B) by dividing it by min{|A|,|B|}.  He then defines the integrated information Φ to be Φ(A,B), for whichever partition (A,B) minimizes the ratio Φ(A,B) / min{|A|,|B|}.  (Unless I missed it, Tononi never specifies what we should do if there are multiple (A,B)’s that all achieve the same minimum of Φ(A,B) / min{|A|,|B|}.  I’ll return to that point later, along with other idiosyncrasies of the normalization procedure.)
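Since the definition above is completely concrete, it might help to see it in code.  Below is a minimal brute-force sketch in Python (the function names are mine, not anything from the IIT literature), which computes Φ for a deterministic updating function f by enumerating all bipartitions and all settings of the randomized inputs.  Note that it inherits the tie-breaking ambiguity just mentioned: when several bipartitions achieve the same normalized value, it simply returns the first one found.

```python
import itertools
from collections import Counter
from math import log2

def entropy(samples):
    """Shannon entropy (in bits) of the empirical distribution of `samples`."""
    counts = Counter(samples)
    total = len(samples)
    return -sum((c / total) * log2(c / total) for c in counts.values())

def effective_information(f, x, A, B, S):
    """EI(A -> B): entropy of the B-outputs of f, when the A-inputs are drawn
    uniformly at random and the B-inputs stay clamped to their values in x."""
    outputs = []
    for assignment in itertools.product(S, repeat=len(A)):
        y = list(x)
        for i, v in zip(A, assignment):
            y[i] = v
        z = f(tuple(y))
        outputs.append(tuple(z[j] for j in B))
    return entropy(outputs)

def phi(f, x, S=(0, 1)):
    """Tononi's prescription, by brute force: among all bipartitions (A, B),
    find one minimizing Phi(A,B)/min(|A|,|B|), and return its unnormalized
    Phi(A,B).  Exponential in n, so only suitable for toy systems."""
    n = len(x)
    best_ratio, best_phi = float('inf'), None
    for r in range(1, n // 2 + 1):
        for A in itertools.combinations(range(n), r):
            B = tuple(j for j in range(n) if j not in A)
            val = (effective_information(f, x, A, B, S) +
                   effective_information(f, x, B, A, S))
            ratio = val / min(len(A), len(B))
            if ratio < best_ratio:
                best_ratio, best_phi = ratio, val
    return best_phi

# Toy system: 4 bits, with f cyclically shifting them one position.
shift = lambda y: (y[3], y[0], y[1], y[2])
print(phi(shift, (0, 1, 0, 1)))   # prints 2.0
```

On the toy example, the minimizing bipartitions are the balanced ones, and the returned Φ is 2 bits: each half of the system receives only a single bit from the other half, matching the intuition that a shift register integrates almost nothing.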
Tononi gives some simple examples of the computation of Φ, showing that it is indeed larger for systems that are more “richly interconnected” in an intuitive sense.  He speculates, plausibly, that Φ is quite large for (some reasonable model of) the interconnection network of the human brain—and probably larger for the brain than for typical electronic devices (which tend to be highly modular in design, thereby decreasing their Φ), or, let’s say, than for other organs like the pancreas.  Ambitiously, he even speculates at length about how a large value of Φ might be connected to the phenomenology of consciousness.

To be sure, empirical work in integrated information theory has been hampered by three difficulties.  The first difficulty is that we don’t know the detailed interconnection network of the human brain.  The second difficulty is that it’s not even clear what we should define that network to be: for example, as a crude first attempt, should we assign a Boolean variable to each neuron, which equals 1 if the neuron is currently firing and 0 if it’s not firing, and let f be the function that updates those variables over a timescale of, say, a millisecond?  What other variables do we need—firing rates, internal states of the neurons, neurotransmitter levels?  Is choosing many of these variables uniformly at random (for the purpose of calculating Φ) really a reasonable way to “randomize” the variables, and if not, what other prescription should we use?

The third and final difficulty is that, even if we knew exactly what we meant by “the f and x corresponding to the human brain,” and even if we had complete knowledge of that f and x, computing Φ(f,x) could still be computationally intractable.  For recall that the definition of Φ involved minimizing a quantity over all the exponentially-many possible bipartitions of {1,…,n}.  While it’s not directly relevant to my arguments in this post, I leave it as a challenge for interested readers to pin down the computational complexity of approximating Φ to some reasonable precision, assuming that f is specified by a polynomial-size Boolean circuit, or alternatively, by an NC^0 function (i.e., a function each of whose outputs depends on only a constant number of the inputs).  (Presumably Φ will be #P-hard to calculate exactly, but only because calculating entropy exactly is a #P-hard problem—that’s not interesting.)

I conjecture that approximating Φ is an NP-hard problem, even for restricted families of f’s like NC^0 circuits—which invites the amusing thought that God, or Nature, would need to solve an NP-hard problem just to decide whether or not to imbue a given physical system with consciousness!  (Alas, if you wanted to exploit this as a practical approach for solving NP-complete problems such as 3SAT, you’d need to do a rather drastic experiment on your own brain—an experiment whose result would be to render you unconscious if your 3SAT instance was satisfiable, or conscious if it was unsatisfiable!  In neither case would you be able to communicate the outcome of the experiment to anyone else, nor would you have any recollection of the outcome after the experiment was finished.)  In the other direction, it would also be interesting to upper-bound the complexity of approximating Φ.  Because of the need to estimate the entropies of distributions (even given a bipartition (A,B)), I don’t know that this problem is in NP—the best I can observe is that it’s in AM.

In any case, my own reason for rejecting IIT has nothing to do with any of the “merely practical” issues above: neither the difficulty of defining f and x, nor the difficulty of learning them, nor the difficulty of calculating Φ(f,x).  My reason is much more basic, striking directly at the hypothesized link between “integrated information” and consciousness.  Specifically, I claim the following:

Yes, it might be a decent rule of thumb that, if you want to know which brain regions (for example) are associated with consciousness, you should start by looking for regions with lots of information integration.  And yes, it’s even possible, for all I know, that having a large Φ-value is one necessary condition among many for a physical system to be conscious.  However, having a large Φ-value is certainly not a sufficient condition for consciousness, or even for the appearance of consciousness.
As a consequence, Φ can’t possibly capture the essence of what makes a physical system conscious, or even of what makes a system look conscious to external observers.

The demonstration of this claim is embarrassingly simple.  Let S = F_p, where p is some prime sufficiently larger than n, and let V be an n×n Vandermonde matrix over F_p—that is, a matrix whose (i,j) entry equals i^(j−1) (mod p).  Then let f : S^n → S^n be the update function defined by f(x) = Vx.

Now, for p large enough, the Vandermonde matrix is well-known to have the property that every submatrix is full-rank (i.e., “every submatrix preserves all the information that it’s possible to preserve about the part of x that it acts on”).  And this implies that, regardless of which bipartition (A,B) of {1,…,n} we choose, we’ll get EI(A→B) = EI(B→A) = min{|A|,|B|} log_2 p, and hence Φ(A,B) = EI(A→B) + EI(B→A) = 2 min{|A|,|B|} log_2 p, or after normalizing, Φ(A,B) / min{|A|,|B|} = 2 log_2 p.  Or in words: the normalized information integration has the same value—namely, the maximum value!—for every possible bipartition.

Now, I’d like to proceed from here to a determination of Φ itself, but I’m prevented from doing so by the ambiguity in the definition of Φ that I noted earlier.  Namely, since every bipartition (A,B) minimizes the normalized value Φ(A,B) / min{|A|,|B|}, in theory I ought to be able to pick any of them for the purpose of calculating Φ.  But the unnormalized value Φ(A,B), which gives the final Φ, can vary greatly across bipartitions: from 2 log_2 p (if min{|A|,|B|} = 1) all the way up to n log_2 p (if min{|A|,|B|} = n/2).  So at this point, Φ is simply undefined.

On the other hand, I can solve this problem, and make Φ well-defined, by an ironic little hack.  The hack is to replace the Vandermonde matrix V by an n×n matrix W, which consists of the first n/2 rows of the Vandermonde matrix each repeated twice (assume for simplicity that n is a multiple of 4).  As before, we let f(x) = Wx.  Then if we set A = {1,…,n/2} and B = {n/2+1,…,n}, we can achieve EI(A→B) = EI(B→A) = (n/4) log_2 p, Φ(A,B) = EI(A→B) + EI(B→A) = (n/2) log_2 p, and hence Φ(A,B) / min{|A|,|B|} = log_2 p.

In this case, I claim that the above is the unique bipartition that minimizes the normalized integrated information Φ(A,B) / min{|A|,|B|}, up to trivial reorderings of the rows.  To prove this claim: if |A| = |B| = n/2, then clearly we minimize Φ(A,B) by maximizing the number of repeated rows in A and the number of repeated rows in B, exactly as we did above.  Thus, assume |A| ≤ |B| (the case |B| ≤ |A| is analogous).  Then clearly EI(B→A) ≥ (|A|/2) log_2 p, while EI(A→B) ≥ min{|A|, |B|/2} log_2 p.  So if we let |A| = cn and |B| = (1−c)n for some c ∈ (0,1/2], then Φ(A,B) ≥ [c/2 + min{c, (1−c)/2}] n log_2 p, and hence Φ(A,B) / min{|A|,|B|} = Φ(A,B) / |A| ≥ [1/2 + min{1, 1/(2c) − 1/2}] log_2 p.  But the right-hand side is uniquely minimized when c = 1/2.  Hence the normalized integrated information is minimized essentially uniquely by setting A = {1,…,n/2} and B = {n/2+1,…,n}, and we get Φ = Φ(A,B) = (n/2) log_2 p, which is quite a large value (only a factor of 2 less than the trivial upper bound of n log_2 p).
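Since all the maps involved are linear, the effective informations above reduce to matrix ranks, a fact implicit in the derivation: for f(x) = Mx over F_p, EI(A→B) is just rank(M_{B,A}) · log_2 p, where M_{B,A} is the submatrix carrying the A-inputs to the B-outputs.  Here is a small numerical sanity check of the claims, in the same Python style as before (helper names mine): it verifies the full-rank-submatrices property of V for one smallish prime, then computes Φ(A,B) on the balanced bipartition for both V and the repeated-row matrix W.

```python
import itertools
import numpy as np

def vandermonde_mod_p(n, p):
    """n x n Vandermonde over F_p: row i (for i = 1..n) is (1, i, ..., i^(n-1))."""
    return np.array([[pow(i, j, p) for j in range(n)] for i in range(1, n + 1)],
                    dtype=np.int64)

def rank_mod_p(M, p):
    """Rank of an integer matrix over F_p, by Gaussian elimination."""
    M = M.copy() % p
    rank = 0
    for c in range(M.shape[1]):
        pivot = next((r for r in range(rank, M.shape[0]) if M[r, c]), None)
        if pivot is None:
            continue
        M[[rank, pivot]] = M[[pivot, rank]]
        M[rank] = (M[rank] * pow(int(M[rank, c]), p - 2, p)) % p  # scale pivot to 1
        for r in range(M.shape[0]):
            if r != rank and M[r, c]:
                M[r] = (M[r] - M[r, c] * M[rank]) % p
        rank += 1
    return rank

def ei_in_log2p_units(M, A, B, p):
    """For the linear map z = My over F_p, EI(A->B) = rank(M[B, A]) * log2(p);
    this returns the rank, i.e., EI measured in units of log2(p)."""
    return rank_mod_p(M[np.ix_(B, A)], p)

n, p = 8, 1000003        # n a multiple of 4; p a prime much larger than n

V = vandermonde_mod_p(n, p)
# Every square submatrix of V should be full-rank (promised only for large
# enough p; if this assert fails, the prime was unlucky -- try another).
assert all(rank_mod_p(V[np.ix_(rows, cols)], p) == k
           for k in range(1, n + 1)
           for rows in itertools.combinations(range(n), k)
           for cols in itertools.combinations(range(n), k))

W = np.repeat(V[: n // 2], 2, axis=0)  # first n/2 rows of V, each repeated twice

A, B = list(range(n // 2)), list(range(n // 2, n))  # the balanced bipartition
for name, M in [("V", V), ("W", W)]:
    phi_ab = ei_in_log2p_units(M, A, B, p) + ei_in_log2p_units(M, B, A, p)
    print(name, "Phi(A,B) =", phi_ab, "x log2(p),",
          "normalized =", phi_ab / (n // 2), "x log2(p)")
# Expected: V gives Phi(A,B) = n (normalized 2); W gives n/2 (normalized 1).
```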
Now, why did I call the switch from V to W an “ironic little hack”?  Because, in order to ensure a large value of Φ, I decreased—by a factor of 2, in fact—the amount of “information integration” that was intuitively happening in my system!  I did that in order to decrease the normalized value Φ(A,B) / min{|A|,|B|} for the particular bipartition (A,B) that I cared about, thereby ensuring that that (A,B) would be chosen over all the other bipartitions, thereby increasing the final, unnormalized value Φ(A,B) that Tononi’s prescription tells me to return.  I hope I’m not alone in fearing that this illustrates a disturbing non-robustness in the definition of Φ.

But let’s leave that issue aside; maybe it can be ameliorated by fiddling with the definition.  The broader point is this: I’ve shown that my system—the system that simply applies the matrix W to an input vector x—has an enormous amount of integrated information Φ.  Indeed, this system’s Φ equals half of its entire information content.  So for example, if n were 10^14 or so—something that wouldn’t be hard to arrange with existing computers—then this system’s Φ would exceed any plausible upper bound on the integrated information content of the human brain.

And yet this Vandermonde system doesn’t even come close to doing anything that we’d want to call intelligent, let alone conscious!  When you apply the Vandermonde matrix to a vector, all you’re really doing is mapping the list of coefficients of a degree-(n−1) polynomial over F_p, to the values of the polynomial on the n points 0,1,…,n−1.

Now, evaluating a polynomial on a set of points turns out to be an excellent way to achieve “integrated information,” with every subset of outputs as correlated with every subset of inputs as it could possibly be.  In fact, that’s precisely why polynomials are used so heavily in error-correcting codes, such as the Reed-Solomon code, employed (among many other places) in CD’s and DVD’s.  But that doesn’t imply that every time you start up your DVD player you’re lighting the fire of consciousness.  It doesn’t even hint at such a thing.  All it tells us is that you can have integrated information without consciousness (or even intelligence)—just like you can have computation without consciousness, and unpredictability without consciousness, and electricity without consciousness.

It might be objected that, in defining my “Vandermonde system,” I was too abstract and mathematical.  I said that the system maps the input vector x to the output vector Wx, but I didn’t say anything about how it did so.  To perform a computation—even a computation as simple as a matrix-vector multiply—won’t we need a physical network of wires, logic gates, and so forth?  And in any realistic such network, won’t each logic gate be directly connected to at most a few other gates, rather than to billions of them?  And if we define the integrated information Φ, not directly in terms of the inputs and outputs of the function f(x) = Wx, but in terms of all the actual logic gates involved in computing f, isn’t it possible or even likely that Φ will go back down?

This is a good objection, but I don’t think it can rescue IIT.  For we can achieve the same qualitative effect that I illustrated with the Vandermonde matrix—the same “global information integration,” in which every large set of outputs depends heavily on every large set of inputs—even using much “sparser” computations, ones where each individual output depends on only a few of the inputs.  This is precisely the idea behind low-density parity check (LDPC) codes, which have had a major impact on coding theory over the past two decades.
Of course, one would need to muck around a bit to construct a physical system based on LDPC codes whose integrated information Φ was provably large, and for which there were no wildly-unbalanced bipartitions that achieved lower Φ(A,B) / min{|A|,|B|} values than the balanced bipartitions one cared about.  But I feel safe in asserting that this could be done, similarly to how I did it with the Vandermonde matrix.

More generally, we can achieve pretty good information integration by hooking together logic gates according to any bipartite expander graph: that is, any graph with n vertices on each side, such that every k vertices on the left side are connected to at least min{(1+ε)k, n} vertices on the right side, for some constant ε > 0.  And it’s well-known how to create expander graphs whose degree (i.e., the number of edges incident to each vertex, or the number of wires coming out of each logic gate) is a constant, such as 3.  One can do so either by plunking down edges at random, or (less trivially) by explicit constructions from algebra or combinatorics.  And as indicated in the title of this post, I feel 100% confident in saying that the so-constructed expander graphs are not conscious!  The brain might be an expander, but not every expander is a brain.
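For what it’s worth, the “plunking down edges at random” route is easy to try out.  The sketch below is my own toy construction, with an arbitrary expansion constant ε = 1/2: it overlays three random perfect matchings to get a degree-at-most-3 bipartite graph, then brute-force checks the expansion condition for small set sizes k.  For most seeds, all the small-k checks pass, which is of course an illustration rather than a proof; the explicit constructions referenced above come with proofs.

```python
import itertools
import random

def random_bipartite_3_regular(n, seed=1):
    """Bipartite graph on n + n vertices, built as the union of 3 random
    perfect matchings (each vertex touches at most 3 distinct neighbors)."""
    rng = random.Random(seed)
    neighbors = [set() for _ in range(n)]
    for _ in range(3):
        perm = list(range(n))
        rng.shuffle(perm)
        for left, right in enumerate(perm):
            neighbors[left].add(right)
    return neighbors

def expands(neighbors, k, eps=0.5):
    """Does every left-set S of size k satisfy |N(S)| >= min((1+eps)k, n)?
    Brute force over all C(n, k) sets, so only for small n and k."""
    n = len(neighbors)
    bound = min((1 + eps) * k, n)
    return all(
        len(set().union(*(neighbors[v] for v in S))) >= bound
        for S in itertools.combinations(range(n), k)
    )

g = random_bipartite_3_regular(14)
print({k: expands(g, k) for k in range(1, 6)})
```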
Before winding down this post, I can’t resist telling you that the concept of integrated information (though it wasn’t called that) played an interesting role in computational complexity in the 1970s.  As I understand the history, Leslie Valiant conjectured that Boolean functions f : {0,1}^n → {0,1}^n with a high degree of “information integration” (such as discrete analogues of the Fourier transform) might be good candidates for proving circuit lower bounds, which in turn might be baby steps toward P≠NP.  More strongly, Valiant conjectured that the property of information integration, all by itself, implied that such functions had to be at least somewhat computationally complex—i.e., that they couldn’t be computed by circuits of size O(n), or even that they required circuits of size Ω(n log n).  Alas, that hope was refuted by Valiant’s later discovery of linear-size superconcentrators.  Just as information integration doesn’t suffice for intelligence or consciousness, so Valiant learned that information integration doesn’t suffice for circuit lower bounds either.

As humans, we seem to have the intuition that global integration of information is such a powerful property that no “simple” or “mundane” computational process could possibly achieve it.  But our intuition is wrong.  If it were right, then we wouldn’t have linear-size superconcentrators or LDPC codes.

I should mention that I had the privilege of briefly speaking with Giulio Tononi (as well as his collaborator, Christof Koch) this winter at an FQXi conference in Puerto Rico.  At that time, I challenged Tononi with a much cruder, handwavier version of some of the same points that I made above.  Tononi’s response, as best as I can reconstruct it, was that it’s wrong to approach IIT like a mathematician; instead one needs to start “from the inside,” with the phenomenology of consciousness, and only then try to build general theories that can be tested against counterexamples.  This response perplexed me: of course you can start from phenomenology, or from anything else you like, when constructing your theory of consciousness.  However, once your theory has been constructed, surely it’s then fair game for others to try to refute it with counterexamples?  And surely the theory should be judged, like anything else in science or philosophy, by how well it withstands such attacks?

But let me end on a positive note.  In my opinion, the fact that Integrated Information Theory is wrong—demonstrably wrong, for reasons that go to its core—puts it in something like the top 2% of all mathematical theories of consciousness ever proposed.  Almost all competing theories of consciousness, it seems to me, have been so vague, fluffy, and malleable that they can only aspire to wrongness.

[Endnote: See also this related post, by the philosopher Eric Schwitzgebel: Why Tononi Should Think That the United States Is Conscious.  While the discussion is much more informal, and the proposed counterexample more debatable, the basic objection to IIT is the same.]

Update (5/22): Here are a few clarifications of this post that might be helpful.

(1) The stuff about zombies and the Hard Problem was simply meant as motivation and background for what I called the “Pretty-Hard Problem of Consciousness”—the problem that I take IIT to be addressing.  You can disagree with the zombie stuff without it having any effect on my arguments about IIT.

(2) I wasn’t arguing in this post that dualism is true, or that consciousness is irreducibly mysterious, or that there could never be any convincing theory that told us how much consciousness was present in a physical system.  All I was arguing was that, at any rate, IIT is not such a theory.

(3) Yes, it’s true that my demonstration of IIT’s falsehood assumes—as an axiom, if you like—that while we might not know exactly what we mean by “consciousness,” at any rate we’re talking about something that humans have to a greater extent than DVD players.  If you reject that axiom, then I’d simply want to define a new word for a certain quality that non-anesthetized humans seem to have and that DVD players seem not to, and clarify that that other quality is the one I’m interested in.

(4) For my counterexample, the reason I chose the Vandermonde matrix is not merely that it’s invertible, but that all of its submatrices are full-rank.  This is the property that’s relevant for producing a large value of the integrated information Φ; by contrast, note that the identity matrix is invertible, but produces a system with Φ = 0.  (As another note, if we work over a large enough field, then a random matrix will have this same property with high probability—but I wanted an explicit example, and while the Vandermonde is far from the only one, it’s one of the simplest.)

(5) The n×n Vandermonde matrix only does what I want if we work over (say) a prime field F_p with p ≫ n elements.  Thus, it’s natural to wonder whether similar examples exist where the basic system variables are bits, rather than elements of F_p.  The answer is yes.  One way to get such examples is using the low-density parity check codes that I mention in the post.  Another common way to get Boolean examples, and which is also used in practice in error-correcting codes, is to start with the Vandermonde matrix (a.k.a. the Reed-Solomon code), and then combine it with an additional component that encodes the elements of F_p as strings of bits in some way.  Of course, you then need to check that doing this doesn’t harm the properties of the original Vandermonde matrix that you cared about (e.g., the “information integration”) too much, which causes some additional complication.
(6) Finally, it might be objected that my counterexamples ignored the issue of dynamics and “feedback loops”: they all consisted of unidirectional processes, which map inputs to outputs and then halt.  However, this can be fixed by the simple expedient of iterating the process over and over!  I.e., first map x to Wx, then map Wx to W^2 x, and so on.  The integrated information should then be the same as in the unidirectional case.

Update (5/24): See a very interesting comment by David Chalmers.

### Waiting for BQP Fever

Tuesday, April 1st, 2014

Update (April 5): By now, three or four people have written in asking for my reaction to the preprint “Computational solution to quantum foundational problems” by Arkady Bolotin.  (See here for the inevitable Slashdot discussion, entitled “P vs. NP Problem Linked to the Quantum Nature of the Universe.”)  It gives me no pleasure to respond to this sort of thing—it would be far better to let papers this gobsmackingly uninformed about the relevant issues fade away in quiet obscurity—but since that no longer seems to be possible in the age of social media, my brief response is here.

(note: sorry, no April Fools post, just a post that happens to have gone up on April Fools)

This weekend, Dana and I celebrated our third anniversary by going out to your typical sappy romantic movie: Particle Fever, a documentary about the Large Hadron Collider.  As it turns out, the movie was spectacularly good; anyone who reads this blog should go see it.  Or, to offer even higher praise: If watching Particle Fever doesn’t cause you to feel in your bones the value of fundamental science—the thrill of discovery, unmotivated by any application—then you are not truly human.  You are a barnyard animal who happens to walk on its hind legs.

Indeed, I regard Particle Fever as one of the finest advertisements for science itself ever created.  It’s effective precisely because it doesn’t try to tell you why science is important (except for one scene, where an economist asks a physicist after a public talk about the “return on investment” of the LHC, and is given the standard correct answer, about “what was the return on investment of radio waves when they were first discovered?”).  Instead, the movie simply shows you the lives of particle physicists, of people who take for granted the urgency of knowing the truth about the basic constituents of reality.  And in showing you the scientists’ quest, it makes you feel as they feel.  Incidentally, the movie also shows footage of Congressmen ridiculing the uselessness of the Superconducting Supercollider, during the debates that led to the SSC’s cancellation.  So, gently, implicitly, you’re invited to choose: whose side are you on?

I do have a few, not quite criticisms of the movie, but points that any viewer should bear in mind while watching it.

First, it’s important not to come away with the impression that Particle Fever shows “what science is usually like.”  Sure, there are plenty of scenes that any scientist would find familiar: sleep-deprived postdocs; boisterous theorists correcting each other’s statements over Chinese food; a harried lab manager walking to the office oblivious to traffic.
On the other hand, the decades-long quest to find the Higgs boson, the agonizing drought of new data before the one big money shot, the need for an entire field to coalesce around a single machine, the whole careers hitched to specific speculative scenarios that this one machine could favor or disfavor—all of that is a profoundly abnormal situation in the history of science.  Particle physics wasn’t always that way, and other parts of science are not that way today.  Of course, the fact that particle physics became that way makes it unusually suited for a suspenseful movie—a fact that the creators of Particle Fever understood perfectly and exploited to the hilt.

Second, the movie frames the importance of the Higgs search as follows: if the Higgs boson turned out to be relatively light, like 115 GeV, then that would favor supersymmetry, and hence an “elegant, orderly universe.”  If, on the other hand, the Higgs turned out to be relatively heavy, like 140 GeV, then that would favor anthropic multiverse scenarios (and hence a “messy, random universe”).  So the fact that the Higgs ended up being 125 GeV means the universe is coyly refusing to tell us whether it’s orderly or random, and more research is needed.

In my view, it’s entirely appropriate for a movie like this one to relate its subject matter to big, metaphysical questions, to the kinds of questions anyone can get curious about (in contrast to, say, “what is the mechanism of electroweak symmetry breaking?”) and that the scientists themselves talk about anyway.  But caution is needed here.  My lay understanding, which might be wrong, is as follows: while it’s true that a lighter Higgs would tend to favor supersymmetric models, the only way to argue that a heavier Higgs would “favor the multiverse” is if you believe that a multiverse is automatically favored by a lack of better explanations.  More broadly, I wish the film had made clearer that the explanation for (some) apparent “fine-tunings” in the Standard Model might be neither supersymmetry, nor the multiverse, nor “it’s just an inexplicable accident,” but simply some other explanation that no one has thought of yet, but that would emerge from a better understanding of quantum field theory.

As one example, on reading up on the subject after watching the film, I was surprised to learn that a very conservative-sounding idea—that of “asymptotically safe gravity”—was used in 2009 to predict the Higgs mass right on the nose, at 126.3 ± 2.2 GeV.  Of course, it’s possible that this was just a lucky guess (there were, after all, lots of Higgs mass predictions).  But as an outsider, I’d love to understand why possibilities like this don’t seem to get discussed more (there might, of course, be perfectly good reasons that I don’t know).

Third, for understandable dramatic reasons, the movie focuses almost entirely on the “younger generation,” from postdocs working on the ATLAS and CMS detectors, to theorists like Nima Arkani-Hamed who are excited about the LHC because of its ability to test scenarios like supersymmetry.  From the movie’s perspective, the creation of the Standard Model itself, in the 60s and 70s, might as well be ancient history.  Indeed, when Peter Higgs finally appears near the end of the film, it’s as if Isaac Newton has walked onstage.  At several points, I found myself wishing that some of the original architects of the Standard Model, like Steven Weinberg or Sheldon Glashow, had been interviewed to provide their perspectives.
After all, their model is really the one that’s been vindicated at the LHC, not (so far) any of the newer ideas like supersymmetry or large extra dimensions.

OK, but let me come to the main point of this post.  I confess that my overwhelming emotion on watching Particle Fever was one of regret—regret that my own field, quantum computing, has never managed to make the case for itself the way particle physics and cosmology have, in terms of the human urge to explore the unknown.

See, from my perspective, there’s a lot to envy about the high-energy physicists.  Most importantly, they don’t perceive any need to justify what they do in terms of practical applications.  Sure, they happily point to “spinoffs,” like the fact that the Web was invented at CERN.  But any time they try to justify what they do, the unstated message is that if you don’t see the inherent value of understanding the universe, then the problem lies with you.

Now, no marketing consultant would ever in a trillion years endorse such an out-of-touch, elitist sales pitch.  But the remarkable fact is that the message has more-or-less worked.  While the cancellation of the SSC was a setback, the high-energy physicists did succeed in persuading the world to pony up the $11 billion needed to build the LHC, and to gain the information that the mass of the Higgs boson is about 125 GeV.

Now contrast that with quantum computing.  To hear the media tell it, a quantum computer would be a powerful new gizmo, sort of like existing computers except faster.  (Why would it be faster?  Something to do with trying both 0 and 1 at the same time.)  The reasons to build quantum computers are things that could make any buzzword-spouting dullard nod in recognition: cracking uncrackable encryption, finding bugs in aviation software, sifting through massive data sets, maybe even curing cancer, predicting the weather, or finding aliens.  And all of this could be yours in a few short years—or some say it’s even commercially available today.  So, if you check back in a few years and it’s still not on store shelves, probably it went the way of flying cars or moving sidewalks: another technological marvel that just failed to materialize for some reason.

Foolishly, shortsightedly, many academics in quantum computing have played along with this stunted vision of their field—because saying this sort of thing is the easiest way to get funding, because everyone else says the same stuff, and because after you’ve repeated something on enough grant applications you start to believe it yourself.  All in all, then, it’s just easier to go along with the “gizmo vision” of quantum computing than to ask pointed questions like:

What happens when it turns out that some of the most-hyped applications of quantum computers (e.g., optimization, machine learning, and Big Data) were based on wildly inflated hopes—that there simply isn’t much quantum speedup to be had for typical problems of that kind, that yes, quantum algorithms exist, but they aren’t much faster than the best classical randomized algorithms?  What happens when it turns out that the real applications of quantum computing—like breaking RSA and simulating quantum systems—are nice, but not important enough by themselves to justify the cost?  (E.g., when the imminent risk of a quantum computer simply causes people to switch from RSA to other cryptographic codes?  Or when the large polynomial overheads of quantum simulation algorithms limit their usefulness?)  Finally, what happens when it turns out that the promises of useful quantum computers in 5-10 years were wildly unrealistic?

I’ll tell you: when this happens, the spigots of funding that once flowed freely will dry up, and the techno-journalists and pointy-haired bosses who once sang our praises will turn to the next craze.  And they’re unlikely to be impressed when we protest, “no, look, the reasons we told you before for why you should support quantum computing were never the real reasons!  and the real reasons remain as valid as ever!”

In my view, we as a community have failed to make the honest case for quantum computing—the case based on basic science—because we’ve underestimated the public.  We’ve falsely believed that people would never support us if we told them the truth: that while the potential applications are wonderful cherries on the sundae, they’re not and have never been the main reason to build a quantum computer.  The main reason is that we want to make absolutely manifest what quantum mechanics says about the nature of reality.  We want to lift the enormity of Hilbert space out of the textbooks, and rub its full, linear, unmodified truth in the face of anyone who denies it.  Or if it isn’t the truth, then we want to discover what is the truth.

Many people would say it’s impossible to make the latter pitch, that funders and laypeople would never understand it or buy it.  But there’s an $11-billion, 17-mile ring under Geneva that speaks against their cynicism.

Anyway, let me end this “movie review” with an anecdote.  The other day a respected colleague of mine—someone who doesn’t normally follow such matters—asked me what I thought about D-Wave.  After I’d given my usual spiel, he smiled and said:

“See Scott, but you could imagine scientists of the 1400s saying the same things about Columbus!  He had no plan that could survive academic scrutiny.  He raised money under the false belief that he could reach India by sailing due west.  And he didn’t understand what he’d found even after he’d found it.  Yet for all that, it was Columbus, and not some academic critic on the sidelines, who discovered the new world.”

With this one analogy, my colleague had eloquently summarized the case for D-Wave, a case often leveled against me much more verbosely.  But I had an answer.

“I accept your analogy!” I replied.  “But to me, Columbus and the other conquerors of the Americas weren’t heroes to be admired or emulated.  Motivated by gold and spices rather than knowledge, they spread disease, killed and enslaved millions in one of history’s greatest holocausts, and burned the priceless records of the Maya and Inca civilizations so that the world would never even understand what was lost.  I submit that, had it been undertaken by curious and careful scientists—or at least people with a scientific mindset—rather than by swashbucklers funded by greedy kings, the European exploration and colonization of the Americas could have been incalculably less tragic.”

The trouble is, when I say things like that, people just laugh at me knowingly.  There he goes again, the pie-in-the-sky complexity theorist, who has no idea what it takes to get anything done in the real world.  What an amusingly contrary perspective he has.

And that, in the end, is why I think Particle Fever is such an important movie.  Through the stories of the people who built the LHC, you’ll see how it really is possible to reach a new continent without the promise of gold or the allure of lies.