The Power of the Digi-Comp II: My First Conscious Paperlet

July 4th, 2014

Foreword: Right now, I have a painfully-large stack of unwritten research papers.  Many of these are “paperlets”: cool things I noticed that I want to tell people about, but that would require a lot more development before they became competitive for any major theoretical computer science conference.  And what with the baby, I simply don’t have time anymore for the kind of obsessive, single-minded, all-nighter-filled effort needed to bulk my paperlets up.  So starting today, I’m going to try turning some of my paperlets into blog posts.  I don’t mean advertisements or sneak previews for papers, but replacements for papers: blog posts that constitute the entirety of what I have to say for now about some research topic.  “Peer reviewing” (whether signed or anonymous) can take place in the comments section, and “citation” can be done by URL.  The hope is that, much like with 17th-century scientists who communicated results by letter, this will make it easier to get my paperlets done: after all, I’m not writing Official Papers, just blogging for colleagues and friends.

Of course, I’ve often basically done this before—as have many other academic bloggers—but now I’m going to go about it more consciously.  I’ve thought for years that the Internet would eventually change the norms of scientific publication much more radically than it so far has: that yes, instant-feedback tools like blogs and StackExchange and MathOverflow might have another decade or two at the periphery of progress, but their eventual destiny is at the center.  And now that I have tenure, it hit me that I can do more than prognosticate about such things.  I’ll start small: I won’t go direct-to-blog for big papers, papers that cry out for LaTeX formatting, or joint papers.  I certainly won’t do it for papers with students who need official publications for their professional advancement.  But for things like today’s post—on the power of a wooden mechanical computer now installed in the lobby of the building where I work—I hope you agree that the Science-by-Blog Plan fits well.

Oh, by the way, happy July 4th to American readers!  I hope you find that a paperlet about the logspace-interreducibility of a few not-very-well-known computational models captures everything that the holiday is about.


The Power of the Digi-Comp II

by Scott Aaronson

Abstract

I study the Digi-Comp II, a wooden mechanical computer whose only moving parts are balls, switches, and toggles.  I show that the problem of simulating (a natural abstraction of) the Digi-Comp, with a polynomial number of balls, is complete for CC (Comparator Circuit), a complexity class defined by Subramanian in 1990 that sits between NL and P.  This explains why the Digi-Comp is capable of addition, multiplication, division, and other arithmetical tasks, and also implies new tasks of which the Digi-Comp is capable (and that indeed are complete for it), including the Stable Marriage Problem, finding a lexicographically-first perfect matching, and the simulation of other Digi-Comps.  However, it also suggests that the Digi-Comp is not a universal computer (not even in the circuit sense), making it a very interesting way to fall short of Turing-universality.  I observe that even with an exponential number of balls, simulating the Digi-Comp remains in P, but I leave open the problem of pinning down its complexity more precisely.

Introduction

To celebrate his 60th birthday, my colleague Charles Leiserson (who some of you might know as the “L” in the CLRS algorithms textbook) had a striking contraption installed in the lobby of the MIT Stata Center.  That contraption, pictured below, is a custom-built, supersized version of a wooden mechanical computer from the 1960s called the Digi-Comp II, now manufactured and sold by a company called Evil Mad Scientist.

Click here for a short video showing the Digi-Comp’s operation (and here for the user’s manual).  Basically, the way it works is this: a bunch of balls (little steel balls in the original version, pool balls in the supersized version) start at the top and roll to the bottom, one by one.  On their way down, the balls may encounter black toggles, which route each incoming ball either left or right.  Whenever this happens, the weight of the ball flips the toggle to the opposite setting: so for example, if a ball goes left, then the next ball to encounter the same toggle will go right, and the ball after that will go left, and so on.  The toggles thus maintain a “state” for the computer, with each toggle storing one bit.

Besides the toggles, there are also “switches,” which the user can set at the beginning to route every incoming ball either left or right, and whose settings aren’t changed by the balls.  And then there are various wooden tunnels and ledges, whose function is simply to direct the balls in a desired way as they roll down.  A ball could reach different locations, or even the same location in different ways, depending on the settings of the toggles and switches above that location.  On the other hand, once we fix the toggles and switches, a ball’s motion is completely determined: there’s no random or chaotic element.

“Programming” is done by configuring the toggles and switches in some desired way, then loading a desired number of balls at the top and letting them go.  “Reading the output” can be done by looking at the final configuration of some subset of the toggles.

Whenever a ball reaches the bottom, it hits a lever that causes the next ball to be released from the top.  This ensures that the balls go through the device one at a time, rather than all at once.  As we’ll see, however, this is mainly for aesthetic reasons, and maybe also for the mechanical reason that the toggles wouldn’t work properly if two or more balls hit them at once.  The actual logic of the machine doesn’t care about the timing of the balls; the sheer number of balls that go through is all that matters.

The Digi-Comp II, as sold, contains a few other features: most notably, toggles that can be controlled by other toggles (or switches).  But I’ll defer discussion of that feature to later.  As we’ll see, we already get a quite interesting model of computation without it.

One final note: of course the machine that’s sold has a fixed size and a fixed geometry.  But for theoretical purposes, it’s much more interesting to consider an arbitrary network of toggles and switches (not necessarily even planar!), with arbitrary size, and with an arbitrary number of balls fed into it.  (I’ll give a more formal definition in the next section.)

The Power of the Digi-Comp

So, what exactly can the Digi-Comp do?  As a first exercise, you should convince yourself that, by simply putting a bunch of toggles in a line and initializing them all to “L” (that is, Left), it’s easy to set up a binary counter.  In other words, starting from the configuration, say, LLL (in which three toggles all point left), as successive balls pass through we can enter the configurations RLL, LRL, RRL, etc.  If we interpret L as 0 and R as 1, and treat the first bit as the least significant, then we’re simply counting from 0 to 7 in binary.  With 20 toggles, we could instead count to 1,048,575.

counter
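
To make this concrete, here is a minimal Python sketch (mine, not anything from the machine's documentation) of how such a counter behaves, assuming the standard wiring in which a ball exits after flipping a toggle from L to R, and continues to the next toggle (a "carry") after flipping one from R back to L:

```python
def drop_ball(toggles):
    """One ball falls down a chain of counter toggles: it flips every toggle it
    meets, exiting as soon as it flips an L to an R, and continuing (carrying)
    whenever it flips an R back to an L."""
    for i, setting in enumerate(toggles):
        if setting == 'L':
            toggles[i] = 'R'
            return                      # ball exits here; no carry
        toggles[i] = 'L'                # R -> L; the ball keeps falling

toggles = ['L', 'L', 'L']               # the configuration LLL, i.e. zero
for _ in range(5):
    drop_ball(toggles)
print(''.join(toggles))                 # prints RLR: 1 + 0 + 4 = 5, least significant bit first
```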

But counting is not the most interesting thing we can do.  As Charles eagerly demonstrated to me, we can also set up the Digi-Comp to perform binary addition, binary multiplication, sorting, and even long division.  (Excruciatingly slowly, of course: the Digi-Comp might need even more work to multiply 3×5 than existing quantum computers need to factor the result!)

To me, these demonstrations served only as proof that, while Charles might call himself a theoretical computer scientist, he’s really a practical person at heart.  Why?  Because a theorist would know that the real question is not what the Digi-Comp can do, but rather what it can’t do!  In particular, do we have a universal computer on our hands here, or not?

If the answer is yes, then it’s amazing that such a simple contraption of balls and toggles could already take us over the threshold of universality.  Universality would immediately explain why the Digi-Comp is capable of multiplication, division, sorting, and so on.  If, on the other hand, we don’t have universality, that too is extremely interesting—for we’d then face the challenge of explaining how the Digi-Comp can do so many things without being universal.

It might be said that the Digi-Comp is certainly not a universal computer, since if nothing else, it’s incapable of infinite loops.  Indeed, the number of steps that a given Digi-Comp can execute is bounded by the number of balls, while the number of bits it can store is bounded by the number of toggles: clearly we don’t have a Turing machine.  This is true, but doesn’t really tell us what we want to know.  For, as discussed in my last post, we can consider not only Turing-machine universality, but also the weaker (but still interesting) notion of circuit-universality.  The latter means the ability to simulate, with reasonable efficiency, any Boolean circuit of AND, OR, and NOT gates—and hence, in particular, to compute any Boolean function on any fixed number of input bits (given enough resources), or to simulate any polynomial-time Turing machine (given polynomial resources).

The formal way to ask whether something is circuit-universal, is to ask whether the problem of simulating the thing is P-complete.  Here P-complete (not to be confused with NP-complete!) basically means the following:

There exists a polynomial p such that any S-step Turing machine computation—or equivalently, any Boolean circuit with at most S gates—can be embedded into our system if we allow the use of p(S) computing elements (in our case, balls, toggles, and switches).

Of course, I need to tell you what I mean by the weasel phrase “can be embedded into.”  After all, it wouldn’t be too impressive if the Digi-Comp could “solve” linear programming, primality testing, or other highly-nontrivial problems, but only via “embeddings” in which we had to do essentially all the work, just to decide how to configure the toggles and switches!  The standard way to handle this issue is to demand that the embedding be “computationally simple”: that is, we should be able to carry out the embedding in L (logarithmic space), or some other complexity class believed to be much smaller than the class (P, in this case) for which we’re trying to prove completeness.  That way, we’ll be able to say that our device really was “doing something essential”—i.e., something that our embedding procedure couldn’t efficiently do for itself—unless the larger complexity class collapses with the smaller one (i.e., unless L=P).

So then, our question is whether simulating the Digi-Comp II is a P-complete problem under L-reductions, or alternatively, whether the problem is in some complexity class believed to be smaller than P.  The one last thing we need is a formal definition of “the problem of simulating the Digi-Comp II.”  Thus, let DIGICOMP be the following problem:

We’re given as inputs:

  • A directed acyclic graph G, with n vertices.  There is a designated vertex with indegree 0 and outdegree 1 called the “source,” and a designated vertex with indegree 1 and outdegree 0 called the “sink.”  Every internal vertex v (that is, every vertex with both incoming and outgoing edges) has exactly two outgoing edges, labeled “L” (left) and “R” (right), as well as one bit of internal state s(v)∈{L,R}.
  • For each vertex v, an “initial” value for its internal state s(v).
  • A positive integer T (encoded in unary notation), representing the number of balls dropped successively from the source vertex.

Computation proceeds as follows: each time a ball appears at the source vertex, it traverses the path induced by the L and R states of the vertices that it encounters, until it reaches a terminal vertex, which might or might not be the sink.  As the ball traverses the path, it flips s(v) for each vertex v that it encounters: L goes to R and R goes to L.  Then the next ball is dropped in.

The problem is to decide whether any balls reach the sink.

Here the internal vertices represent toggles, and the source represents the chute at the top from which the balls drop.  Switches aren’t included, since (by definition) the reduction can simply fix their values to “L” or “R” and thereby simplify the graph.

Of course we could consider other problems: for example, the problem of deciding whether an odd number of balls reach the sink, or of counting how many balls reach the sink, or of computing the final value of every state-variable s(v).  However, it’s not hard to show that all of these problems are interreducible with the DIGICOMP problem as defined above.
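
For concreteness, here is a short Python sketch of the ball-by-ball semantics just defined; the particular encoding of the graph is my own illustrative choice, not anything prescribed by the problem statement.  It returns the number of balls reaching the sink, from which any of the variants above can be read off.

```python
def simulate_digicomp(toggles, state, start, sink, T):
    """Ball-by-ball simulation of a DIGICOMP instance.
       toggles[v] = (left, right): the two outgoing edges of internal vertex v;
                    vertices absent from `toggles` are terminal (outdegree 0).
       state[v]   = 'L' or 'R': the initial setting of toggle v.
       start      = the vertex that the source's single outgoing edge points to.
       T          = number of balls, given in unary (hence the explicit loop).
       Returns how many balls reach `sink`; DIGICOMP asks whether this is > 0."""
    reached = 0
    for _ in range(T):
        v = start
        while v in toggles:                              # internal vertex = toggle
            left, right = toggles[v]
            nxt = left if state[v] == 'L' else right
            state[v] = 'R' if state[v] == 'L' else 'L'   # the ball flips the toggle
            v = nxt
        reached += (v == sink)
    return reached
```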

The Class CC

My main result, in this paperlet, is to pin down the complexity of the DIGICOMP problem in terms of a complexity class called CC (Comparator Circuit): a class that’s obscure enough not to be in the Complexity Zoo (!), but that’s been studied in several papers.  CC was defined by Subramanian in his 1990 Stanford PhD thesis; around the same time Mayr and Subramanian showed the inclusion NL ⊆ CC (the inclusion CC ⊆ P is immediate).  Recently Cook, Filmus, and Lê revived interest in CC with their paper The Complexity of the Comparator Circuit Value Problem, which is probably the best current source of information about this class.

OK, so what is CC?  Informally, it’s the class of problems that you can solve using a comparator circuit, which is a circuit that maps n bits of input to n bits of output, and whose only allowed operation is to sort any desired pair of bits.  That is, a comparator circuit can repeatedly apply the transformation (x,y)→(x∧y,x∨y), in which 00, 01, and 11 all get mapped to themselves, while 10 gets mapped to 01.  Note that there’s no facility in the circuit for copying bits (i.e., for fanout), so sorting could irreversibly destroy information about the input.  In the comparator circuit value problem (or CCV), we’re given as input a description of a comparator circuit C, along with an input x∈{0,1}^n and an index i∈[n]; then the problem is to determine the final value of the ith bit when C is applied to x.  Then CC is simply the class of all languages that are L-reducible to CCV.

sort

As Cook et al. discuss, there are various other characterizations of CC: for example, rather than using a complete problem, we can define CC directly as the class of languages computed by uniform families of comparator circuits.  More strikingly, Mayr and Subramanian showed that CC has natural complete problems, which include (decision versions of) the famous Stable Marriage Problem, as well as finding the lexicographically first perfect matching in a bipartite graph.  So perhaps the most appealing definition of CC is that it’s “the class of problems that can be easily mapped to the Stable Marriage Problem.”

It’s a wide-open problem whether CC=NL or CC=P: as usual, one can give oracle separations, but as far as anyone knows, either equality could hold without any dramatic implications for “standard” complexity classes.  (Of course, the conjunction of these equalities would have a dramatic implication.)  What got Cook et al. interested was that CC isn’t even known to contain (or be contained in) the class NC of parallelizable problems.  In particular, linear-algebra problems in NC, like determinant, matrix inversion, and iterated matrix multiplication—not to mention other problems in P, like linear programming and greatest common divisor—might all be examples of problems that are efficiently solvable by Boolean circuits, but not by comparator circuits.

One final note about CC.  Cook et al. showed the existence of a universal comparator circuit: that is, a single comparator circuit C able to simulate any other comparator circuit C’ of some fixed size, given a description of C’ as part of its input.

DIGICOMP is CC-Complete

I can now proceed to my result: that, rather surprisingly, the Digi-Comp II can solve exactly the problems in CC, giving us another characterization of that class.

I’ll prove this using yet another model of computation, which I call the pebbles model.  In the pebbles model, you start out with a pile of x pebbles; the positive integer x is the “input” to your computation.  Then you’re allowed to apply a straight-line program that consists entirely of the following two operations:

  1. Given any pile of y pebbles, you can split it into two piles consisting of ⌈y/2⌉ and ⌊y/2⌋ pebbles respectively.
  2. Given any two piles, consisting of y and z pebbles respectively, you can combine them into a single pile consisting of y+z pebbles.

Your program “accepts” if and only if some designated output pile contains at least one pebble (or, in a variant that can be shown to be equivalent, if it contains an odd number of pebbles).

piles

As suggested by the imagery, you don’t get to make “backup copies” of the piles before splitting or combining them: if, for example, you merge y with z to create y+z, then y isn’t also available to be split into ⌈y/2⌉ and ⌊y/2⌋.

Note that the ceiling and floor functions are the only “nonlinear” elements of the pebbles model: if not for them, we’d simply be applying a sequence of linear transformations.
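
Here is a minimal sketch of the two allowed operations, together with a toy straight-line program; the specific program is just an illustration of the format.

```python
def split(y):
    """Split a pile of y pebbles into piles of ceil(y/2) and floor(y/2)."""
    return (y + 1) // 2, y // 2

def combine(y, z):
    """Combine two piles into a single pile of y + z pebbles."""
    return y + z

# A toy straight-line program on an input pile of x pebbles.  Note the
# no-copying discipline: once a pile is split or combined, the old pile
# is gone and must not be reused.
x = 13
a, b = split(x)            # piles of 7 and 6
c = combine(a, b)          # a and b are now consumed
out, _ = split(c)
print(out >= 1)            # "accept" iff the designated output pile is nonempty
```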

I can now divide my CC-completeness proof into two parts: first, that DIGICOMP (i.e., the problem of simulating the Digi-Comp II) is equivalent to the pebbles model, and second, that the pebbles model is equivalent to comparator circuits.

Let’s first show the equivalence between DIGICOMP and pebbles.  The reduction is simply this: in a given Digi-Comp, each edge will be associated with a pile, with the number of pebbles in the pile equal to the total number of balls that ever traverse that edge.  Thus, the T balls dropped in correspond to an initial pile of T pebbles on the edge incident to the source vertex.  Multiple edges pointing to the same vertex (i.e., fan-in) can be modeled by combining the associated piles into a single pile.  Meanwhile, a toggle has the effect of splitting a pile: if y balls enter the toggle in total, then ⌈y/2⌉ balls will ultimately exit in whichever direction the toggle was pointing initially (whether left or right), and ⌊y/2⌋ balls will ultimately exit in the other direction.  It’s clear that this equivalence works in both directions: not only does it let us simulate any given Digi-Comp by a pebble program, it also lets us simulate any pebble program by a suitably-designed Digi-Comp.
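
In code, the counts-only (pebble) view looks like this; the sketch below reuses the same illustrative graph encoding as the ball-by-ball simulator above, processes the toggles in topological order, and keeps only one integer per vertex.

```python
from graphlib import TopologicalSorter   # Python 3.9+

def digicomp_via_pebbles(toggles, state, start, sink, T):
    """Pebble (counts-only) simulation of the same DIGICOMP encoding as above:
       instead of dropping balls one at a time, track the total number of balls
       that ever arrive at each vertex.  A toggle that receives y balls in total
       sends ceil(y/2) in its initial direction and floor(y/2) the other way."""
    vertices = set(toggles) | {start, sink}
    for left, right in toggles.values():
        vertices |= {left, right}

    # Topological order: process each toggle only after everything feeding into it.
    preds = {v: set() for v in vertices}
    for v, (left, right) in toggles.items():
        preds[left].add(v)
        preds[right].add(v)

    arriving = {v: 0 for v in vertices}
    arriving[start] = T                            # the source drops T balls onto `start`
    for v in TopologicalSorter(preds).static_order():
        if v in toggles:
            y = arriving[v]                        # the combined pile at this toggle
            left, right = toggles[v]
            first, rest = (y + 1) // 2, y // 2     # split: the ceiling goes the initial way
            if state[v] == 'L':
                arriving[left] += first; arriving[right] += rest
            else:
                arriving[right] += first; arriving[left] += rest
    return arriving[sink]                          # agrees with the ball-by-ball count
```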

OK, next let’s show the equivalence between pebbles and comparator circuits.  As a first step, given any comparator circuit, I claim that we can simulate it by a pebble program.  The way to do it is simply to use a pile of 0 pebbles to represent each “0” bit, and a pile of 1 pebble to represent each “1” bit.  Then, any time we want to sort two bits, we simply merge their corresponding piles, then split the result back into two piles.  The result?  00 gets mapped to 00, 11 gets mapped to 11, and 01 and 10 both get mapped to one pebble in the ⌈y/2⌉ pile and zero pebbles in the ⌊y/2⌋ pile.  At the end, a given pile will have a pebble in it if and only if the corresponding output bit in the comparator circuit is 1.
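
In code, the claim is just the arithmetic identity that, for bits, ⌊(x+y)/2⌋ = x∧y and ⌈(x+y)/2⌉ = x∨y:

```python
def comparator_via_piles(x, y):
    merged = x + y                            # combine the two zero-or-one-pebble piles
    return merged // 2, (merged + 1) // 2     # (floor, ceil) = (x AND y, x OR y)

for x in (0, 1):
    for y in (0, 1):
        assert comparator_via_piles(x, y) == (x & y, x | y)
```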

One might worry that the input to a comparator circuit is a sequence of bits, whereas I said before that the input to a pebble program is just a single pile.  However, it’s not hard to see that we can deal with this, without leaving the world of logspace reductions, by breaking up an initial pile of n pebbles into n piles each of zero pebbles or one pebble, corresponding to any desired n-bit string (along with some extra pebbles, which we subsequently ignore).  Alternatively, we could generalize the pebbles model so that the input can consist of multiple piles.  One can show, by a similar “breaking-up” trick, that this wouldn’t affect the pebbles model’s equivalence to the DIGICOMP problem.

Finally, given a pebble program, I need to show how to simulate it by a comparator circuit.  The reduction works as follows: let T be the number of pebbles we’re dealing with (or even just an upper bound on that number).  Then each pile will be represented by its own group of T wires in the comparator circuit.  The Hamming weight of those T wires—i.e., the number of them that contain a ‘1’ bit—will equal the number of pebbles in the corresponding pile.

To merge two piles, we first merge the corresponding groups of T wires.  We then use comparator gates to sort the bits in those 2T wires, until all the ‘1’ bits have been moved into the first T wires.  Finally, we ignore the remaining T wires for the remainder of the computation.

To split a pile, we first use comparator gates to sort the bits in the T wires, until all the ‘1’ bits have been moved to the left.  We then route all the odd-numbered wires into “Pile A” (the one that’s supposed to get ⌈y/2⌉ pebbles), and route all the even-numbered wires into “Pile B” (the one that’s supposed to get ⌊y/2⌋ pebbles).  Finally, we introduce T additional wires with 0’s in them, so that piles A and B have T wires each.

At the end, by examining the leftmost wire in the group of wires corresponding to the output pile, we can decide whether that pile ends up with any pebbles in it.
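
Here is a sketch of that encoding, with each pile represented by T wires and with only pairwise sorting of bits allowed; the deliberately naive quadratic-size sorting network, and all the names, are my own illustrative choices.

```python
def comparator(wires, i, j):
    """One comparator gate (i < j): afterwards, wires[i] holds the OR and wires[j]
    the AND, so '1' bits migrate toward lower-numbered wires."""
    wires[i], wires[j] = wires[i] | wires[j], wires[i] & wires[j]

def sort_ones_left(wires):
    """Move all '1' bits to the front using only comparator gates."""
    n = len(wires)
    for i in range(n):
        for j in range(i + 1, n):
            comparator(wires, i, j)

def merge_piles(pile_a, pile_b):
    """Merge two piles of T wires each.  Assumes T upper-bounds the total number
    of pebbles, so after sorting, the last T of the 2T wires are all zero and
    can safely be ignored."""
    T = len(pile_a)
    wires = pile_a + pile_b
    sort_ones_left(wires)
    return wires[:T]

def split_pile(pile):
    """Split a pile of T wires: the odd-numbered wires (1st, 3rd, 5th, ...) get
    ceil(y/2) of the ones, the even-numbered wires get floor(y/2); each half is
    then padded back up to T wires with fresh zero wires."""
    T = len(pile)
    sort_ones_left(pile)
    pile_a, pile_b = pile[0::2], pile[1::2]
    return pile_a + [0] * (T - len(pile_a)), pile_b + [0] * (T - len(pile_b))

# Quick check: a pile with 5 pebbles (T = 8) splits into piles of 3 and 2,
# and merging those two piles gives back a pile of 5.
pile = [1, 0, 1, 1, 0, 1, 0, 1]
a, b = split_pile(pile)
assert (sum(a), sum(b)) == (3, 2)
assert sum(merge_piles(a, b)) == 5
```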

Since it’s clear that all of the above transformations can be carried out in logspace (or even smaller complexity classes), this completes the proof that DIGICOMP is CC-complete under L-reductions.  As corollaries, the Stable Marriage and lexicographically-first perfect matching problems are L-reducible to DIGICOMP—or informally, are solvable by easily-described, polynomial-size Digi-Comp machines (and indeed, characterize the power of such machines).  Combining my result with the universality result of Cook et al., a second corollary is that there exists a “universal Digi-Comp”: that is, a single Digi-Comp D that can simulate any other Digi-Comp D’ of some polynomially-smaller size, so long as we initialize some subset of the toggles in D to encode a description of D’.

How Does the Digi-Comp Avoid Universality?

Let’s now step back and ask: given that the Digi-Comp is able to do so many things—division, Stable Marriage, bipartite matching—how does it fail to be a universal computer, at least a circuit-universal one?  Is the Digi-Comp a counterexample to the oft-repeated claims of people like Stephen Wolfram, about the ubiquity of universal computation and the difficulty of avoiding it in any sufficiently complex system?  What would need to be added to the Digi-Comp to make it circuit-universal?  Of course, we can ask the same questions about pebble programs and comparator circuits, now that we know that they’re all computationally equivalent.

The reason for the failure of universality is perhaps easiest to see in the case of comparator circuits.  As Steve Cook pointed out in a talk, comparator circuits are “1-Lipschitz”: that is, if you have a comparator circuit acting on n input bits, and you change one of the input bits, your change can affect at most one output bit.  Why?  Well, trace through the circuit and use induction.  So in particular, there’s no amplification of small effects in comparator circuits, no chaos, no sensitive dependence on initial conditions, no whatever you want to call it.  Now, while chaos doesn’t suffice for computational universality, at least naïvely it’s a necessary condition, since there exist computations that are chaotic.  Of course, this simpleminded argument can’t be all there is to it, since otherwise we would’ve proved CC≠P.  What the argument does show is that, if CC=P, then the encoding of a Boolean circuit into a comparator circuit (or maybe into a collection of such circuits) would need to be subtle and non-obvious: it would need to take computations with the potential for chaos, and reduce them to computations without that potential.
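
Here is a quick empirical illustration of the 1-Lipschitz property on random circuits and random inputs (the gate convention follows the (x,y)→(x∧y,x∨y) rule above; the circuit sizes are arbitrary choices of mine):

```python
import random

def run_comparator_circuit(gates, x):
    """Apply comparator gates to the bit-vector x: gate (i, j) maps the pair
    (x_i, x_j) to (x_i AND x_j, x_i OR x_j)."""
    w = list(x)
    for i, j in gates:
        w[i], w[j] = w[i] & w[j], w[i] | w[j]
    return w

n = 12
for _ in range(1000):
    gates = [tuple(random.sample(range(n), 2)) for _ in range(50)]
    x = [random.randint(0, 1) for _ in range(n)]
    y = list(x)
    y[random.randrange(n)] ^= 1                   # flip a single input bit
    diff = sum(a != b for a, b in zip(run_comparator_circuit(gates, x),
                                      run_comparator_circuit(gates, y)))
    assert diff <= 1                              # at most one output bit changes
```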

Once we understand this 1-Lipschitz business, we can also see it at work in the pebbles model.  Given a pebble program, suppose someone surreptitiously removed a single pebble from one of the initial piles.  For want of that pebble, could the whole kingdom be lost?  Not really.  Indeed, you can convince yourself that the output will be exactly the same as before, except that one output pile will have one fewer pebble than it would have otherwise.  The reason is again an induction: if you change x by 1, that affects at most one of ⌈x/2⌉ and ⌊x/2⌋ (and likewise, merging two piles affects at most one pile).

We now see the importance of the point I made earlier, about there being no facility in the piles model for “copying” a pile.  If we could copy piles, then the 1-Lipschitz property would fail.  And indeed, it’s not hard to show that in that case, we could implement AND, OR, and NOT gates with arbitrary fanout, and would therefore have a circuit-universal computer.  Likewise, if we could copy bits, then comparator circuits—which, recall, map (x,y) to (x∧y,x∨y)—would implement AND, OR, and NOT with arbitrary fanout, and would be circuit-universal.  (If you’re wondering how to implement NOT: one way to do it is to use what’s known in quantum computing as the “dual-rail representation,” where each bit b is encoded by two bits, one for b and the other for ¬b.  Then a NOT can be accomplished simply by swapping those bits.  And it’s not hard to check that comparator gates in a comparator circuit, and combining and splitting two piles in a pebble program, can achieve the desired updates to both the b rails and the ¬b rails when an AND or OR gate is applied.  However, we could also just omit NOT gates entirely, and use the fact that computing the output of even a monotone Boolean circuit is a P-complete problem under L-reductions.)

In summary, then, the inability to amplify small effects seems like an excellent candidate for the central reason why the power of comparator circuits and pebble programs hits a ceiling at CC, and doesn’t go all the way up to P.  It’s interesting, in this connection, that while transistors (and before them, vacuum tubes) can be used to construct logic gates, the original purpose of both of them was simply to amplify information: to transform a small signal into a large one.  Thus, we might say, comparator circuits and pebble programs fail to be computationally universal because they lack transistors or other amplifiers.

I’d like to apply exactly the same analysis to the Digi-Comp itself: that is, I’d like to say that the reason the Digi-Comp fails to be universal (unless CC=P) is that it, too, lacks the ability to amplify small effects (wherein, for example, the drop of a single ball would unleash a cascade of other balls).  In correspondence, however, David Deutsch pointed out a problem: namely, if we just watch a Digi-Comp in action, then it certainly looks like it has an amplification capability!  Consider, for example, the binary counter discussed earlier.  Suppose a column of ten toggles is in the configuration RRRRRRRRRR, representing the integer 1023.  Then the next ball to fall down will hit all ten toggles in sequence, resetting them to LLLLLLLLLL (and thus, resetting the counter to 0).  Why isn’t this precisely the amplification of a small effect that we were looking for?

Well, maybe it’s amplification, but it’s not of a kind that does what we want computationally.  One way to see the difficulty is that we can’t take all those “L” settings we’ve produced as output, and feed them as inputs to further gates in an arbitrary way.  We could do it if the toggles were arranged in parallel, but they’re arranged serially, so that flipping any one toggle inevitably has the potential also to flip the toggles below it.  Deutsch describes this as a “failure of composition”: in some sense, we do have a fan-out or copying operation, but the design of the Digi-Comp prevents us from composing the fan-out operation with other operations in arbitrary ways, and in particular, in the ways that would be needed to simulate any Boolean circuit.

So, what features could we add to the Digi-Comp to make it universal?  Here’s the simplest possibility I was able to come up with: suppose that, scattered throughout the device, there were balls precariously perched on ledges, in such a way that whenever one was hit by another ball, it would get dislodged, and both balls would continue downward.  We could, of course, chain several of these together, so that the two balls would in turn dislodge four balls, the four would dislodge eight, and so on.  I invite you to check that this would provide the desired fan-out gate, which, when combined with AND, OR, and NOT gates that we know how to implement (e.g., in the dual-rail representation described previously), would allow us to simulate arbitrary Boolean circuits.  In effect, the precariously perched balls would function as “transistors” (of course, painfully slow transistors, and ones that have to be laboriously reloaded with a ball after every use).

As a second possibility, Charles Leiserson points out to me that the Digi-Comp, as sold, has a few switches and toggles that can be controlled by other toggles.  Depending on exactly how one modeled this feature, it’s possible that it, too, could let us implement arbitrary fan-out gates, and thereby boost the Digi-Comp up to circuit-universality.

Open Problems

My personal favorite open problem is this:

What is the complexity of simulating a Digi-Comp II if the total number of balls dropped in is exponential, rather than polynomial?  (In other words, if the positive integer T, representing the number of balls, is encoded in binary rather than in unary?)

From the equivalence between the Digi-Comp and pebble programs, we can already derive a conclusion about the above problem that’s not intuitively obvious: namely, that it’s in P.  Or to say it another way: it’s possible to predict the exact state of a Digi-Comp with n toggles, after T balls have passed through it, using poly(n, log T) computation steps.  The reason is simply that, if there are T balls, then the total number of balls that pass through any given edge (the only variable we need to track) can be specified using log₂T bits.  This, incidentally, gives us a second sense in which the Digi-Comp is not a universal computer: namely, even if we let the machine “run for exponential time” (that is, drop exponentially many balls into it), unlike a conventional digital computer it can’t possibly solve all problems in PSPACE, unless P=PSPACE.
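
In code terms, the counts-only sketch from earlier already realizes this: each per-vertex count is an integer of roughly log₂T bits, and the arithmetic on such integers is cheap.  For instance, reusing the hypothetical digicomp_via_pebbles function and encoding from above, with a three-toggle counter wired so that a ball exits to the left and carries to the right:

```python
toggles = {'t0': ('exit0', 't1'), 't1': ('exit1', 't2'), 't2': ('exit2', 'sink')}
state = {'t0': 'L', 't1': 'L', 't2': 'L'}
T = 10**100                                    # far too many balls to simulate one by one
print(digicomp_via_pebbles(toggles, state, 't0', 'sink', T))   # floor(T/8): only every eighth ball carries all the way through
```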

However, this situation also presents us with a puzzle: if we let DIGICOMPEXP be the problem of simulating a Digi-Comp with an exponential number of balls, then it’s clear that DIGICOMPEXP is hard for CC and contained in P, but we lack any information about its difficulty more precise than that.  At present, I regard both extremes—that DIGICOMPEXP is in CC (and hence, no harder than ordinary DIGICOMP), and that it’s P-complete—as within the realm of possibility (along with the possibility that DIGICOMPEXP is intermediate between the two).

By analogy, one can also consider comparator circuits where the entities that get compared are integers from 1 to T rather than bits—and one can then consider the power of such circuits, when T is allowed to grow exponentially.  In email correspondence, however, Steve Cook sent me a proof that such circuits have the same power as standard, Boolean comparator circuits.  It’s not clear whether this tells us anything about the power of a Digi-Comp with exponentially many balls.

A second open problem is to formalize the feature of Digi-Comp that Charles mentioned—namely, toggles and switches controlled by other toggles—and see whether, under some reasonable formalization, that feature bumps us up to P-completeness (i.e., to circuit-universality).  Personally, though, I confess I’d be even more interested if there were some feature we could add to the machine that gave us a larger class than CC, but that still wasn’t all of P.

A third problem is to pin down the power of Digi-Comps (or pebble programs, or comparator circuits) that are required to be planar.  While my experience with woodcarving is limited, I imagine that planar or near-planar graphs are a lot easier to carve than arbitrary graphs (even if the latter present no problems of principle).

A fourth problem has to do with the class CC in general, rather than the Digi-Comp in particular, but I can’t resist mentioning it.  Let CCEXP be the complexity class that’s just like CC, but where the comparator circuit (or pebble program, or Digi-Comp) is exponentially large and specified only implicitly (that is, by a Boolean circuit that, given as input a binary encoding of an integer i, tells you the ith bit of the comparator circuit’s description).  Then it’s easy to see that PSPACE ⊆ CCEXP ⊆ EXP.  Do we have CCEXP = PSPACE or CCEXP = EXP?  If not, then CCEXP would be the first example I’ve ever seen of a natural complexity class intermediate between PSPACE and EXP.

Acknowledgments

I thank Charles Leiserson for bringing the Digi-Comp II to MIT, and thereby inspiring this “research.”  I also thank Steve Cook, both for giving a talk that first brought the complexity class CC to my attention, and for helpful correspondence.  Finally I thank David Deutsch for the point about composition.

A Physically Universal Cellular Automaton

June 26th, 2014

It’s been understood for decades that, if you take a simple discrete rule—say, a cellular automaton like Conway’s Game of Life—and iterate it over and over, you can very easily get the capacity for universal computation.  In other words, your cellular automaton becomes able to implement any desired sequence of AND, OR, and NOT gates, store and retrieve bits in a memory, and even (in principle) run Windows or Linux, albeit probably veerrryyy sloowwllyyy, using a complicated contraption of thousands or millions of cells to represent each bit of the desired computation.  If I’m not mistaken, a guy named Wolfram even wrote an entire 1200-page-long book about this phenomenon (see here for my 2002 review).

But suppose we want more than mere computational universality.  Suppose we want “physical” universality: that is, the ability to implement any transformation whatsoever on any finite region of the cellular automaton’s state, by suitably initializing the complement of that region.  So for example, suppose that, given some 1000×1000 square of cells, we’d like to replace every “0” cell within that square by a “1” cell, and vice versa.  Then physical universality would mean that we could do that, eventually, by some “machine” we could build outside the 1000×1000 square of interest.

You might wonder: are we really asking for more here than just ordinary computational universality?  Indeed we are.  To see this, consider Conway’s famous Game of Life.  Even though Life has been proved to be computationally universal, it’s not physically universal in the above sense.  The reason is simply that Life’s evolution rule is not time-reversible.  So if, for example, there were a lone “1” cell deep inside the 1000×1000 square, surrounded by a sea of “0” cells, then that “1” cell would immediately disappear without a trace, and no amount of machinery outside the square could possibly detect that it was ever there.

Furthermore, even cellular automata that are both time-reversible and computationally universal could fail to be physically universal.  Suppose, for example, that our CA allowed for the construction of “impenetrable walls,” through which no signal could pass.  And suppose that our 1000×1000 region contained a hollow box built out of these impenetrable walls.  Then, by definition, no amount of machinery that we built outside the region could ever detect whether there was a particle bouncing around inside the box.

So, in summary, we now face a genuinely new question:

Does there exist a physically universal cellular automaton, or not?

This question had sort of vaguely bounced around in my head (and probably other people’s) for years.  But as far as I know, it was first asked, clearly and explicitly, in a lovely 2010 preprint by Dominik Janzing.

Today, I’m proud to report that Luke Schaeffer, a first-year PhD student in my group, has answered Janzing’s question in the affirmative, by constructing the first cellular automaton (again, to the best of our knowledge) that’s been proved to be physically universal.  Click here for Luke’s beautifully-written preprint about his construction, and click here for a webpage that he’s prepared, explaining the details of the construction using color figures and videos.  Even if you don’t have time to get into the nitty-gritty, the videos on the webpage should give you a sense for the intricacy of what he accomplished.

Very briefly, Luke first defines a reversible, two-dimensional CA involving particles that move diagonally across a square lattice, in one of four possible directions (northeast, northwest, southeast, or southwest).  The number of particles is always conserved.  The only interesting behavior occurs when three of the particles “collide” in a single 2×2 square, and Luke gives rules (symmetric under rotations and reflections) that specify what happens then.

Given these rules, it’s possible to prove that any configuration whatsoever of finitely many particles will “diffuse,” after not too many time steps, into four unchanging clouds of particles, which thereafter simply move away from each other in the four diagonal directions for all eternity.  This has the interesting consequence that Luke’s CA, when initialized with finitely many particles, cannot be capable of universal computation in Turing’s sense.  In other words, there’s no way, using n initial particles confined to an n×n box, to set up a computation that continues to do something interesting after 2^n or 2^(2^n) time steps, let alone forever.

On the other hand, using finitely many particles, one can also prove that the CA can perform universal computation in the Boolean circuit sense.  In other words, we can implement AND, OR, and NOT gates, and by chaining them together, can compute any Boolean function that we like on any fixed number of input bits (with the number of input bits generally much smaller than the number of particles).  And this “circuit universality,” rather than Turing-machine universality, is all that’s implied anyway by physical universality in Janzing’s sense.  (As a side note, the distinction between circuit and Turing-machine universality seems to deserve much more attention than it usually gets.)

Anyway, while the “diffusion into four clouds” aspect of Luke’s CA might seem annoying, it turns out to be extremely useful for proving physical universality.  For it has the consequence that, no matter what the initial state was inside the square we cared about, that state will before too long be encoded into the states of four clouds headed away from the square.  So then, “all” we need to do is engineer some additional clouds of particles, initially outside the square, that

  1. intercept the four escaping clouds,
  2. “decode” the contents of those clouds into a flat sequence of bits,
  3. apply an arbitrary Boolean circuit to that bit sequence, and then
  4. convert the output bits of the Boolean circuit into new clouds of particles converging back onto the square.

So, well … that’s exactly what Luke did.  And just in case there’s any doubt about the correctness of the end result, Luke actually implemented his construction in the cellular-automaton simulator Golly, where you can try it out yourself (he explains how on his webpage).

So far, of course, I’ve skirted past the obvious question of “why.”  Who cares that we now know that there exists a physically-universal CA?  Apart from the sheer intrinsic coolness, a second reason is that I’ve been interested for years in how to make finer (but still computer-sciencey) distinctions, among various “candidate laws of physics,” than simply saying that some laws are computationally universal and others aren’t, or some are easy to simulate on a standard Turing machine and others hard.  For ironically, the very pervasiveness of computational universality (the thing Wolfram goes on and on about) makes it of limited usefulness in distinguishing physical laws: almost any sufficiently-interesting set of laws will turn out to be computationally universal, at least in the circuit sense if not the Turing-machine one!

On the other hand, many of these laws will be computationally universal only because of extremely convoluted constructions, which fall apart if even the tiniest error is introduced.  And in other cases, we’ll be able to build a universal computer, all right, but that computer will be relatively impotent to obtain interesting input about its physical environment, or to make its output affect the gross features of the CA’s physical state.  If you like, we’ll have a recipe for creating a universe full of ivory-tower, eggheaded nerds, who can search for counterexamples to Goldbach’s Conjecture but can’t build a shelter to protect themselves from a hail of “1” bits, or even learn whether such a hail is present or not, or decide which other part of the CA to travel to.

As I see it, Janzing’s notion of physical universality is directly addressing this “egghead” problem, by asking whether we can build not merely a universal computer but a particularly powerful kind of robot: one that can effect a completely arbitrary transformation (given enough time, of course) on any part of its physical environment.  And the answer turns out to be that, at least in a weird CA consisting of clouds of diagonally-moving particles, we can indeed do that.  The question of whether we can also achieve physical universality in more natural CAs remains open (and in his Future Work section, Luke discusses several ways of formalizing what we mean by “more natural”).

As Luke mentions in his introduction, there’s at least a loose connection here to David Deutsch’s recent notion of constructor theory (see also this followup paper by Deutsch and Chiara Marletto).  Basically, Deutsch and Marletto want to reconstruct all of physics taking what can and can’t be constructed (i.e., what kinds of transformations are possible) as the most primitive concept, rather than (as in ordinary physics) what will or won’t happen (i.e., how the universe’s state evolves with time).  The hope is that, once physics was reconstructed in this way, we could then (for example) state and answer the question of whether or not scalable quantum computers can be built as a principled question of physics, rather than as a “mere” question of engineering.

Now, regardless of what you think about these audacious goals, or about Deutsch and Marletto’s progress (or lack of progress?) so far toward achieving them, it’s certainly a worthwhile project to study what sorts of machines can and can’t be constructed, as a matter of principle, both in the real physical world and in other, hypothetical worlds that capture various aspects of our world.  Indeed, one could say that that’s what many of us in quantum information and theoretical computer science have been trying to do for decades!  However, Janzing’s “physical universality” problem hints at a different way to approach the project: starting with some far-reaching desire (say, to be able to implement any transformation whatsoever on any finite region), can we engineer laws of physics that make that desire possible?  If so, then how close can we make those laws to “our” laws?

Luke has now taken a first stab at answering these questions.  Whether his result ends up merely being a fun, recreational “terminal branch” on the tree of science, or a trunk leading to something more, probably just depends on how interested people get.  I have no doubt that our laws of physics permit the creation of additional papers on this topic, but whether they do or don’t is (as far as I can see) merely a question of contingency and human will, not a constructor-theoretic question.

Integrated Information Theory: Virgil Griffith opines

June 25th, 2014

Remember the two discussions about Integrated Information Theory that we had a month ago on this blog?  You know, the ones where I argued that IIT fails because “the brain might be an expander, but not every expander is a brain”; where IIT inventor Giulio Tononi wrote a 14-page response biting the bullet with mustard; and where famous philosopher of mind David Chalmers, and leading consciousness researcher (and IIT supporter) Christof Koch, also got involved in the comments section?

OK, so one more thing about that.  Virgil Griffith recently completed his PhD under Christof Koch at Caltech—as he puts it, “immersing [him]self in the nitty-gritty of IIT for the past 6.5 years.”  This morning, Virgil sent me two striking letters about his thoughts on the recent IIT exchanges on this blog.  He asked me to share them here, something that I’m more than happy to do:

Reading these letters, what jumped out at me—given Virgil’s long apprenticeship in the heart of IIT-land—was the amount of agreement between my views and his.  In particular, Virgil agrees with my central contention that Φ, as it stands, can at most be a necessary condition for consciousness, not a sufficient condition, and remarks that “[t]o move IIT from talked about to accepted among hard scientists, it may be necessary for [Tononi] to wash his hands of sufficiency claims.”  He agrees that a lack of mathematical clarity in the definition of Φ is a “major problem in the IIT literature,” commenting that “IIT needs more mathematically inclined people at its helm.”  He also says he agrees “110%” that the lack of a derivation of the form of Φ from IIT’s axioms is “a pothole in the theory,” and further agrees 110% that the current prescriptions for computing Φ contain many unjustified idiosyncrasies.

Indeed, given the level of agreement here, there’s not all that much for me to rebut, defend, or clarify!

I suppose there are a few things.

  1. Just as a clarifying remark, in a few places where it looks from the formatting like Virgil is responding to something I said (for example, “The conceptual structure is unified—it cannot be decomposed into independent components” and “Clearly, a theory of consciousness must be able to provide an adequate account for such seemingly disparate but largely uncontroversial facts”), he’s actually responding to something Giulio said (and that I, at most, quoted).
  2. Virgil says, correctly, that Giulio would respond to my central objection against IIT by challenging my “intuition for things being unconscious.”  (Indeed, because Giulio did respond, there’s no need to speculate about how he would respond!)  However, Virgil then goes on to explicate Giulio’s response using the analogy of temperature (interestingly, the same analogy I used for a different purpose).  He points out how counterintuitive it would be for Kelvin’s contemporaries to accept that “even the coldest thing you’ve touched actually has substantial heat in it,” and remarks: “I find this ‘Kelvin scale for C’ analogy makes the panpsychism much more palatable.”  The trouble is that I never objected to IIT’s panpsychism per se: I only objected to its seemingly arbitrary and selective panpsychism.  It’s one thing for a theory to ascribe some amount of consciousness to a 2D grid or an expander graph.  It’s quite another for a theory to ascribe vastly more consciousness to those things than it ascribes to a human brain—even while denying consciousness to things that are intuitively similar but organized a little differently (say, a 1D grid).  A better analogy here would be if Kelvin’s theory of temperature had predicted, not merely that all ordinary things had some heat in them, but that an ice cube was hotter than the Sun, even though a popsicle was, of course, colder than the Sun.  (The ice cube, you see, “integrates heat” in a way that the popsicle doesn’t…)
  3. Virgil imagines two ways that an IIT proponent could respond to my argument involving the cerebellum—the argument that accuses IIT proponents of changing the rules of the game according to convenience (a 2D grid has a large Φ?  suck it up and accept it; your intuitions about a grid’s lack of consciousness are irrelevant.  the human cerebellum has a small Φ?  ah, that’s a victory for IIT, since the cerebellum is intuitively unconscious).  The trouble is that both of Virgil’s imagined responses are by reference to the IIT axioms.  But I wasn’t talking about the axioms themselves, but about whether we’re allowed to validate the axioms, by checking their consequences against earlier, pre-theoretic intuitions.  And I was pointing out that Giulio seemed happy to do so when the results “went in IIT’s favor” (in the cerebellum example), even though he lectured me against doing so in the cases of the expander and the 2D grid (cases where IIT does less well, to put it mildly, at capturing our intuitions).
  4. Virgil chastises me for ridiculing Giulio’s phenomenological argument for the consciousness of a 2D grid by way of nursery rhymes: “Just because it feels like something to see a wall, doesn’t mean it feels like something to be a wall.  You can smell a rose, and the rose can smell good, but that doesn’t mean the rose can smell you.”  Virgil amusingly comments: “Even when both are inebriated, I’ve never heard [Giulio] nor [Christof] separately or collectively imply anything like this.  Moreover, they’re each far too clueful to fall for something so trivial.”  For my part, I agree that neither Giulio nor Christof would ever advocate something as transparently silly as, “if you have a rich inner experience when thinking about X, then that’s evidence X itself is conscious.”  And I apologize if I seemed to suggest they would.  To clarify, my point was not that Giulio was making such an absurd statement, but rather that, assuming he wasn’t, I didn’t know what he was trying to say in the passages of his that I’d just quoted at length.  The silly thing seemed like the “obvious” reading of his words, and my hermeneutic powers were unequal to the task of figuring out the non-silly, non-obvious reading that he surely intended.

Anyway, there’s much more to Virgil’s letters than the above—including answers to some of my subsidiary questions about the details of IIT (e.g., how to handle unbalanced partitions, and the mathematical meanings of terms like “mechanism” and “system of mechanisms”).  Also, in parts of the letters, Virgil’s main concern is neither to agree with me nor to agree with Giulio, but rather to offer his own ideas, developed in the course of his PhD work, for how to move forward and fix some of the problems with IIT.  All in all, these are recommended reads for anyone who’s been following this debate.

Eigenmorality

June 18th, 2014

This post is about an idea I had around 1997, when I was 16 years old and a freshman computer-science major at Cornell.  Back then, I was extremely impressed by a research project called CLEVER, which one of my professors, Jon Kleinberg, had led while working at IBM Almaden.  The idea was to use the link structure of the web itself to rank which web pages were most important, and therefore which ones should be returned first in a search query.  Specifically, Kleinberg defined “hubs” as pages that linked to lots of “authorities,” and “authorities” as pages that were linked to by lots of “hubs.”  At first glance, this definition seems hopelessly circular, but Kleinberg observed that one can break the circularity by just treating the World Wide Web as a giant directed graph, and doing some linear algebra on its adjacency matrix.  Equivalently, you can imagine an iterative process where each web page starts out with the same hub/authority “starting credits,” but then in each round, the pages distribute their credits among their neighbors, so that the most popular pages get more credits, which they can then, in turn, distribute to their neighbors by linking to them.

I was also impressed by a similar research project called PageRank, which was proposed later by two guys at Stanford named Sergey Brin and Larry Page.  Brin and Page dispensed with Kleinberg’s bipartite hubs-and-authorities structure in favor of a more uniform structure, and made some other changes, but otherwise their idea was very similar.  At the time, of course, I didn’t know that CLEVER was going to languish at IBM, while PageRank (renamed Google) was going to expand to roughly the size of the entire world’s economy.

In any case, the question I asked myself about CLEVER/PageRank was not the one that, maybe in retrospect, I should have asked: namely, “how can I leverage the fact that I know the importance of this idea before most people do, in order to make millions of dollars?”

Instead I asked myself: “what other ‘vicious circles’ in science and philosophy could one unravel using the same linear-algebra trick that CLEVER and PageRank exploit?”  After all, CLEVER and PageRank were both founded on what looked like a hopelessly circular intuition: “a web page is important if other important web pages link to it.”  Yet they both managed to use math to defeat the circularity.  All you had to do was find an “importance equilibrium,” in which your assignment of “importance” to each web page was stable under a certain linear map.  And such an equilibrium could be shown to exist—indeed, to exist uniquely.
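
As a reminder of how simple the linear-algebra trick really is, here is a toy Python/numpy version of the hubs-and-authorities iteration; the four-page example "web" and the fixed iteration count are my own illustrative choices, not anything from CLEVER itself.

```python
import numpy as np

# A[i][j] = 1 if page i links to page j (a made-up four-page web).
A = np.array([[0, 1, 1, 0],
              [0, 0, 1, 0],
              [1, 0, 0, 1],
              [0, 0, 1, 0]], dtype=float)

hubs = np.ones(len(A))
auths = np.ones(len(A))
for _ in range(100):
    auths = A.T @ hubs                    # an authority is linked to by good hubs
    hubs = A @ auths                      # a hub links to good authorities
    auths /= np.linalg.norm(auths)        # renormalize so the "credits" don't blow up
    hubs /= np.linalg.norm(hubs)

print(np.round(auths, 3), np.round(hubs, 3))   # the stable hub/authority scores
```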

Searching for other circular notions to elucidate using linear algebra, I hit on morality.  Philosophers from Socrates on, I was vaguely aware, had struggled to define what makes a person “moral” or “virtuous,” without tacitly presupposing the answer.  Well, it seemed to me that, as a first attempt, one could do a lot worse than the following:

A moral person is someone who cooperates with other moral people, and who refuses to cooperate with immoral people.

Obviously one can quibble with this definition on numerous grounds: for example, what exactly does it mean to “cooperate,” and which other people are relevant here?  If you don’t donate money to starving children in Africa, have you implicitly “refused to cooperate” with them?  What’s the relative importance of cooperating with good people and withholding cooperation with bad people, of kindness and justice?  Is there a duty not to cooperate with bad people, or merely the lack of a duty to cooperate with them?  Should we consider intent, or only outcomes?  Surely we shouldn’t hold someone accountable for sheltering a burglar, if they didn’t know about the burgling?  Also, should we compute your “total morality” by simply summing over your interactions with everyone else in your community?  If so, then can a career’s worth of lifesaving surgeries numerically overwhelm the badness of murdering a single child?

For now, I want you to set all of these important questions aside, and just focus on the fact that the definition doesn’t even seem to work on its own terms, because of circularity.  How can we possibly know which people are moral (and hence worthy of our cooperation), and which ones immoral (and hence unworthy), without presupposing the very thing that we seek to define?

Ah, I thought—this is precisely where linear algebra can come to the rescue!  Just like in CLEVER or PageRank, we can begin by giving everyone in the community an equal number of “morality starting credits.”  Then we can apply an iterative update rule, where each person A can gain morality credits by cooperating with each other person B, and A gains more credits the more credits B has already.  We apply the rule over and over, until the number of morality credits per person converges to an equilibrium.  (Or, of course, we can shortcut the process by simply finding the principal eigenvector of the “cooperation matrix,” using whatever algorithm we like.)  We then have our objective measure of morality for each individual, solving a 2400-year-old open problem in philosophy.
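
Concretely, the iterative rule is just power iteration on the cooperation matrix.  Here is an illustrative numpy sketch; the matrix entries are made up, and this is not Tyler's code (which is discussed below).

```python
import numpy as np

# C[i][j] = how much person i cooperated with person j (made-up numbers).
C = np.array([[0, 3, 2, 0],
              [3, 0, 2, 1],
              [2, 2, 0, 0],
              [0, 1, 0, 0]], dtype=float)

morality = np.ones(len(C)) / len(C)       # equal "morality starting credits"
for _ in range(200):
    morality = C @ morality               # credit for cooperating with the (currently) moral
    morality /= morality.sum()            # renormalize each round

print(np.round(morality, 3))              # converges to the principal eigenvector of C, normalized to sum to 1
```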

The next step, I figured, would be to hack together some code that computed this “eigenmorality” metric, and then see what happened when I ran the code to measure the morality of each participant in a simulated society.  What would happen?  Would the results conform to my pre-theoretic intuitions about what sort of behavior was moral and what wasn’t?  If not, then would watching the simulation give me new ideas about how to improve the morality metric?  Or would it be my intuitions themselves that would change?

Unfortunately, I never got around to the “coding it up” part—there’s a reason why I became a theorist!  The eigenmorality idea went onto my back burner, where it stayed for the next 16 years: 16 years in which our world descended ever further into darkness, lacking a principled way to quantify morality.  But finally, this year, two separate things have happened on the eigenmorality front, and that’s why I’m blogging about it now.

Eigenjesus and Eigenmoses

The first thing that’s happened is that Tyler Singer-Clark, my superb former undergraduate advisee, did code up eigenmorality metrics and test them out on a simulated society, for his MIT senior thesis project.  You can read Tyler’s 12-page report here—it’s a fun, enjoyable, thought-provoking first research paper, one that I wholeheartedly recommend.  Or, if you’d like to experiment yourself with the Python code, you can download it here from github.  (Of course, all opinions expressed in this post are mine alone, not necessarily Tyler’s.)

Briefly, Tyler examined what eigenmorality has to say in the setting of an Iterated Prisoner’s Dilemma (IPD) tournament.  The Iterated Prisoner’s Dilemma is the famous game in which two players meet repeatedly, and in each turn can either “Cooperate” or “Defect.”  The absolute best thing, from your perspective, is if you defect while your partner cooperates.  But you’re also pretty happy if you both cooperate.  You’re less happy if you both defect, while the worst (from your standpoint) is if you cooperate while your partner defects.  At each turn, when contemplating what to do, you have the entire previous history of your interaction with this partner available to you.  And thus, for example, you can decide to “punish” your partner for past defections, “reward” her for past cooperations, or “try to take advantage” by unilaterally defecting and seeing what happens.  At each turn, the game has some small constant probability of ending—so you know approximately how many times you’ll meet this partner in the future, but you don’t know exactly when the last turn will be.  Your score, in the game, is then the sum-total of your score over all turns and all partners (where each player meets each other player once).

In the late 1970s, as recounted in his classic work The Evolution of Cooperation, Robert Axelrod invited people all over the world to submit computer programs for playing this game, which were then pitted against each other in the world’s first serious IPD tournament.  And, in a tale that’s been retold in hundreds of popular books, while many people submitted complicated programs that used machine learning, etc. to try to suss out their opponents, the program that won—hands-down, repeatedly—was TIT_FOR_TAT, a few lines of code submitted by the psychologist Anatol Rapoport to implement an ancient moral maxim.  TIT_FOR_TAT starts out by cooperating; thereafter, it simply does whatever its opponent did in the last move, swiftly rewarding every cooperation and punishing every defection, and otherwise ignoring the earlier history.  In the decades since Axelrod, running Iterated Prisoners’ Dilemma tournaments has become a minor industry, with countless variations explored (for example, “evolutionary” versions, and versions allowing side-communication between the players), countless new strategies invented, and countless papers published.  To make a long story short, TIT_FOR_TAT continues to do quite well across a wide range of environments, but depending on the mix of players present, other strategies can sometimes beat TIT_FOR_TAT.  (As one example, if there’s a sizable minority of colluding players, who recognize each other by cooperating and defecting in a prearranged sequence, then those players can destroy TIT_FOR_TAT and other “simple” strategies, by cooperating with one another while defecting against everyone else.)
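For readers who’ve never seen it spelled out, TIT_FOR_TAT (and its more forgiving cousin TIT_FOR_TWO_TATS) really is just a few lines of code.  Here’s a minimal sketch in Python; the interface, a function from the opponent’s move history to the next move, is my own convention for illustration, not Axelrod’s or Tyler’s:

```python
def tit_for_tat(opponent_history):
    """Cooperate on the first move; thereafter, copy the opponent's last move.
    opponent_history is the list of the opponent's past moves,
    each either 'C' (cooperate) or 'D' (defect)."""
    if not opponent_history:
        return 'C'                    # start out by cooperating
    return opponent_history[-1]       # then simply echo the last move

def tit_for_two_tats(opponent_history):
    """More forgiving: defect only after two consecutive defections."""
    if opponent_history[-2:] == ['D', 'D']:
        return 'D'
    return 'C'
```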

Anyway, Tyler sets up and runs a fairly standard IPD tournament, with a mix of strategies that includes TIT_FOR_TAT, TIT_FOR_TWO_TATS, other TIT_FOR_TAT variations, PAVLOV, FRIEDMAN, EATHERLY, CHAMPION (see the paper for details), and degenerate strategies like always defecting, always cooperating, and playing randomly.  However, Tyler then asks an unusual question about the IPD tournament: namely, purely on the basis of the cooperate/defect sequences, which players should we judge to have acted morally toward their partners?

It might be objected that the players didn’t “know” they were going to be graded on morality: as far as they knew, they were just trying to maximize their individual utilities.  The trouble with that objection is that the players didn’t “know” they were trying to maximize their utilities either!  The players are bots, which do whatever their code tells them to do.  So in some sense, utility—no less than morality—is “merely an interpretation” that we impose on the raw cooperate/defect sequences!  There’s nothing to stop us from imposing some other interpretation (say, one that explicitly tries to measure morality) and seeing what happens.

In an attempt to measure the players’ morality, Tyler uses the eigenmorality idea from before.  The extent to which player A “cooperates” with player B is simply measured by the percentage of turns on which A cooperates with B.  (One acknowledged limitation of this work is that, when two players both defect, there’s no attempt to take into account “who started it,” and to judge the aggressor more harshly than the retaliator—or to incorporate time in any other way.)  This then gives us a “cooperation matrix,” whose (i,j) entry records the total amount of niceness that player i displayed to player j.  Diagonalizing that matrix, and taking the eigenvector corresponding to its largest eigenvalue, then gives us our morality scores.
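To make the pipeline concrete, here’s a minimal sketch of that computation in Python.  The four players and their cooperation fractions are invented for illustration, and the normalization details are mine rather than Tyler’s; his actual code, linked above, is the authoritative version.

```python
import numpy as np

# Toy cooperation matrix for four players; entry (i, j) is the fraction of
# turns on which player i cooperated with player j.  The numbers are made up
# (in the real tournament they'd be counted from the cooperate/defect records).
players = ["TIT_FOR_TAT", "TIT_FOR_TWO_TATS", "ALL_COOPERATE", "ALL_DEFECT"]
C = np.array([[0.0, 1.0, 1.0, 0.2],
              [1.0, 0.0, 1.0, 0.4],
              [1.0, 1.0, 0.0, 1.0],
              [0.0, 0.0, 0.0, 0.0]])

# Morality scores: the eigenvector of C with the largest eigenvalue, computed
# here by the iterative "morality credits" update described earlier, i.e.,
# power iteration: A's new credits are the sum over B of (A's niceness to B)
# times (B's current credits), renormalized each round.
scores = np.ones(len(players)) / len(players)
for _ in range(200):
    scores = C @ scores
    scores /= scores.sum()

for name, s in zip(players, scores):
    print(f"{name:18s} {s:.3f}")      # ALL_DEFECT ends up with score 0
```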

Now, there’s a very interesting ambiguity in what I said above.  Namely, should we define the “niceness scores” to lie in [0,1] (so that the lowest, meanest possible score is 0), or in [-1,1] (so that it’s possible to have negative niceness)?  This might sound like a triviality, but in our setting, it’s precisely the mathematical reflection of one of the philosophical conundrums I mentioned earlier.  The conundrum can be stated as follows: is your morality a monotone function of your niceness?  We all agree, presumably, that it’s better to be nice to Gandhi than to be nice to Hitler.  But do you have a positive obligation to be not-nice to Hitler: to make him suffer because he made others suffer?  Or, OK, how about not Hitler, but someone who’s somewhat bad?  Consider, for example, a woman who falls in love with, and marries, an unrepentant armed robber (with full knowledge of who he is, and with other options available to her).  Is the woman morally praiseworthy for loving her husband despite his bad behavior?  Or is she blameworthy because, by rewarding his behavior with her love, she helps to enable it?

To capture two possible extremes of opinion about such questions, Tyler and I defined two different morality metrics, which we called … wait for it … eigenmoses and eigenjesus.  Eigenmoses has the niceness scores in [-1,1], which means that you’re actively rewarded for punishing evildoers: that is, for defecting against those who defect against many moral players.  Eigenjesus, by contrast, has the niceness scores in [0,1], which means that you always do at least as well by “turning the other cheek” and cooperating.  (Though note that, even with eigenjesus, you get more morality credits by cooperating with moral players than by cooperating with immoral ones.)
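In code, the difference between the two metrics can be as small as a single rescaling.  Here’s one natural way to realize the two conventions (again only a sketch, and not necessarily the convention Tyler’s code actually uses; I’ve also made the toy matrix symmetric so that all the eigenvalues are guaranteed to be real):

```python
import numpy as np

def top_eigenvector(M):
    """For a symmetric matrix M, return the eigenvector belonging to the
    largest eigenvalue, with its sign fixed so the entries sum to >= 0."""
    vals, vecs = np.linalg.eigh(M)   # eigh returns eigenvalues in ascending order
    v = vecs[:, -1]
    return v if v.sum() >= 0 else -v

# Toy matrix of niceness fractions: entry (i, j) is how nice player i was to
# player j, made symmetric here purely for simplicity.
F = np.array([[0.0, 1.0, 0.2],
              [1.0, 0.0, 0.2],
              [0.2, 0.2, 0.0]])

eigenjesus = top_eigenvector(F)      # niceness scores kept in [0, 1]
M = 2.0 * F - 1.0                    # rescale so 0 maps to -1 and 1 maps to +1
np.fill_diagonal(M, 0.0)             # no self-interactions
eigenmoses = top_eigenvector(M)

print(eigenjesus)    # all entries nonnegative
print(eigenmoses)    # the mostly-defecting third player now scores negative
```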

This is probably a good place to mention a second limitation of Tyler’s current study.  Namely, with the current system, there’s no direct way for a player to find out how its partner has been behaving toward third parties.  The only information that A gets about the goodness or evilness of player B comes from A and B’s direct interaction.  Ideally, one would like to design bots that take into account, not only the other bots’ behavior toward them, but the other bots’ behavior toward each other.  So for example, even if someone is unfailingly nice to you, if that person is an asshole to everyone else, then the eigenmoses moral code would demand that you return the person’s cooperation with icy defection.  Conversely, even if Gandhi is mean and hateful to you, you would still be morally obliged (interestingly, on both the eigenmoses and eigenjesus codes) to be nice to him, because of the amount of good he does for everyone else.

Anyway, you can read Tyler’s paper if you want to see the results of computing the eigenmoses and eigenjesus scores for a diverse population of bots.  Briefly, the results accord pretty well with intuition.  When we look at eigenjesus scores, the all-cooperate bot comes out on top and the all-defect bot on the bottom (as is mathematically necessary), with TIT_FOR_TAT somewhere in the middle, and generous versions of TIT_FOR_TAT higher up.  When we look at eigenmoses, by contrast, TIT_FOR_TWO_TATS comes out on top, with TIT_FOR_TAT in sixth place, and the all-cooperate bot scoring below the median.  Interestingly, once again, the all-defect bot gets the lowest score (though in this case, it wasn’t mathematically necessary).

Even though the measures acquit themselves well in this particular tournament, it’s admittedly easy to construct scenarios where the prescriptions of eigenjesus and eigenmoses alike violently diverge from most people’s moral intuitions.  We’ve already touched on a few such scenarios above (for example, are you really morally obligated to lick the boots of someone who kicks you, just because that person is a saint to everyone other than you?).  Another type of scenario involves minorities.  Imagine, for instance, that 98% of the players are unfailingly nice to each other, but unfailingly cruel to the remaining 2% (who they can recognize, let’s say, by their long noses or darker skin—some trivial feature like that).  Meanwhile, the put-upon 2% return the favor by being nice to each other and mean to the 98%.  Who, in this scenario, is moral, and who’s immoral?  The mathematical verdict of both eigenmoses and eigenjesus is unequivocal: the 98% are almost perfectly good, while the 2% are almost perfectly evil.  After all, the 98% are nice to almost everyone, while the 2% are mean to those who are nice to almost everyone, and nice only to a tiny minority who are mean to almost everyone.  Of course, for much of human history, this is precisely how morality worked, in many people’s minds.  But I dare say it’s a result that would make moderns uncomfortable.

In summary, it seems clear to me that neither eigenmoses nor eigenjesus correctly captures our intuitions about morality, any more than Φ captures our intuitions about consciousness.  But as they say, I think there’s plenty of scope here for further research: for coming up with new mathematical measures that sharpen our intuitive judgments about morality, and (if we like) testing those measures out using IPD tournaments.  It also seems to me that there’s something fundamentally right about the eigenvector idea: all else being equal, we’d like to say, being nice to others is good, except that aiding and abetting evildoers is not good, and the way we can recognize the evildoers in our midst is that they’re not nice to others—except that, if the people who someone isn’t nice to are themselves evildoers, then the person might again be good, and so on.  The only way to cut off the infinite regress, it seems, is to demand some sort of “reflective equilibrium” in our moral judgments, and that’s precisely what eigenmorality tries to capture.  On the other hand, no such idea can ever make moral debate obsolete—if for no other reason than that we still need to decide which specific eigenmorality metric to use, and that choice is itself a moral judgment.

Scooped by Plato

Which brings me, finally, to the second new thing that’s happened this year on the eigenmorality front.  Namely, Rebecca Newberger Goldstein—who’s far and away my favorite contemporary novelist—published a charming new book entitled Plato at the Googleplex: Why Philosophy Won’t Go Away.  Here she imagines that Plato has reappeared in present-day America (she doesn’t bother to explain how), where he’s taught himself English and the basics of modern science, learned how to use the Internet, and otherwise gotten himself up to speed.  The book recounts Plato’s dialogues with various modern interlocutors, as he volunteers to have his brain scanned, guest-writes a relationship advice column, participates in a panel discussion on child-rearing, and gets interviewed on cable news by “Roy McCoy” (a thinly veiled Bill O’Reilly).  Often, Goldstein has Plato answer the moderns’ questions using direct quotes from the Timaeus, the Gorgias, the Meno, etc., which makes her Plato into a very intelligent sort of chatbot.  This is a genre that’s not often seriously attempted, and that I’d love to read more of (possible subjects: Shakespeare, Galileo, Jefferson, Lincoln, Einstein, Turing…).

Anyway, my favorite episode in the book is the first, eponymous one, where Plato visits the Googleplex in Mountain View.  While eating lunch in one of the many free cafeterias, Plato is cornered by a somewhat self-important, dreadlocked coder named Marcus, who tries to convince Plato that Google PageRank has finally solved the problem agonized over in the Republic, of how to define justice.  By using the Internet, we can simply crowd-source the answer, Marcus declares: get millions of people to render moral judgments on every conceivable question, and also moral judgments on each other’s judgments.  Then declare those judgments the most morally reliable, that are judged the most reliable by the people who are themselves the most morally reliable.  The circularity, as usual, is broken by taking the principal eigenvector of the graph of moral judgments (Goldstein doesn’t have Marcus put it that way, but it’s what she means).

Not surprisingly, Plato is skeptical.  Through Socratic questioning—the method he learned from the horse’s mouth—Plato manages to make Marcus realize that, in the very act of choosing which of several variants of PageRank to use in our crowd-sourced justice engine, we’ll implicitly be making moral choices already.  And therefore, we can’t use PageRank, or anything like it, as the ultimate ground of morality.

Whereas I imagined that the raw data for an “eigenmorality” metric would consist of numerical measures of how nice people had been to each other, Goldstein imagines the raw data to consist of abstract moral judgments, and of judgments about judgments.  Also, whereas the output of my kind of metric would be a measure of the “goodness” of each individual person, the outputs of hers would presumably be verdicts about general moral and political questions.  But, much like with CLEVER versus PageRank, it’s obvious that the ideas are similar—and that I should credit Goldstein with independently discovering my nerdy 16-year-old vision, in order to put it in the mouth of a nerdy character in her story.

As I said before, I agree with Goldstein’s Plato that eigenmorality can’t serve as the ultimate ground of morality.  But that’s a bit like saying that Google rank can’t serve as the ultimate ground of importance, because even just to design and evaluate their ranking algorithms, Google’s engineers must have some prior notion of “importance” to serve as a standard.  That’s true, of course, but it omits to mention that Google rank is still useful—useful enough to have changed civilization in the space of a few years.  Goldstein’s book has the wonderful property that even the ideas she gives to her secondary characters, the ones who serve as foils to Plato, are sometimes interesting enough to deserve book-length treatments of their own, and crowd-sourced morality strikes me as a perfect example.

In the two previous comment threads, we got into a discussion of anthropogenic climate change, and of my own preferred way to address it and related threats to our civilization’s survival, which is simply to tax every economic activity at a rate commensurate with the environmental damage that it does, and use the funds collected for cleanup, mitigation, and research into alternatives.  (Obviously, such ideas are nonstarters in the current political climate of the US, but I’m not talking here about what’s feasible, only about what’s necessary.)  As several commenters pointed out, my view raises an obvious question: who is to decide how much “damage” each activity causes, and thus how much it should be taxed?  Of course, this is merely a special case of the more general question: who is to decide on any question of public policy whatsoever?

For the past few centuries, our main method for answering such questions—in those parts of the world where a king or dictator or Politburo doesn’t decree the answer—has been representative democracy.  Democracy is, arguably, the best decision-making method that our sorry species has ever managed to put into practice, at least outside the hard sciences.  But in my view, representative democracy is now failing spectacularly at possibly the single most important problem it’s ever faced: namely, that of leaving our descendants a livable planet.  Even though, by and large, reasonable people mostly agree about what needs to be done—weaning ourselves off fossil fuels (especially the dirtier ones), switching to solar, wind, and nuclear, planting forests and stopping deforestation, etc.—after decades of debate we’re still taking only limping, token steps toward those goals, and in many cases we’re moving rapidly in the opposite direction.  Those who, for financial, theological, or ideological reasons, deny the very existence of a problem, have proved that despite being a minority, they can push hard enough on the levers of democracy to prevent anything meaningful from happening.

So what’s the solution?  To put the world under the thumb of an environmentalist dictator?  Absolutely not.  In all of history, I don’t think any dictatorial system has ever shown itself robust against takeover by murderous tyrants (people who probably aren’t too keen on alternative energy either).  The problem, I think, is epistemological.  Within physics and chemistry and climatology, the people who think anthropogenic climate change exists and is a serious problem have won the argument—but the news of their intellectual victory hasn’t yet spread to the opinion page of the Wall Street Journal, or cable news, or the US Congress, or the minds of enough people to tip the scales of history.  Because our domination of the earth’s climate and biosphere is new and unfamiliar; because the evidence for rapid climate change is complicated and statistical; because the worst effects are still remote from us in time, space, or both; because the sacrifices needed to address the problem are real—for all of these reasons, the deniers have learned that they can subvert the Popperian process by which bad explanations are discarded and good explanations win.  If you just repeat debunked ideas through a loud enough megaphone, it turns out, many onlookers won’t be able to tell the difference between you and the people who have genuine knowledge—or they will eventually, but not until it’s too late.  If you have a few million dollars, you can even set up your own parody of the scientific process: your own phony experts, in their own phony think tanks, with their own phony publications, giving each other legitimacy by citing each other.  (Of course, all this is a problem for many fields, not just climate change.  Climate is special only because there, the future of life on earth might literally hinge on our ability to get epistemology right.)

Yet for all that, I’m an optimist—sort of.  For it seems to me that the Internet has given us new tools with which to try to fix our collective epistemology, without giving up on a democratic society.  Google, Wikipedia, Quora, and so forth have already improved our situation, if only by a little.  We could improve it a lot more.  Consider, for example, the following attempted definitions:

  • A trustworthy source of information is one that’s considered trustworthy by many sources who are themselves trustworthy (on the same topic or on closely related topics).
  • The current scientific consensus, on any given issue, is what the trustworthy sources consider to be the consensus.
  • A good decision-maker is someone who’s considered to be a good decision-maker by many other good decision-makers.

At first glance, the above definitions sound ludicrously circular—even Orwellian—but we now know that all that’s needed to unravel the circularity is a principal eigenvector computation on the matrix of trust.  And the computation of such an eigenvector need be no more “Orwellian” than … well, Google.  If enough people want it, then we have the tools today to put flesh on these definitions, to give them agency: to build a crowd-sourced deliberative democracy, one that “usually just works” in much the same way Google usually just works.
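Here’s a toy sketch of what the resulting computation might look like.  Everything in it (the binary trust matrix, the yes/no opinions, and the idea of reading off “the consensus” as a trust-weighted average) is a hypothetical simplification of my own, just to make the proposal concrete:

```python
import numpy as np

# Hypothetical trust matrix: entry (i, j) = 1 means "person i considers
# person j trustworthy on this topic."  Person 3 neither trusts nor is
# trusted by anyone.
T = np.array([[0, 1, 1, 0],
              [1, 0, 1, 0],
              [1, 1, 0, 0],
              [0, 0, 0, 0]], dtype=float)

# Trust scores: you gain trust by being trusted by people who are themselves
# trusted, so we run power iteration using the transpose of T.
scores = np.ones(4) / 4
for _ in range(100):
    scores = T.T @ scores
    scores /= scores.sum()

# Each person's yes/no opinion on some question (1 = yes, 0 = no).
opinions = np.array([1.0, 1.0, 1.0, 0.0])

# One crude reading of "the consensus": the trust-weighted average opinion.
print(scores, scores @ opinions)     # person 3's opinion carries zero weight
```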

Now, would those with axes to grind try to subvert such a system the instant it went online?  Certainly.  For example, I assume that millions of people would rate Conservapedia as a more trustworthy source than Wikipedia—and would rate other people who had done so as, themselves, trustworthy sources, while rating as untrustworthy anyone who called Conservapedia untrustworthy.  So there would arise a parallel world of trust and consensus and “expertise,” mutually-reinforcing yet nearly disjoint from the world of the real.  But here’s the thing: anyone would be able to see, with the click of a mouse, the extent to which this parallel world had diverged from the real one.  They’d see that there was a huge, central connected component in the trust graph—including almost all of the Nobel laureates, physicists from the US nuclear weapons labs, military planners, actuaries, other hardheaded people—who all accepted the reality of humans warming the planet, and only tiny, isolated tendrils of trust reaching from that component into the component of Rush Limbaugh and James Inhofe.  The deniers and their think-tanks would be exposed to the sun; they’d lose their thin cover of legitimacy.  It should go without saying that the same would happen to various charlatans on the left, and should go without saying that I’d cheer that outcome as well.
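What would “seeing it with the click of a mouse” amount to computationally?  Something like the following sketch, which uses the networkx library on a made-up trust graph (all the node names are hypothetical placeholders): list the connected components of the graph, compare their sizes, and count how many trust edges actually cross from the fringe component into the giant one.

```python
import networkx as nx

# Hypothetical trust graph (undirected for simplicity): an edge means these
# two sources consider each other trustworthy on the topic at hand.
G = nx.Graph()
G.add_edges_from([
    ("nobel_laureate_1", "climatologist_1"),
    ("climatologist_1", "actuary_1"),
    ("actuary_1", "nobel_laureate_1"),
    ("physicist_1", "nobel_laureate_1"),
    ("military_planner_1", "actuary_1"),
    ("think_tank_A", "think_tank_B"),      # the parallel world of "expertise"
    ("think_tank_B", "pundit_1"),
])

# Sort the connected components by size: the first is the giant component,
# the rest are the isolated fringe worlds.
components = sorted(nx.connected_components(G), key=len, reverse=True)
giant, fringe = components[0], components[1]
print(len(giant), len(fringe))           # 5 vs. 3 in this toy example

# Count the trust edges that actually cross between the two worlds.
crossing = [(u, v) for u, v in G.edges() if (u in giant) != (v in giant)]
print(len(crossing))                     # 0 here: not even a tendril
```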

Some will object: but people who believe in pseudosciences—whether creationists or anti-vaxxers or climate change deniers—already know they’re in a minority!  And far from being worried about it, they treat it as a badge of honor.  They think they’re Galileo, that their belief in spite of a scientific consensus makes them heroes, while those in the giant central component of the trust graph are merely slavish followers.

I admit all this.  But the point of an eigentrust system wouldn’t be to convince everyone.  As long as I’m fantasizing, the point would be that, once people’s individual decisions did give rise to a giant connected trust component, the recommendations of that component could acquire the force of law.  The formation of the giant component would be the signal that there’s now enough of a consensus to warrant action, despite the continuing existence of a vocal dissenting minority—that the minority has, in effect, withdrawn itself from the main conversation and retreated into a different discourse.  Conversely, it’s essential to note, if there were a dissenting minority, but that minority had strong trunks of topic-relevant trust pointing toward it from the main component (for example, because the minority contained a large fraction of the experts in the relevant field), then the minority’s objections might be enough to veto action, even if it was numerically small.  This is still democracy; it’s just democracy enhanced by linear algebra.

Other people will object that, while we should use the Internet to improve the democratic process, the idea we’re looking for is not eigentrust or eigenmorality but rather prediction markets.  Such markets would allow us to, as my friend Robin Hanson advocates, “vote on values but bet on beliefs.”  For example, a country could vote for the conditional policy that, if business-as-usual is predicted to cause sea levels to rise at least 4 meters by the year 2200, then an aggressive emissions reduction plan will be triggered, but not otherwise.  But as for the prediction itself, that would be left to a futures market: a place where, unlike with voting, there’s a serious penalty for being wrong, namely losing your shirt.  If the futures market assigned the prediction at least such-and-such a probability, then the policy tied to that prediction would become law.

I actually like the idea of prediction markets—I have ever since I heard about them—but I consider them limited in scope.  My example above, involving the year 2200, gives a hint as to why.  Prediction markets are great whenever our disagreements are over something that will be settled one way or the other, to everyone’s assent, in the near future (e.g., who will win the World Cup, or next year’s GDP).  But most of our important disagreements aren’t like that: they’re over which direction society should move in, which issues to care about, which statistical indicators are even the right ones to measure a country’s health.  Now, those broader questions can sometimes be settled empirically, in a sense: they can be settled by the overwhelming judgment of history, as the slavery, women’s suffrage, and fascism debates were.  But that kind of empirical confirmation typically takes way too long for anyone to set up a decent betting market around it.  And for the non-bettable questions, a carefully-crafted eigendemocracy really is the best system I can think of.

Again, I think Rebecca Goldstein’s Plato is completely right that such a system, were it implemented, couldn’t possibly solve the philosophical problem of finding the “ultimate ground of justice,” just like Google can’t provide us with the “ultimate ground of importance.”  If nothing else, we’d still need to decide which of the many possible eigentrust metrics to use, and we couldn’t use eigentrust for that without risking an infinite regress.  But just like Google, whatever its flaws, works well enough for you to use it dozens of times per day, so a crowd-sourced eigendemocracy might—just might—work well enough to save civilization.


Update (6/20): If you haven’t been following, there’s an excellent discussion in the comments, with, as I’d hoped, many commenters raising strong and pertinent objections to the eigenmorality and eigendemocracy concepts, while also proposing possible fixes.  Let me now mention what I think are the most important problems with eigenmorality and eigendemocracy respectively—both of them things that had occurred to me also, but that the commenters have brought out very clearly and explicitly.

With eigenmorality, perhaps the most glaring problem is that, as I mentioned before, there’s no notion of time-ordering, or of “who started it,” in the definition that Tyler and I were using.  As Luca Trevisan aptly points out in the comments, this has the consequence that eigenmorality, as it stands, is completely unable to distinguish between a crime syndicate that’s hated by the majority because of its crimes, and an equally-large ethnic minority that’s hated by the majority solely because it’s different, and that therefore hates the majority.  However, unlike with mathematical theories of consciousness—where I used counterexamples to try to show that no mathematical definition of a certain kind could possibly capture our intuitions about consciousness—here the problem strikes me as much more circumscribed and bounded.  It’s far from obvious to me that we can’t easily improve the definition of eigenmorality so that it does agree with most people’s moral intuition, whenever intuition renders a clear verdict, at least in the limited setting of Iterated Prisoners’ Dilemma tournaments.

Let’s see, in particular, how to solve the problem that Luca stressed.  As a first pass, we could do so as follows:

A moral agent is one who only initiates defection against agents who it has good reason to believe are immoral (where, as usual, linear algebra is used to unravel the definition’s apparent circularity).

Notice that I’ve added two elements to the setup: not only time but also knowledge.  If you shun someone solely because you don’t like how they look, then we’d like to say that reflects poorly on you, even if (unbeknownst to you) it turns out that the person really is an asshole.  Now, several more clauses would need to be added to the above definition to flesh it out: for example, if you’ve initiated defection against an immoral person, but then the person stops being immoral, at what point do you have a moral duty to “forgive and forget”?  Also, just like with the eigenmoses/eigenjesus distinction, do you have a positive duty to initiate defection against an agent who you learn is immoral, or merely no duty not to do so?
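To make the time-and-knowledge-sensitive version slightly more concrete, here’s one possible way (surely oversimplified, and purely my own sketch rather than anything Tyler or I have implemented or tested) to extract “who initiated defection” from the raw game records:

```python
import numpy as np

def initiation_matrix(histories, n_players):
    """histories[(i, j)] is the list of player i's moves against player j
    ('C' or 'D'), aligned turn-by-turn with histories[(j, i)].  Entry (i, j)
    of the result counts the rounds in which i defected against j even
    though j had cooperated with i on the previous round, i.e. the rounds
    in which i "initiated" rather than retaliated."""
    A = np.zeros((n_players, n_players))
    for (i, j), moves_i in histories.items():
        moves_j = histories[(j, i)]
        for t in range(1, len(moves_i)):
            if moves_i[t] == 'D' and moves_j[t - 1] == 'C':
                A[i, j] += 1
    return A

# Made-up records: player 0 starts defecting unprovoked; player 1 only retaliates.
histories = {(0, 1): ['C', 'D', 'D'], (1, 0): ['C', 'C', 'D']}
print(initiation_matrix(histories, 2))   # [[0. 2.] [0. 0.]]
```

One could then feed the (suitably rescaled or negated) initiation counts, alongside the plain cooperation fractions, into the same eigenvector machinery as before; but again, this is only a first stab.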

OK, so after we handle the above issues, will there still be examples that our time-sensitive, knowledge-sensitive eigenmorality definition gets badly, egregiously wrong?  Maybe—I don’t know!  Please let me know in the comments.

Moving on to eigendemocracy, here I think the biggest problem is one pointed out by commenter Rahul.  Namely, an essential aspect of how Google is able to work so well is that people have reasons for linking to webpages other than boosting those pages’ Google rank.  In other words, Google takes a link structure that already exists, independently of its ranking algorithm, and that (as the economists would put it) encodes people’s “revealed preferences,” and exploits that structure for its own purposes.  Of course, now that Google is the main way many of us navigate the web, increasing Google rank has become a major reason for linking to a webpage, and an entire SEO industry has arisen to try to game the rankings.  But Google still isn’t the only reason for linking, so the link structure still contains real information.

By contrast, consider an eigendemocracy, with a giant network encoding who trusts whom on what subject.  If the only reason why this trust network existed was to help make political decisions, then gaming the system would probably be rampant: people could simply decide first which political outcome they wanted, then choose the “experts” such that claiming to “trust” them would do the most for their favored outcome.  It follows that this system can only improve on ordinary democracy if the trust network has some other purpose, so that the participants have an actual incentive to reveal the truth about who they trust.  So, how would an eigendemocracy suss out the truth about who trusts whom on which subject?  I don’t have a very good answer to this, and am open to suggestions.  The best idea so far is to use Facebook for this purpose, but I don’t know exactly how.


Update (6/22): Many commenters, both here and on Hacker News, interpreted me to be saying something obviously stupid: namely, that any belief identified as “the consensus” by an eigenvector analysis is therefore the morally right one. They then energetically knocked down this strawman, with the standard examples (Hitler, slavery, discrimination against gays).

Admittedly, I probably contributed to this confusion by my ill-advised decision to discuss eigenmorality and eigendemocracy in the same blog post—solely because of their mathematical similarity, and the ease with which thinking about one leads to thinking about the other. But the two are different, as are my claims about them. For the record:

  • Eigenmorality: Within the stylized setting of an Iterated Prisoner’s Dilemma tournament, with side-channels allowing agents to learn who is doing what to whom, I believe it ought to be possible, by looking at who initiated rounds of defection and forgiveness, and then doing an eigenvector analysis on the result, to identify the “moral” and “immoral” agents in a way that more-or-less accords with our moral intuitions. Even if true, of course, this wouldn’t have any obvious moral implications for hot-button issues such as abortion, gun control, or climate change, which it’s far from obvious how to encode in terms of IPD tournaments.
  • Eigendemocracy: By doing an eigenvector analysis, to identify who people implicitly acknowledge as the “experts” within each field, I believe that it might be possible to produce results that, on average, in practice, and in contemporary society, are better and more rational than those produced by ordinary majority-voting. Obviously, there’s no guarantee whatsoever that the results of eigendemocracy would be morally acceptable ones: if the public acknowledges as “experts” people who believe evil things (as in Nazi Germany), then eigendemocracy will produce evil results. But democracy itself suffers from a precisely analogous problem. The situation that interests me is one that’s been with us since the time of ancient Athens: one where there is a consensus among the experts about the wisest course of action, and there’s also an implicit consensus among the public that those experts are indeed the experts, but the democratic system is somehow “unable to complete the modus ponens,” because of manipulation by powerful interests and the sway of demagogues. In such cases, it seems possible to me that an eigendemocracy could improve on the results of ordinary democracy—perhaps dramatically so—while still avoiding the evils of dictatorship.

Crucially, in neither of the above bullet points, nor in their combination, is there any hint of a belief that “the will of the majority always defines what’s morally right” (if anything, there’s a belief in the opposite).


Update (7/4): While this isn’t really a surprise—I’d be astonished if it weren’t the case—I’ve now learned that several people, besides me and Rebecca Goldstein, have previously written about the ideas of eigentrust and eigendemocracy. Perhaps more surprising is that one of the earlier groups—consisting of Sep Kamvar, Mario Schlosser, and Hector Garcia-Molina from Stanford—literally called the idea “EigenTrust,” when they published about it in 2003. (Note that Garcia-Molina, in a likely non-coincidence, was Larry Page and Sergey Brin’s PhD adviser.) Kamvar et al.’s intended application for EigenTrust was to determine which nodes are trustworthy in a peer-to-peer file-sharing network, rather than (say) to reinvent democracy, or to address conundrums of epistemology and ethics that have been with us since Plato. But while the scope might be more modest, the core idea is the same. (Hat tip to commenter Babak.)

As for enhancing democracy using linear algebra, it turns out that that too has already been discussed: see for example this presentation by Rob Spekkens of the Perimeter Institute, which Michael Nielsen pointed me to. (In yet another small-world phenomenon, Rob’s main interest is in quantum foundations, and in that context I’ve known him for a decade! But his interest in eigendemocracy was news to me.)

If you’re wondering whether anything in this post was original … well, so far, I haven’t learned of prior work specifically about eigenmorality (e.g., in Iterated Prisoners Dilemma tournaments), much less about eigenmoses and eigenjesus.

Randomness Rules in Quantum Mechanics

June 16th, 2014

So, Part II of my two-part series for American Scientist magazine about how to recognize random numbers is now out.  This part—whose original title was the one above, but was changed to “Quantum Randomness” to fit the allotted space—is all about quantum mechanics and the Bell inequality, and their use in generating “Einstein-certified random numbers.”  I discuss the CHSH game, the Free Will Theorem, and Gerard ‘t Hooft’s “superdeterminism” (just a bit), before explaining the striking recent protocols of Colbeck, Pironio et al., Vazirani and Vidick, Coudron and Yuen, and Miller and Shi, all of which expand a short random seed into additional random bits that are “guaranteed to be random unless Nature resorted to faster-than-light communication to bias them.”  I hope you like it.

[Update: See here for Hacker News thread]

In totally unrelated news, President Obama’s commencement speech at UC Irvine, about climate change and the people who still deny its reality, is worth reading.

My Conversation with “Eugene Goostman,” the Chatbot that’s All Over the News for Allegedly Passing the Turing Test

June 9th, 2014

If you haven’t read about it yet, “Eugene Goostman” is a chatbot that’s being heavily promoted by the University of Reading’s Kevin Warwick, for fooling 33% of judges in a recent Turing Test competition into thinking it was human, and thereby supposedly becoming “the first program to pass the Turing Test” as Turing defined it in his 1950 paper.  See for example here, here, here, here.  In reality, while Turing did venture a prediction involving AIs fooling humans 30% of the time by the year 2000, he never set such a numerical milestone as the condition for “passing his test.”  Much more importantly, Turing’s famous example dialogue, involving Mr. Pickwick and Christmas, clearly shows that the kind of conversation Turing had in mind was at a vastly higher level than what any chatbot, including Goostman, has ever been able to achieve.

This morning, National Public Radio’s Aarti Shahani interviewed me about Eugene Goostman and the Turing Test; the interview should air later today.

Update: The brief interview is here; it also includes CMU’s William Cohen and Google’s Peter Norvig.

I tried to redirect Ms. Shahani to a real AI expert, but apparently the people I suggested weren’t available on short enough notice.  Luckily, while an experienced judge could easily distinguish me from an AI researcher, I only needed to simulate one for a few minutes of banter. :-)

Prompted by Ms. Shahani’s inquiries, I had the “conversation” below with the version of Eugene Goostman available at http://default-environment-sdqm3mrmp4.elasticbeanstalk.com/.  It’s possible that there’s a more recent version somewhere else, but this is the only version I was able to access.  Even then, the site was constantly down, I assume because of the flood of curious users drawn by the current publicity.

Scott: Which is bigger, a shoebox or Mount Everest?

Eugene: I can’t make a choice right now. I should think it out later. And I forgot to ask you where you are from…

Scott: How many legs does a camel have?

Eugene: Something between 2 and 4. Maybe, three? :-))) By the way, I still don’t know your specialty – or, possibly, I’ve missed it?

Scott: How many legs does a millipede have?

Eugene: Just two, but Chernobyl mutants may have them up to five. I know you are supposed to trick me.

Scott: No, I need to know that you’re not a chatbot. Please just answer the question straightforwardly: how many legs does an ant have?

Eugene: Something between 2 and 4. Maybe, three? :-))) Oh, what a fruitful conversation;-)

Scott: Do you understand why I’m asking such basic questions?  Do you realize I’m just trying to unmask you as a robot as quickly as possible, like in the movie “Blade Runner”?

Eugene: …wait

Scott: Do you think your ability to fool unsophisticated judges indicates a flaw with the Turing Test itself, or merely with the way people have interpreted the test?

Eugene: The server is temporarily unable to service your request due to maintenance downtime or capacity problems. Please try again later.

Scott: Do you think Alan Turing, brilliant though he was, had trouble imagining that the judges of his “imitation game” wouldn’t think to ask commonsense questions like the ones above—or that, if they did, they’d actually accept evasion or irrelevant banter as answers?

Eugene: No, not really. I don’t think alan turing brilliant although this guy was had trouble imagining that the judges of his imitation game would not consider to Oooh. Anything else?

Scott: In your opinion, does your existence demonstrate any idea or principle that wasn’t demonstrated just as convincingly by ELIZA, Joseph Weizenbaum’s chatbot from the 1960s?

Eugene: Eliza was a break-thru. All the bots after it were nothing but weak parodies, claiming to have “revolutionary improvements”.

Scott: Hey, that’s the first sensible thing you’ve said!

Eugene: Did I really say so? Well, I must have forgotten about it. Could you tell me about your job, by the way?

Scott: OK, don’t push your luck.  Why do you think journalists are now hyping you?  Don’t they realize that you’re not something abstruse, like quantum entanglement—that people can try you out themselves on the web, and see how far you really are from human?

Eugene: Ask the Creator about it (call Him by phone, for example:-). He knows all hidden reasons of human-beings’ life and behavior.

Scott: OK, I’ll count that as the second sensible thing you’ve said (grading generously).  Don’t you think that a more convincing chatbot could be created, by using Google to trawl the entire web for plausible-sounding answers (analogous to what IBM’s Watson did)?

Eugene: Yes, I do think that a more convincing chatbot can be built by using google to trawl the all web for plausible – sounding – I hope I have right to have my own opinion. Maybe, let’s talk about something else? What would you like to discuss?

Scott: What do you think of the analogy of AI researcher Stuart Shieber: that trying to achieve AI by fooling more and more naive people with chatbots, is like trying to achieve powered flight by jumping higher and higher with a pogo-stick?

Eugene: “Artificial Intelligence”? I guess you mean that stupid joke about a blonde who dyed her hair!

Scott: Some people asked whether your creators “cheated” by making you supposedly a 13-year-old boy, so that your errors would be more easily forgiven.  Do people actually have that low an opinion of 13-year-old boys?  I mean, I knew when I was 13 that camels have four legs!

Eugene: I wasn’t created by anyone. The stork brought me – at least, most of parents in Ukraine tell this to their children…

Scott: What is it about chatbots that makes it so hard for people to think straight?  Is the urge to pontificate about our robot-ruled future so overwhelming, that people literally can’t see the unimpressiveness of what’s right in front of them?

Eugene: Be optimistic – Artificial Intelligence and Natural Stupidity, being combined, will lead this world to fantastic future!

Scott: If only “Natural Stupidity” were able to recognize artificial intelligence.  Speaking of which, did you see the movie “Her”?  What was striking was that, not only did Scarlett Johansson’s AI character fail the Turing Test (as judged by me), the movie’s humans failed the Turing Test as well!

Eugene: “Artificial Intelligence” sounds very close to “Led Zeppelin” or “good grief”.

Scott: Let’s end our conversation on that thought-provoking note.  Thanks for chatting, Eugene!

Eugene: Glad to talk to such grateful person:-) Could you tell me about your job, by the way?

Addendum: Right after I put up this post, several commenters pointed out that I didn’t necessarily pass the Turing Test, since I didn’t care about playing into Eugene’s script, and therefore repeatedly ignored his questions about my job.  Well, Turing didn’t say anything in his paper about the interrogator also being tested!  I was trying to illustrate how to unmask a robot as quickly as possible, not how to distinguish yourself from one.  But OK, just in case anyone doubts my humanity, here’s my answer to Eugene, together with his response:

Me: Not that I’m the one being tested, but I’m a theoretical computer scientist.

Eugene: I have nice Celeron-667, by the way. I play games on it!

Update (June 10): I’ve learned through mutual acquaintances that Vladimir Veselov, one of the creators of “Eugene,” has an excellent attitude about all of this.  He’s under no delusions whatsoever about his fun and entertaining creation standing any chance against a serious interrogator.  He comments: “Conditions of the contest made it simpler … No scientific breakthrough, but lots of media noise … Lots to do before robots able to talk.”  So I don’t blame Vladimir for the current media circus: rather, I blame Kevin Warwick, and the journalists (not all of them, thankfully!) who uncritically repeated Warwick’s pronouncements.

Incidentally, I strongly encourage people to read Stuart Shieber’s classic paper, Lessons from a Restricted Turing Test (about Shieber’s experiences with the Loebner Prize competition).  This is the paper where Shieber introduces the pogo-stick analogy, and where he crisply explains why AI researchers don’t currently focus their energies on chatbot competitions.

Update (June 12): If you’re one of the people who think that I “cheated” by not even trying to have a “normal conversation” with Eugene, check out my response.

CCC’s Declaration of Independence

June 6th, 2014

Recently, the participants of the Conference on Computational Complexity (CCC)—the latest iteration of which I’ll be speaking at next week in Vancouver—voted to declare their independence from the IEEE, and to become a solo, researcher-organized conference.  See this open letter for the reasons why (basically, IEEE charged a huge overhead, didn’t allow open access to the proceedings, and increased rather than decreased the administrative burden on the organizers).  As a former member of the CCC Steering Committee, I’m in violent agreement with this move, and only wish we’d managed to do it sooner.

Now, Dieter van Melkebeek (the current Steering Committee chair) is asking complexity theorists to sign a public Letter of Support, to make it crystal-clear that the community is behind the move to independence.  And Jeff Kinne has asked me to advertise the letter on my blog.  So, if you’re a complexity theorist who agrees with the move, please go there and sign (it already has 111 signatures, but could use more).

Meanwhile, I wish to express my profound gratitude to Dieter, Jeff, and the other Steering Committee members for their efforts toward independence.  The only thing I might’ve done differently would be to add a little more … I dunno, pizzazz to the documents explaining the reasons for the move.  Like:

When in the Course of human events, it becomes necessary for a conference to dissolve the organizational bands that have connected it with the IEEE, and to assume among the powers of the earth, the separate and equal station to which the Laws of Mathematics and the CCC Charter entitle it, a decent respect to the opinions of theorist-kind requires that the participants should declare the causes which impel them to the separation.

We hold these truths to be self-evident (but still in need of proof), that P and NP are created unequal, that one-way functions exist, that the polynomial hierarchy is infinite…

Giulio Tononi and Me: A Phi-nal Exchange

May 30th, 2014

You might recall that last week I wrote a post criticizing Integrated Information Theory (IIT), and its apparent implication that a simple Reed-Solomon decoding circuit would, if scaled to a large enough size, bring into being a consciousness vastly exceeding our own.  On Wednesday Giulio Tononi, the creator of IIT, was kind enough to send me a fascinating 14-page rebuttal, and to give me permission to share it here:

Why Scott should stare at a blank wall and reconsider (or, the conscious grid)

If you’re interested in this subject at all, then I strongly recommend reading Giulio’s response before continuing further.   But for those who want the tl;dr: Giulio, not one to battle strawmen, first restates my own argument against IIT with crystal clarity.  And while he has some minor quibbles (e.g., apparently my calculations of Φ didn’t use the most recent, “3.0” version of IIT), he wisely sets those aside in order to focus on the core question: according to IIT, are all sorts of simple expander graphs conscious?

There, he doesn’t “bite the bullet” so much as devour a bullet hoagie with mustard.  He affirms that, yes, according to IIT, a large network of XOR gates arranged in a simple expander graph is conscious.  Indeed, he goes further, and says that the “expander” part is superfluous: even a network of XOR gates arranged in a 2D square grid is conscious.  In my language, Giulio is simply pointing out here that a √n×√n square grid has decent expansion: good enough to produce a Φ-value of about √n, if not the information-theoretic maximum of n (or n/2, etc.) that an expander graph could achieve.  And apparently, by Giulio’s lights, Φ=√n is sufficient for consciousness!

While Giulio never mentions this, it’s interesting to observe that logic gates arranged in a 1-dimensional line would produce a tiny Φ-value (Φ=O(1)).  So even by IIT standards, such a linear array would not be conscious.  Yet the jump from a line to a two-dimensional grid is enough to light the spark of Mind.
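For what it’s worth, the rough picture behind these scalings (a heuristic of mine, not a calculation from the official Φ formula) is that Φ tracks how much information has to flow across the system’s weakest balanced cut:

  • In a 1D line of n gates, you can separate the gates into two halves by cutting a single wire, so only O(1) bits ever need to cross the cut: Φ=O(1).
  • In a √n×√n grid, any way of splitting the gates into two comparable-size halves must sever on the order of √n wires: Φ≈√n.
  • In a good expander, every balanced cut severs Ω(n) wires, which is what makes a Φ-value as large as ~n possible.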

Personally, I give Giulio enormous credit for having the intellectual courage to follow his theory wherever it leads.  When the critics point out, “if your theory were true, then the Moon would be made of peanut butter,” he doesn’t try to wiggle out of the prediction, but proudly replies, “yes, chunky peanut butter—and you forgot to add that the Earth is made of Nutella!”

Yet even as we admire Giulio’s honesty and consistency, his stance might also prompt us, gently, to take another look at this peanut-butter-moon theory, and at what grounds we had for believing it in the first place.  In his response essay, Giulio offers four arguments (by my count) for accepting IIT despite, or even because of, its conscious-grid prediction: one “negative” argument and three “positive” ones.  Alas, while your Φ-lage may vary, I didn’t find any of the four arguments persuasive.  In the rest of this post, I’ll go through them one by one and explain why.

I. The Copernicus-of-Consciousness Argument

Like many commenters on my last post, Giulio heavily criticizes my appeal to “common sense” in rejecting IIT.  Sure, he says, I might find it “obvious” that a huge Vandermonde matrix, or its physical instantiation, isn’t conscious.  But didn’t people also find it “obvious” for millennia that the Sun orbits the Earth?  Isn’t the entire point of science to challenge common sense?  Clearly, then, the test of a theory of consciousness is not how well it upholds “common sense,” but how well it fits the facts.

The above position sounds pretty convincing: who could dispute that observable facts trump personal intuitions?  The trouble is, what are the observable facts when it comes to consciousness?  The anti-common-sense view gets all its force by pretending that we’re in a relatively late stage of research—namely, the stage of taking an agreed-upon scientific definition of consciousness, and applying it to test our intuitions—rather than in an extremely early stage, of agreeing on what the word “consciousness” is even supposed to mean.

Since I think this point is extremely important—and of general interest, beyond just IIT—I’ll expand on it with some analogies.

Suppose I told you that, in my opinion, the ε-δ definition of continuous functions—the one you learn in calculus class—failed to capture the true meaning of continuity.  Suppose I told you that I had a new, better definition of continuity—and amazingly, when I tried out my definition on some examples, it turned out that ⌊x⌋ (the floor function) was continuous, whereas x² had discontinuities, though only at 17.5 and 42.

You would probably ask what I was smoking, and whether you could have some.  But why?  Why shouldn’t the study of continuity produce counterintuitive results?  After all, even the standard definition of continuity leads to some famously weird results, like that x sin(1/x) is a continuous function, even though sin(1/x) is discontinuous.  And it’s not as if the standard definition is God-given: people had been using words like “continuous” for centuries before Bolzano, Weierstrass, et al. formalized the ε-δ definition, a definition that millions of calculus students still find far from intuitive.  So why shouldn’t there be a different, better definition of “continuous,” and why shouldn’t it reveal that a step function is continuous while a parabola is not?
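(To spell out that standard example: define f(x) = x sin(1/x) for x≠0 and f(0)=0.  Then |f(x)| ≤ |x| for every x, so f(x)→0=f(0) as x→0, and f is continuous everywhere; whereas sin(1/x) swings between −1 and +1 in every neighborhood of 0, so no choice of value at x=0 can make it continuous there.)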

In my view, the way out of this conceptual jungle is to realize that, before any formal definitions, any ε’s and δ’s, we start with an intuition for what we’re trying to capture by the word “continuous.”  And if we press hard enough on what that intuition involves, we’ll find that it largely consists of various “paradigm-cases.”  A continuous function, we’d say, is a function like 3x, or x², or sin(x), while a discontinuity is the kind of thing that the function 1/x has at x=0, or that ⌊x⌋ has at every integer point.  Crucially, we use the paradigm-cases to guide our choice of a formal definition—not vice versa!  It’s true that, once we have a formal definition, we can then apply it to “exotic” cases like x sin(1/x), and we might be surprised by the results.  But the paradigm-cases are different.  If, for example, our definition told us that x² was discontinuous, that wouldn’t be a “surprise”; it would just be evidence that we’d picked a bad definition.  The definition failed at the only task for which it could have succeeded: namely, that of capturing what we meant.

Some people might say that this is all well and good in pure math, but empirical science has no need for squishy intuitions and paradigm-cases.  Nothing could be further from the truth.  Suppose, again, that I told you that physicists since Kelvin had gotten the definition of temperature all wrong, and that I had a new, better definition.  And, when I built a Scott-thermometer that measures true temperatures, it delivered the shocking result that boiling water is actually colder than ice.  You’d probably tell me where to shove my Scott-thermometer.  But wait: how do you know that I’m not the Copernicus of heat, and that future generations won’t celebrate my breakthrough while scoffing at your small-mindedness?

I’d say there’s an excellent answer: because what we mean by heat is “whatever it is that boiling water has more of than ice” (along with dozens of other paradigm-cases).  And because, if you use a thermometer to check whether boiling water is hotter than ice, then the term for what you’re doing is calibrating your thermometer.  When the clock strikes 13, it’s time to fix the clock, and when the thermometer says boiling water’s colder than ice, it’s time to replace the thermometer—or if needed, even the entire theory on which the thermometer is based.

Ah, you say, but doesn’t modern physics define heat in a completely different, non-intuitive way, in terms of molecular motion?  Yes, and that turned out to be a superb definition—not only because it was precise, explanatory, and applicable to cases far beyond our everyday experience, but crucially, because it matched common sense on the paradigm-cases.  If it hadn’t given sensible results for boiling water and ice, then the only possible conclusion would be that, whatever new quantity physicists had defined, they shouldn’t call it “temperature,” or claim that their quantity measured the amount of “heat.”  They should call their new thing something else.

The implications for the consciousness debate are obvious.  When we consider whether to accept IIT’s equation of integrated information with consciousness, we don’t start with any agreed-upon, independent notion of consciousness against which the new notion can be compared.  The main things we start with, in my view, are certain paradigm-cases that gesture toward what we mean:

  • You are conscious (though not when anesthetized).
  • (Most) other people appear to be conscious, judging from their behavior.
  • Many animals appear to be conscious, though probably to a lesser degree than humans (and the degree of consciousness in each particular species is far from obvious).
  • A rock is not conscious.  A wall is not conscious.  A Reed-Solomon code is not conscious.  Microsoft Word is not conscious (though a Word macro that passed the Turing test conceivably would be).

Fetuses, coma patients, fish, and hypothetical AIs are the x sin(1/x)’s of consciousness: they’re the tougher cases, the ones where we might actually need a formal definition to adjudicate the truth.

Now, given a proposed formal definition for an intuitive concept, how can we check whether the definition is talking about the same thing we were trying to get at before?  Well, we can check whether the definition at least agrees that parabolas are continuous while step functions are not, that boiling water is hot while ice is cold, and that we’re conscious while Reed-Solomon decoders are not.  If so, then the definition might be picking out the same thing that we meant, or were trying to mean, pre-theoretically (though we still can’t be certain).  If not, then the definition is certainly talking about something else.

What else can we do?

II. The Axiom Argument

According to Giulio, there is something else we can do, besides relying on paradigm-cases.  That something else, in his words, is to lay down “postulates about how the physical world should be organized to support the essential properties of experience,” then use those postulates to derive a consciousness-measuring quantity.

OK, so what are IIT’s postulates?  Here’s how Giulio states the five postulates leading to Φ in his response essay (he “derives” these from earlier “phenomenological axioms,” which you can find in the essay):

  1. A system of mechanisms exists intrinsically if it can make a difference to itself, by affecting the probability of its past and future states, i.e. it has causal power (existence).
  2. It is composed of submechanisms each with their own causal power (composition).
  3. It generates a conceptual structure that is the specific way it is, as specified by each mechanism’s concept — this is how each mechanism affects the probability of the system’s past and future states (information).
  4. The conceptual structure is unified — it cannot be decomposed into independent components (integration).
  5. The conceptual structure is singular — there can be no superposition of multiple conceptual structures over the same mechanisms and intervals of time (exclusion).

From my standpoint, these postulates have three problems.  First, I don’t really understand them.  Second, insofar as I do understand them, I don’t necessarily accept their truth.  And third, insofar as I do accept their truth, I don’t see how they lead to Φ.

To elaborate a bit:

I don’t really understand the postulates.  I realize that the postulates are explicated further in the many papers on IIT.  Unfortunately, while it’s possible that I missed something, in all of the papers that I read, the definitions never seemed to “bottom out” in mathematical notions that I understood, like functions mapping finite sets to other finite sets.  What, for example, is a “mechanism”?  What’s a “system of mechanisms”?  What’s “causal power”?  What’s a “conceptual structure,” and what does it mean for it to be “unified”?  Alas, it doesn’t help to define these notions in terms of other notions that I also don’t understand.  And yes, I agree that all these notions can be given fully rigorous definitions, but there could be many different ways to do so, and the devil could lie in the details.  In any case, because (as I said) it’s entirely possible that the failure is mine, I place much less weight on this point than I do on the two points to follow.

I don’t necessarily accept the postulates’ truth.  Is consciousness a “unified conceptual structure”?  Is it “singular”?  Maybe.  I don’t know.  It sounds plausible.  But at any rate, I’m far less confident about any of these postulates—whatever one means by them!—than I am about my own “postulate,” which is that you and I are conscious while my toaster is not.  Note that my postulate, though not phenomenological, does have the merit of constraining candidate theories of consciousness in an unambiguous way.

I don’t see how the postulates lead to Φ.  Even if one accepts the postulates, how does one deduce that the “amount of consciousness” should be measured by Φ, rather than by some other quantity?  None of the papers I read—including the ones Giulio linked to in his response essay—contained anything that looked to me like a derivation of Φ.  Instead, there was general discussion of the postulates, and then Φ just sort of appeared at some point.  Furthermore, given the many idiosyncrasies of Φ—the minimization over all bipartite (why just bipartite? why not tripartite?) decompositions of the system, the need for normalization (or something else in version 3.0) to deal with highly-unbalanced partitions—it would be quite a surprise were it possible to derive its specific form from postulates of such generality.

I was going to argue for that conclusion in more detail, when I realized that Giulio had kindly done the work for me already.  Recall that Giulio chided me for not using the “latest, 2014, version 3.0” edition of Φ in my previous post.  Well, if the postulates uniquely determined the form of Φ, then what’s with all these upgrades?  Or has Φ’s definition been changing from year to year because the postulates themselves have been changing?  If the latter, then maybe one should wait for the situation to stabilize before trying to form an opinion of the postulates’ meaningfulness, truth, and completeness?

III. The Ironic Empirical Argument

Or maybe not.  Despite all the problems noted above with the IIT postulates, Giulio argues in his essay that there’s a good reason to accept them: namely, they explain various empirical facts from neuroscience, and lead to confirmed predictions.  In his words:

[A] theory’s postulates must be able to explain, in a principled and parsimonious way, at least those many facts about consciousness and the brain that are reasonably established and non-controversial.  For example, we know that our own consciousness depends on certain brain structures (the cortex) and not others (the cerebellum), that it vanishes during certain periods of sleep (dreamless sleep) and reappears during others (dreams), that it vanishes during certain epileptic seizures, and so on.  Clearly, a theory of consciousness must be able to provide an adequate account for such seemingly disparate but largely uncontroversial facts.  Such empirical facts, and not intuitions, should be its primary test…

[I]n some cases we already have some suggestive evidence [of the truth of the IIT postulates' predictions].  One example is the cerebellum, which has 69 billion neurons or so — more than four times the 16 billion neurons of the cerebral cortex — and is as complicated a piece of biological machinery as any.  Though we do not understand exactly how it works (perhaps even less than we understand the cerebral cortex), its connectivity definitely suggests that the cerebellum is ill suited to information integration, since it lacks lateral connections among its basic modules.  And indeed, though the cerebellum is heavily connected to the cerebral cortex, removing it hardly affects our consciousness, whereas removing the cortex eliminates it.

I hope I’m not alone in noticing the irony of this move.  But just in case, let me spell it out: Giulio has stated, as “largely uncontroversial facts,” that certain brain regions (the cerebellum) and certain states (dreamless sleep) are not associated with our consciousness.  He then views it as a victory for IIT, if those regions and states turn out to have lower information integration than the regions and states that he does take to be associated with our consciousness.

But how does Giulio know that the cerebellum isn’t conscious?  Even if it doesn’t produce “our” consciousness, maybe the cerebellum has its own consciousness, just as rich as the cortex’s but separate from it.  Maybe removing the cerebellum destroys that other consciousness, unbeknownst to “us.”  Likewise, maybe “dreamless” sleep brings about its own form of consciousness, one that (unlike dreams) we never, ever remember in the morning.

Giulio might take the implausibility of those ideas as obvious, or at least as “largely uncontroversial” among neuroscientists.  But here’s the problem with that: he just told us that a 2D square grid is conscious!  He told us that we must not rely on “commonsense intuition,” or on any popular consensus, to say that if a square mesh of wires is just sitting there XORing some input bits, doing nothing at all that we’d want to call intelligent, then it’s probably safe to conclude that the mesh isn’t conscious.  So then why shouldn’t he say the same for the cerebellum, or for the brain in dreamless sleep?  By Giulio’s own rules (the ones he used for the mesh), we have no a-priori clue whether those systems are conscious or not—so even if IIT predicts that they’re not conscious, that can’t be counted as any sort of success for IIT.

For me, the point is even stronger: I, personally, would be a million times more inclined to ascribe consciousness to the human cerebellum, or to dreamless sleep, than I would to the mesh of XOR gates.  For it’s not hard to imagine neuroscientists of the future discovering “hidden forms of intelligence” in the cerebellum, and all but impossible to imagine them doing the same for the mesh.  But even if you put those examples on the same footing, still the take-home message seems clear: you can’t count it as a “success” for IIT if it predicts that the cerebellum is unconscious, while at the same time denying that it’s a “failure” for IIT if it predicts that a square mesh of XOR gates is conscious.  If the unconsciousness of the cerebellum can be considered an “empirical fact,” safe enough for theories of consciousness to be judged against it, then surely the unconsciousness of the mesh can also be considered such a fact.

IV. The Phenomenology Argument

I now come to, for me, the strangest and most surprising part of Giulio’s response.  Despite his earlier claim that IIT need not dovetail with “commonsense intuition” about which systems are conscious—that it can defy intuition—at some point, Giulio valiantly tries to reprogram our intuition, to make us feel why a 2D grid could be conscious.  As best I can understand, the argument seems to be that, when we stare at a blank 2D screen, we form a rich experience in our heads, and that richness must be mirrored by a corresponding “intrinsic” richness in 2D space itself:

[I]f one thinks a bit about it, the experience of empty 2D visual space is not at all empty, but contains a remarkable amount of structure.  In fact, when we stare at the blank screen, quite a lot is immediately available to us without any effort whatsoever.  Thus, we are aware of all the possible locations in space (“points”): the various locations are right “there”, in front of us.  We are aware of their relative positions: a point may be left or right of another, above or below, and so on, for every position, without us having to order them.  And we are aware of the relative distances among points: quite clearly, two points may be close or far, and this is the case for every position.  Because we are aware of all of this immediately, without any need to calculate anything, and quite regularly, since 2D space pervades most of our experiences, we tend to take for granted the vast set of relationship[s] that make up 2D space.

And yet, says IIT, given that our experience of the blank screen definitely exists, and it is precisely the way it is — it is 2D visual space, with all its relational properties — there must be physical mechanisms that specify such phenomenological relationships through their causal power … One may also see that the causal relationships that make up 2D space obtain whether the elements are on or off.  And finally, one may see that such a 2D grid is necessary not so much to represent space from the extrinsic perspective of an observer, but to create it, from its own intrinsic perspective.

Now, it would be child’s-play to criticize the above line of argument for conflating our consciousness of the screen with the alleged consciousness of the screen itself.  To wit:  Just because it feels like something to see a wall, doesn’t mean it feels like something to be a wall.  You can smell a rose, and the rose can smell good, but that doesn’t mean the rose can smell you.

However, I actually prefer a different tack in criticizing Giulio’s “wall argument.”  Suppose I accepted that my mental image of the relationships between certain entities was relevant to assessing whether those entities had their own mental life, independent of me or any other observer.  For example, suppose I believed that, if my experience of 2D space is rich and structured, then that’s evidence that 2D space is rich and structured enough to be conscious.

Then my question is this: why shouldn’t the same be true of 1D space?  After all, my experience of staring at a rope is also rich and structured, no less than my experience of staring at a wall.  I perceive some points on the rope as being toward the left, others as being toward the right, and some points as being between two other points.  In fact, the rope even has a structure—namely, a natural total ordering on its points—that the wall lacks.  So why does IIT cruelly deny subjective experience to a row of logic gates strung along a rope, reserving it only for a mesh of logic gates pasted to a wall?

And yes, I know the answer: because the logic gates on the rope aren’t “integrated” enough.  But who’s to say that the gates in the 2D mesh are integrated enough?  As I mentioned before, their Φ-value grows only as the square root of the number of gates, so that the ratio of integrated information to total information tends to 0 as the number of gates increases.  And besides, aren’t what Giulio calls “the facts of phenomenology” the real arbiters here, and isn’t my perception of the rope’s structure a phenomenological fact?  When you cut a rope, does it not split?  When you prick it, does it not fray?

Conclusion

At this point, I fear we’re at a philosophical impasse.  Having learned that, according to IIT,

  1. a square grid of XOR gates is conscious, and your experience of staring at a blank wall provides evidence for that,
  2. by contrast, a linear array of XOR gates is not conscious, your experience of staring at a rope notwithstanding,
  3. the human cerebellum is also not conscious (even though a grid of XOR gates is), and
  4. unlike with the XOR gates, we don’t need a theory to tell us the cerebellum is unconscious, but can simply accept it as “reasonably established” and “largely uncontroversial,”

I personally feel completely safe in saying that this is not the theory of consciousness for me.  But I’ve also learned that other people, even after understanding the above, still don’t reject IIT.  And you know what?  Bully for them.  On reflection, I firmly believe that a two-state solution is possible, in which we simply adopt different words for the different things that we mean by “consciousness”—like, say, consciousnessReal for my kind and consciousnessWTF for the IIT kind.  OK, OK, just kidding!  How about “paradigm-case consciousness” for the one and “IIT consciousness” for the other.


Completely unrelated announcement: Some of you might enjoy this Nature News piece by Amanda Gefter, about black holes and computational complexity.

Quantifying the Rise and Fall of Complexity in Closed Systems: The Coffee Automaton

May 27th, 2014

Update (June 3): A few days after we posted this paper online, Brent Werness, a postdoc in probability theory at the University of Washington, discovered a serious error in the “experimental” part of the paper.  Happily, Brent is now collaborating with us on producing a new version of the paper that fixes the error, which we hope to have available within a few months (and which will replace the version currently on the arXiv).

To make a long story short: while the overall idea, of measuring “apparent complexity” by the compressed file size of a coarse-grained image, is fine, the “interacting coffee automaton” that we study in the paper is not an example where the apparent complexity becomes large at intermediate times.  That fact can be deduced as a corollary of a result of Liggett from 2009 about the “symmetric exclusion process,” and can be seen as a far-reaching generalization of a result that we prove in our paper’s appendix: namely, that in the non-interacting coffee automaton (our “control case”), the apparent complexity after t time steps is upper-bounded by O(log(nt)).  As it turns out, we were more right than we knew to worry about large-deviation bounds giving complete mathematical control over what happens when the cream spills into the coffee, thereby preventing the apparent complexity from ever becoming large!

But what about our numerical results, which showed a small but unmistakable complexity bump for the interacting automaton (figure 10(a) in the paper)?  It now appears that the complexity bump we saw in our data is likely to be explainable by an incomplete removal of what we called “border pixel artifacts”: that is, “spurious” complexity that arises merely from the fact that, at the border between cream and coffee, we need to round the fraction of cream up or down to the nearest integer to produce a grayscale.  In the paper, we devoted a whole section (Section 6) to border pixel artifacts and the need to deal with them: something sufficiently non-obvious that in the comments of this post, you can find people arguing with me that it’s a non-issue.  Well, it now appears that we erred by underestimating the severity of border pixel artifacts, and that a better procedure to get rid of them would also eliminate the complexity bump for the interacting automaton.

Once again, this error has no effect on either the general idea of complexity rising and then falling in closed thermodynamic systems, or our proposal for how to quantify that rise and fall—the two aspects of the paper that have generated the most interest.  But we made a bad choice of model system with which to illustrate those ideas.  Had I looked more carefully at the data, I could’ve noticed the problem before we posted, and I take responsibility for my failure to do so.

The good news is that ultimately, I think the truth only makes our story more interesting.  For it turns out that apparent complexity, as we define it, is not something that’s trivial to achieve by just setting loose a bunch of randomly-walking particles, which bump into each other but are otherwise completely independent.  If you want “complexity” along the approach to thermal equilibrium, you need to work a bit harder for it.  One promising idea, which we’re now exploring, is to consider a cream tendril whose tip takes a random walk through the coffee, leaving a trail of cream in its wake.  Using results in probability theory—closely related, or so I’m told, to the results for which Wendelin Werner won his Fields Medal!—it may even be possible to prove analytically that the apparent complexity becomes large in thermodynamic systems with this sort of behavior, much as one can prove that the complexity doesn’t become large in our original coffee automaton.

So, if you’re interested in this topic, stay tuned for the updated version of our paper.  In the meantime, I wish to express our deepest imaginable gratitude to Brent Werness for telling us all this.


Good news!  After nearly three years of procrastination, fellow blogger Sean Carroll, former MIT undergraduate Lauren Ouellette, and yours truly finally finished a paper with the above title (coming soon to an arXiv near you).  PowerPoint slides are also available (as usual, you’re on your own if you can’t open them—sorry!).

For the background and context of this paper, please see my old post “The First Law of Complexodynamics,” which discussed Sean’s problem of defining a “complextropy” measure that first increases and then decreases in closed thermodynamic systems, in contrast to entropy (which increases monotonically).  In this exploratory paper, we basically do five things:

  1. We survey several candidate “complextropy” measures: their strengths, weaknesses, and relations to one another.
  2. We propose a model system for studying such measures: a probabilistic cellular automaton that models a cup of coffee into which cream has just been poured.
  3. We report the results of numerical experiments with one of the measures, which we call “apparent complexity” (basically, the gzip file size of a smeared-out image of the coffee cup; a minimal code sketch of this measure appears just after this list).  The results confirm that the apparent complexity does indeed increase, reach a maximum, then turn around and decrease as the coffee and cream mix.
  4. We discuss a technical issue that one needs to overcome (the so-called “border pixels” problem) before one can do meaningful experiments in this area, and offer a solution.
  5. We raise the open problem of proving analytically that the apparent complexity ever becomes large for the coffee automaton.  To underscore this problem’s difficulty, we prove that the apparent complexity doesn’t become large in a simplified version of the coffee automaton.
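For concreteness, here’s a minimal sketch (mine, not the paper’s actual code) of the kind of measurement item 3 describes: coarse-grain a binary cream/coffee image into block averages, quantize to grayscale bytes, and use the gzip-compressed size as a crude stand-in for Kolmogorov complexity.  The block size and function names are illustrative only, and the paper’s real procedure, including its treatment of the border-pixel issue from item 4, is more involved:

```python
import gzip
import numpy as np

def apparent_complexity(state, block=8):
    """Sketch of 'apparent complexity': coarse-grain a binary (cream=1 / coffee=0)
    image into block averages, quantize to grayscale, and return the
    gzip-compressed size as a rough stand-in for Kolmogorov complexity."""
    n = state.shape[0]
    coarse = state.reshape(n // block, block, n // block, block).mean(axis=(1, 3))
    # Rounding block averages to a byte is where the "border pixel" artifacts
    # discussed in the update above can creep in.
    grayscale = np.round(coarse * 255).astype(np.uint8)
    return len(gzip.compress(grayscale.tobytes(), compresslevel=9))

# Toy states: cream sitting on top of the coffee, and a fully mixed cup.
rng = np.random.default_rng(0)
separated = np.zeros((256, 256))
separated[:128, :] = 1.0
mixed = rng.integers(0, 2, size=(256, 256)).astype(float)
print(apparent_complexity(separated), apparent_complexity(mixed))
```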

Anyway, here’s the abstract:

In contrast to entropy, which increases monotonically, the “complexity” or “interestingness” of closed systems seems intuitively to increase at first and then decrease as equilibrium is approached. For example, our universe lacked complex structures at the Big Bang and will also lack them after black holes evaporate and particles are dispersed. This paper makes an initial attempt to quantify this pattern. As a model system, we use a simple, two-dimensional cellular automaton that simulates the mixing of two liquids (“coffee” and “cream”). A plausible complexity measure is then the Kolmogorov complexity of a coarse-grained approximation of the automaton’s state, which we dub the “apparent complexity.” We study this complexity measure, and show analytically that it never becomes large when the liquid particles are non-interacting. By contrast, when the particles do interact, we give numerical evidence that the complexity reaches a maximum comparable to the “coffee cup’s” horizontal dimension. We raise the problem of proving this behavior analytically.

Questions and comments more than welcome.


In unrelated news, Shafi Goldwasser has asked me to announce that the Call for Papers for the 2015 Innovations in Theoretical Computer Science (ITCS) conference is now available.

Why I Am Not An Integrated Information Theorist (or, The Unconscious Expander)

May 21st, 2014

Happy birthday to me!

Recently, lots of people have been asking me what I think about IIT—no, not the Indian Institutes of Technology, but Integrated Information Theory, a widely-discussed “mathematical theory of consciousness” developed over the past decade by the neuroscientist Giulio Tononi.  One of the askers was Max Tegmark, who’s enthusiastically adopted IIT as a plank in his radical mathematizing platform (see his paper “Consciousness as a State of Matter”).  When, in the comment thread about Max’s Mathematical Universe Hypothesis, I expressed doubts about IIT, Max challenged me to back up my doubts with a quantitative calculation.

So, this is the post that I promised to Max and all the others, about why I don’t believe IIT.  And yes, it will contain that quantitative calculation.

But first, what is IIT?  The central ideas of IIT, as I understand them, are:

(1) to propose a quantitative measure, called Φ, of the amount of “integrated information” in a physical system (i.e. information that can’t be localized in the system’s individual parts), and then

(2) to hypothesize that a physical system is “conscious” if and only if it has a large value of Φ—and indeed, that a system is more conscious the larger its Φ value.

I’ll return later to the precise definition of Φ—but basically, it’s obtained by minimizing, over all subdivisions of your physical system into two parts A and B, some measure of the mutual information between A’s outputs and B’s inputs and vice versa.  Now, one immediate consequence of any definition like this is that all sorts of simple physical systems (a thermostat, a photodiode, etc.) will turn out to have small but nonzero Φ values.  To his credit, Tononi cheerfully accepts the panpsychist implication: yes, he says, it really does mean that thermostats and photodiodes have small but nonzero levels of consciousness.  On the other hand, for the theory to work, it had better be the case that Φ is small for “intuitively unconscious” systems, and only large for “intuitively conscious” systems.  As I’ll explain later, this strikes me as a crucial point on which IIT fails.

The literature on IIT is too big to do it justice in a blog post.  Strikingly, in addition to the “primary” literature, there’s now even a “secondary” literature, which treats IIT as a sort of established base on which to build further speculations about consciousness.  Besides the Tegmark paper linked to above, see for example this paper by Maguire et al., and associated popular article.  (Ironically, Maguire et al. use IIT to argue for the Penrose-like view that consciousness might have uncomputable aspects—a use diametrically opposed to Tegmark’s.)

Anyway, if you want to read a popular article about IIT, there are loads of them: see here for the New York Times’s, here for Scientific American‘s, here for IEEE Spectrum‘s, and here for the New Yorker‘s.  Unfortunately, none of those articles will tell you the meat (i.e., the definition of integrated information); for that you need technical papers, like this or this by Tononi, or this by Seth et al.  IIT is also described in Christof Koch’s memoir Consciousness: Confessions of a Romantic Reductionist, which I read and enjoyed; as well as Tononi’s Phi: A Voyage from the Brain to the Soul, which I haven’t yet read.  (Koch, one of the world’s best-known thinkers and writers about consciousness, has also become an evangelist for IIT.)

So, I want to explain why I don’t think IIT solves even the problem that it “plausibly could have” solved.  But before I can do that, I need to do some philosophical ground-clearing.  Broadly speaking, what is it that a “mathematical theory of consciousness” is supposed to do?  What questions should it answer, and how should we judge whether it’s succeeded?

The most obvious thing a consciousness theory could do is to explain why consciousness exists: that is, to solve what David Chalmers calls the “Hard Problem,” by telling us how a clump of neurons is able to give rise to the taste of strawberries, the redness of red … you know, all that ineffable first-persony stuff.  Alas, there’s a strong argument—one that I, personally, find completely convincing—why that’s too much to ask of any scientific theory.  Namely, no matter what the third-person facts were, one could always imagine a universe consistent with those facts in which no one “really” experienced anything.  So for example, if someone claims that integrated information “explains” why consciousness exists—nope, sorry!  I’ve just conjured into my imagination beings whose Φ-values are a thousand, nay a trillion times larger than humans’, yet who are also philosophical zombies: entities that there’s nothing that it’s like to be.  Granted, maybe such zombies can’t exist in the actual world: maybe, if you tried to create one, God would notice its large Φ-value and generously bequeath it a soul.  But if so, then that’s a further fact about our world, a fact that manifestly couldn’t be deduced from the properties of Φ alone.  Notice that the details of Φ are completely irrelevant to the argument.

Faced with this point, many scientifically-minded people start yelling and throwing things.  They say that “zombies” and so forth are empty metaphysics, and that our only hope of learning about consciousness is to engage with actual facts about the brain.  And that’s a perfectly reasonable position!  As far as I’m concerned, you absolutely have the option of dismissing Chalmers’ Hard Problem as a navel-gazing distraction from the real work of neuroscience.  The one thing you can’t do is have it both ways: that is, you can’t say both that the Hard Problem is meaningless, and that progress in neuroscience will soon solve the problem if it hasn’t already.  You can’t maintain simultaneously that

(a) once you account for someone’s observed behavior and the details of their brain organization, there’s nothing further about consciousness to be explained, and

(b) remarkably, the XYZ theory of consciousness can explain the “nothing further” (e.g., by reducing it to integrated information processing), or might be on the verge of doing so.

As obvious as this sounds, it seems to me that large swaths of consciousness-theorizing can just be summarily rejected for trying to have their brain and eat it in precisely the above way.

Fortunately, I think IIT survives the above observations.  For we can easily interpret IIT as trying to do something more “modest” than solve the Hard Problem, although still staggeringly audacious.  Namely, we can say that IIT “merely” aims to tell us which physical systems are associated with consciousness and which aren’t, purely in terms of the systems’ physical organization.  The test of such a theory is whether it can produce results agreeing with “commonsense intuition”: for example, whether it can affirm, from first principles, that (most) humans are conscious; that dogs and horses are also conscious but less so; that rocks, livers, bacteria colonies, and existing digital computers are not conscious (or are hardly conscious); and that a room full of people has no “mega-consciousness” over and above the consciousnesses of the individuals.

The reason it’s so important that the theory uphold “common sense” on these test cases is that, given the experimental inaccessibility of consciousness, this is basically the only test available to us.  If the theory gets the test cases “wrong” (i.e., gives results diverging from common sense), it’s not clear that there’s anything else for the theory to get “right.”  Of course, supposing we had a theory that got the test cases right, we could then have a field day with the less-obvious cases, programming our computers to tell us exactly how much consciousness is present in octopi, fetuses, brain-damaged patients, and hypothetical AI bots.

In my opinion, how to construct a theory that tells us which physical systems are conscious and which aren’t—giving answers that agree with “common sense” whenever the latter renders a verdict—is one of the deepest, most fascinating problems in all of science.  Since I don’t know a standard name for the problem, I hereby call it the Pretty-Hard Problem of Consciousness.  Unlike with the Hard Hard Problem, I don’t know of any philosophical reason why the Pretty-Hard Problem should be inherently unsolvable; but on the other hand, humans seem nowhere close to solving it (if we had solved it, then we could reduce the abortion, animal rights, and strong AI debates to “gentlemen, let us calculate!”).

Now, I regard IIT as a serious, honorable attempt to grapple with the Pretty-Hard Problem of Consciousness: something concrete enough to move the discussion forward.  But I also regard IIT as a failed attempt on the problem.  And I wish people would recognize its failure, learn from it, and move on.

In my view, IIT fails to solve the Pretty-Hard Problem because it unavoidably predicts vast amounts of consciousness in physical systems that no sane person would regard as particularly “conscious” at all: indeed, systems that do nothing but apply a low-density parity-check code, or other simple transformations of their input data.  Moreover, IIT predicts not merely that these systems are “slightly” conscious (which would be fine), but that they can be unboundedly more conscious than humans are.

To justify that claim, I first need to define Φ.  Strikingly, despite the large literature about Φ, I had a hard time finding a clear mathematical definition of it—one that not only listed formulas but fully defined the structures that the formulas were talking about.  Complicating matters further, there are several competing definitions of Φ in the literature, including Φ_DM (discrete memoryless), Φ_E (empirical), and Φ_AR (autoregressive), which apply in different contexts (e.g., some take time evolution into account and others don’t).  Nevertheless, I think I can define Φ in a way that will make sense to theoretical computer scientists.  And crucially, the broad point I want to make about Φ won’t depend much on the details of its formalization anyway.

We consider a discrete system in a state x=(x_1,…,x_n)∈S^n, where S is a finite alphabet (the simplest case is S={0,1}).  We imagine that the system evolves via an “updating function” f:S^n→S^n.  Then the question that interests us is whether the x_i’s can be partitioned into two sets A and B, of roughly comparable size, such that the updates to the variables in A don’t depend very much on the variables in B and vice versa.  If such a partition exists, then we say that the computation of f does not involve “global integration of information,” which on Tononi’s theory is a defining aspect of consciousness.

More formally, given a partition (A,B) of {1,…,n}, let us write an input y=(y_1,…,y_n)∈S^n to f in the form (y_A,y_B), where y_A consists of the y variables in A and y_B consists of the y variables in B.  Then we can think of f as mapping an input pair (y_A,y_B) to an output pair (z_A,z_B).  Now, we define the “effective information” EI(A→B) as H(z_B | A random, y_B=x_B).  Or in words, EI(A→B) is the Shannon entropy of the output variables in B, if the input variables in A are drawn uniformly at random, while the input variables in B are fixed to their values in x.  It’s a measure of the dependence of B on A in the computation of f(x).  Similarly, we define

EI(B→A) := H(z_A | B random, y_A=x_A).

We then consider the sum

Φ(A,B) := EI(A→B) + EI(B→A).

Intuitively, we’d like the integrated information Φ=Φ(f,x) to be the minimum of Φ(A,B), over all 2^n−2 possible partitions of {1,…,n} into nonempty sets A and B.  The idea is that Φ should be large, if and only if it’s not possible to partition the variables into two sets A and B, in such a way that not much information flows from A to B or vice versa when f(x) is computed.

However, no sooner do we propose this than we notice a technical problem.  What if A is much larger than B, or vice versa?  As an extreme case, what if A={1,…,n-1} and B={n}?  In that case, we’ll have Φ(A,B) ≤ 2 log_2(|S|), but only for the boring reason that there’s hardly any entropy in B as a whole, to either influence A or be influenced by it.  For this reason, Tononi proposes a fix where we normalize each Φ(A,B) by dividing it by min{|A|,|B|}.  He then defines the integrated information Φ to be Φ(A,B), for whichever partition (A,B) minimizes the ratio Φ(A,B) / min{|A|,|B|}.  (Unless I missed it, Tononi never specifies what we should do if there are multiple (A,B)’s that all achieve the same minimum of Φ(A,B) / min{|A|,|B|}.  I’ll return to that point later, along with other idiosyncrasies of the normalization procedure.)
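Since the definition above involves only finite sets and exact entropies, here’s a tiny brute-force sketch of it (my own illustrative code, not anything from the IIT literature): it takes S={0,1}, enumerates every setting of the randomized variables, and searches over every bipartition, so it’s usable only for very small n, and the function names are mine:

```python
import itertools
from collections import Counter
from math import log2

def effective_information(f, x, A, B):
    """EI(A -> B): Shannon entropy of the output variables in B when the input
    variables in A are uniformly random and those in B are fixed to their
    values in x.  Toy version over S = {0,1}, by exact enumeration."""
    counts = Counter()
    for bits in itertools.product([0, 1], repeat=len(A)):
        y = list(x)
        for i, b in zip(A, bits):                  # randomize the A-variables
            y[i] = b
        z = f(tuple(y))
        counts[tuple(z[j] for j in B)] += 1        # distribution of the B-outputs
    total = sum(counts.values())
    return -sum(c / total * log2(c / total) for c in counts.values())

def phi(f, x, n):
    """Phi as formalized above: the unnormalized Phi(A,B) of whichever bipartition
    minimizes Phi(A,B) / min(|A|,|B|).  Ties are broken arbitrarily, which is
    exactly the ambiguity noted in the text."""
    best = None
    for r in range(1, n // 2 + 1):
        for A in itertools.combinations(range(n), r):
            B = tuple(j for j in range(n) if j not in A)
            val = (effective_information(f, x, A, B) +
                   effective_information(f, x, B, A))
            key = val / min(len(A), len(B))
            if best is None or key < best[0]:
                best = (key, val, A, B)
    return best[1]

# Example: four bits updated by a ring of XORs.
f = lambda y: (y[0] ^ y[1], y[1] ^ y[2], y[2] ^ y[3], y[3] ^ y[0])
print(phi(f, (0, 0, 0, 0), 4))
```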

Tononi gives some simple examples of the computation of Φ, showing that it is indeed larger for systems that are more “richly interconnected” in an intuitive sense.  He speculates, plausibly, that Φ is quite large for (some reasonable model of) the interconnection network of the human brain—and probably larger for the brain than for typical electronic devices (which tend to be highly modular in design, thereby decreasing their Φ), or, let’s say, than for other organs like the pancreas.  Ambitiously, he even speculates at length about how a large value of Φ might be connected to the phenomenology of consciousness.

To be sure, empirical work in integrated information theory has been hampered by three difficulties.  The first difficulty is that we don’t know the detailed interconnection network of the human brain.  The second difficulty is that it’s not even clear what we should define that network to be: for example, as a crude first attempt, should we assign a Boolean variable to each neuron, which equals 1 if the neuron is currently firing and 0 if it’s not firing, and let f be the function that updates those variables over a timescale of, say, a millisecond?  What other variables do we need—firing rates, internal states of the neurons, neurotransmitter levels?  Is choosing many of these variables uniformly at random (for the purpose of calculating Φ) really a reasonable way to “randomize” the variables, and if not, what other prescription should we use?

The third and final difficulty is that, even if we knew exactly what we meant by “the f and x corresponding to the human brain,” and even if we had complete knowledge of that f and x, computing Φ(f,x) could still be computationally intractable.  For recall that the definition of Φ involved minimizing a quantity over all the exponentially-many possible bipartitions of {1,…,n}.  While it’s not directly relevant to my arguments in this post, I leave it as a challenge for interested readers to pin down the computational complexity of approximating Φ to some reasonable precision, assuming that f is specified by a polynomial-size Boolean circuit, or alternatively, by an NC^0 function (i.e., a function each of whose outputs depends on only a constant number of the inputs).  (Presumably Φ will be #P-hard to calculate exactly, but only because calculating entropy exactly is a #P-hard problem—that’s not interesting.)

I conjecture that approximating Φ is an NP-hard problem, even for restricted families of f’s like NC^0 circuits—which invites the amusing thought that God, or Nature, would need to solve an NP-hard problem just to decide whether or not to imbue a given physical system with consciousness!  (Alas, if you wanted to exploit this as a practical approach for solving NP-complete problems such as 3SAT, you’d need to do a rather drastic experiment on your own brain—an experiment whose result would be to render you unconscious if your 3SAT instance was satisfiable, or conscious if it was unsatisfiable!  In neither case would you be able to communicate the outcome of the experiment to anyone else, nor would you have any recollection of the outcome after the experiment was finished.)  In the other direction, it would also be interesting to upper-bound the complexity of approximating Φ.  Because of the need to estimate the entropies of distributions (even given a bipartition (A,B)), I don’t know that this problem is in NP—the best I can observe is that it’s in AM.

In any case, my own reason for rejecting IIT has nothing to do with any of the “merely practical” issues above: neither the difficulty of defining f and x, nor the difficulty of learning them, nor the difficulty of calculating Φ(f,x).  My reason is much more basic, striking directly at the hypothesized link between “integrated information” and consciousness.  Specifically, I claim the following:

Yes, it might be a decent rule of thumb that, if you want to know which brain regions (for example) are associated with consciousness, you should start by looking for regions with lots of information integration.  And yes, it’s even possible, for all I know, that having a large Φ-value is one necessary condition among many for a physical system to be conscious.  However, having a large Φ-value is certainly not a sufficient condition for consciousness, or even for the appearance of consciousness.  As a consequence, Φ can’t possibly capture the essence of what makes a physical system conscious, or even of what makes a system look conscious to external observers.

The demonstration of this claim is embarrassingly simple.  Let S=F_p, where p is some prime sufficiently larger than n, and let V be an n×n Vandermonde matrix over F_p—that is, a matrix whose (i,j) entry equals i^(j-1) (mod p).  Then let f:S^n→S^n be the update function defined by f(x)=Vx.  Now, for p large enough, the Vandermonde matrix is well-known to have the property that every submatrix is full-rank (i.e., “every submatrix preserves all the information that it’s possible to preserve about the part of x that it acts on”).  And this implies that, regardless of which bipartition (A,B) of {1,…,n} we choose, we’ll get

EI(A→B) = EI(B→A) = min{|A|,|B|} log_2(p),

and hence

Φ(A,B) = EI(A→B) + EI(B→A) = 2 min{|A|,|B|} log_2(p),

or after normalizing,

Φ(A,B) / min{|A|,|B|} = 2 log_2(p).

Or in words: the normalized information integration has the same value—namely, the maximum value!—for every possible bipartition.  Now, I’d like to proceed from here to a determination of Φ itself, but I’m prevented from doing so by the ambiguity in the definition of Φ that I noted earlier.  Namely, since every bipartition (A,B) minimizes the normalized value Φ(A,B) / min{|A|,|B|}, in theory I ought to be able to pick any of them for the purpose of calculating Φ.  But the unnormalized value Φ(A,B), which gives the final Φ, can vary greatly across bipartitions: from 2 log_2(p) (if min{|A|,|B|}=1) all the way up to n log_2(p) (if min{|A|,|B|}=n/2).  So at this point, Φ is simply undefined.
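For readers who would rather see this flatness numerically than take the every-submatrix-is-full-rank fact on faith, here is a toy check (my own code, not from the post).  Since z_B = V[B,A]·y_A plus a fixed vector when y_A is uniform over F_p^|A|, EI(A→B) is just log_2(p) times the rank, over F_p, of the submatrix of V with rows in B and columns in A; so it suffices to compute submatrix ranks.  The prime below is chosen enormously larger than n, which, for a matrix this tiny, should be enough that no submatrix determinant vanishes mod p:

```python
import itertools

def rank_mod_p(rows, p):
    """Rank of an integer matrix (list of rows) over F_p, by Gaussian elimination."""
    M = [[v % p for v in row] for row in rows]
    rank, col, ncols = 0, 0, len(rows[0])
    while rank < len(M) and col < ncols:
        pivot = next((r for r in range(rank, len(M)) if M[r][col]), None)
        if pivot is None:
            col += 1
            continue
        M[rank], M[pivot] = M[pivot], M[rank]
        inv = pow(M[rank][col], p - 2, p)          # modular inverse (p is prime)
        M[rank] = [v * inv % p for v in M[rank]]
        for r in range(len(M)):
            factor = M[r][col]
            if r != rank and factor:
                M[r] = [(a - factor * b) % p for a, b in zip(M[r], M[rank])]
        rank, col = rank + 1, col + 1
    return rank

n, p = 6, 2**61 - 1                                # toy n; p a prime vastly larger than n
V = [[pow(i, j - 1, p) for j in range(1, n + 1)] for i in range(1, n + 1)]

for r in range(1, n // 2 + 1):
    for A in itertools.combinations(range(n), r):
        B = [j for j in range(n) if j not in A]
        ei_AB = rank_mod_p([[V[i][j] for j in A] for i in B], p)   # EI(A->B) / log2(p)
        ei_BA = rank_mod_p([[V[i][j] for j in B] for i in A], p)   # EI(B->A) / log2(p)
        assert ei_AB == ei_BA == min(len(A), len(B))
print("every bipartition gives Phi(A,B) / min(|A|,|B|) = 2 log2(p)")
```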

On the other hand, I can solve this problem, and make Φ well-defined, by an ironic little hack.  The hack is to replace the Vandermonde matrix V by an n×n matrix W, which consists of the first n/2 rows of the Vandermonde matrix each repeated twice (assume for simplicity that n is a multiple of 4).  As before, we let f(x)=Wx.  Then if we set A={1,…,n/2} and B={n/2+1,…,n}, we can achieve

EI(A→B) = EI(B→A) = (n/4) log_2(p),

Φ(A,B) = EI(A→B) + EI(B→A) = (n/2) log_2(p),

and hence

Φ(A,B) / min{|A|,|B|} = log_2(p).

In this case, I claim that the above is the unique bipartition that minimizes the normalized integrated information Φ(A,B) / min{|A|,|B|}, up to trivial reorderings of the rows.  To prove this claim: if |A|=|B|=n/2, then clearly we minimize Φ(A,B) by maximizing the number of repeated rows in A and the number of repeated rows in B, exactly as we did above.  Thus, assume |A|≤|B| (the case |B|≤|A| is analogous).  Then clearly

EI(B→A) ≥ (|A|/2) log_2(p),

while

EI(A→B) ≥ min{|A|, |B|/2} log_2(p).

So if we let |A|=cn and |B|=(1-c)n for some c∈(0,1/2], then

Φ(A,B) ≥ [c/2 + min{c, (1-c)/2}] n log_2(p),

and

Φ(A,B) / min{|A|,|B|} = Φ(A,B) / |A| ≥ [1/2 + min{1, 1/(2c) − 1/2}] log_2(p).

But the above expression is uniquely minimized when c=1/2.  Hence the normalized integrated information is minimized essentially uniquely by setting A={1,…,n/2} and B={n/2+1,…,n}, and we get

Φ = Φ(A,B) = (n/2) log_2(p),

which is quite a large value (only a factor of 2 less than the trivial upper bound of n log_2(p)).

Now, why did I call the switch from V to W an “ironic little hack”?  Because, in order to ensure a large value of Φ, I decreased—by a factor of 2, in fact—the amount of “information integration” that was intuitively happening in my system!  I did that in order to decrease the normalized value Φ(A,B) / min{|A|,|B|} for the particular bipartition (A,B) that I cared about, thereby ensuring that that (A,B) would be chosen over all the other bipartitions, thereby increasing the final, unnormalized value Φ(A,B) that Tononi’s prescription tells me to return.  I hope I’m not alone in fearing that this illustrates a disturbing non-robustness in the definition of Φ.

But let’s leave that issue aside; maybe it can be ameliorated by fiddling with the definition.  The broader point is this: I’ve shown that my system—the system that simply applies the matrix W to an input vector x—has an enormous amount of integrated information Φ.  Indeed, this system’s Φ equals half of its entire information content.  So for example, if n were 10^14 or so—something that wouldn’t be hard to arrange with existing computers—then this system’s Φ would exceed any plausible upper bound on the integrated information content of the human brain.

And yet this Vandermonde system doesn’t even come close to doing anything that we’d want to call intelligent, let alone conscious!  When you apply the Vandermonde matrix to a vector, all you’re really doing is mapping the list of coefficients of a degree-(n-1) polynomial over F_p to the values of the polynomial on the n points 1, 2, …, n.  Now, evaluating a polynomial on a set of points turns out to be an excellent way to achieve “integrated information,” with every subset of outputs as correlated with every subset of inputs as it could possibly be.  In fact, that’s precisely why polynomials are used so heavily in error-correcting codes, such as the Reed-Solomon code, employed (among many other places) in CD’s and DVD’s.  But that doesn’t imply that every time you start up your DVD player you’re lighting the fire of consciousness.  It doesn’t even hint at such a thing.  All it tells us is that you can have integrated information without consciousness (or even intelligence)—just like you can have computation without consciousness, and unpredictability without consciousness, and electricity without consciousness.

It might be objected that, in defining my “Vandermonde system,” I was too abstract and mathematical.  I said that the system maps the input vector x to the output vector Wx, but I didn’t say anything about how it did so.  To perform a computation—even a computation as simple as a matrix-vector multiply—won’t we need a physical network of wires, logic gates, and so forth?  And in any realistic such network, won’t each logic gate be directly connected to at most a few other gates, rather than to billions of them?  And if we define the integrated information Φ, not directly in terms of the inputs and outputs of the function f(x)=Wx, but in terms of all the actual logic gates involved in computing f, isn’t it possible or even likely that Φ will go back down?

This is a good objection, but I don’t think it can rescue IIT.  For we can achieve the same qualitative effect that I illustrated with the Vandermonde matrix—the same “global information integration,” in which every large set of outputs depends heavily on every large set of inputs—even using much “sparser” computations, ones where each individual output depends on only a few of the inputs.  This is precisely the idea behind low-density parity check (LDPC) codes, which have had a major impact on coding theory over the past two decades.  Of course, one would need to muck around a bit to construct a physical system based on LDPC codes whose integrated information Φ was provably large, and for which there were no wildly-unbalanced bipartitions that achieved lower Φ(A,B)/min{|A|,|B|} values than the balanced bipartitions one cared about.  But I feel safe in asserting that this could be done, similarly to how I did it with the Vandermonde matrix.

More generally, we can achieve pretty good information integration by hooking together logic gates according to any bipartite expander graph: that is, any graph with n vertices on each side, such that every k vertices on the left side are connected to at least min{(1+ε)k,n} vertices on the right side, for some constant ε>0.  And it’s well-known how to create expander graphs whose degree (i.e., the number of edges incident to each vertex, or the number of wires coming out of each logic gate) is a constant, such as 3.  One can do so either by plunking down edges at random, or (less trivially) by explicit constructions from algebra or combinatorics.  And as indicated in the title of this post, I feel 100% confident in saying that the so-constructed expander graphs are not conscious!  The brain might be an expander, but not every expander is a brain.
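Here’s a toy illustration (my own sketch, with made-up helper names) of the “plunking down edges at random” route: take the union of three random perfect matchings, so that every vertex touches at most 3 edges, then brute-force the vertex expansion of small left-hand sets.  The exhaustive check is feasible only for toy sizes; for large n one would lean on the probabilistic argument or the explicit algebraic constructions mentioned above rather than checking expansion directly:

```python
import itertools, random

def random_bipartite_graph(n, degree=3, seed=0):
    """Union of `degree` random perfect matchings between n left and n right
    vertices; coinciding edges may collapse, so degrees are at most `degree`."""
    rng = random.Random(seed)
    neighbors = [set() for _ in range(n)]
    for _ in range(degree):
        right = list(range(n))
        rng.shuffle(right)
        for left, r in enumerate(right):
            neighbors[left].add(r)
    return neighbors

def worst_expansion(neighbors, max_size):
    """Brute force over all left subsets S with |S| <= max_size, returning the
    smallest ratio |N(S)| / |S|.  Exponential in n, so only for toy instances."""
    worst = float("inf")
    for k in range(1, max_size + 1):
        for S in itertools.combinations(range(len(neighbors)), k):
            N = set().union(*(neighbors[v] for v in S))
            worst = min(worst, len(N) / k)
    return worst

nbrs = random_bipartite_graph(12)
# A ratio above 1 means every left set of size up to n/2 expands by some (1+eps).
print(worst_expansion(nbrs, max_size=6))
```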

Before winding down this post, I can’t resist telling you that the concept of integrated information (though it wasn’t called that) played an interesting role in computational complexity in the 1970s.  As I understand the history, Leslie Valiant conjectured that Boolean functions f:{0,1}^n→{0,1}^n with a high degree of “information integration” (such as discrete analogues of the Fourier transform) might be good candidates for proving circuit lower bounds, which in turn might be baby steps toward P≠NP.  More strongly, Valiant conjectured that the property of information integration, all by itself, implied that such functions had to be at least somewhat computationally complex—i.e., that they couldn’t be computed by circuits of size O(n), or even required circuits of size Ω(n log n).  Alas, that hope was refuted by Valiant’s later discovery of linear-size superconcentrators.  Just as information integration doesn’t suffice for intelligence or consciousness, so Valiant learned that information integration doesn’t suffice for circuit lower bounds either.

As humans, we seem to have the intuition that global integration of information is such a powerful property that no “simple” or “mundane” computational process could possibly achieve it.  But our intuition is wrong.  If it were right, then we wouldn’t have linear-size superconcentrators or LDPC codes.

I should mention that I had the privilege of briefly speaking with Giulio Tononi (as well as his collaborator, Christof Koch) this winter at an FQXi conference in Puerto Rico.  At that time, I challenged Tononi with a much cruder, handwavier version of some of the same points that I made above.  Tononi’s response, as best as I can reconstruct it, was that it’s wrong to approach IIT like a mathematician; instead one needs to start “from the inside,” with the phenomenology of consciousness, and only then try to build general theories that can be tested against counterexamples.  This response perplexed me: of course you can start from phenomenology, or from anything else you like, when constructing your theory of consciousness.  However, once your theory has been constructed, surely it’s then fair game for others to try to refute it with counterexamples?  And surely the theory should be judged, like anything else in science or philosophy, by how well it withstands such attacks?

But let me end on a positive note.  In my opinion, the fact that Integrated Information Theory is wrong—demonstrably wrong, for reasons that go to its core—puts it in something like the top 2% of all mathematical theories of consciousness ever proposed.  Almost all competing theories of consciousness, it seems to me, have been so vague, fluffy, and malleable that they can only aspire to wrongness.

[Endnote: See also this related post, by the philosopher Eric Schwitzgebel: Why Tononi Should Think That the United States Is Conscious.  While the discussion is much more informal, and the proposed counterexample more debatable, the basic objection to IIT is the same.]


Update (5/22): Here are a few clarifications of this post that might be helpful.

(1) The stuff about zombies and the Hard Problem was simply meant as motivation and background for what I called the “Pretty-Hard Problem of Consciousness”—the problem that I take IIT to be addressing.  You can disagree with the zombie stuff without it having any effect on my arguments about IIT.

(2) I wasn’t arguing in this post that dualism is true, or that consciousness is irreducibly mysterious, or that there could never be any convincing theory that told us how much consciousness was present in a physical system.  All I was arguing was that, at any rate, IIT is not such a theory.

(3) Yes, it’s true that my demonstration of IIT’s falsehood assumes—as an axiom, if you like—that while we might not know exactly what we mean by “consciousness,” at any rate we’re talking about something that humans have to a greater extent than DVD players.  If you reject that axiom, then I’d simply want to define a new word for a certain quality that non-anesthetized humans seem to have and that DVD players seem not to, and clarify that that other quality is the one I’m interested in.

(4) For my counterexample, the reason I chose the Vandermonde matrix is not merely that it’s invertible, but that all of its submatrices are full-rank.  This is the property that’s relevant for producing a large value of the integrated information Φ; by contrast, note that the identity matrix is invertible, but produces a system with Φ=0.  (As another note, if we work over a large enough field, then a random matrix will have this same property with high probability—but I wanted an explicit example, and while the Vandermonde is far from the only one, it’s one of the simplest.)

(5) The n×n Vandermonde matrix only does what I want if we work over (say) a prime field F_p with p>>n elements.  Thus, it’s natural to wonder whether similar examples exist where the basic system variables are bits, rather than elements of F_p.  The answer is yes.  One way to get such examples is using the low-density parity check codes that I mention in the post.  Another common way to get Boolean examples, and which is also used in practice in error-correcting codes, is to start with the Vandermonde matrix (a.k.a. the Reed-Solomon code), and then combine it with an additional component that encodes the elements of F_p as strings of bits in some way.  Of course, you then need to check that doing this doesn’t harm the properties of the original Vandermonde matrix that you cared about (e.g., the “information integration”) too much, which causes some additional complication.

(6) Finally, it might be objected that my counterexamples ignored the issue of dynamics and “feedback loops”: they all consisted of unidirectional processes, which map inputs to outputs and then halt.  However, this can be fixed by the simple expedient of iterating the process over and over!  I.e., first map x to Wx, then map Wx to W^2x, and so on.  The integrated information should then be the same as in the unidirectional case.


Update (5/24): See a very interesting comment by David Chalmers.