Happy birthday to me!

Recently, lots of people have been asking me what I think about IIT—no, not the Indian Institutes of Technology, but Integrated Information Theory, a widely-discussed “mathematical theory of consciousness” developed over the past decade by the neuroscientist Giulio Tononi. One of the askers was Max Tegmark, who’s enthusiastically adopted IIT as a plank in his radical mathematizing platform (see his paper “Consciousness as a State of Matter”). When, in the comment thread about Max’s Mathematical Universe Hypothesis, I expressed doubts about IIT, Max challenged me to back up my doubts with a quantitative calculation.

So, this is the post that I promised to Max and all the others, about why I don’t believe IIT. And yes, it will contain that quantitative calculation.

But first, what *is* IIT? The central ideas of IIT, as I understand them, are:

(1) to propose a quantitative measure, called Φ, of the amount of “integrated information” in a physical system (i.e. information that can’t be localized in the system’s individual parts), and then

(2) to hypothesize that a physical system is “conscious” if and only if it has a large value of Φ—and indeed, that a system is *more* conscious the larger its Φ value.

I’ll return later to the precise definition of Φ—but basically, it’s obtained by minimizing, over all subdivisions of your physical system into two parts A and B, some measure of the mutual information between A’s outputs and B’s inputs and vice versa. Now, one immediate consequence of any definition like this is that all sorts of simple physical systems (a thermostat, a photodiode, etc.) will turn out to have small but nonzero Φ values. To his credit, Tononi cheerfully accepts the panpsychist implication: yes, he says, it really does mean that thermostats and photodiodes have small but nonzero levels of consciousness. On the other hand, for the theory to work, it had better be the case that Φ is *small* for “intuitively unconscious” systems, and only *large* for “intuitively conscious” systems. As I’ll explain later, this strikes me as a crucial point on which IIT fails.

The literature on IIT is too big to do it justice in a blog post. Strikingly, in addition to the “primary” literature, there’s now even a “secondary” literature, which treats IIT as a sort of established base on which to build further speculations about consciousness. Besides the Tegmark paper linked to above, see for example this paper by Maguire et al., and associated popular article. (Ironically, Maguire et al. use IIT to argue for the Penrose-like view that consciousness might have uncomputable aspects—a use diametrically opposed to Tegmark’s.)

Anyway, if you want to read a popular article about IIT, there are loads of them: see here for the *New York Times*’s, here for *Scientific American*’s, here for *IEEE Spectrum*’s, and here for the *New Yorker*’s. Unfortunately, none of those articles will tell you the meat (i.e., the definition of integrated information); for that you need technical papers, like this or this by Tononi, or this by Seth et al. IIT is also described in Christof Koch’s memoir *Consciousness: Confessions of a Romantic Reductionist*, which I read and enjoyed; as well as Tononi’s *Phi: A Voyage from the Brain to the Soul*, which I haven’t yet read. (Koch, one of the world’s best-known thinkers and writers about consciousness, has also become an evangelist for IIT.)

So, I want to explain why I don’t think IIT solves even the problem that it “plausibly *could have*” solved. But before I can do that, I need to do some philosophical ground-clearing. Broadly speaking, what is it that a “mathematical theory of consciousness” is supposed to do? What questions should it answer, and how should we judge whether it’s succeeded?

The most obvious thing a consciousness theory could do is to *explain why consciousness exists*: that is, to solve what David Chalmers calls the “Hard Problem,” by telling us how a clump of neurons is able to give rise to the taste of strawberries, the redness of red … you know, all that ineffable first-persony stuff. Alas, there’s a strong argument—one that I, personally, find completely convincing—why that’s too much to ask of *any* scientific theory. Namely, no matter what the third-person facts were, one could always imagine a universe consistent with those facts in which no one “really” experienced anything. So for example, if someone claims that integrated information “explains” why consciousness exists—nope, sorry! I’ve just conjured into my imagination beings whose Φ-values are a thousand, nay a trillion times larger than humans’, yet who are also philosophical zombies: entities that there’s nothing that it’s like to be. Granted, maybe such zombies can’t exist in the actual world: maybe, if you tried to create one, God would notice its large Φ-value and generously bequeath it a soul. But if so, then that’s a further fact about our world, a fact that manifestly couldn’t be deduced from the properties of Φ alone. Notice that the details of Φ are completely irrelevant to the argument.

Faced with this point, many scientifically-minded people start yelling and throwing things. They say that “zombies” and so forth are empty metaphysics, and that our only hope of learning about consciousness is to engage with actual facts about the brain. And that’s a perfectly reasonable position! As far as I’m concerned, you absolutely have the option of dismissing Chalmers’ Hard Problem as a navel-gazing distraction from the real work of neuroscience. The one thing you *can’t* do is have it both ways: that is, you can’t say both that the Hard Problem is meaningless, *and* that progress in neuroscience will soon solve the problem if it hasn’t already. You can’t maintain simultaneously that

(a) once you account for someone’s observed behavior and the details of their brain organization, there’s nothing further about consciousness to be explained, *and*

(b) remarkably, the XYZ theory of consciousness **can explain the “nothing further”** (e.g., by reducing it to integrated information processing), or might be on the verge of doing so.

As obvious as this sounds, it seems to me that large swaths of consciousness-theorizing can just be summarily rejected for trying to have their brain and eat it in precisely the above way.

Fortunately, I think IIT survives the above observations. For we can easily interpret IIT as trying to do something more “modest” than solve the Hard Problem, although still staggeringly audacious. Namely, we can say that IIT “merely” aims to tell us *which physical systems are associated with consciousness and which aren’t*, purely in terms of the systems’ physical organization. The test of such a theory is whether it can produce results agreeing with “commonsense intuition”: for example, whether it can affirm, from first principles, that (most) humans are conscious; that dogs and horses are also conscious but less so; that rocks, livers, bacteria colonies, and existing digital computers are *not* conscious (or are hardly conscious); and that a room full of people has no “mega-consciousness” over and above the consciousnesses of the individuals.

The reason it’s so important that the theory uphold “common sense” on these test cases is that, given the experimental inaccessibility of consciousness, *this is basically the only test available to us.* If the theory gets the test cases “wrong” (i.e., gives results diverging from common sense), it’s not clear that there’s anything else for the theory to get “right.” Of course, supposing we *had* a theory that got the test cases right, we could then have a field day with the *less*-obvious cases, programming our computers to tell us exactly how much consciousness is present in octopi, fetuses, brain-damaged patients, and hypothetical AI bots.

In my opinion, how to construct a theory that tells us which physical systems are conscious and which aren’t—giving answers that agree with “common sense” whenever the latter renders a verdict—is one of the deepest, most fascinating problems in all of science. Since I don’t know a standard name for the problem, I hereby call it the **Pretty-Hard Problem of Consciousness**. Unlike with the *Hard* Hard Problem, I don’t know of any philosophical reason why the Pretty-Hard Problem should be inherently unsolvable; but on the other hand, humans seem nowhere close to solving it (if we *had* solved it, then we could reduce the abortion, animal rights, and strong AI debates to “gentlemen, let us calculate!”).

Now, I regard IIT as a serious, honorable attempt to grapple with the Pretty-Hard Problem of Consciousness: something concrete enough to move the discussion forward. But I also regard IIT as a *failed* attempt on the problem. And I wish people would recognize its failure, learn from it, and move on.

In my view, IIT fails to solve the Pretty-Hard Problem because **it unavoidably predicts vast amounts of consciousness in physical systems that no sane person would regard as particularly “conscious” at all:** indeed, systems that do nothing but apply a low-density parity-check code, or other simple transformations of their input data. Moreover, IIT predicts not merely that these systems are “slightly” conscious (which would be fine), but that they can be unboundedly *more* conscious than humans are.

To justify that claim, I first need to define Φ. Strikingly, despite the large literature about Φ, I had a hard time finding a clear mathematical definition of it—one that not only listed formulas but fully defined the structures that the formulas were talking about. Complicating matters further, there are several competing definitions of Φ in the literature, including Φ_{DM} (discrete memoryless), Φ_{E} (empirical), and Φ_{AR} (autoregressive), which apply in different contexts (e.g., some take time evolution into account and others don’t). Nevertheless, I think I can define Φ in a way that will make sense to theoretical computer scientists. And crucially, the broad point I want to make about Φ won’t depend much on the details of its formalization anyway.

We consider a discrete system in a state x=(x_{1},…,x_{n})∈S^{n}, where S is a finite alphabet (the simplest case is S={0,1}). We imagine that the system evolves via an “updating function” f:S^{n}→S^{n}. Then the question that interests us is whether the x_{i}’s can be partitioned into two sets A and B, of roughly comparable size, such that the updates to the variables in A don’t depend very much on the variables in B and vice versa. If such a partition exists, then we say that the computation of f does not involve “global integration of information,” which on Tononi’s theory is a defining aspect of consciousness.

More formally, given a partition (A,B) of {1,…,n}, let us write an input y=(y_{1},…,y_{n})∈S^{n} to f in the form (y_{A},y_{B}), where y_{A} consists of the y variables in A and y_{B} consists of the y variables in B. Then we can think of f as mapping an input pair (y_{A},y_{B}) to an output pair (z_{A},z_{B}). Now, we define the “effective information” EI(A→B) as H(z_{B} | A random, y_{B}=x_{B}). Or in words, EI(A→B) is the Shannon entropy of the output variables in B, if the input variables in A are drawn uniformly at random, while the input variables in B are fixed to their values in x. It’s a measure of the dependence of B on A in the computation of f(x). Similarly, we define

EI(B→A) := H(z_{A} | B random, y_{A}=x_{A}).

We then consider the sum

Φ(A,B) := EI(A→B) + EI(B→A).

Intuitively, we’d like the integrated information Φ=Φ(f,x) to be the *minimum* of Φ(A,B), over all 2^{n}-2 possible partitions of {1,…,n} into nonempty sets A and B. The idea is that Φ should be large, if and only if it’s *not* possible to partition the variables into two sets A and B, in such a way that not much information flows from A to B or vice versa when f(x) is computed.

However, no sooner do we propose this than we notice a technical problem. What if A is much larger than B, or vice versa? As an extreme case, what if A={1,…,n-1} and B={n}? In that case, we’ll have Φ(A,B)≤2log_{2}|S|, but only for the boring reason that there’s hardly any entropy in B as a whole, to either influence A or be influenced by it. For this reason, Tononi proposes a fix where we normalize each Φ(A,B) by dividing it by min{|A|,|B|}. He then defines the integrated information Φ to be Φ(A,B), for whichever partition (A,B) minimizes the ratio Φ(A,B) / min{|A|,|B|}. (Unless I missed it, Tononi never specifies what we should do if there are multiple (A,B)’s that all achieve the same minimum of Φ(A,B) / min{|A|,|B|}. I’ll return to that point later, along with other idiosyncrasies of the normalization procedure.)
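Since the definition is easier to digest in code, here’s a brute-force Python sketch of the formalization above (my own illustration, to be clear, not code from the IIT literature). It computes EI and Φ for tiny Boolean systems, and it simply takes the first bipartition achieving the minimum normalized value, papering over the tie-breaking ambiguity just discussed:

```python
import itertools
from collections import Counter
from math import log2

def EI(f, x, A, B, alphabet=(0, 1)):
    """EI(A->B): Shannon entropy of the outputs in B, when the inputs in A
    are drawn uniformly at random and the inputs in B are fixed to x."""
    counts = Counter()
    for ya in itertools.product(alphabet, repeat=len(A)):
        y = list(x)
        for i, v in zip(A, ya):
            y[i] = v
        z = f(tuple(y))
        counts[tuple(z[i] for i in B)] += 1
    total = sum(counts.values())
    return -sum(c / total * log2(c / total) for c in counts.values())

def Phi(f, x, alphabet=(0, 1)):
    """Tononi's prescription, as formalized above: find a bipartition (A,B)
    minimizing Phi(A,B)/min(|A|,|B|), then return the *unnormalized*
    Phi(A,B).  Ties are broken arbitrarily (the ambiguity noted above)."""
    n = len(x)
    best_norm, best_phi = None, None
    for mask in range(1, 2 ** (n - 1)):   # each unordered bipartition once
        A = [i for i in range(n) if (mask >> i) & 1]
        B = [i for i in range(n) if not (mask >> i) & 1]
        phi_ab = EI(f, x, A, B, alphabet) + EI(f, x, B, A, alphabet)
        norm = phi_ab / min(len(A), len(B))
        if best_norm is None or norm < best_norm:
            best_norm, best_phi = norm, phi_ab
    return best_phi

# A 4-bit cyclic shift: each output depends on a single input, so cutting
# the ring at two points severs only a little information flow.
rotate = lambda y: (y[3], y[0], y[1], y[2])
print(Phi(rotate, (0, 0, 0, 0)))   # -> 2.0
```

Here the best cut is a balanced one through two “wires” of the ring, each carrying one bit, so Φ comes out to 2 bits regardless of the ring’s length: exactly the sort of low integration the definition is after.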

Tononi gives some simple examples of the computation of Φ, showing that it is indeed larger for systems that are more “richly interconnected” in an intuitive sense. He speculates, plausibly, that Φ is quite large for (some reasonable model of) the interconnection network of the human brain—and probably larger for the brain than for typical electronic devices (which tend to be highly modular in design, thereby decreasing their Φ), or, let’s say, than for other organs like the pancreas. Ambitiously, he even speculates at length about how a large value of Φ might be connected to the phenomenology of consciousness.

To be sure, empirical work in integrated information theory has been hampered by three difficulties. The first difficulty is that we don’t *know* the detailed interconnection network of the human brain. The second difficulty is that it’s not even clear what we should define that network to be: for example, as a crude first attempt, should we assign a Boolean variable to each neuron, which equals 1 if the neuron is currently firing and 0 if it’s not firing, and let f be the function that updates those variables over a timescale of, say, a millisecond? What other variables do we need—firing rates, internal states of the neurons, neurotransmitter levels? Is choosing many of these variables uniformly at random (for the purpose of calculating Φ) really a reasonable way to “randomize” the variables, and if not, what other prescription should we use?

The third and final difficulty is that, even if we knew exactly what we meant by “the f and x corresponding to the human brain,” and even if we had complete knowledge of that f and x, computing Φ(f,x) could still be computationally intractable. For recall that the definition of Φ involved minimizing a quantity over all the exponentially-many possible bipartitions of {1,…,n}. While it’s not directly relevant to my arguments in this post, I leave it as a challenge for interested readers to pin down the computational complexity of approximating Φ to some reasonable precision, assuming that f is specified by a polynomial-size Boolean circuit, or alternatively, by an NC^{0} function (i.e., a function each of whose outputs depends on only a constant number of the inputs). (Presumably Φ will be #P-hard to calculate *exactly*, but only because calculating entropy exactly is a #P-hard problem—that’s not interesting.)

I conjecture that approximating Φ is an NP-hard problem, even for restricted families of f’s like NC^{0} circuits—which invites the amusing thought that God, or Nature, would need to solve an NP-hard problem just to decide whether or not to imbue a given physical system with consciousness! (Alas, if you wanted to exploit this as a practical approach for solving NP-complete problems such as 3SAT, you’d need to do a rather drastic experiment on your own brain—an experiment whose result would be to render you unconscious if your 3SAT instance was satisfiable, or conscious if it was unsatisfiable! In neither case would you be able to communicate the outcome of the experiment to anyone else, nor would you have any recollection of the outcome after the experiment was finished.) In the other direction, it would also be interesting to *upper*-bound the complexity of approximating Φ. Because of the need to estimate the entropies of distributions (even given a bipartition (A,B)), I don’t know that this problem is in NP—the best I can observe is that it’s in AM.

In any case, my own reason for rejecting IIT has nothing to do with any of the “merely practical” issues above: neither the difficulty of defining f and x, nor the difficulty of learning them, nor the difficulty of calculating Φ(f,x). My reason is much more basic, striking directly at the hypothesized link between “integrated information” and consciousness. Specifically, I claim the following:

*Yes, it might be a decent rule of thumb that, if you want to know which brain regions (for example) are associated with consciousness, you should start by looking for regions with lots of information integration. And yes, it’s even possible, for all I know, that having a large Φ-value is one necessary condition among many for a physical system to be conscious. However, having a large Φ-value is certainly not a **sufficient** condition for consciousness, or even for the appearance of consciousness. As a consequence, Φ can’t possibly capture the essence of what makes a physical system conscious, or even of what makes a system **look** conscious to external observers.*

The demonstration of this claim is embarrassingly simple. Let S=F_{p}, where p is some prime sufficiently larger than n, and let V be an n×n Vandermonde matrix over F_{p}—that is, a matrix whose (i,j) entry equals i^{j-1} (mod p). Then let f:S^{n}→S^{n} be the update function defined by f(x)=Vx. Now, for p large enough, the Vandermonde matrix is well-known to have the property that *every submatrix is full-rank* (i.e., “every submatrix preserves all the information that it’s possible to preserve about the part of x that it acts on”). And this implies that, regardless of which bipartition (A,B) of {1,…,n} we choose, we’ll get

EI(A→B) = EI(B→A) = min{|A|,|B|} log_{2}p,

and hence

Φ(A,B) = EI(A→B) + EI(B→A) = 2 min{|A|,|B|} log_{2}p,

or after normalizing,

Φ(A,B) / min{|A|,|B|} = 2 log_{2}p.

Or in words: *the normalized information integration has the same value—namely, the maximum value!—for every possible bipartition*. Now, I’d like to proceed from here to a determination of Φ itself, but I’m prevented from doing so by the ambiguity in the definition of Φ that I noted earlier. Namely, since *every* bipartition (A,B) minimizes the normalized value Φ(A,B) / min{|A|,|B|}, in theory I ought to be able to pick any of them for the purpose of calculating Φ. But the *unnormalized* value Φ(A,B), which gives the final Φ, can vary greatly across bipartitions: from 2 log_{2}p (if min{|A|,|B|}=1) all the way up to n log_{2}p (if min{|A|,|B|}=n/2). So at this point, Φ is simply undefined.
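For concreteness, here’s a quick brute-force check of the “every submatrix is full-rank” property: my own sketch, not anything from the IIT literature. At this toy size (n=4), taking p=1009 provably suffices, since every generalized Vandermonde determinant that arises is a positive integer less than p:

```python
import itertools

def vandermonde(n, p):
    # (i, j) entry = i^(j-1) mod p, with i and j running from 1 to n
    return [[pow(i, j - 1, p) for j in range(1, n + 1)]
            for i in range(1, n + 1)]

def rank_mod_p(M, p):
    """Rank of a matrix over F_p, by Gaussian elimination."""
    M = [row[:] for row in M]
    rank = 0
    for c in range(len(M[0])):
        piv = next((r for r in range(rank, len(M)) if M[r][c] % p), None)
        if piv is None:
            continue
        M[rank], M[piv] = M[piv], M[rank]
        inv = pow(M[rank][c], p - 2, p)   # inverse via Fermat's little theorem
        M[rank] = [v * inv % p for v in M[rank]]
        for r in range(len(M)):
            if r != rank and M[r][c] % p:
                M[r] = [(a - M[r][c] * b) % p for a, b in zip(M[r], M[rank])]
        rank += 1
    return rank

n, p = 4, 1009
V = vandermonde(n, p)

# Every k x k submatrix (any rows, any columns) should have rank k.
every_submatrix_full_rank = all(
    rank_mod_p([[V[r][c] for c in cols] for r in rows], p) == k
    for k in range(1, n + 1)
    for rows in itertools.combinations(range(n), k)
    for cols in itertools.combinations(range(n), k))
print(every_submatrix_full_rank)   # -> True
```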

On the other hand, I can solve this problem, and *make* Φ well-defined, by an ironic little hack. The hack is to replace the Vandermonde matrix V by an n×n matrix W, which consists of the first n/2 rows of the Vandermonde matrix each repeated twice (assume for simplicity that n is a multiple of 4). As before, we let f(x)=Wx. Then if we set A={1,…,n/2} and B={n/2+1,…,n}, we can achieve

EI(A→B) = EI(B→A) = (n/4) log_{2}p,

Φ(A,B) = EI(A→B) + EI(B→A) = (n/2) log_{2}p,

and hence

Φ(A,B) / min{|A|,|B|} = log_{2}p.

In this case, I claim that the above is the *unique* bipartition that minimizes the normalized integrated information Φ(A,B) / min{|A|,|B|}, up to trivial reorderings of the rows. To prove this claim: if |A|=|B|=n/2, then clearly we minimize Φ(A,B) by maximizing the number of repeated rows in A and the number of repeated rows in B, exactly as we did above. Thus, assume |A|≤|B| (the case |B|≤|A| is analogous). Then clearly

EI(B→A) ≥ (|A|/2) log_{2}p,

while

EI(A→B) ≥ min{|A|, |B|/2} log_{2}p.

So if we let |A|=cn and |B|=(1-c)n for some c∈(0,1/2], then

Φ(A,B) ≥ [c/2 + min{c, (1-c)/2}] n log_{2}p,

and

Φ(A,B) / min{|A|,|B|} = Φ(A,B) / |A| ≥ [1/2 + min{1, 1/(2c) – 1/2}] log_{2}p.

But the above expression is uniquely minimized when c=1/2. Hence the normalized integrated information is minimized essentially uniquely by setting A={1,…,n/2} and B={n/2+1,…,n}, and we get

Φ = Φ(A,B) = (n/2) log_{2}p,

which is quite a large value (only a factor of 2 less than the trivial upper bound of n log_{2}p).
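If you’d rather see the above analysis verified numerically than read the proof, here’s a brute-force check at a toy size (my own sketch; n=4 and p=5 are of course nowhere near the asymptotic regime, but the structure of the minimizer is already visible):

```python
import itertools
from collections import Counter
from math import log2, isclose

n, p = 4, 5
V = [[pow(i, j, p) for j in range(n)] for i in range(1, n + 1)]
W = [V[i // 2] for i in range(n)]   # first n/2 Vandermonde rows, each twice
f = lambda y: tuple(sum(w * v for w, v in zip(row, y)) % p for row in W)

def EI(A, B, x=(0,) * n):
    """Entropy of the outputs in B when the inputs in A are randomized."""
    counts = Counter()
    for ya in itertools.product(range(p), repeat=len(A)):
        y = list(x)
        for i, v in zip(A, ya):
            y[i] = v
        counts[tuple(f(y)[i] for i in B)] += 1
    t = sum(counts.values())
    return -sum(c / t * log2(c / t) for c in counts.values())

normalized = {}
for mask in range(1, 2 ** (n - 1)):     # each unordered bipartition once
    A = [i for i in range(n) if (mask >> i) & 1]
    B = [i for i in range(n) if not (mask >> i) & 1]
    normalized[tuple(A)] = (EI(A, B) + EI(B, A)) / min(len(A), len(B))

# The "aligned" balanced bipartition uniquely minimizes the normalized
# value, at log2(p) ...
assert min(normalized, key=normalized.get) == (0, 1)
assert isclose(normalized[(0, 1)], log2(p))
# ... and the unnormalized Phi it yields is (n/2)*log2(p), as claimed.
assert isclose(EI([0, 1], [2, 3]) + EI([2, 3], [0, 1]), (n / 2) * log2(p))
```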

Now, why did I call the switch from V to W an “ironic little hack”? Because, in order to ensure a large value of Φ, I *decreased*—by a factor of 2, in fact—the amount of “information integration” that was intuitively happening in my system! I did that in order to decrease the normalized value Φ(A,B) / min{|A|,|B|} for the particular bipartition (A,B) that I cared about, thereby ensuring that that (A,B) would be chosen over all the other bipartitions, thereby *increasing* the final, unnormalized value Φ(A,B) that Tononi’s prescription tells me to return. I hope I’m not alone in fearing that this illustrates a disturbing non-robustness in the definition of Φ.

But let’s leave that issue aside; maybe it can be ameliorated by fiddling with the definition. The broader point is this: **I’ve shown that my system—the system that simply applies the matrix W to an input vector x—has an *enormous* amount of integrated information Φ.** Indeed, this system’s Φ equals half of its entire information content. So for example, if n were 10^{14} or so—something that wouldn’t be hard to arrange with existing computers—then this system’s Φ would exceed any plausible upper bound on the integrated information content of the human brain.

And yet this Vandermonde system doesn’t even come close to doing anything that we’d want to call *intelligent*, let alone conscious! When you apply the Vandermonde matrix to a vector, all you’re really doing is mapping the list of coefficients of a degree-(n-1) polynomial over F_{p}, to the values of the polynomial on the n points 1,2,…,n. Now, evaluating a polynomial on a set of points turns out to be an excellent way to achieve “integrated information,” with *every* subset of outputs as correlated with every subset of inputs as it could possibly be. In fact, that’s precisely why polynomials are used so heavily in error-correcting codes, such as the Reed-Solomon code, employed (among many other places) in CDs and DVDs. But that doesn’t imply that every time you start up your DVD player you’re lighting the fire of consciousness. It doesn’t even *hint* at such a thing. All it tells us is that you can have integrated information without consciousness (or even intelligence)—just like you can have computation without consciousness, and unpredictability without consciousness, and electricity without consciousness.
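To make the “it’s just polynomial evaluation” point completely concrete, here’s a few-line check (my own sketch, with an arbitrary made-up polynomial):

```python
p, coeffs = 101, [5, 7, 2, 9]    # the polynomial 5 + 7t + 2t^2 + 9t^3 over F_101
n = len(coeffs)

# Row i of the Vandermonde matrix is (i^0, i^1, ..., i^(n-1)) mod p ...
V = [[pow(i, j, p) for j in range(n)] for i in range(1, n + 1)]
Vx = [sum(a * b for a, b in zip(row, coeffs)) % p for row in V]

# ... so multiplying by V just evaluates the polynomial at the points 1,...,n.
evals = [sum(c * pow(t, j, p) for j, c in enumerate(coeffs)) % p
         for t in range(1, n + 1)]
assert Vx == evals               # both equal [23, 99, 85, 35]
```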

It might be objected that, in defining my “Vandermonde system,” I was too abstract and mathematical. I said that the system maps the input vector x to the output vector Wx, but I didn’t say anything about *how* it did so. To perform a computation—even a computation as simple as a matrix-vector multiply—won’t we need a physical network of wires, logic gates, and so forth? And in any realistic such network, won’t each logic gate be directly connected to at most *a few* other gates, rather than to billions of them? And if we define the integrated information Φ, not directly in terms of the inputs and outputs of the function f(x)=Wx, but in terms of all the actual logic gates involved in computing f, isn’t it possible or even likely that Φ will go back down?

This is a good objection, but I don’t think it can rescue IIT. For we can achieve the same qualitative effect that I illustrated with the Vandermonde matrix—the same “global information integration,” in which every large set of outputs depends heavily on every large set of inputs—even using much “sparser” computations, ones where each *individual* output depends on only a few of the inputs. This is precisely the idea behind low-density parity check (LDPC) codes, which have had a major impact on coding theory over the past two decades. Of course, one would need to muck around a bit to construct a physical system based on LDPC codes whose integrated information Φ was *provably* large, and for which there were no wildly-unbalanced bipartitions that achieved lower Φ(A,B)/min{|A|,|B|} values than the balanced bipartitions one cared about. But I feel safe in asserting that this could be done, similarly to how I did it with the Vandermonde matrix.

More generally, we can achieve pretty good information integration by hooking together logic gates according to any bipartite expander graph: that is, any graph with n vertices on each side, such that every k vertices on the left side are connected to at least min{(1+ε)k,n} vertices on the right side, for some constant ε>0. And it’s well-known how to create expander graphs whose degree (i.e., the number of edges incident to each vertex, or the number of wires coming out of each logic gate) is a constant, such as 3. One can do so either by plunking down edges at random, or (less trivially) by explicit constructions from algebra or combinatorics. And as indicated in the title of this post, I feel 100% confident in saying that the so-constructed expander graphs are **not conscious!** The brain might be an expander, but not every expander is a brain.
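As a toy illustration (mine, and purely empirical: at this size random wiring merely expands with high probability, whereas the explicit constructions come with proofs), one can sample a degree-3 bipartite graph and measure its worst-case vertex expansion by brute force:

```python
import itertools
import random

random.seed(0)
n, d = 10, 3

# Each left vertex wires to d distinct right vertices chosen at random.
nbrs = [random.sample(range(n), d) for _ in range(n)]

def worst_expansion(k):
    """min |N(S)|/|S| over all left subsets S of size k."""
    return min(len(set().union(*(nbrs[v] for v in S))) / k
               for S in itertools.combinations(range(n), k))

for k in range(1, n // 2 + 1):
    print(k, worst_expansion(k))
```

A left subset of size k with worst-case ratio noticeably above 1 for all k up to n/2 is exactly the (1+ε) expansion in the definition above.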

Before winding down this post, I can’t resist telling you that the concept of integrated information (though it wasn’t called that) played an interesting role in computational complexity in the 1970s. As I understand the history, Leslie Valiant conjectured that Boolean functions f:{0,1}^{n}→{0,1}^{n} with a high degree of “information integration” (such as discrete analogues of the Fourier transform) might be good candidates for proving circuit lower bounds, which in turn might be baby steps toward P≠NP. More strongly, Valiant conjectured that the property of information integration, all by itself, implied that such functions had to be at least *somewhat* computationally complex—i.e., that they couldn’t be computed by circuits of size O(n), and perhaps even required circuits of size Ω(n log n). Alas, that hope was refuted by Valiant’s later discovery of linear-size superconcentrators. Just as information integration doesn’t suffice for intelligence or consciousness, so Valiant learned that information integration doesn’t suffice for circuit lower bounds either.

As humans, we seem to have the intuition that *global integration of information* is such a powerful property that no “simple” or “mundane” computational process could possibly achieve it. But our intuition is wrong. If it were right, then we wouldn’t have linear-size superconcentrators or LDPC codes.

I should mention that I had the privilege of briefly speaking with Giulio Tononi (as well as his collaborator, Christof Koch) this winter at an FQXi conference in Puerto Rico. At that time, I challenged Tononi with a much cruder, handwavier version of some of the same points that I made above. Tononi’s response, as best as I can reconstruct it, was that it’s wrong to approach IIT like a mathematician; instead one needs to start “from the inside,” with the phenomenology of consciousness, and only then try to build general theories that can be tested against counterexamples. This response perplexed me: *of course* you can start from phenomenology, or from anything else you like, when constructing your theory of consciousness. However, once your theory *has* been constructed, surely it’s then fair game for others to try to refute it with counterexamples? And surely the theory should be judged, like anything else in science or philosophy, by how well it withstands such attacks?

But let me end on a positive note. In my opinion, the fact that Integrated Information Theory is wrong—demonstrably wrong, for reasons that go to its core—puts it in something like the top 2% of all mathematical theories of consciousness ever proposed. Almost all competing theories of consciousness, it seems to me, have been so vague, fluffy, and malleable that they can only **aspire** to wrongness.

*[Endnote: See also this related post, by the philosopher Eric Schwitzgebel: Why Tononi Should Think That the United States Is Conscious. While the discussion is much more informal, and the proposed counterexample more debatable, the basic objection to IIT is the same.]*

**Update (5/22):** Here are a few clarifications of this post that might be helpful.

(1) The stuff about zombies and the Hard Problem was simply meant as motivation and background for what I called the “Pretty-Hard Problem of Consciousness”—the problem that I take IIT to be addressing. You can disagree with the zombie stuff without it having any effect on my arguments about IIT.

(2) I wasn’t arguing in this post that dualism is true, or that consciousness is irreducibly mysterious, or that there could never be any convincing theory that told us how much consciousness was present in a physical system. All I was arguing was that, at any rate, IIT is not such a theory.

(3) Yes, it’s true that my demonstration of IIT’s falsehood assumes—as an axiom, if you like—that while we might not know exactly what we mean by “consciousness,” at any rate we’re talking about something that humans have to a greater extent than DVD players. If you reject that axiom, then I’d simply want to define a *new* word for a certain quality that non-anesthetized humans seem to have and that DVD players seem not to, and clarify that that other quality is the one I’m interested in.

(4) For my counterexample, the reason I chose the Vandermonde matrix is not merely that it’s invertible, but that all of its submatrices are full-rank. *This* is the property that’s relevant for producing a large value of the integrated information Φ; by contrast, note that the identity matrix is invertible, but produces a system with Φ=0. (As another note, if we work over a large enough field, then a *random* matrix will have this same property with high probability—but I wanted an explicit example, and while the Vandermonde is far from the only one, it’s one of the simplest.)

(5) The n×n Vandermonde matrix only does what I want if we work over (say) a prime field F_{p} with p>>n elements. Thus, it’s natural to wonder whether similar examples exist where the basic system variables are bits, rather than elements of F_{p}. The answer is yes. One way to get such examples is using the low-density parity check codes that I mention in the post. Another common way to get Boolean examples, and which is also used in practice in error-correcting codes, is to *start* with the Vandermonde matrix (a.k.a. the Reed-Solomon code), and then combine it with an additional component that encodes the elements of F_{p} as strings of bits in some way. Of course, you then need to check that doing this doesn’t harm the properties of the original Vandermonde matrix that you cared about (e.g., the “information integration”) too much, which causes some additional complication.

(6) Finally, it might be objected that my counterexamples ignored the issue of dynamics and “feedback loops”: they all consisted of unidirectional processes, which map inputs to outputs and then halt. However, this can be fixed by the simple expedient of iterating the process over and over! I.e., first map x to Wx, then map Wx to W^{2}x, and so on. The integrated information should then be the same as in the unidirectional case.

**Update (5/24):** See a very interesting comment by David Chalmers.