Giulio Tononi and Me: A Phi-nal Exchange

You might recall that last week I wrote a post criticizing Integrated Information Theory (IIT), and its apparent implication that a simple Reed-Solomon decoding circuit would, if scaled to a large enough size, bring into being a consciousness vastly exceeding our own.  On Wednesday Giulio Tononi, the creator of IIT, was kind enough to send me a fascinating 14-page rebuttal, and to give me permission to share it here:

Why Scott should stare at a blank wall and reconsider (or, the conscious grid)

If you’re interested in this subject at all, then I strongly recommend reading Giulio’s response before continuing further.   But for those who want the tl;dr: Giulio, not one to battle strawmen, first restates my own argument against IIT with crystal clarity.  And while he has some minor quibbles (e.g., apparently my calculations of Φ didn’t use the most recent, “3.0” version of IIT), he wisely sets those aside in order to focus on the core question: according to IIT, are all sorts of simple expander graphs conscious?

There, he doesn’t “bite the bullet” so much as devour a bullet hoagie with mustard.  He affirms that, yes, according to IIT, a large network of XOR gates arranged in a simple expander graph is conscious.  Indeed, he goes further, and says that the “expander” part is superfluous: even a network of XOR gates arranged in a 2D square grid is conscious.  In my language, Giulio is simply pointing out here that a √n×√n square grid has decent expansion: good enough to produce a Φ-value of about √n, if not the information-theoretic maximum of n (or n/2, etc.) that an expander graph could achieve.  And apparently, by Giulio’s lights, Φ=√n is sufficient for consciousness!

While Giulio never mentions this, it’s interesting to observe that logic gates arranged in a 1-dimensional line would produce a tiny Φ-value (Φ=O(1)).  So even by IIT standards, such a linear array would not be conscious.  Yet the jump from a line to a two-dimensional grid is enough to light the spark of Mind.
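
To make those scaling claims concrete, here’s a small illustrative script—my own toy addition, not anything from Giulio’s papers.  It uses the minimum number of wires crossing a balanced bipartition as a crude stand-in for Φ (a drastic simplification of the real definition), but it’s enough to show why a line gives Φ=O(1), a √n×√n grid gives about √n, and an expander-like graph gives Ω(n):

    # Crude proxy for the Phi-scaling claims above: the smallest edge cut over
    # all balanced bipartitions.  This is NOT the actual IIT Phi, just an
    # illustration of how "integrated" the three wiring diagrams are.
    from itertools import combinations
    import networkx as nx

    def min_balanced_cut(G):
        """Brute-force the smallest edge cut over all balanced bipartitions
        (exponential in n, so keep the examples tiny)."""
        nodes = list(G.nodes())
        n = len(nodes)
        best = float("inf")
        for left in combinations(nodes, n // 2):
            left = set(left)
            cut = sum(1 for u, v in G.edges() if (u in left) != (v in left))
            best = min(best, cut)
        return best

    n = 16
    examples = {
        "1D line":  nx.path_graph(n),
        "2D grid":  nx.grid_2d_graph(4, 4),
        "expander": nx.random_regular_graph(4, n, seed=0),  # stand-in for an expander
    }
    for name, G in examples.items():
        print(name, min_balanced_cut(G))
    # Typically: the line's cut stays at 1, the grid's grows like sqrt(n),
    # and the random regular graph's grows roughly linearly in n.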

Personally, I give Giulio enormous credit for having the intellectual courage to follow his theory wherever it leads.  When the critics point out, “if your theory were true, then the Moon would be made of peanut butter,” he doesn’t try to wiggle out of the prediction, but proudly replies, “yes, chunky peanut butter—and you forgot to add that the Earth is made of Nutella!”

Yet even as we admire Giulio’s honesty and consistency, his stance might also prompt us, gently, to take another look at this peanut-butter-moon theory, and at what grounds we had for believing it in the first place.  In his response essay, Giulio offers four arguments (by my count) for accepting IIT despite, or even because of, its conscious-grid prediction: one “negative” argument and three “positive” ones.  Alas, while your Φ-lage may vary, I didn’t find any of the four arguments persuasive.  In the rest of this post, I’ll go through them one by one and explain why.

I. The Copernicus-of-Consciousness Argument

Like many commenters on my last post, Giulio heavily criticizes my appeal to “common sense” in rejecting IIT.  Sure, he says, I might find it “obvious” that a huge Vandermonde matrix, or its physical instantiation, isn’t conscious.  But didn’t people also find it “obvious” for millennia that the Sun orbits the Earth?  Isn’t the entire point of science to challenge common sense?  Clearly, then, the test of a theory of consciousness is not how well it upholds “common sense,” but how well it fits the facts.

The above position sounds pretty convincing: who could dispute that observable facts trump personal intuitions?  The trouble is, what are the observable facts when it comes to consciousness?  The anti-common-sense view gets all its force by pretending that we’re in a relatively late stage of research—namely, the stage of taking an agreed-upon scientific definition of consciousness, and applying it to test our intuitions—rather than in an extremely early stage, of agreeing on what the word “consciousness” is even supposed to mean.

Since I think this point is extremely important—and of general interest, beyond just IIT—I’ll expand on it with some analogies.

Suppose I told you that, in my opinion, the ε-δ definition of continuous functions—the one you learn in calculus class—failed to capture the true meaning of continuity.  Suppose I told you that I had a new, better definition of continuity—and amazingly, when I tried out my definition on some examples, it turned out that ⌊x⌋ (the floor function) was continuous, whereas x² had discontinuities, though only at 17.5 and 42.

You would probably ask what I was smoking, and whether you could have some.  But why?  Why shouldn’t the study of continuity produce counterintuitive results?  After all, even the standard definition of continuity leads to some famously weird results, like that x sin(1/x) (extended by 0 at the origin) is a continuous function, even though sin(1/x) has no continuous extension at x=0.  And it’s not as if the standard definition is God-given: people had been using words like “continuous” for centuries before Bolzano, Weierstrass, et al. formalized the ε-δ definition, a definition that millions of calculus students still find far from intuitive.  So why shouldn’t there be a different, better definition of “continuous,” and why shouldn’t it reveal that a step function is continuous while a parabola is not?
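
(For completeness, here’s the textbook material behind these examples—the ε-δ definition and the one-line argument for x sin(1/x)—written out in LaTeX; nothing here is specific to the IIT debate:)

    % epsilon-delta definition of continuity at a point a
    f \text{ is continuous at } a \iff
      \forall \varepsilon > 0 \ \exists \delta > 0 \ \forall x:\
      |x - a| < \delta \implies |f(x) - f(a)| < \varepsilon .

    % g(x) = x sin(1/x) for x != 0, extended by g(0) = 0, is continuous at 0:
    |g(x) - g(0)| = |x \sin(1/x)| \le |x| < \varepsilon
      \quad \text{whenever } |x| < \delta = \varepsilon .

    % sin(1/x) takes both values +1 and -1 in every interval (0, delta),
    % so no choice of value at 0 can make it continuous there.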

In my view, the way out of this conceptual jungle is to realize that, before any formal definitions, any ε’s and δ’s, we start with an intuition for what we’re trying to capture by the word “continuous.”  And if we press hard enough on what that intuition involves, we’ll find that it largely consists of various “paradigm-cases.”  A continuous function, we’d say, is a function like 3x, or x², or sin(x), while a discontinuity is the kind of thing that the function 1/x has at x=0, or that ⌊x⌋ has at every integer point.  Crucially, we use the paradigm-cases to guide our choice of a formal definition—not vice versa!  It’s true that, once we have a formal definition, we can then apply it to “exotic” cases like x sin(1/x), and we might be surprised by the results.  But the paradigm-cases are different.  If, for example, our definition told us that x² was discontinuous, that wouldn’t be a “surprise”; it would just be evidence that we’d picked a bad definition.  The definition failed at the only task for which it could have succeeded: namely, that of capturing what we meant.

Some people might say that this is all well and good in pure math, but empirical science has no need for squishy intuitions and paradigm-cases.  Nothing could be further from the truth.  Suppose, again, that I told you that physicists since Kelvin had gotten the definition of temperature all wrong, and that I had a new, better definition.  And, when I built a Scott-thermometer that measures true temperatures, it delivered the shocking result that boiling water is actually colder than ice.  You’d probably tell me where to shove my Scott-thermometer.  But wait: how do you know that I’m not the Copernicus of heat, and that future generations won’t celebrate my breakthrough while scoffing at your small-mindedness?

I’d say there’s an excellent answer: because what we mean by heat is “whatever it is that boiling water has more of than ice” (along with dozens of other paradigm-cases).  And because, if you use a thermometer to check whether boiling water is hotter than ice, then the term for what you’re doing is calibrating your thermometer.  When the clock strikes 13, it’s time to fix the clock, and when the thermometer says boiling water’s colder than ice, it’s time to replace the thermometer—or if needed, even the entire theory on which the thermometer is based.

Ah, you say, but doesn’t modern physics define heat in a completely different, non-intuitive way, in terms of molecular motion?  Yes, and that turned out to be a superb definition—not only because it was precise, explanatory, and applicable to cases far beyond our everyday experience, but crucially, because it matched common sense on the paradigm-cases.  If it hadn’t given sensible results for boiling water and ice, then the only possible conclusion would be that, whatever new quantity physicists had defined, they shouldn’t call it “temperature,” or claim that their quantity measured the amount of “heat.”  They should call their new thing something else.

The implications for the consciousness debate are obvious.  When we consider whether to accept IIT’s equation of integrated information with consciousness, we don’t start with any agreed-upon, independent notion of consciousness against which the new notion can be compared.  The main things we start with, in my view, are certain paradigm-cases that gesture toward what we mean:

  • You are conscious (though not when anesthetized).
  • (Most) other people appear to be conscious, judging from their behavior.
  • Many animals appear to be conscious, though probably to a lesser degree than humans (and the degree of consciousness in each particular species is far from obvious).
  • A rock is not conscious.  A wall is not conscious.  A Reed-Solomon code is not conscious.  Microsoft Word is not conscious (though a Word macro that passed the Turing test conceivably would be).

Fetuses, coma patients, fish, and hypothetical AIs are the x sin(1/x)’s of consciousness: they’re the tougher cases, the ones where we might actually need a formal definition to adjudicate the truth.

Now, given a proposed formal definition for an intuitive concept, how can we check whether the definition is talking about the same thing we were trying to get at before?  Well, we can check whether the definition at least agrees that parabolas are continuous while step functions are not, that boiling water is hot while ice is cold, and that we’re conscious while Reed-Solomon decoders are not.  If so, then the definition might be picking out the same thing that we meant, or were trying to mean, pre-theoretically (though we still can’t be certain).  If not, then the definition is certainly talking about something else.

What else can we do?

II. The Axiom Argument

According to Giulio, there is something else we can do, besides relying on paradigm-cases.  That something else, in his words, is to lay down “postulates about how the physical world should be organized to support the essential properties of experience,” then use those postulates to derive a consciousness-measuring quantity.

OK, so what are IIT’s postulates?  Here’s how Giulio states the five postulates leading to Φ in his response essay (he “derives” these from earlier “phenomenological axioms,” which you can find in the essay):

  1. A system of mechanisms exists intrinsically if it can make a difference to itself, by affecting the probability of its past and future states, i.e. it has causal power (existence).
  2. It is composed of submechanisms each with their own causal power (composition).
  3. It generates a conceptual structure that is the specific way it is, as specified by each mechanism’s concept — this is how each mechanism affects the probability of the system’s past and future states (information).
  4. The conceptual structure is unified — it cannot be decomposed into independent components (integration).
  5. The conceptual structure is singular — there can be no superposition of multiple conceptual structures over the same mechanisms and intervals of time.

From my standpoint, these postulates have three problems.  First, I don’t really understand them.  Second, insofar as I do understand them, I don’t necessarily accept their truth.  And third, insofar as I do accept their truth, I don’t see how they lead to Φ.

To elaborate a bit:

I don’t really understand the postulates.  I realize that the postulates are explicated further in the many papers on IIT.  Unfortunately, while it’s possible that I missed something, in all of the papers that I read, the definitions never seemed to “bottom out” in mathematical notions that I understood, like functions mapping finite sets to other finite sets.  What, for example, is a “mechanism”?  What’s a “system of mechanisms”?  What’s “causal power”?  What’s a “conceptual structure,” and what does it mean for it to be “unified”?  Alas, it doesn’t help to define these notions in terms of other notions that I also don’t understand.  And yes, I agree that all these notions can be given fully rigorous definitions, but there could be many different ways to do so, and the devil could lie in the details.  In any case, because (as I said) it’s entirely possible that the failure is mine, I place much less weight on this point than I do on the two points to follow.

I don’t necessarily accept the postulates’ truth.  Is consciousness a “unified conceptual structure”?  Is it “singular”?  Maybe.  I don’t know.  It sounds plausible.  But at any rate, I’m far less confident about any of these postulates—whatever one means by them!—than I am about my own “postulate,” which is that you and I are conscious while my toaster is not.  Note that my postulate, though not phenomenological, does have the merit of constraining candidate theories of consciousness in an unambiguous way.

I don’t see how the postulates lead to Φ.  Even if one accepts the postulates, how does one deduce that the “amount of consciousness” should be measured by Φ, rather than by some other quantity?  None of the papers I read—including the ones Giulio linked to in his response essay—contained anything that looked to me like a derivation of Φ.  Instead, there was general discussion of the postulates, and then Φ just sort of appeared at some point.  Furthermore, given the many idiosyncrasies of Φ—the minimization over all bipartite (why just bipartite? why not tripartite?) decompositions of the system, the need for normalization (or something else in version 3.0) to deal with highly-unbalanced partitions—it would be quite a surprise were it possible to derive its specific form from postulates of such generality.
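
Just to fix ideas about those two idiosyncrasies, here is a bare-bones sketch—my own, and deliberately schematic—of the generic shape such a computation takes.  The real IIT 3.0 quantity involves cause-effect repertoires, earth-mover distances, and further refinements; effective_information below is just a placeholder the reader would have to supply:

    from itertools import combinations

    def phi_like(elements, effective_information):
        """Minimize a normalized 'effective information' over all bipartitions.
        Schematic only: NOT the actual IIT definition of Phi."""
        elements = list(elements)
        n = len(elements)
        best = float("inf")
        for k in range(1, n // 2 + 1):              # every unordered bipartition
            for part_a in combinations(elements, k):
                part_a = set(part_a)
                part_b = set(elements) - part_a
                ei = effective_information(part_a, part_b)
                # one (pre-3.0) normalization choice: penalize lopsided cuts,
                # so that slicing off a single element doesn't always win
                best = min(best, ei / min(len(part_a), len(part_b)))
        return best

The two features singled out above correspond directly to the loop over bipartitions (nothing in the postulates obviously forces the split to be two-way) and to the normalization divisor (whose exact form has itself changed between versions); note also that the search is exponential in the number of elements.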

I was going to argue for that conclusion in more detail, when I realized that Giulio had kindly done the work for me already.  Recall that Giulio chided me for not using the “latest, 2014, version 3.0” edition of Φ in my previous post.  Well, if the postulates uniquely determined the form of Φ, then what’s with all these upgrades?  Or has Φ’s definition been changing from year to year because the postulates themselves have been changing?  If the latter, then maybe one should wait for the situation to stabilize before trying to form an opinion of the postulates’ meaningfulness, truth, and completeness?

III. The Ironic Empirical Argument

Or maybe not.  Despite all the problems noted above with the IIT postulates, Giulio argues in his essay that there’s a good reason to accept them: namely, they explain various empirical facts from neuroscience, and lead to confirmed predictions.  In his words:

[A] theory’s postulates must be able to explain, in a principled and parsimonious way, at least those many facts about consciousness and the brain that are reasonably established and non-controversial.  For example, we know that our own consciousness depends on certain brain structures (the cortex) and not others (the cerebellum), that it vanishes during certain periods of sleep (dreamless sleep) and reappears during others (dreams), that it vanishes during certain epileptic seizures, and so on.  Clearly, a theory of consciousness must be able to provide an adequate account for such seemingly disparate but largely uncontroversial facts.  Such empirical facts, and not intuitions, should be its primary test…

[I]n some cases we already have some suggestive evidence [of the truth of the IIT postulates’ predictions].  One example is the cerebellum, which has 69 billion neurons or so — more than four times the 16 billion neurons of the cerebral cortex — and is as complicated a piece of biological machinery as any.  Though we do not understand exactly how it works (perhaps even less than we understand the cerebral cortex), its connectivity definitely suggests that the cerebellum is ill suited to information integration, since it lacks lateral connections among its basic modules.  And indeed, though the cerebellum is heavily connected to the cerebral cortex, removing it hardly affects our consciousness, whereas removing the cortex eliminates it.

I hope I’m not alone in noticing the irony of this move.  But just in case, let me spell it out: Giulio has stated, as “largely uncontroversial facts,” that certain brain regions (the cerebellum) and certain states (dreamless sleep) are not associated with our consciousness.  He then views it as a victory for IIT, if those regions and states turn out to have lower information integration than the regions and states that he does take to be associated with our consciousness.

But how does Giulio know that the cerebellum isn’t conscious?  Even if it doesn’t produce “our” consciousness, maybe the cerebellum has its own consciousness, just as rich as the cortex’s but separate from it.  Maybe removing the cerebellum destroys that other consciousness, unbeknownst to “us.”  Likewise, maybe “dreamless” sleep brings about its own form of consciousness, one that (unlike dreams) we never, ever remember in the morning.

Giulio might take the implausibility of those ideas as obvious, or at least as “largely uncontroversial” among neuroscientists.  But here’s the problem with that: he just told us that a 2D square grid is conscious!  He told us that we must not rely on “commonsense intuition,” or on any popular consensus, to say that if a square mesh of wires is just sitting there XORing some input bits, doing nothing at all that we’d want to call intelligent, then it’s probably safe to conclude that the mesh isn’t conscious.  So then why shouldn’t he say the same for the cerebellum, or for the brain in dreamless sleep?  By Giulio’s own rules (the ones he used for the mesh), we have no a-priori clue whether those systems are conscious or not—so even if IIT predicts that they’re not conscious, that can’t be counted as any sort of success for IIT.

For me, the point is even stronger: I, personally, would be a million times more inclined to ascribe consciousness to the human cerebellum, or to dreamless sleep, than I would to the mesh of XOR gates.  For it’s not hard to imagine neuroscientists of the future discovering “hidden forms of intelligence” in the cerebellum, and all but impossible to imagine them doing the same for the mesh.  But even if you put those examples on the same footing, still the take-home message seems clear: you can’t count it as a “success” for IIT if it predicts that the cerebellum is unconscious, while at the same time denying that it’s a “failure” for IIT if it predicts that a square mesh of XOR gates is conscious.  If the unconsciousness of the cerebellum can be considered an “empirical fact,” safe enough for theories of consciousness to be judged against it, then surely the unconsciousness of the mesh can also be considered such a fact.

IV. The Phenomenology Argument

I now come to, for me, the strangest and most surprising part of Giulio’s response.  Despite his earlier claim that IIT need not dovetail with “commonsense intuition” about which systems are conscious—that it can defy intuition—at some point, Giulio valiantly tries to reprogram our intuition, to make us feel why a 2D grid could be conscious.  As best I can understand, the argument seems to be that, when we stare at a blank 2D screen, we form a rich experience in our heads, and that richness must be mirrored by a corresponding “intrinsic” richness in 2D space itself:

[I]f one thinks a bit about it, the experience of empty 2D visual space is not at all empty, but contains a remarkable amount of structure.  In fact, when we stare at the blank screen, quite a lot is immediately available to us without any effort whatsoever.  Thus, we are aware of all the possible locations in space (“points”): the various locations are right “there”, in front of us.  We are aware of their relative positions: a point may be left or right of another, above or below, and so on, for every position, without us having to order them.  And we are aware of the relative distances among points: quite clearly, two points may be close or far, and this is the case for every position.  Because we are aware of all of this immediately, without any need to calculate anything, and quite regularly, since 2D space pervades most of our experiences, we tend to take for granted the vast set of relationship[s] that make up 2D space.

And yet, says IIT, given that our experience of the blank screen definitely exists, and it is precisely the way it is — it is 2D visual space, with all its relational properties — there must be physical mechanisms that specify such phenomenological relationships through their causal power … One may also see that the causal relationships that make up 2D space obtain whether the elements are on or off.  And finally, one may see that such a 2D grid is necessary not so much to represent space from the extrinsic perspective of an observer, but to create it, from its own intrinsic perspective.

Now, it would be child’s-play to criticize the above line of argument for conflating our consciousness of the screen with the alleged consciousness of the screen itself.  To wit:  Just because it feels like something to see a wall, doesn’t mean it feels like something to be a wall.  You can smell a rose, and the rose can smell good, but that doesn’t mean the rose can smell you.

However, I actually prefer a different tack in criticizing Giulio’s “wall argument.”  Suppose I accepted that my mental image of the relationships between certain entities was relevant to assessing whether those entities had their own mental life, independent of me or any other observer.  For example, suppose I believed that, if my experience of 2D space is rich and structured, then that’s evidence that 2D space is rich and structured enough to be conscious.

Then my question is this: why shouldn’t the same be true of 1D space?  After all, my experience of staring at a rope is also rich and structured, no less than my experience of staring at a wall.  I perceive some points on the rope as being toward the left, others as being toward the right, and some points as being between two other points.  In fact, the rope even has a structure—namely, a natural total ordering on its points—that the wall lacks.  So why does IIT cruelly deny subjective experience to a row of logic gates strung along a rope, reserving it only for a mesh of logic gates pasted to a wall?

And yes, I know the answer: because the logic gates on the rope aren’t “integrated” enough.  But who’s to say that the gates in the 2D mesh are integrated enough?  As I mentioned before, their Φ-value grows only as the square root of the number of gates, so that the ratio of integrated information to total information tends to 0 as the number of gates increases.  And besides, aren’t what Giulio calls “the facts of phenomenology” the real arbiters here, and isn’t my perception of the rope’s structure a phenomenological fact?  When you cut a rope, does it not split?  When you prick it, does it not fray?
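
(For completeness, the arithmetic behind that “ratio tends to 0” claim, in LaTeX:)

    \Phi_{\text{grid}} = \Theta(\sqrt{n}), \qquad
    \text{total information} = \Theta(n)\ \text{bits}, \qquad
    \frac{\Phi_{\text{grid}}}{n} = \Theta\!\left(\frac{1}{\sqrt{n}}\right) \to 0 .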

Conclusion

At this point, I fear we’re at a philosophical impasse.  Having learned that, according to IIT,

  1. a square grid of XOR gates is conscious, and your experience of staring at a blank wall provides evidence for that,
  2. by contrast, a linear array of XOR gates is not conscious, your experience of staring at a rope notwithstanding,
  3. the human cerebellum is also not conscious (even though a grid of XOR gates is), and
  4. unlike with the XOR gates, we don’t need a theory to tell us the cerebellum is unconscious, but can simply accept it as “reasonably established” and “largely uncontroversial,”

I personally feel completely safe in saying that this is not the theory of consciousness for me.  But I’ve also learned that other people, even after understanding the above, still don’t reject IIT.  And you know what?  Bully for them.  On reflection, I firmly believe that a two-state solution is possible, in which we simply adopt different words for the different things that we mean by “consciousness”—like, say, consciousness_Real for my kind and consciousness_WTF for the IIT kind.  OK, OK, just kidding!  How about “paradigm-case consciousness” for the one and “IIT consciousness” for the other.


Completely unrelated announcement: Some of you might enjoy this Nature News piece by Amanda Gefter, about black holes and computational complexity.

217 Responses to “Giulio Tononi and Me: A Phi-nal Exchange”

  1. Sam Hopkins Says:

    Maybe it would be convenient to compare the current state of a mathematically precise notion of consciousness to the current state of a mathematically precise notion of complexity (as discussed in your last post)?

    In my mind, one of the most compelling reasons to think you have correctly modeled your intuition is when you have multiple apparently different models for a certain concept, which nevertheless you can show are all more-or-less equivalent. For example, in that complexity paper you discuss how various notions of entropy are equivalent, as are (if I read the paper correctly) the notions of complexity. Another example of this type of convergence might be Turing machines vs. lambda calculus.

  2. Scott Says:

    Sam #1: I completely agree that “convergence” is excellent evidence for the robustness of a concept. By that light, an additional argument against Φ is that we don’t, to my knowledge, have multiple characterizations of it, or anything else that would pick Φ out as being particularly “natural” mathematically.

    On the other hand, I think, in all honesty, that the same is currently true for measures of complexity! In our paper, we (like many others in this area) recognized the need for a robust measure, and we tried to grope toward one by explaining why four different-looking proposals in the literature are at least related to each other. But I wouldn’t say the relations are nearly strong enough to make the measures “more-or-less equivalent.”

    Since you raised the issue of comparison, one obvious difference is that understanding the cream tendrils that form when a coffee cup is stirred seems like very obviously an easier problem than understanding consciousness! 🙂 Yet I’d say that we’re still extremely far from understanding the former.

  3. Sid K Says:

    I would also like to mention the approach to discerning consciousness via ethics. If we ascribe consciousness to an entity, then usually, we should be unhappy to witness its destruction. I think this provides a nice partial operationalization of the intuition behind consciousness.

    So we can ask Tononi the question: Would you rather witness the destruction of a 2D grid of XOR gates or that of a living human cerebellum?

    Note that ethical qualms towards the destruction of an object are not sufficient for consciousness: I may after all be quite unhappy to destroy a favorite coffee mug. I don’t think they’re even necessary: you may have no ethical qualms in witnessing the destruction of Adolf Hitler. But I think it’s mostly necessary.

  4. Zhiming Wang Says:

    But in fact, when we stare at the cerebellum (if we could), *a lot more* is immediately available to us without any effort whatsoever.

  5. Scott Says:

    Sid #3:

      So we can ask Tononi the question: Would you rather witness the destruction of a 2D grid of XOR gates or that of a living human cerebellum?

    LOL! I wish I’d written that line myself.

  6. Patrick Says:

    By IIT’s standard, I think a block of diamond would be conscious. If I flick it, the vibration I measure will depend on the lattice of connections across the entire diamond. (Each of the four bonds of each carbon atom in the diamond molecule calculating some sort of vibration transfer function.)

    Would IIT predict the gas in the room is conscious? I’m not sure on that point.

  7. Patrick Says:

    Thinking about this a little more. One might argue that the diamond doesn’t meet the definition, because any actual vibration would dissipate in a large enough diamond, such that the signal would fall below the noise threshold at some distance. (So in a sufficiently large diamond, large portions of the diamond are uninvolved in the calculation.)

    But that would also be true of a physical instantiation of xor gates. To guarantee you implement an arbitrarily large xor grid that matches the theoretical, you’d need to regularly rectify and error correct your signal.

  8. fred Says:

    “You are conscious (though not when anesthetized).”
    “our own consciousness […] vanishes during certain epileptic seizures”.

    I’ve been wondering about the role of memory.
    Are we unconscious when anesthetized, or are we conscious but producing no memories, so that there’s no record of the consciousness after the fact?
    What is the minimum amount of memory needed to effectively support consciousness?
    Someone close to me experienced a massive heart attack, was pronounced dead, eventually had his heart barely revived after an hour, and survived after undergoing a heart transplant. He has no memory of the entire event, including anything that happened in the previous 2 weeks (no memory of attending his best friend’s wedding). From his point of view, he wasn’t conscious. He only has our word to believe that he was indeed walking around and conscious during those 2 weeks.
    Maybe the same thing happens during anesthesia/sleep.

  9. Will Says:

    Sid #3:

    I think you are ascribing much more to consciousness than Tononi. Tononi says that a 2D grid would be able to experience 2D space, but it doesn’t follow that a 2D grid would be able to have other experiences like the experience of pain.

    On the other hand, destroying a cerebellum would presumably cause suffering to the cerebral consciousness (which can experience pain).

    So you can just build your ethics on Bentham’s “Can they suffer?” and you have no problem. It’s only a problem if you base your ethics on “don’t destroy consciousness”. But why would you privilege “the experience of 2D space” so much?

  10. Will Says:

    I think you are being a little unfair to what you call Tononi’s argument #4. The part about staring at the wall is a little weird, but I think the point he’s trying to make is that many structures that seem to be associated with consciousness in our own brains look roughly like 2D grids. So maybe other 2D grids could be conscious. This also doesn’t seem like a great argument, but it’s a lot better than the argument as you summarize it. It also could provide an answer to your question about the rope. Maybe there isn’t any evidence that 1D structures in the brain play a significant role in consciousness.

  11. Luke Muehlhauser Says:

    I LOLed pretty hard at the “On reflection, I firmly believe that…” line.

  12. Sid K Says:

    Will #9:

    See, that is exactly the problematic move. We have a bundle of intuitions about ordinary conscious objects. These intuitions include things like: an ethical feeling towards the object; a feeling of empathy towards that object (“what it is like”); a sense that the object is intelligent; that it can dynamically respond to external stimuli; that it has a certain intentionality and “agentiness”; that it possesses a certain aesthetic sense and so on.

    These intuitions have implications. The ethical feeling means we don’t want to see harm come to it: which is why the question of whether fetuses are conscious is important. The empathy and intelligence means that we might be able to communicate with it and that it may communicate with us. The agentiness means that we expect the object to have intentions and change things in the world according to those intentions.

    Now if one wants to deny any and all of these intuitions and their implications, then I don’t see why one isn’t defining consciousness circularly and completely non-operationally. If high-IIT implies consciousness_IIT—but consciousness of no other variety—then so what?

    Are there any real world consequences of this definition other than attaching the label “consciousness” to some objects and not to some others?

  13. Anonymous Programmer Says:

    Why would evolution evolve consciousness if it had no free will? If consciousness has no power and free will doesn’t exist — why not a zombie or robot — it would behave the same?

    Any theory of consciousness that rejects free will and gives it no power, IIT obviously included, is insulting to every conscious being. Free will must be part of physics eventually.

    PS

    I am defining free will to be incompatible with determinism and fundamental randomness.

  14. anders Says:

    It seems like Φ is a perfectly reasonable upper bound on consciousness, like information is an upper bound on complexity. It also seems like the problem we run into with Φ being too high for simple expander graphs is analogous to the problem of a high-entropy coffee cup: there may be a lot happening but it just isn’t meaningful stuff. So something like a complexity analogue of Φ, so that it only measured activity that had a meaningful relation with the rest of the world, might work well.

  15. Mark H. Says:

    Today, the best machine learning techniques are based on deep neural networks (http://en.wikipedia.org/wiki/Deep_learning). Perhaps a precise definition of what consciousness is—possibly the highest expression of the human mind—is too much for a mathematician. Maybe there are concepts that are simply impossible for our limited minds to understand exactly.

    Scott has demolished Tononi’s arguments. The point about 1D vs 2D is unconvincing. I guess that everything has some kind of consciousness: it’s like looking at a texture and judging its degree of “woodiness”—it can be black, or white, but at some point it can be very “woody”. Another point is that the definition should be of “human consciousness”. For me, it is possible that there exist beings to whom humans are not conscious enough, since they can analyze us as some kind of finite automata with trillions of sensors and actuators.

  16. Wondering Says:

    Another odd thing about the phi measure for consciousness: isn’t a lack of modularity considered a hallmark of poor programming?

    I very much agree with other commenters that the evolutionary angle seems absolutely neglected. But in general: Why is it so hard to evolve intelligence? Or to make better use of artificial evolution in general? Both computationally and in, e.g., drug design, etc. The abstract set-up seems pretty clear. People sort of claim to do it in various contexts, but it never really seems to work. I am very curious about this!!

  17. Alexander Vlasov Says:

    I would also draw attention to the combination of two predictions of IIT:

    • Even a conventional computer running a virtual simulation of Scott’s own brain down to all its neurons and synapses would be unconscious (and of course it would “report” the same things Scott would)
    • Even a simple but large 2D grid could be highly conscious

    Does it mean: even a simple but large 2D grid could be highly conscious, but a simulation of the same grid on a conventional computer is not conscious?

  18. Scott Says:

    anders #14: Yes, as I said in the last post, it’s possible that Φ is at least an upper bound on consciousness, but I have no idea whether even that is true. One reason for skepticism is the one pointed out by Alexander #17: given any physical system with large Φ-value—including one that we’ve agreed is “conscious”—you could perfectly simulate that system by a different system with small Φ-value. (If nothing else, the simulating system could simply consist of two giant components weakly connected to one another, like a patient with corpus callosum severed.) So then you’d have to accept that the first system is conscious and the second is not, even though there’s no difference in behavior, and the only difference between the systems (in some sense) has to do with Φ itself.

  19. Will Says:

    Sid #12:

    I still don’t really understand your point. As far as I can tell Tononi’s definition of consciousness is the same as the general understanding: a 2D grid is conscious if and only if there is something that it is like to be a 2D grid. That is, if it has experience. But according to Tononi, IIT not only predicts the quantity but also the quality of experience. And he suggests that IIT says that a 2D grid would only have the experience of 2D space, similar to what you feel when you stare at a blank wall (where this prediction comes from I do not really understand). So a typical human would be capable of many different experiences (qualia), among them, the experience of smelling roses, the experience of 2D space (staring at a wall), and the experience of pain. But there are people who are missing certain experiences. People with anosmia cannot smell roses. Certain people cannot feel pain. Probably there are people who can’t experience 2D space. So it’s easy to imagine different consciousnesses that are conscious of different sets of experiences. So try to imagine a consciousness (even a person) who can only experience staring at featureless walls. Do you have an ethical feeling towards that person? Maybe or maybe not, but this is a separate question from the question of experience (yes) or empathizability (yes) or intelligence (no) or agentiness (no) or whatever. Just because all these things come together in humans doesn’t mean they must come together in 2D grids.

    “which is why the question of whether fetuses are conscious is important”

    Maybe it’s not the question of whether fetuses are conscious, but the question of what they are conscious of. Bentham’s maxim (“The question is not, Can they reason? nor, Can they talk? but, Can they suffer?”) perhaps can be quibbled with (does he think it’s ok to kill a person if they are under anesthesia or if they have congenital insensitivity to pain?), but replacing “can they suffer?” with “can they stare at a wall?” doesn’t improve the situation. Maybe the threshold for ethical status shouldn’t be consciousness alone, but consciousness+agentiness or consciousness+(something else).

  20. Will Says:

    I think the line of reasoning I lay out above also leads to a slight weakening of Scott’s argument #1. If we ask “Can a 2D grid experience pain?”, intuition says no and IIT says no. “Can a 2D grid experience smelling roses?” No and no. These could be our paradigm cases (making the paradigm case “is a 2D grid conscious?” is too vague). But “Can a 2D grid experience 2D space?” My intuition says no and Scott’s intuition also says no, but Tononi notes that different people have different intuitions about consciousness. The panpsychics think consciousness is everywhere and if a 2D grid is conscious it seems plausible that one thing it would be conscious of is its own extension in space. So maybe “Can a 2D grid experience 2D space?” is the x sin(1/x) of consciousness. Not that this is a really great argument, I just thought it would be interesting to point out.

  21. Nick M Says:

    This just out of Maynooth, Dublin and Pasadena:

    “In this article we review Tononi’s (2008) theory of consciousness as integrated information. We argue that previous formalizations of integrated information (e.g. Griffith, 2014) depend on information loss. Since lossy integration would necessitate continuous damage to existing memories, we propose it is more natural to frame consciousness as a lossless integrative process and provide a formalization of this idea using algorithmic information theory. We prove that complete lossless integration requires noncomputable functions. This result implies that if unitary consciousness exists, it cannot be modelled computationally.”

    http://arxiv.org/abs/1405.0126

  22. Scott Says:

    Nick M #21: Yeah, I mentioned that paper in my first IIT post. People asking me about it was one of the things that led to that post.

  23. Sid K Says:

    Will #19:

    My point is simply this: Suppose I grant that a 2D grid can experience 2D space. So what? Does this “fact” have any consequences whatsoever? We agree that it doesn’t raise the ethical status of 2D space. We agree that we can’t communicate with 2D space. We (probably) agree that it doesn’t lead to any new predictions about 2D space. As far as I can tell, saying that a 2D grid has this experience adds no explanatory power about 2D space and has no consequences to our behavior towards 2D space.

    I never wanted to claim that ethical feeling is sufficient or necessary for ascribing consciousness. I just wanted to say that something that has consequences (like ethical feeling, or communicability) is necessary for your ascription of consciousness to be meaningful.

  24. asdf Says:

    I’m not sure about boiling water being hotter than ice. If ice is always colder, how are we supposed to explain freezer burn?

  25. Adam McKenty Says:

    Scott,

    Having read your original very thorough and well-stated post, and Guilio’s equally thorough and well-stated response, as well as your response to his response, I wanted to add a comment regarding intuitions about consciousness.

    It seems to me there is a fundamental difference between consciousness and other properties, such as heat or the continuity of functions, for which any accurate theory must match common sense intuitions in paradigm cases. The difference is that we can only directly perceive consciousness in one case: our own subjective experience, which occurs in a specific physical system.

    To go back to your analogy of heat, we can directly perceive heat in all sorts of systems, because our bodies are capable of measuring it directly (if imprecisely and only in a certain range). We can feel an ice cube and hot water, we can walk outside on a cold day and notice it’s cold. We have the words like “temperature” and “heat” which we use to communicate about this property of the world that we can all experience. Because 1) we can detect heat directly; and 2) we can detect the temperature of all sorts of different physical systems, we intuitively arrive at an accurate notion of “heat” as abstract from the other properties of the systems we’re measuring.

    Now, let’s imagine theorizing about heat in a situation where it is more accurately analogous for consciousness. Suppose we could only experience heat in one system — say, an electric kettle. We would know that sometimes the kettle is hot, and other times it isn’t. We could observe that in order for the kettle to be hot, the wire has to be plugged into the wall and the on-off switch has to be on, and that when it’s hot, steam comes out and it makes a certain sound. We could take good measurements of all of these parameters (the kettle-ish correlates of heat), and we could make intuitive inferences about other types of systems. For example, we would probably have the intuition that a frying pan could not be hot, because it has no wire to plug in, and no on-off switch. A dishwashing machine we would probably say could be hot, because it has a wire and an off switch, and it puts out steam after being on for a certain amount of time.

    These intuitions would in fact be accurate for a whole class of systems that are similar in construction and function to the system we can directly observe. We could predict, probably quite accurately, when any electric kettle was hot, and with varying degrees of accuracy for any kettle-like system, such as a crock pot, or perhaps (with some clever theorizing), a toaster oven. We could even dispense with some of our more egregious biases and come up with a truly “fundamental” theory of heat that predicts heat in any object that produces steam. But of course, our fundamental intuition about heat would still be completely wrong.

    It seems to me that the essence of consciousness is the existence of subjective experience, the “what it’s like to be”. In order to theorize clearly about consciousness and how it relates to physical systems, I think we have to be able to in principle separate this property from 1) the kinds of behaviours that we, as one type of conscious system, exhibit (intelligence, goal-directed behaviour, etc.); and 2) the types of content (sense impressions, thoughts, feelings, etc.) that our consciousness can be filled with, which are determined by the details of our neurobiology.

    It may be that these features /do/ in fact reliably predict consciousness in all physical systems — in our analogy, kettle-ish systems really are the only type of hot object, and the vast majority of physical systems throughout the universe have no heat at all. This may be so, or it may not be, but our intuitions about it don’t seem to me to be a trustworthy test for a theory of consciousness.

    Thanks for the fascinating dialogue!

  26. Matt Says:

    A very interesting exchange, but I think your section III largely misses Tononi’s point. I understand his claim to be that we have direct evidence that the cerebellum is not involved in “our consciousness” – i.e. the usual consciousness that can verbally report its experiences. Since IIT is supposed to be able to identify the spatial extent of a conscious system, it can be tested against this evidence.

    The question of whether or not the cerebellum has its own, separate, “trapped” consciousness is very different, and presumably only answerable using intuition and/or a theory of consciousness.

    The possibility of impossible-to-remember experiences during dreamless sleep strikes me as a somewhat intermediate case, but even a fairly rudimentary understanding of the neural correlates of (“our”) consciousness would probably suffice to rule that out.

  27. Scott Says:

    Adam #25: I agree with your point, but would phrase it as follows. The right analogue of “continuity” or “heat” in this topic is not “consciousness” per se, but rather “the observable correlates of consciousness, whether intelligence or whatever else.” The latter are what’s relevant to the Pretty-Hard Problem, and they’re indeed what I’ve focused on in these posts.

  28. Scott Says:

    Matt #26: Right, I did notice that Giulio was careful to say only that “our” consciousness doesn’t reside in the cerebellum, not that there can’t be any consciousness there. But it seems to me that this distinction is immaterial to the main point. For if you want to say that the low Φ of the cerebellum (if indeed it is low) is “suggestive evidence” in favor of IIT, it can only be so if you think that the cerebellum not being the seat of our consciousness, is suggestive of its not being the seat of any consciousness—that we’re allowed to infer from the seeming observable facts to the cerebellum’s being an unconscious lump of matter. But in that case, it seems to me that you have to say that the enormous Φ of a 2D grid is, at very least, suggestive evidence against IIT!

  29. Francisco Boni Says:

    Scott is always engaging in rich debates. It’s a master class in scientific discussions. Most blog and online debates that I’ve seen degenerate into noise (intellectual dishonesty, narcissism, etc). Congratulations to both Giulio and Scott.

  30. JimV Says:

    “… When you prick it, does it not fray?”

    I saw what you did there. Nice.

    (The rest of the argument was great also.)

  31. lohankin Says:

    Sid #12:
    I share your view. Ethics, free will, empathy, etc. are properties of consciousness. They all come in a package.
    But the implications are even more far-reaching than those you describe.

    To begin with, one of the consequences is that a conscious “object” must have the right and ability to self-destruct if it feels that way (e.g. his suffering is intolerable and (in his view) serves no purpose). Anything else would undermine any concept of ethics in general IMO.
    Therefore, the ability of self-destruction is a minimal requirement compatible with consciousness.
    But self-destruction is a harsh measure. There’s a better option: self-modification.
    Can there be conscious beings that developed this ability? Bacteria had enough time to acquire necessary knowledge. Sorry for a bold conjecture, but I seriously believe all life, including humans, resulted from creative activity of bacteria. According to the conjecture, there’s no consciousness that is not a bacterial (=cellular) consciousness – other forms, if ever existed, (probably) self-destructed already.

    Humans are creations of bacteria. Which means that there are some bacterial species that are “smarter” than us. We just don’t know who they are and what they are up to.
    They engineered humans out of (variants of) themselves. We are just a collection of specialized cells, each having its own consciousness. We perceive it as “holistic” property of entire organism, and in a sense it is (information gets collected from complex cascade of other cells), but still… “me” is a single cell after all

    As an aside, the quantitative measure of consciousness discussed in this thread (no matter positively or negatively) is in fact meaningless IMO. Even if we had some hypothetical physical device said to measure consciousness, what would this measure mean? If I have 2 dollars, and you have 4 dollars, you can buy 2 bottles of beer whereas I can buy just one. But if your consciousness is 4, and mine is 2, what does it mean? What can you do 2 times better, or faster, or what? Sounds completely absurd to me.

  32. Ben Standeven Says:

    Hmm. Actually, if I am understanding IIT correctly, the phi-value of the cerebellum only tells us whether the cerebellum has its own consciousness. To tell whether it contributes to our consciousness, we need also the phi-value of the whole brain: letting this be φ_B, the cerebrum be φ_C, and the cerebellum be φ_L:

    φ_B < φ_C < φ_L or φ_B < φ_L < φ_C: The cerebrum and cerebellum produce separate consciousnesses that communicate with each other.

    φ_C < φ_L < φ_B or φ_L < φ_C < φ_B: The whole brain produces a single consciousness, in which the cerebrum and cerebellum determine the experiences.

    φ_L < φ_B < φ_C: I don’t understand what is supposed to happen here; presumably the cerebrum will generate its own consciousness, and the cerebellum will not. But what about the whole brain? In any case, there is a very serious problem here: the cerebellum is not conscious in this scenario, but it is in the first one, φ_B < φ_L < φ_C, which differs only in the nature of the nerves connecting the cerebrum and the cerebellum. So the consciousness of the cerebellum depends on its not being connected to the cerebrum; a direct contradiction of the first axiom/postulate of IIT.

    [Of course the sixth possible case is just the same with cerebrum and cerebellum swapped.]

  33. Adam McKenty Says:

    @ Scott (#27): I’m puzzled, then. I think I’ve misunderstood you, because of this: if taken literally, it seems to me you are saying that the intuitions you referred to were not intuitions about where consciousness per se could exist, but intuitions about where (third-person) observable correlates of consciousness could exist. But this becomes non-sensically trivial: we don’t need intuition to tell us what can and can’t have the correlates of consciousness. The whole point of correlates is that they are measurable!

    Since I would like to understand your argument, and it seems to rest heavily on intuitions, perhaps I could ask the following question: what is it that you have intuitions about that limit the acceptable output of a theory of consciousness, and how do those intuitions avoid the single sample problem that I described in my previous comment?

    Thanks!

  34. Nex Says:

    For the sake of IIT and similar approaches, I think they should be restated to make it clear that what they really deal with is not consciousness in its original, common meaning. Here is my take:

    1. Consciousness as commonly understood does not exist – it is an illusion: there is no free will or anything else that singles out the things commonly perceived as conscious from the unconscious ones and that cannot be explained in simpler terms not involving this ill-defined concept.

    2. This simpler term is precisely the (localized) ability to act upon information coming from the surrounding world. This is what the subjective experience we call consciousness really amounts to – it’s the sensation of processing information coming from the physical world.

    3. Everything has this ability and the associated sensation – from atoms to animals, so this sensation is not unique to us. Yes, the reality is weird like that. If electrons could talk they would scoff at you calling them unconscious ;P

    4. The amount of information from the surrounding world that has to be analysed to predict the behavior of the entity reliably is the closest that we can get to the common quantitative meaning of “consciousness” and still stay in the realm of science.

    This pretty much solves all the problems but one – how come the reality is so weird that there is such a thing as subjective experience of information processing. But this is really no different from the ultimate problem we cannot hope to ever solve – how come there is Reality at all?

  35. David Chalmers Says:

    Scott — this exchange is terrific. It’s helped me clarify my own thoughts on IIT. I’m inclined to think that your criticisms have force, but I don’t know whether we’re yet at the heart of the matter.

    First, on Giulio’s argument from phenomenology: Like others I think you haven’t quite got Giulio’s phenomenological argument right here. His argument seems to me to have the form (1) The 2D mesh has properties X [structure, relationships, integrated information, …], (2) Conscious systems have properties X, (3) Having properties X suffices for consciousness, therefore (4) The 2D mesh is conscious.

    Obviously the major question concerns (3), which I take it is a sort of speculative inference from (2). It turns on strengthening (2) from the claim that X is necessary for consciousness to X is sufficient for consciousness, presumably by holding that X is a full (or full enough) characterization of consciousness. But as they say, one person’s modus ponens is another person’s modus tollens, and anyone (like you) who thinks that the 2D mesh isn’t conscious will presumably say that there is some other key feature of consciousness that X leaves out. Which features? That brings us to second point.

    Second, the argument from axioms. It’s true that there’s a significant distance between Giulio’s axioms and his model, but I don’t think that’s the heart of the issue here. The key question, following the above, is whether his axioms merely give some properties of conscious systems or whether they give sufficient conditions for consciousness. Giulio needs the latter but this is far from obvious. Prima facie there are various other phenomenological properties of consciousness that these axioms leave out.

    Here’s one candidate: conscious experiences are always available to a subject (call this axiom 6). That is, experiences don’t float free but are always presented to a subject (like you or me) in such a way that we can use it. I don’t know how one would cash this out in terms of information and/or computation, but one natural thought is that the relevant information has to be able to play the right sort of computational role in a system. What the “right sort” of role is, I don’t know. But perhaps this captures the intuitive difference between the visual system and a 2D grid: the visual system uses the information in the right sort of way, the 2D grid doesn’t. Same for the expander. Whether this is the right additional axiom or not, I’d say that perhaps there is some promise in triangulating from our judgments about grids/expanders and from features of phenomenology to articulate further “axioms” setting out further conditions on consciousness.

    Incidentally, you’ve raised worries about whether integration of information in Giulio’s sense is sufficient for consciousness, but I also worry about whether it’s necessary. Say that my consciousness at a given time is divided into visual and auditory components which are quite independent of each other: maybe I’m looking at one scene and hearing some completely independent music over headphones. Then it looks like my visual experience and my auditory experience have very low mutual information. And presumably they could correspond to two different systems in the brain with very low mutual information.

    Of course these systems have potential joint effects (through audiovisual integration), thereby raising the mutual information in Giulio’s sense. But it’s not obvious that this would raise the mutual information especially high. One would have to work through the details, but this case is at least a candidate for a highly conscious system with quite low phi.

  36. David Chalmers Says:

    Having just said something that tends to support your arguments against IIT, let me now say something that tends to defend IIT against your arguments, especially the arguments from counterintuitiveness.

    First, I think you’re right that for some disagreements over the application of a term (say, I call a step function “continuous” and you don’t), the right diagnosis is that the parties mean different things by the term. But for other disagreements it’s not: for example, if I think that Jesus Christ was divine and you don’t, then this probably isn’t a verbal disagreement over “divine” (even though Christ is presumably a paradigm case of divinity). It’s very tricky to distinguish disagreements of the first sort from disagreements of the second sort.

    I’ve written a paper (“Verbal Disputes”, at http://consc.net/papers/verbal.pdf) on this topic trying to outline a method and sort out some underlying issues. I won’t try to recap that ground here. But one general thought is that some concepts/words get their meaning from the way they’re applied to cases (e.g. “chair” gets its meaning from the sort of things we count as chairs), while others don’t (e.g. maybe “divine” gets its meaning some other way). Then our intuitions about which systems count as chairs are conceptual intuitions, which demarcate the meaning of the concept, and so can’t turn out to be radically false. But our intuitions about which systems are divine are substantive intuitions that in effect mark bold empirical hypotheses and can turn out to be radically false.

    In the case of ‘consciousness’, a natural view is that it gets its meaning from our first-person acquaintance with subjective experience. This single case (or a single series of cases over time) grounds our fundamental grasp of the concept. We can certainly apply it to other people and other systems, but the applications to other people aren’t what ground the meaning. If that’s right, disagreements about which other systems are conscious needn’t reflect differences in the use of the term. A solipsist who denies that other people are conscious may be irrational and megalomaniacal, but they needn’t mean something different by ‘consciousness’ from the rest of us (if they did, they needn’t be irrational or megalomaniacal!). Same for a panpsychist who says that elementary particles are conscious. And same for Giulio who says that 2D grids and expanders are conscious.

    Of course some people will reject the view that the meaning of “consciousness” is grounded wholly in the first-person case. Someone like Dan Dennett will think it is a term for a certain sort of functioning or behavior, for example, one that we learn by seeing various paradigm systems functioning. But people like this (including analytic functionalists and logical behaviorists, in the philosophical jargon) are also the people who are likely to deny that there’s a distinctive hard problem in the first place. If ‘consciousness’ were just a word for a certain sort of behavior, then to explain consciousness we’d just need to explain that behavior, and the hard problem wouldn’t be harder than the various “easy” problems of consciousness and behavior. But insofar as this whole discussion is premised on there being a distinctive problem of consciousness, then I think the first-person view of the meaning of the term is an especially natural one, and your view about differences in meaning is then harder to support.

    Of course the fact that Giulio’s theory makes a wildly counterintuitive prediction may still count against it. But taking the line above, it’s not clear that our intuitions about which other systems are conscious should carry more weight than our intuition that the Earth is flat. Of course it is a lot easier to find countervailing empirical evidence in the latter case, so maybe intuitions get more *relative* weight compared to empirical evidence in the case of consciousness. But that still leaves them just as likely to be wrong.

    My own view is that the best we can do here is find a precise theory associating physical systems with conscious states (a potential answer to the PHP in the sense below) that (i) predicts all the clear data about consciousness that we do have, especially our first-person phenomenological data, and (ii) is simpler than any other theory that does this. Given such a theory, I think we have good reason to accept it even if it makes counterintuitive predictions about non-human systems (I think Peli suggested something like this in the other thread). For example, if the simplest theory associating physical systems with consciousness turns out to be a panpsychist one, then we have reason to believe panpsychism.

    Of course I don’t think IIT is yet such a theory. We don’t yet have good reason to think it satisfies (i), since we’re not yet close to being able to measure our complete brain state, compute IIT’s predictions for the associated experience, and compare it to our actual experience. But if IIT did turn out to satisfy (i) and (ii), then it’s not clear to me why we shouldn’t accept its counterintuitive predictions.

    Incidentally, going back to the previous discussion, I wonder if you’re happy about my modification of your definition of “pretty hard problem” so it’s the problem of constructing a theory that tells us correctly which physical systems have which states of consciousness, or whether it’s essential to your conception of the PHP that it’s the problem of matching our intuitive judgments (rather than the problem of matching the facts). If the former, perhaps I’ll co-opt your name (with credit) and use it. If the latter, I’ll try to find a new name.

  37. Scott Says:

    Adam McKenty #33: What I was trying to do, was to demarcate the Pretty-Hard Problem as clearly as possible from the original Hard Problem. Consciousness, by definition, I take to be empirically inaccessible in any case other than our own—for, no matter what behavior any physical system other than you exhibited, you could always just consider that system to be a philosophical zombie, if you chose.

    However, in addition to consciousness itself, there’s also a second notion, which (for want of a better term) I’ll call “apparent consciousness,” and which basically means, “the type of intelligent behavior that ought to lead reasonable people to infer the presence of consciousness.” Now, this second notion is not obviously empirically inaccessible, in the sense that the first one is. But it should be clear that the second notion is still extremely hard to define precisely! Indeed, in explicating the second notion in more detail, it seems to me that the best we can do at present is to give various paradigm-cases (we have apparent consciousness, rocks don’t, etc.), as I discussed in the post. A solution to the Pretty-Hard Problem, as I understand it, would consist of a mathematical criterion that tells us whether an arbitrary physical system has apparent consciousness or not, analogous to the ε-δ definition of continuity.
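
    For concreteness, the ε-δ criterion alluded to is the standard one: \(f\) is continuous at \(x_0\) if for every \(\varepsilon > 0\) there exists a \(\delta > 0\) such that \(|x - x_0| < \delta\) implies \(|f(x) - f(x_0)| < \varepsilon\). A solution to the Pretty-Hard Problem would have to supply a criterion of comparable crispness for "apparent consciousness."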

  38. Scott Says:

    David Chalmers #36: Thanks again for the thoughts! I’ll respond to some of your other points later, but for now, let me just take up your question:

      Incidentally, going back to the previous discussion, I wonder if you’re happy about my modification of your definition of “pretty hard problem” so it’s the problem of constructing a theory that tells us correctly which physical systems have which states of consciousness, or whether it’s essential to your conception of the PHP that it’s the problem of matching our intuitive judgments (rather than the problem of matching the facts). If the former, perhaps I’ll co-opt your name (with credit) and use it. If the latter, I’ll try to find a new name.

    I would be so honored to have you use my term “Pretty-Hard Problem,” that I accept your proposed modification of the problem statement. 🙂 Well, a more serious reason is that it’s easier to state and remember the PHP as “the problem of which physical systems can be associated with consciousness, and of which kinds,” than as “the problem of constructing a theory of consciousness that matches our intuitions about which systems are conscious on certain paradigm-cases.”

    If you like, however, you can also ascribe to me a separate doctrine regarding the PHP. The doctrine holds that any claimed solution to the PHP that grossly violates intuition on the paradigm-cases (by claiming, for example, that regular 2D grids are conscious), has a severe and possibly-insurmountable hurdle to overcome before it can be accepted. The reason, as I said, is that I don’t see that we have anything much other than the paradigm-cases, at present, by which to judge claimed solutions to the PHP.

    However, while I’m prepared to defend this doctrine, I’m also perfectly happy for people to discuss the PHP (and call it that) even if they disagree with the doctrine.

  39. Scott Says:

    Nex #34: Look, I’m fine if you want to advocate a deflationist account of consciousness. The only point of yours that I really insist on disputing is your point 4, where you finally get down to quantitative brass tacks:

      The amount of information from the surrounding world that has to be analysed to predict the behavior of the entity reliably is the closest that we can get to the common quantitative meaning of “consciousness” and still stay in the realm of science.

    Why “the amount of information from the surrounding world needed to predict the entity”? Why not some other measure—like, say, the number of internal computational steps that the entity performs, or the sum or product of that with your measure (not that those are any better)?

    Here are two observations that should give you pause:

    First, your measure is completely different from, and incompatible with, the Φ measure of IIT—which tries to measure “integrated information” within the system, and which isn’t interested at all in the system’s correlation with an external environment. (According to IIT, even a system completely isolated from the rest of the universe can be conscious, so long as it’s integrated enough.) Thus, what you’re proposing is not in any way an “explication” of IIT: it’s a different, rival theory.

    Second, it seems to me that your rival theory is open to counterexamples just as easily as IIT is. For example, consider a super-hi-res digital camera, which takes in way more bits from its external environment than you or I do (but then just stores the bits to memory). Do you really want to say that such a camera is quantitatively “more conscious” than you are? Or do you not count this, because the contents of the camera’s memory don’t count as “behavior” that needs to be predicted? Well then, what does count as behavior that needs to be predicted? What if we could choose to query the camera about any bit in its memory, and the camera would explode if the bit was a ‘1’ and do nothing if the bit was ‘0’? Then would you say that a vast number of bits about the camera’s environment were needed to predict the camera’s behavior, and therefore the camera was more conscious than you?

  40. Jay Says:

    David Chalmer #36,

    Shouldn’t you consider a middle way to the understanding of subjective experience in between our first-person acquaintance and Turing/Dennett’s view that it’s all about behavior?

    Most neuroscientists and psychologists do accept subjective reports from others, but then try to understand their production either by manipulating the experimental conditions or by correlating with the neurological conditions.

    One example, which you most likely know, but just to make that concrete: blindsight patients sometimes avoid being hit by a pitched ball, despite their report that they don’t see anything.

    If we take the all-behavior point of view, then we may conclude these patients lie. If we accept they don’t, then this provides all sorts of very interesting cues, such as: our seemingly unified consciousness is actually composite, visual consciousness is not necessary for self-consciousness, the visual system can process information without producing visual consciousness, you name it.

    If I correctly understand your Hard problem line of view, it seems we should conclude that, ok this is interesting, but this is all about the pretty hard problem, not the Hard one. Does that correctly describe your view, or am I just erring here?

    Of course many of those who work in this field are only interested in the pretty-hard problem as a way to a full understanding of when and why conscious experience is produced, a theory that will in the long run put the Hard problem in the very same category as Solipsism (that is, not attackable on logical grounds, but clearly less useful for understanding observations, curing patients, etc.). Do you think that can’t be?

    Scott #0,

    Amusing checkmate of IIT, or at least of IIT in its present form. But maybe that was too easy? Could you make a suggestion or two to those who would like to try to patch the various holes and self-contradictions of this theory?

  41. Darrell Burgan Says:

    It seems to me the IIT theorists should be applauded for even attempting to come up with a mathematical definition of consciousness. Their current theory is flawed but early models of the atom were similarly naive. Here’s hoping that IIT leads to a much more effective model of consciousness. It certainly seems like an important field of study.

  42. Will Says:

    Sid #23:

    OK, I see your point. I think Scott in #37 also helped clarify things.

    There’s a part of me that wants to disagree. Part of me that says the reason for studying consciousness is because it’s weird and wonderful and it would be exciting to know if a 2D grid could feel, even if knowing that fact had no consequences. Because I wonder if ascribing consciousness to anything, whether to a 2D grid or to Barack Obama or Angelina Jolie, ever has any consequences or explanatory power. If God whispered into your ear that your best friend was a zombie, would you really treat them any differently?

    But there’s another part of me that abhors thought experiments involving philosophical zombies. There do seem to be times where ascribing consciousness or not feels like it makes a difference to us. These are the hard cases that you and Scott have mentioned: fetuses, coma patients, oysters, etc. Somehow the 2D grid is not among these hard cases. So I get your point. But… wouldn’t it be cool to know what your wall is feeling right now?

  43. Darrell Burgan Says:

    On the nature article: why is it that information cannot be destroyed? I understand that the standard model of QM may forbid it, but intuitively it certainly seems that information is destroyed on a daily basis. To tie it back to the original discussion, when a human dies, i.e. when a consciousness evaporates, it certainly seems like all the information tied up in that consciousness is destroyed along with it ….

  44. Darrell Burgan Says:

    Will #42:

      But… wouldn’t it be cool to know what your wall is feeling right now?

    What I’d really like to be able to do is have a conversation with my computer. I have some pointed questions to ask it about its recent behavior … 🙂

  45. fred Says:

    Any measure of consciousness can never be validated since by definition consciousness is pure subjectivity.

    Measures that score high on anything that answers “yes” when asked “are you conscious?”, lower on animals, and zero on everything else could be of some practical use – e.g. when assessing the rights of AIs or of people who are in a coma.

    Measures that score high for both humans and 2D grids (or whatever object that doesn’t appear to be intuitively conscious) are ultimately of no practical use.

    Btw, Tononi seems to insist that we should only measure “actual physical structures”. But can’t I trivially implement any finite structure by taking a pen and drawing it on a large piece of paper?
    Also, how does the Φ measure of IIT account for the brain of a living person vs the brain of a person who died 5 min ago?

  46. Scott Says:

    Darrell #43:

      why is it that information cannot be destroyed? I understand that the standard model of QM may forbid it, but intuitively it certainly seems that information is destroyed on a daily basis. To tie it back to the original discussion, when a human dies, i.e. when a consciousness evaporates, it certainly seems like all the information tied up in that consciousness is destroyed along with it

    It seems that way, but according to modern physics, it ain’t so. All the equations that we know governing the evolution of subatomic particles are completely time-reversible—i.e., if you take any solution to the equations and replace t by -t (and swap left with right and particles with antiparticles, but that’s irrelevant to this discussion), then you get another valid solution. As such, whenever we see anything anywhere in the physical world that looks irreversible, the irreversibility can only be “apparent”—i.e., a byproduct of the increase in entropy associated with the Second Law, or (if you like) of the extremely low entropy of the universe’s initial state. So for example, if you burn a book, it must be possible in principle to recover the book from the smoke and ash and emitted photons; if you scramble an egg, it must be possible in principle to unscramble it, etc. These things are “effectively” impossible, but only for the same statistical reason that you never see the gas particles in a box all collect themselves in a single corner.

    If you want to reject the above, then you need to rewrite not only QM, but essentially all of physics going back to Galileo and Newton!

    (One caveat: above, I was talking only about the evolution of isolated physical systems, ignoring quantum measurement involving “observers.” If you’re a Many-Worlder or equivalent, then you don’t even need this caveat, since you regard quantum measurement as just another special case of ordinary, reversible unitary evolution, with an effective increase in entropy governed by the Second Law—just like the book being burned or the egg being scrambled. But if you’re an instrumentalist or agnostic about quantum measurement, then you do need the caveat.)
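
    As a toy numerical illustration of that reversibility claim (just a sketch of the linear-algebra point, not of any particular physical system): evolve a state by a unitary matrix, then apply the inverse evolution and recover the starting state exactly.

      import numpy as np

      rng = np.random.default_rng(0)
      d = 8

      # A random unitary U (QR decomposition of a random complex matrix).
      A = rng.normal(size=(d, d)) + 1j * rng.normal(size=(d, d))
      U, _ = np.linalg.qr(A)

      # A random normalized initial state.
      psi0 = rng.normal(size=d) + 1j * rng.normal(size=d)
      psi0 /= np.linalg.norm(psi0)

      # Forward evolution, then "running the film backwards" with the inverse of U.
      psi_later = U @ psi0
      psi_back = U.conj().T @ psi_later

      print(np.allclose(psi_back, psi0))  # True: nothing was irreversibly lost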

  47. Quentin Says:

    Could we say that the PHP amounts to proposing kinds of bridge laws between physical and mental properties, and the HP to metaphysically explaining the bridge laws?

    About consciousness having different meanings: I don’t think it is sufficient to say that we refer to our inner experience in order to claim that there is a straightforward meaning. The meaning could still be vague: e.g., do we count intentionality, phenomenality, short-term memory, and self-representation as essential or not?

    I can conceive of borderline cases (pure phenomenality without memory and self). The problem is that we as human usually have all aspects together and the meaning of consciousness in non-human cases is still debatable.

  48. Darrell Burgan Says:

    Scott #46:

      So for example, if you burn a book, it must be possible in principle to recover the book from the smoke and ash and emitted photons; if you scramble an egg, it must be possible in principle to unscramble it, etc.

    Then it must be possible in principle to resurrect a dead consciousness, right? I certainly would not propose to rewrite any physics, just trying to understand the implications. Seems like this is a pretty big one … !

    By the way, I was reading this article and thought it might be of interest.

  49. fred Says:

    Scott #46
    ” All the equations that we know governing the evolution of subatomic particles are completely time-reversible—i.e., if you take any solution to the equations and replace t by -t then you get another valid solution.”

    If you don’t mind a question regarding this – how can time reversibility coexist with the inherent randomness of QM?

    Say I have a system with one atom that absorbs a photon at time t1 then later re-emits it at time t2 > t1.
    If I understand correctly, as of t1, time t2 is actually often pretty random (QM deals with probability, etc).

    But when we “reverse” time on this system after t2, the equations tell us that the photon reverses its direction, then it gets absorbed by the atom at t2, then it gets emitted at a later time t1′.
    But do we have t1’= t1 (we would in a purely deterministic universe)?
    In “reverse”, according to QM, t1′ should be just as random as t2 was when time was going forward, so t1′ != t1.
    But if t1′ != t1, what is the actual meaning of “the evolution of subatomic particles are completely time-reversible”, and how would that be compatible with the space-time continuum view of general relativity?

  50. Scott Says:

    fred #49:

      how can time reversibility coexist with the inherent randomness of QM?

    See the “one caveat” in my comment #46.

  51. luca turin Says:

    I am shocked by the first sentence of Tononi’s reply: “Scott Aaronson has taken upon himself to show that integrated information theory (IIT) of consciousness is wrong.” Grow up.

  52. Nex Says:

    Scott #39.

    I see my phrasing of point 4 was unfortunate since it seems to have suggested only the surrounding information at a given point in time matters, hence the camera analogy. But that is of course not true. I should have said “The amount of information that has to be analysed to predict the behavior of the entity completely,” and besides sensory information it of course includes all the relevant memories and molecular makeup of the entity which are derived from its past. This is also why I don’t find it all that different from IIT (if I understand it right) since this part can be thought of as integrated information and it dwarfs sensory information.

    So the camera is not a good counterexample since it’s not forming any memories. But of course you could modify it to extract information from each photo and then act on this integrated information in addition to sensory information, and if it were complex enough and had already taken enough photos it could be more conscious than us.

  53. Scott Says:

    luca #51: Eh, I don’t see anything wrong with that sentence (and it’s even conceivable that some readers would find whiffs of immaturity in my response 😉 )

  54. luca turin Says:

    Scott #53: Good for you! I find it condescending. Your immaturity is of a higher order 🙂

  55. Scott Says:

    Nex #52: OK, thanks for the clarification. However, your corrected idea is still very different from IIT, which is emphatic about only counting the “integrated” information, and not any other forms of information. So for example, even if you had to analyze terabytes of data in order to predict the behavior of a system, if the data was organized (say) in a 1-dimensional array like a Turing machine tape, rather than being “integrated” in the specific way that produces a large Φ-value, then IIT would say that there’s no consciousness there. It’s the arbitrariness of the proposal (to my mind) that I object to more than anything else.

    Of course, if you want to propose a rival theory that’s different from IIT, you’re welcome to do so! 🙂 However, you would need to specify the rival theory in considerably more detail than you have. For example, what does it mean to “act on” information in memory? As in the example from my last comment, is it enough that a certain bit in memory would be acted on given some environmental stimulus, or does it need to actually be acted on to count?

    More generally, you’d need to propose an answer to the same question IIT tries to answer (boldly but unsuccessfully in my opinion): namely, given a wiring diagram for any given mechanical device, how do I program my computer to tell me how much “consciousness” that device has?

  56. Scott Says:

    Darrell #48:

      Then it must be possible in principle to resurrect a dead consciousness, right?

    If you want to ask a physics question, it’s considered polite form to rephrase it so that it doesn’t involve the word “consciousness.” 😉

    Having said that: if you believe that “consciousness supervenes on the physical”—that is, that the facts about what consciousness(es) are present are entirely a function of the current state of the physical world—and you also believe in the reversibility of physical law, then yes, it must be possible in principle to resurrect a dead consciousness.

    But this isn’t so surprising, when you realize all it’s actually saying is that you could, given unlimited time and unlimited technological control over the state of the entire universe, create a configuration of atoms that was indistinguishable from that of a person who had just died.

    In practice, this might be like unscrambling the egg, etc.—i.e., an effective thermodynamic impossibility. On the other hand, if you talk to the friends of mine who wear Alcor bracelets, they don’t see such things as effectively impossible at all!

  57. Christof Koch Says:

    I would like to point out three mistakes in Scott’s response to Giulio’s comments on his Integrated Information Theory.

    (1) Scott claims that IIT states that while a properly wired 2-D array of XOR gates has a non-zero PHI, and therefore it feels like something to be this system, this is not true for a 1-D array of XOR gates. This assertion is false. Even a 1-D array of XOR gates has a non-zero Phi. Indeed, in the limit, a single photodiode connected to a 1-bit memory has an elemental quale. It feels like something to be this circuit; it is conscious of one thing – the distinction between this and not this (not of light or dark, because that requires many more concepts).

    (2) IIT does not claim that the cerebellum by itself would not be conscious. IIT does, however, explain a puzzling clinical observation – namely that patients with partial or complete loss of their cerebellum do not complain of a loss of consciousness or aspects of consciousness. Patients may become ataxic, their gait or speech may become disorganized, but they don’t appear to suffer loss of conscious experience. Thus, the empirical finding that IIT successfully addresses is that in the presence of a functioning cortico-thalamic system, the cerebellum does not contribute in a major way to conscious experience (because of its stereotyped makeup and its simplified connectivity). By itself, the cerebellum most likely does have a PHI value – as defined by IIT – different from zero.

    (3) It is nonsense to claim – as Scott does – that because my experience of something – such as 2D space – is rich and structured that this is evidence that that something is rich and structured enough to be conscious. What Giulio was pointing out was the phenomenological observation – that goes back to Kant, Husserl and the astute William James in his 1890 Principles of Psychology – that the conscious perception of space is very rich, much richer than we give it credit for. Like a fish swimming in water, we are rarely aware of this richness as it constitutes the very basis of what we see and feel and how we organize our experiences, even for most blind subjects.

    While IIT may well not correctly explain how subjective experience fits into the objective world of science, it is important to criticize IIT for the claims and inferences it actually makes.

    Christof

  58. John kubie Says:

    Concerning the evolution of consciousness. As I understand IIT, it’s a one-way street: complexity in the world produces consciousness, but consciousness has no way of affecting the natural elements that create complexity (or anything else). If true, then consciousness cannot be under selective pressure and won’t evolve. This makes the approach much less interesting.

  59. Nex Says:

    Scott #55: Ok, I see now that my understanding of IIT was wholly inadequate, sorry bout that 😛

    As for my alternative theory, I think the best way to measure the amount of “consciousness” would be by the Kolmogorov complexity of the smallest program plus dataset able to completely simulate the entity given all its possible stimuli. Of course one would have to decide if internal bits which change state but never actually influence the behavior/output of the entity should be included in the simulation or not. Personally I would say no, just for practical reasons, but I admit I haven’t given this much thought. I am mostly interested in the other, philosophical points which you did not object to, so I’ll just leave it at that.
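
    (Caveat: true Kolmogorov complexity is uncomputable, so in practice such a measure could only ever be an upper bound, e.g. the length of a compressed description. A minimal, purely illustrative sketch of that kind of proxy, which also shows the usual worry that pure noise scores highest:)

      import os
      import zlib

      def complexity_proxy(description: bytes) -> int:
          # Upper-bound proxy for Kolmogorov complexity: compressed length in bytes.
          return len(zlib.compress(description, 9))

      regular = b"01" * 5000      # a highly regular "entity description"
      noise = os.urandom(10000)   # incompressible noise of the same length

      print(complexity_proxy(regular))  # tiny
      print(complexity_proxy(noise))    # close to the raw length of 10000
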
    Thanks for the comments.

  60. Pete Says:

    Christof #57

    Your point (2) about the cerebellum still appeals to the traditional notions of consciousness as proof in the “cerebellum is not very important to consciousness” argument. Scott’s point is that you can’t use this natural experiment (the loss of part or all of the cerebellum) to support your theory, because you are then appealing to the traditional notion of consciousness here while rejecting it in your argument about the 2-D grid. You want to have your consciousness and eat it too.

  61. Scott Says:

    David Chalmers #35, #36: OK, I remembered the two other points of yours that I wanted to respond to.

    First, regarding large Φ possibly being not even a necessary condition for consciousness, let alone sufficient: we think alike! I made the same point in comment #18 above.

    Second, regarding the people who think of consciousness as defined by paradigm-cases, generally being the same people who deny that there’s any distinctive Hard Problem in the first place: well, I guess the first thing to say is that, if I’m pioneering a new but logically-consistent combination of positions, then so be it! There are worse fates. 🙂

    More seriously, I think it’s crucial here to make the distinction between consciousness and “apparent consciousness” that I made in comment #37, and I apologize if I wasn’t clear about it before.

    Unlike the eliminative materialists you mention, I’d grant that it’s possible that there’s an underlying fact of the matter about some physical systems being “truly conscious” and others not, so that disputes about which systems fall into which category can’t be resolved even in principle by simply agreeing to use different words. I.e., to expand on your “divine” example, maybe God just grants souls to some physical systems and not to others, and we can disagree about which systems, but (by divine decree!) there’s no possible disagreement about what the question means—so that, for example, it makes perfect sense to ask whether a tree has a soul, even if the answer is “no.”

    But I’d also say that “true consciousness,” in the above sense, is empirically inaccessible almost by definition. And therefore, I’d say, if we’re trying to make a connection (as IIT is) to the actual facts of neuroscience, or to what we do or don’t or should or shouldn’t regard as being conscious in everyday life, then our only hope is to switch over to what I called “apparent consciousness” (which is close, if not identical, to intelligent behavior). And unlike with “true” consciousness, I’d say that we do have a tenuous empirical grip on this notion of apparent consciousness—but the tenuous grip that we have seems to me to consist almost entirely of paradigm-cases (e.g., that we have apparent consciousness while rocks do not).

  62. fred Says:

    Christof #57
    The idea that “a single photodiode connected to a 1-bit memory has an elemental quale” would make sense – a single neuron would have an elemental consciousness.
    Similarly there’s no way to tell whether the consciousness I regard as the “I” is the only one living in my brain – there could be thousands of them, adding up bottom up, from the neurons, all the way up to the entire brain. I picture the analogy of a big swarm of birds,
    https://www.youtube.com/watch?v=hzMfOTKxBwA
    sometimes moving in the same direction (when the mind is focused to a single point of attention) or getting diluted and moving in different simultaneous directions (thoughts constantly bubble up).
    This feeling is familiar to anyone who has seriously experimented with meditation.

  63. Christof Koch Says:

    Regarding the link between evolution and consciousness. IIT takes no position on the function of experience as such – similar to physics not having anything to say about the function of mass or charge. However, by identifying consciousness with integrated information, IIT can account for why it evolved. In general, a brain having a high capacity for information integration will better match an environment with a complex causal structure varying across multiple time scales, than a network made of many modules that are informationally encapsulated. Indeed, artificial life simulations (“animats”) of simple Braitenberg-like vehicles that have to traverse mazes and whose brains evolve, over 60,000 generations, by natural selection, show a monotonic relationship between the (simulated) minimum of integrated information and adaptation (Edlund et al 2011; Joshi et al. 2013).

    That is, the more adapted individual animats are to their environment, the higher the integrated information of the main complex in their brain. Furthermore, ongoing work (Albantakis et al. 2014) demonstrates that over the course of the animats’ adaptation, both the number of concepts and integrated conceptual information increases. Thus, evolution by natural selection gives rise to organisms with high PHI because they are more adept at exploiting regularities in the environment than their less integrated competitors.

  64. Christof Koch Says:

    Pete #60

    Well, I want to explain the known facts about consciousness. One is my own phenomenal experience – including that of space; two, there are a large number of neurological facts about consciousness (Francis Crick and I called them the neuronal correlates of consciousness or NCC). One of them is that the cerebellum doesn’t appear to be terribly important for consciousness.

    If you take IIT as a theory seriously, it makes a number of non-trivial predictions. One is that even a silent cortex, in which neurons are currently not firing, can generate a conscious experience. Indeed, this is what might be happening in some forms of meditation. IIT would also predict that if you were to artificially silence cortex – say by anesthetizing the brain – you would not experience anything. Note that in both cases, the critical parts of cortex would not be generating relevant spiking activity. In the former they could, but do not (similar to the dog that didn’t bark in the famous Sherlock Holmes story), while in the latter neurons can’t fire (because of the anesthetic agent). Two is the very counterintuitive prediction (here I agree with you, Scott, and others) that even a simple network of coupled logical gates – simple if you use an intrinsic measure of complexity like Kolmogorov complexity – generates concepts, and a quale; that it feels like something to be such a grid.

  65. Scott Says:

    Christof #57: Thanks for the comment! As I said in my last post, I liked your Consciousness memoir a lot and am happy to have you here.

    To respond to your points in order:

    (1) No, I never said that the Φ of a 1D line is zero. I understand full well that it isn’t—that’s why I said it was O(1) (a constant independent of n), rather than 0. However, it remains the case that the Φ of a 1D line is tiny—about as small as it would be for (say) a photodiode with just two elements. In contrast to the Φ of a 2D grid, which we can increase arbitrarily by scaling the grid, we can’t increase the Φ of a 1D line at all by lengthening the line. So, if we take Φ seriously as a measure of the “amount of consciousness” present in a system, then it seems to me that we have to say that a 1D line has only a “tiny amount” of consciousness, in contrast to the 2D grid, which can have an unboundedly large amount (scaling with the square root of the number of vertices). And that suffices for all of my arguments to go through. Nowhere did I need, or assume, that the Φ of a 1D line is literally zero.

    (2) Regarding the cerebellum, I was going to respond, but Pete #60 already said exactly what I wanted to. Thanks, Pete!

    (3) I freely admit that I might have misunderstood Giulio’s phenomenological argument—it was far from obvious to me where he was going with it. (And I say that as someone who read and enjoyed James’s Principles of Psychology.) And of course I completely agree with you that it’s nonsense to pass from my experience of something being rich and structured to that something being rich and structured enough to be conscious—that was my point! However, if that inference wasn’t Giulio’s intended argument, then I’m still left wondering what the intended argument was. Several commenters, above, said that the argument involves the important role played by 2D structures in the brain.

    So then again I find myself wondering: what would it take for Giulio’s phenomenological argument to apply as well to a uniform 1D array of logic gates? (The mathematician in me can’t help asking: what’s so special about two dimensions in the argument, besides of course the 2D grid having a larger Φ-value than the 1D line, which you can’t invoke here on pain of circularity?) Would it be enough for our experience of a 1D line to be rich and structured, and also for 1D structures to play important roles in brain function? If so, then do axons count?
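
    To make the scaling I’m relying on in (1) concrete, here’s a crude sketch (emphatically not an actual Φ computation, just a stand-in for the cut-based intuition): count the fewest wires you’d have to cut to split the network into two equal halves. For a 1D line that number never grows; for a √n×√n grid it grows like √n.

      from itertools import combinations

      def path_edges(n):
          return [(i, i + 1) for i in range(n - 1)]

      def grid_edges(k):
          # k x k grid, vertices numbered row-major
          e = []
          for r in range(k):
              for c in range(k):
                  v = r * k + c
                  if c + 1 < k: e.append((v, v + 1))
                  if r + 1 < k: e.append((v, v + k))
          return e

      def min_balanced_cut(n, edges):
          # Brute force: fewest edges crossing any split into two equal halves.
          best = len(edges)
          for half in combinations(range(n), n // 2):
              side = set(half)
              best = min(best, sum((u in side) != (v in side) for u, v in edges))
          return best

      for n in (4, 8, 16):
          print("1D line, n =", n, "->", min_balanced_cut(n, path_edges(n)))          # always 1
      for k in (2, 4):
          print("2D grid, n =", k * k, "->", min_balanced_cut(k * k, grid_edges(k)))  # equals the side length sqrt(n)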

  66. Darrell Burgan Says:

    Scott #56:

      Having said that: if you believe “consciousness supervenes on the physical”—that is, that the facts about what consciousness(es) are present are entirely a function of the current state of the physical world …

    To be honest, I have never even considered the possibility that consciousness was not a feature of the physical. If it’s not part of nature, what is it? And if so, doesn’t that mean IIT and theories like it are doomed to fail?

  67. John kubie Says:

    Re: Christop Koch #63.
    I have little doubt that brains with lots of complexity and phi have selective advantage. But, as I see it, that would be the case even if there were no consciousness; that is, if the high phi brain were in a robot or zombie. The consciousness that IIT is supposed to produce doesn’t do anything. No apparent value. In my mind, consciousness must DO something, must provide some value (even if what it does is deterministic) to even be interesting.
    Begins to make me think Dennett is right; everything will be in the known material domain, and consciousness will disappear when we understand it.

  68. fred Says:

    Scott #65
    “what’s so special about two dimensions in the argument, besides of course the 2D grid having a larger Φ-value than the 1D line, which you can’t invoke here on pain of circularity?”

    Similarly, what’s so special about 3-dimensional matching that makes it so much harder to solve than 2-dimensional matching? 😛

  69. Scott Says:

    fred #68:

      Similarly, what’s so special about 3-dimensional matching that makes it so much harder to solve than 2-dimensional matching?

    Well, at least that one has an answer. It’s better to start with the related, but easier, question of why 3SAT (and k-SAT for all k≥3) are so much harder than 2SAT. There, the short answer is that having 3 variables per clause lets you encode the logic of a Boolean gate (like AND, OR, or NAND), which has two inputs and one output (2+1=3). So then you can encode arbitrary computations, and the satisfiability of arbitrary Boolean formulas. By contrast, with 2 Boolean variables per clause, you can rewrite each clause (A∨B) as a simple pair of logical implications (not(A)→B and not(B)→A), and then your formula is satisfiable if and only if no variable has implication paths both from it to its negation and from its negation back to it. It’s as if you only have 1-bit gates (like the NOT gate) available, which are not enough for universality.

    (By contrast, though, note that if you generalize 2SAT beyond Boolean variables, then you get problems like 3-COLORING, which are NP-complete.)

    For 2- vs. 3-dimensional matching … eh, it’s late, go ask someone else. 🙂
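
    If anyone wants to see the 2SAT argument as code, here’s a minimal sketch (using brute-force reachability on the implication graph rather than the linear-time strongly-connected-components method): each clause (a ∨ b) contributes the implications not(a)→b and not(b)→a, and the formula is unsatisfiable exactly when some variable can reach its negation and be reached from it.

      def two_sat(clauses, n_vars):
          # Literals are encoded as +v / -v for v in 1..n_vars.
          edges = {}
          def add(u, v):
              edges.setdefault(u, set()).add(v)
          for a, b in clauses:
              add(-a, b)   # clause (a or b) gives  not a -> b
              add(-b, a)   #                   and  not b -> a

          def reaches(src, dst):
              seen, stack = set(), [src]
              while stack:
                  u = stack.pop()
                  if u == dst:
                      return True
                  if u not in seen:
                      seen.add(u)
                      stack.extend(edges.get(u, ()))
              return False

          # Unsatisfiable iff some x has paths x -> not x and not x -> x.
          return not any(reaches(v, -v) and reaches(-v, v) for v in range(1, n_vars + 1))

      # (x1 or x2), (not x1 or x2), (x1 or not x2), (not x1 or not x2): unsatisfiable
      print(two_sat([(1, 2), (-1, 2), (1, -2), (-1, -2)], 2))   # False
      # Drop the last clause and it becomes satisfiable (x1 = x2 = True)
      print(two_sat([(1, 2), (-1, 2), (1, -2)], 2))             # True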

  70. fred Says:

    Scott #69 Thanks! Very interesting!

  71. Scott Says:

    Darrell #66:

      To be honest, I have never even considered the possibility that consciousness was not a feature of the physical. If it’s not part of nature, what is it? And if so, doesn’t that mean IIT and theories like it are doomed to fail?

    Well, you can accept that the physical world is “causally closed”—i.e., that all physical events are governed by the laws of physics—but then hold that, in addition to the facts about the configurations of quantum fields, etc., there are also “phenomenological” facts, facts about “what something looks like, tastes like, etc.,” which (whether one wants to call them “physical” or not) aren’t reducible to the other kinds of physical facts. And one can hold that without believing in souls, angels, heaven, or any of that other frou-frou. 🙂 This is the position that David Chalmers explores at length in his book.

    Now, if you hold this position, then naturally you’ll want a theory that at least tells you which physical systems do and don’t have any phenomenological facts to tell about them. Even if you don’t hold the position—if you reject it entirely—you might still enjoy toying with such theories, but in that case, you wouldn’t see the theories as even attempting to get at any “fundamental” facts about the world; they would merely be telling you about various types of higher-level structures.

    Now, in either case—whether you see a consciousness-theory as trying to tell you about actual “phenomenological facts,” or merely about the higher-level structure that we like to call consciousness—if your theory assigns unbounded amounts of consciousness to expander graphs and 2D grids, then I personally would want to go back to the drawing board. 🙂

  72. Christof Koch Says:

    Scott #65

    Thanks for having me on your blog. I enjoy the debate.

    So we agree on a 1-D array having a small but non-zero Phi value, while Phi for a 2-D array increases as \sqrt{# of nodes}. Why this is so ultimately has to do with the geometry of space. Remember that 2-D space gives rise to all of classical Euclidean geometry, while a line gives rise to not much more than points.

    2-D grids are of interest to us as neurobiologists because we find that the mammalian brain is full of them, in particular in the neocortex, whether visual, auditory or somatosensory. Indeed, you can think of the cerebral cortex as a 2+epsilon dimensional computational tissue whose area varies by a factor of 50,000 (from the tree shrew to the blue whale), while its thickness only varies by a factor of 2-4 across all mammals. And one of the neurological facts we’re very sure of is that neocortex provides all of the content of consciousness.

    Your point about 1-D grids having a tiny Phi is an interesting observation but doesn’t invalidate the fact that as species with a sheet-like neuronal architecture we have a lot of consciousness. If we ever find creatures like those in Abbott’s Flatland, they would have only a tiny consciousness!

    Scott – what do you think about the prediction that, according to IIT 3.0, any feedforward network – such as those used by the deep learning community – has an associated Phi value of zero and is therefore a true zombie, while an equivalent network, which performs the same function as the feed-forward one but is heavily interconnected, has a non-zero Phi and is therefore conscious? It implies that you can’t merely judge consciousness by input-output behavior but you have to look at the underlying circuitry.

    Pete #60 and Scott #65

    IIT seeks to explain the known facts about consciousness. One is my own lived phenomenal, conscious experience. Another set of facts are neurobiological observations concerning the NCCs. One of them is that the cerebellum isn’t terribly important for consciousness. I would say any theory of consciousness would need to explain such facts.

    Darrell #66 and Scott #71

    In IIT, consciousness is supervenient upon the physical. That is, if you change the underlying mechanism and its state, the conscious experience will change. There is no Cartesian res cogitans floating about somewhere. On the other hand, IIT starts with the Cartesian deduction – the most famous in Western thought – je pense, donc je suis (I think, therefore I am). In modern language, the only way I know about existence, the only way I know about anything, is thanks to my conscious experience. Descartes even admits that an evil demon could confuse him about what he’s conscious of, but that he is conscious is not in question. And that comes prior to physics. I can only infer the existence of mass and energy, of people, but I have direct acquaintance with my conscious experience. In that sense, consciousness needs an additional explanation beyond that contained in QM and GR.

    John #67

    Good question.

    Dennett, in his ironically named “Consciousness Explained” book, basically denies that there is anything special to explain, as there is nothing it is like to be conscious. It’s a set of sensory-motor contingencies about which people are confused. Indeed, Dennett compares it to ‘fame’. As I stated above, I could not disagree more strongly (try telling me this when I’m in pain due to an infected tooth). Most people do.

    Let’s compare consciousness to electrical charge. No physicist asks for an evolutionary explanation of electrical charge. Charge is a fact and there are laws, such as 1/r^2, that describe its effect. So even if you accept that consciousness is real and is associated with highly evolved organisms such as you and me, it may not have been directly – but only indirectly – selected for.

    You could even argue that it is epiphenomenal – as Huxley, Darwin’s bulldog, famously did; that is, it has no direct function. But even then, we need a theory that explains who has it and who doesn’t, how it relates to various bits and pieces of the brain, and so on.

    There is a new development linking the maximal irreducible conceptual structure generated by a complex (that is the consciously experienced quale per IIT) to the maximal causally effective structure in the sense of Judea Pearl. See, for instance, Hoel, Albantakis & Tononi Proc. Natl. Acad. Sci 2014.

  73. Abel Says:

    John kubie #67: Your point seems (at least to me) to be mostly about the difference between consciousness and intelligence, and whether they are necessary for each other – I’d lean towards a negative answer in both cases.

    To make my interpretation of those terms a bit clearer, the first one I take to be “whatever can be determined by the Turing Test” – as such, it is a black-box characteristic of a process, determined exclusively by the relation between its inputs and outputs over time. My intuition is that an intelligent process need not be conscious, because of lookup-table arguments. More exactly, you could theoretically (though most likely not physically within the universe) have a table that maps every input to the process to an output a human would consider intelligent. Then the process would pass a Turing Test, but it would just be performing lookups on the table, which doesn’t quite seem like a conscious activity to me.
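
    (A toy version of the lookup-table point, purely for the shape of the argument; a real table covering every possible conversation would of course be astronomically large:)

      # "Blockhead"-style responder: every input is answered by table lookup alone.
      TABLE = {
          "Hello": "Hi there! How are you today?",
          "What is 2+2?": "4, of course.",
          "Are you conscious?": "That's a deep question. What do you think?",
      }

      def respond(prompt: str) -> str:
          return TABLE.get(prompt, "Hmm, tell me more.")

      print(respond("Are you conscious?"))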

    As for consciousness, my intuition is that it is better defined as an internal characteristic of a process, and by how the information within that process is propagated/stored. It seems like you propose that consciousness should imply intelligence to some extent, but I’m not sure about that – I’d expect a human who decides not to talk or move or generate much output to fail a Turing Test, but I wouldn’t judge them less conscious because of it.

  74. David Chalmers Says:

    Scott #61: I’m certain that Tononi intends IIT as a theory of true consciousness, not as a theory of apparent consciousness. Do you mean to say that if one construes it that way, your objection isn’t that it makes counterintuitive predictions but that any theory of true consciousness is empirically unverifiable? If so then the expander will drop out as an objection to the theory Tononi intends. I suspect that you don’t really mean this and that you think that the counterintuitiveness still counts against the theory as a theory of true consciousness. But then I think you need to say a bit more about just why and how counterintuitiveness counts against it, in light of the considerations above. At least, the paradigm-case considerations that apply to apparent consciousness won’t apply so well here.

    Also, I’d think that the question of which physical systems have apparent consciousness is pretty easy rather than pretty hard. Maybe it’s even easier than the standard easy problems. One could presumably answer the problem by doing some experimental psychology/philosophy concerning people’s intuitions about which systems are conscious. Lots of people have recently done this sort of thing (Joshua Knobe, Jesse Prinz, Justin Systma, Dan Wegner are some who come to mind) to see which factors matter most. A traditional philosopher could even pursue this project by studying their own intuitions from the armchair. Of course the question of finding the mechanism that supports apparent consciousness, in humans say, is a harder one — now about as hard as the standard easy problems!

    For what it’s worth, I don’t think that theories of true consciousness are completely empirically unverifiable. At the very least we have an array of observed data about consciousness from the first-person case. Those data may be limited but they put strong constraints on a theory, and as yet we’re not close to having a theory that can even explain these data. I suspect that most existing theories are already falsified by these data. One can naturally enough extend this to the project of accommodating all human first-person data about consciousness (this requires accepting that other people aren’t zombies and that their reports reflect their consciousness to some extent; but this isn’t a crazy assumption and all science requires some assumptions to get off the ground). The project of coming up with the simplest universal theory that accommodates/predicts human first-person data about consciousness is itself an interesting problem — maybe another intermediate “pretty hard problem”. It may well be that a theory of this sort may be as good as we can do in “solving” the other pretty hard problem, i.e. giving a theory of which systems have which sorts of consciousness. Of course its predictions about (true) consciousness in nonhuman systems may well be in some sense untestable, so perhaps we’ll never have full confidence in this theory, but that doesn’t mean we’ll have no confidence in it.

  75. David Chalmers Says:

    Jay #40: Your “middle way” is just the sort of methodology that I propose. Certainly verbal reports play a crucial role in informing us about the conscious states of others, and we can then integrate all this with brain data across a range of cases to build a theory of consciousness. See my “How Can We Construct a Science of Consciousness?” (http://consc.net/papers/scicon.pdf) for how I’d put all this together.

  76. domenico Says:

    I am thinking that each sensorial net with inner energy could have consciousness; the measure of a great consciousness could be proximity to a critical point (so that there is a difference with the dimensionality of the net). I see two types of possible consciousness: hardware (biological or physical) and software (a critical program for random hardware errors), with different realizations.
    If there is a critical index, then the net shows a great dynamical change of a measured quantity under a little perturbation (free will?), and there is no physical measure of the consciousness, only the measure of the large-scale effect of a little perturbation (with or without an external stimulus).
    An Ising net near the critical point has a measurable large-scale effect like an unpredictable awakening, like the change in the neural signal in the brain when we think, but there is no inner energy source to trigger the system independently of an external stimulus.

  77. John Kubie Says:

    Christof Koch #72. Many thanks for the thoughtful reply. I agree with what you say. I’ll chase the links.

    A critique I have of IIT is that it has too few postulates (or is it axioms?). That is, its phenomenal constraints are narrow, confined to the limited domain that corresponds to sensory perception — “what it feels like …”.

    If consciousness has a function (as I hope it does), one should look for the output/action correlates, such as the decision to act or the feeling of force or the feeling of exhaustion. Using action to guide and constrain theory is teleological, but we need all the help we can get. If consciousness is causal, it must correlate with output. Rather than look for a correlation with “feeling”, look for a correlation with action. Roughly the “will” or “free will” side of consciousness.

    The “will” side of consciousness does not cover all behavioral output. We have little direct conscious awareness of the ANS, endocrine output or spinal reflexes. We feel we have direct awareness of voluntary action and setting up action strategies. We feel we have conscious control over action sets of striated muscle, but not other muscle types. Are there hints here? Yes, I think. The conscious action sets tend to be whole-body and integrated, requiring sensory-motor control and fine-tuned feedback. These are the decisions that require integrated information. (I’m rambling and I’m tired; I hope I’ve made part of the argument.)

    Abel #73, consciousness and intelligence. I think I agree, if by “intelligence” you mean clever output. That is certainly what the Turing Test measures. I also agree that there is no obvious reason why clever output cannot be produced by a machine.

    Final comment. As I recall, Tononi’s rejection of behavioral constraints is that individuals who suffer the “locked-in syndrome” have consciousness. My argument is that people who are “locked in” have had a long life-history of action. In the locked-in state they can imagine and feel action.

  78. John Says:

    A good read all round. One thing that comes to mind, though, is that changes in behavior depending on dimension are not all that unfamiliar. For example, the Ising model exhibits rich behaviour including magnetization and phase transitions in 2D, all of which vanishes when you go to 1D. In this sense, it doesn’t bother me so much a priori that the phi values for the 1D and 2D cases are very different.
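
    (For reference, the textbook results behind that analogy: the 1D nearest-neighbour Ising model has no phase transition and no spontaneous magnetization at any temperature \(T > 0\), while the 2D model does, with Onsager's critical temperature \(k_B T_c = 2J / \ln(1 + \sqrt{2}) \approx 2.27\, J\).)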

  79. annon Says:

    Scott #65:
      …what’s so special about two dimensions in the argument…

    There may be a connection to the problem of probabilistic inference in continuous domains.
    Considering, e.g., the simplest pure Gaussian setting, we have that Gaussian Markov random fields over \( \mathbb{R}^d\) are by definition Gaussian processes with a covariance function from \( \mathbb{R}^d \times \mathbb{R}^d \to \mathbb{R}\). Inference in this model is just the computation of a conditional Gaussian. So if there are \(n\) observations, one needs to invert an \(n \times n\) matrix independently of \(d\)… except when \(d=1\). In one dimension a Gaussian process can typically be transformed into a state space representation (exactly so for Markovian kernels, approximately otherwise) for which the inference is \(O(n)\) for \(n\) observations (Kalman filtering and smoothing).
    So for \(d=1\), random fields (undirected graphical models) are reducible to sequential directed models.
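
    To make the \(d=1\) shortcut concrete, here is a minimal numerical sketch (illustrative only, and specific to the Matérn-1/2, i.e. Ornstein–Uhlenbeck, kernel, which has an exact state-space form): the \(O(n^3)\) batch GP posterior mean and an \(O(n)\) Kalman filter plus RTS smoother on the equivalent state-space model agree to numerical precision.

      import numpy as np

      # GP with Matern-1/2 (Ornstein-Uhlenbeck) kernel: k(t, t') = s2 * exp(-|t - t'| / ell)
      s2, ell, noise = 1.0, 0.5, 0.1 ** 2
      rng = np.random.default_rng(1)
      t = np.sort(rng.uniform(0.0, 5.0, size=200))
      y = np.sin(2 * t) + rng.normal(scale=0.1, size=t.size)

      # O(n^3): batch GP posterior mean at the training inputs
      K = s2 * np.exp(-np.abs(t[:, None] - t[None, :]) / ell)
      gp_mean = K @ np.linalg.solve(K + noise * np.eye(t.size), y)

      # O(n): equivalent state-space model, Kalman filter + RTS smoother
      # x_{k+1} = phi_k * x_k + w_k,  phi_k = exp(-dt_k / ell),  Var(w_k) = s2 * (1 - phi_k^2)
      n = t.size
      m_p, P_p = np.zeros(n), np.zeros(n)   # predicted mean / variance
      m_f, P_f = np.zeros(n), np.zeros(n)   # filtered mean / variance
      phi = np.ones(n)
      m, P = 0.0, s2                        # stationary prior
      for k in range(n):
          if k > 0:
              phi[k] = np.exp(-(t[k] - t[k - 1]) / ell)
              m, P = phi[k] * m, phi[k] ** 2 * P + s2 * (1 - phi[k] ** 2)
          m_p[k], P_p[k] = m, P
          gain = P / (P + noise)            # measurement update with y[k]
          m, P = m + gain * (y[k] - m), (1 - gain) * P
          m_f[k], P_f[k] = m, P

      m_s = m_f.copy()                      # backward (RTS) smoothing pass
      for k in range(n - 2, -1, -1):
          G = P_f[k] * phi[k + 1] / P_p[k + 1]
          m_s[k] = m_f[k] + G * (m_s[k + 1] - m_p[k + 1])

      print(np.max(np.abs(m_s - gp_mean)))  # tiny: the two computations agree up to round-off
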
    Well… just a thought 🙂

  80. James Cross Says:

    I take the evolutionary perspective.

    Consciousness began with hard-wiring – “learning” through evolution – for the natural selection advantages of enhanced perception and ability to predict the environment. The nuts and bolts of qualia are probably hard-wired. The more advanced form of consciousness that apes, dolphins, magpies, and we have – self-reflection – has evolved in response to the selection pressures of social organization.

    Consciousness may be an evolutionary product of natural selection advantages provided by increased control of the body, better perception of the environment, and increased ability to predict the outcomes of interaction with the environment. The final development of what we might more properly think of as consciousness occurred when we needed to predict the outcomes of interactions with other members of our species. The development of consciousness may be more or less equivalent in the grand scheme of things to the evolution of eyes or ears. Quite remarkable and amazing but not something we should regard as sui generis.

    http://broadspeculations.com/2014/06/01/consciousness-much-ado-about-almost-nothing/

  81. fred Says:

    I’m really surprised there is no mention of self-reference in this thread.
    In my opinion, Hofstadter’s book “I Am a Strange Loop” went as far as one can hope toward explaining the mystery of consciousness.

  82. John Kubie Says:

    James Cross #80. Evolutionary perspective. I agree; the properties of phenomenal consciousness are great; it would seem broadly useful for an organism to have access to phenomenal consciousness to predict the future, make decisions and act. The problem, as described above, is how can phenomenal consciousness affect action? What is the output? How can consciousness causally influence behavior? There’s the rub.

  83. Scott Says:

    fred #81: See the previous IIT thread for discussion of self-reference. (And now this thread has become both self-referencing—in this very sentence, no less—and other-thread-referencing.)

  84. Scott Says:

    John #78 and annon #79: Yes, of course there are all sorts of phenomena in statistical physics, computational complexity, graphical models, topology and more that change dramatically when you go from 1 dimension to 2. And there are other phenomena that change dramatically when you go from 2 dimensions to 3, and still others that change dramatically when you go from 3 to 4. And I understand exactly why Φ depends so much on the difference between 1 and 2 dimensions, and less on further dimension increases.

    But the question at hand is whether any of this has anything to do with consciousness. And, in trying to make a connection, it’s crucial that we don’t circularly suppose that consciousness is measured by Φ. Instead, one needs to start with the paradigm-cases of consciousness (or just possibly, with phenomenology or something else—but it ought to be convincing), and then derive that 2 dimensions are more conscious than 1 dimension. And that’s the case that I don’t think has been made. At this point, I could just as easily believe that 1 and 2 dimensions are equally unconscious, but that the jump to three dimensions is what lights the fire of Mind. 🙂

  85. Alexander Vlasov Says:

    I have briefly looked at the 2014 version 3.0 and find diagram 21 rather illustrative. IMHO, the left, “conscious” side resembles the style of description of a programming language by Wirth in his books about Pascal. The right, “unconscious” one resembles YACC-generated output, hardly readable by a human, but very effective for a computer.
    It may be compared with the task of keeping equilibrium – often rather impossible to do consciously on ice (and so something like the fast YACC-generated structure is needed), but to learn it, it is necessary first to have a lot of experience gained in a conscious way (and here the “slow”, but understandable, Wirth style is convenient).

  86. Scott Says:

    David Chalmers #74: Thanks, as usual, for making me think! Here are two clarifications of my view that might be helpful.

    (1) I think that even if we restrict ourselves to apparent consciousness, the Pretty-Hard Problem is still pretty hard, 🙂 for the following reason. My position is that intuition renders a clear verdict for certain paradigm-cases (e.g., we’re conscious and rocks are not), but fails to do so for other cases (fish, fetuses, coma patients, AIs). And our only hope for resolving those other cases is to come up with a theory that’s simple and elegant, that fits the paradigm-cases, and that also can be applied to the other cases. And I expect this to be a highly nontrivial undertaking if it’s possible at all—not a mere matter of experimental psychology (no offense to experimental psychologists 🙂 ).

    So, to return to my example from the post, to decide whether x sin(1/x) was continuous or not, mathematicians didn’t resort to a poll of one another’s intuitions about the continuity of x sin(1/x). Instead, they looked at the cases where intuition rendered a much clearer verdict—e.g., x² and the step function—formulated a general definition of continuity that gave the “right” answers for those cases, and then applied the definition to the hard cases. And at least at the time, this was a highly nontrivial accomplishment. I expect that doing something similar for (apparent) consciousness would be much, much harder.
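
    For concreteness (a standard textbook check, not part of the original discussion): take \(f(x) = x\sin(1/x)\) for \(x \neq 0\) and \(f(0) = 0\). Then

    \[ |f(x) - f(0)| = |x\sin(1/x)| \le |x| < \varepsilon \quad \text{whenever } 0 < |x - 0| < \delta = \varepsilon, \]

    so \(f\) is continuous at 0 under the epsilon–delta definition, whereas the step function fails the same test at its jump.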

    (2) I acknowledge the logical possibility that there might be something other than paradigm-cases that could provide data relevant to the Pretty-Hard Problem. However, after thinking it over a bit, I believe I can put my finger on why I consider phenomenology inherently extremely limited as a source of such data. With an ordinary scientific experiment, it’s not just that it’s in principle open to anyone to observe the results of the experiment for themselves. Rather it’s that the results are common knowledge. So for example, if I try talking to a rock and the rock just sits there and doesn’t do anything in response, you and I can both see that, but in addition, I see that you see it, and you see that I see that you see it, etc. This allows for the formation of a powerful shared intuition that the rock isn’t conscious.

    By contrast, suppose someone says that, if you spend enough time introspecting about your subjective experience of staring at a blank wall, then you’ll come to the conclusion that a 2D grid is conscious. And suppose, for argument’s sake, that that’s actually true for 99% of people: almost everyone, after introspecting long enough, can indeed reach a certain meditative state where they “just see” the consciousness of a 2D grid. Even then, the achievement of this meditative state won’t be common knowledge: you may achieve it, and I may achieve it, but I won’t know that you’ve achieved it or vice versa.

    And worse yet, suppose there’s someone (like me, for example) who fails to achieve the meditative insight. In that case, there seems to be almost nothing anyone else can do, even in principle, to debug what went wrong with my attempted replication of the phenomenological experiment. Maybe I just don’t have the inner gift for “seeing” why a 2D grid is conscious, and that’s all there is to it.

    So if I’m right, and if phenomenology is inherently extremely limited (if not completely useless) as a source of data about consciousness, then that basically brings us back to a consideration of paradigm-cases. Or is there some other source of data, besides phenomenology and paradigm-cases?

  87. fred Says:

    Scott #84
    One could make a case that a necessary structure for consciousness is any sort of graph that can represent a hierarchy (at a minimum, a binary tree?) – so that lower “symbols” can be summed up into higher “symbols”.
    If “2D” means that each node can have 2 children, then we at least need two dimensions to represent such trees?

  88. fred Says:

    Scott #86

    “And worse yet, suppose there’s someone (like me, for example) who fails to achieve the meditative insight.”

    You’re a zombie. I knew it.

  89. Scott Says:

    Christof #72:

      what do you think about the prediction that according to IIT 3.0, any feedforward network – such as those used by the deep learning community – has an associated Phi value of zero and is therefore a true zombie, while an equivalent network that performs the same function as the feed-forward one but is heavily interconnected has a non-zero Phi and is therefore conscious. It implies that you can’t merely judge consciousness by input-output behavior but have to look at the underlying circuitry.

    First of all, I don’t agree with calling such things “predictions,” since I don’t see any way even in principle that they can be tested. I’d call them “implications” or “consequences.”

    Now, precisely because they aren’t empirically testable, the only way I can see to judge such implications of IIT is by reference to the prior intuitions about consciousness that we wanted a theory to capture in the first place. And by that standard, I don’t like this implication at all. In fact, I like it just as little as I like the implication that 2D grids and expander graphs should be unboundedly conscious—and the only reason why I didn’t focus on the “no feedback, no consciousness” implication in my posts, was that I didn’t completely understand it, so didn’t think I could formulate quite as crisp of an argument about it.

    For one thing, could you please remind me which aspect of the (current, 3.0?) definition of Φ makes it 0 for feedforward networks? If we used the definition from my first post—a definition Giulio didn’t object to, except for it being a little outdated—Φ would not be zero for feedforward networks. Specifically, I talked about minimizing, over all bipartitions (A,B), the (normalized) sum of the mutual information between A’s inputs and B’s outputs, and that between A’s outputs and B’s inputs. And this can obviously be nonzero, even if (as in my Vandermonde example) all the computation goes in just a single direction, from inputs to outputs.
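
    (In symbols, just restating the prose definition above, with the normalization left schematic:
    \[ \Phi \;=\; \min_{(A,B)} \frac{I(A_{\mathrm{in}} ; B_{\mathrm{out}}) + I(A_{\mathrm{out}} ; B_{\mathrm{in}})}{\mathrm{norm}(A,B)}, \]
    which can clearly be nonzero even when all the computation flows in one direction.)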

    Now, I agree that we could play around with the definition of Φ to make it zero for feedforward networks, if we wanted to (and maybe that’s what the current version of IIT does?). But the fact that we can do this doesn’t particularly impress me. The reason is that we can take any “feedforward” network we like and turn it into a network with feedback, by the simple device of hooking the outputs back up to the inputs. Indeed, that’s just what I suggested to do in clarification (6) of my original post.

    So, I’m supposed to accept that this trick—wiring the outputs back up to the inputs—is what makes the difference for consciousness? A deep learning network (even if it passes the Turing Test, etc.) can never be conscious, but if I apply the Vandermonde matrix over and over to my input vector in a “feedback loop”—mapping w to Vw, then Vw to V²w, then V²w to V³w, etc.—that’s conscious? (Just how high a power do I need to raise it to, anyway, before consciousness ensues?)
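
    A toy sketch of that “feedback loop”: repeatedly applying an n×n Vandermonde matrix over a prime field. The modulus, starting vector, and iteration count here are arbitrary illustrative choices:

        def vandermonde_feedback(w, p=101, steps=5):
            """Repeatedly map w -> Vw (mod p), where V is the n x n Vandermonde
            matrix whose i-th row is (1, i, i^2, ..., i^(n-1)) for i = 1..n."""
            n = len(w)
            V = [[pow(i, j, p) for j in range(n)] for i in range(1, n + 1)]
            for _ in range(steps):
                w = [sum(V[r][c] * w[c] for c in range(n)) % p for r in range(n)]
            return w

        print(vandermonde_feedback([3, 1, 4, 1, 5]))  # the vector V^5 w mod 101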

      IIT seeks to explain the known facts about consciousness. One is my own lived phenomenal, conscious experience. Another set of facts are neurobiological observations concerning the NCCs. One of them is that the cerebellum isn’t terribly important for consciousness. I would say any theory of consciousness would need to explain such facts.

    Once again, my point is that, if it’s a “fact” that my cerebellum isn’t terribly important for consciousness, then it’s certainly also a fact that (let’s say) my skin isn’t terribly important for consciousness—even if my skin cells happen to be arranged in a 2D lattice in which every cell talks to its immediate neighbors. I don’t see any principle by which nontrivial facts from neuroscience should get “privileged” over blatant facts from physiology and everyday life. Facts like the unconsciousness of the skin (or more generally of regular 2D lattices), it seems to me, are left out of the neuroscience textbooks only because the authors consider them too obvious to state.

  90. Scott Says:

    Jay #40:

      Amusing checkmate to IIT, or at least to IIT in its present form. But maybe that was too easy? Could you make one suggestion or two to those who would like to try to patch the various holes and self-contradictions of this theory?

    First of all, I’m not sure you get to call my refutation of IIT “too easy,” while others are still arguing that my refutation is wrong and IIT is fine. 😉

    As I said in the first post, my disagreement is not merely with the details, but with the entire idea of equating consciousness with “integrated information” or “complex organization” or anything of that kind. I don’t see how any notion of the latter sort could possibly provide a sufficient condition for consciousness. For measures of “complex organization,” etc. strike me as ironically way too simple. As soon as you specify exactly what you mean by integrated information or whatever, my guess is that it will inevitably be child’s-play to construct what I’d regard as counterexamples: that is, systems that have unboundedly-large values of your complexity measure, yet that clearly give off no external sign of “consciousness,” unless we stretch the definition of “consciousness” so far that it no longer has any connection to its ordinary meanings. As I wrote in the original post:

      As humans, we seem to have the intuition that global integration of information is such a powerful property that no “simple” or “mundane” computational process could possibly achieve it. But our intuition is wrong. If it were right, then we wouldn’t have linear-size superconcentrators or LDPC codes.

    So, OK then, do I have any positive suggestions? I suppose my single most important suggestion would be for people to scale back their claims. By all means, say you’re studying the neural correlates of consciousness, trying to make some tiny amount of progress on the Pretty-Hard Problem for future generations to build upon. But if you’re going to claim to have discovered a necessary or a sufficient condition for consciousness in any physical system whatsoever—or, like the IITers, a necessary and sufficient condition (!)—then the ease of constructing counterexamples ought to be the foremost thing in your mind.

    More concretely, what do I think might be additional necessary conditions for (advanced forms of) consciousness, beyond the things that have been talked about in these threads (ability to integrate information, to weigh options, and to recursively self-reflect)? Well, one idea that I’ve toyed with is that an important aspect of consciousness might be unclonability. This would imply, in particular, that the only systems with a chance of being conscious would be those that amplify microscopic events to a macroscopic scale, in a way that external observers can’t even predict probabilistically without badly damaging the systems in the process of gathering enough information about their state. In other words, external observers would need to have fundamental “Knightian uncertainty” about what the system is going to do next. Another consequence would be that irreversibility and entropy production would be necessary components of consciousness.

    The requirements above narrow down the possibilities sufficiently far that it’s not so easy to think of anything in the known universe, other than the brains of animals, that satisfies the requirements. (It’s trivial to find physical systems that amplify microscopic events and that are chaotically unpredictable, but most of those systems can be pretty cleanly “decoupled” into a deterministic component and then a stochastic component, in a way that’s not obviously possible for animals.) Even then, however, I myself remain extremely skeptical, not only about whether this idea is right, but about how far it gets you even if it is right. For (much) more along these lines, see my essay The Ghost in the Quantum Turing Machine.

  91. JimV Says:

    “I’m certain that Tononi intends IIT as a theory of true consciousness, not as a theory of apparent consciousness. Do you mean to say that if one construes it that way, your objection isn’t that it makes counterintuitive predictions but that any theory of true consciousness is empirically unverifiable? If so then the expander will drop out as an objection to the theory Tononi intends.”

    Wow. Dr. Tononi intends to make a theory that is empirically unverifiable? (So there can be no possible objections to it.) In other words, a religion. Okay, that will dispose of the expander problem – but at quite a cost. (I am aware that Dr. Tononi does not consider his theory as unverifiable, but am attempting to follow the above argument to its conclusion.)

    My guess is that at this point we are lacking in the three most important things in science: data, data, and data. Once we have found several extra-terrestrial, non-DNA-based intelligences and studied them, we will be better able to propound theories. (Unfortunately we will probably not survive long enough as a civilization to do this.)

    (Yes, I am equating consciousness with intelligence, in the sense of being able to solve problems. I don’t believe in zombies. Or vampires.)

  92. domenico Says:

    I am thinking that a universal “engine” near a critical point can be conscious (there is a state transition, there are external and internal reservoirs); an engine can operate with symbols (search engines), with chemical reactions (chemical engine, brain, RNA), with electricity (parallel analog computer and parallel digital computer), with kinetic energy (mechanical or hydraulic computer), with qubits (?) (quantum computer): there is a transformation of objects (for example molecules) into other objects (molecules) with transformation of energy, and there is an equivalence of description.
    If the point is critical then the answer of the computer is precise and unforeseeable, like a person, and sometimes errors happen.

  93. Jay Says:

    Chalmers #75

    Thx, I was not doing justice to your philosophy.

    Scott #90

    >my disagreement is not merely with the details (…) child’s-play to construct what I’d regard as counterexamples

    Point taken. I’ll count that as a positive suggestion: check if you can’t construct obvious counterexamples.

    >unclonability

    Just to clarify, do you count the following as clonable or non-Knightian?

    -a robot that makes high-level decisions based on the output of a boson sampler (inputs from noise amplification)
    -a robot that makes high-level decisions based on the output of a one-way function (inputs from noise amplification)
    -a robot that makes high-level decisions based on the output of a one-way function, constructed on the flight (inputs from noise amplification)

    >The Ghost in the Quantum Turing Machine.

    Yeah, I’ve read it and shared my criticisms on your blog post at the time (maybe too harshly). Ironically, one of my main criticisms was that this freedom idea allows simple counterexamples defying intuition, such as an all-predictable robot forcing an artist in chains to produce exquisite Knightian literature.

  94. quax Says:

    “When you cut a rope, does it not split? When you prick it, does it not fray?”

    Priceless 🙂

    Have to wonder if Giulio ever read Immanuel Kant. IMHO he quite nicely established why we see what we see when we are looking at anything. Sound logic sufficed, no neuroscience required.

    Also love the arguments you give for why intuition is so important and how it flies in the face of IIT. The need for good intuition is why I am so unhappy with the proliferation of QM interpretations, as well as the shut-up-and-calculate approach. The current situation doesn’t seem very conducive to developing QM intuition.

  95. fred Says:

    When considering a group of individuals, I would expect a measure of consciousness to give a clear nonzero reading.
    At least if we consider individuals who are in a strong social group where all the members share common experiences/memories, through language and all being at the same place at the same time, etc. (we need to consider not just the patterns in the brains but also the associated external memories, such as books, built structures, etc.).
    At a high level civilizations are definite entities that live through their members and outlive them, like cells in our bodies come and go but the identity persists (with high level symbols like ideologies and religions having a strong guiding influence).
    If civilizations can’t be conscious to some level, then I don’t see why the sum of individual neurons/cells in my body would amount to consciousness either…
    At the limit, earth itself should be pretty high on the scale of consciousness (Gaia theory).

  96. Scott Says:

    Jay #93:

      Just to clarify, do you count the following as clonable or non-Knightian?

      -a robot that makes high-level decisions based on the output of a boson sampler (inputs from noise amplification)
      -a robot that makes high-level decisions based on the output of a one-way function (inputs from noise amplification)
      -a robot that makes high-level decisions based on the output of a one-way function, constructed on the flight (inputs from noise amplification)

    For the first two, no. For the third, I’m not sure what you mean by “constructed on the flight.”

      Ironically, one of my main criticisms was that this freedom idea allows simple counterexamples defying intuition, such as an all-predictable robot forcing an artist in chains to produce exquisite Knightian literature.

    Wait, why is that a counterexample at all? The system you’ve described clearly has a conscious component, namely the artist! Moreover, one could argue that we’re not all that different from your example! We’ve got probabilistically predictable robots—our genes—that keep our brains in chains, forcing them to expend almost all of their Knightian creativity toward silly goals like winning status and getting laid. It’s possible for the brain to break out of these chains, but it’s not easy.

  97. brecht Says:

    Thanks for the wonderful posts and comments, it’s great to be able to follow this.

    One question that (as far as I know) IIT does not currently answer is: If someone asks me “Are you conscious?”, why do I then say “Yes”? What is the mechanism in my brain that makes me think I’m conscious?

    I would expect the mechanism that makes me conscious to be closely related to the mechanism that makes me eventually answer “Yes”. Sometimes I might answer from memory or just make things up, but if I am actually conscious then surely there is a link here? So it would be nice if IIT or another theory that measures the amount of consciousness also predicts the mechanism that makes humans think and say they are conscious.

    Such a prediction could be empirically testable. And even if it’s only testable for the case of humans, couldn’t it provide pretty compelling evidence? I have no idea what such a mechanism would look like though.

  98. Adam McKenty Says:

    Scott (#37) said:

    However, in addition to consciousness itself, there’s also a second notion, which (for want of a better term) I’ll call “apparent consciousness,” and which basically means, “the type of intelligent behavior that ought to lead reasonable people to infer the presence of consciousness.”

    To answer my question (from #33), then, your theory-bounding intuitions are about “true” consciousness, not the third-person observable correlates of consciousness, since “apparent consciousness” is defined by your intuitions about where “true” consciousness ought to be found.

    This puts us just where we don’t want to be: back into my hot kettle from comment #25. Consciousness is only directly perceivable through one system, and there is therefore no reason to believe that what you perceive as necessary conditions of consciousness are not merely ancillary features of the one system in which you can observe it.

    If this argument by analogy doesn’t convince you that consciousness is a special case that makes for dangerously unreliable intuitions (I thought it did convince you, because of the shift to “apparent” consciousness shortly afterward, but perhaps not), then let’s look at a few sample intuitions and see if they can be trusted.

    In his reply Tononi pointed out how intuitions differ among people, but the situation gets more hopeless if you look across cultures. In many Buddhist societies it is taken as a given that all living things are conscious. In nearly all pre-industrial societies, it is taken as self-evident that to be conscious requires a soul or spirit, which is an immaterial something that is inherently conscious, and can move about and inhabit and animate physical forms. Remember that people in these different cultures have just as much ability to directly observe consciousness and form intuitions about it as you or I do. And it would be foolish to claim they are not reasonable, since included among them are some of history’s most impressive reasoners.

    To return once again to our friend, the kettle: as far as I can see, the only way we can come up with an accurate and justified theory is to study, very closely and with great care, the heat we can directly observe in the kettle, and the physical system in which it occurs.

    First-person experience is an enormously detailed and complex source of empirical information about consciousness. Since all your third-person observations occur in your consciousness, but not all of your conscious content is observable in the external world, you have, strictly speaking, more empirical access to consciousness than to anything else in the whole wide and wonderful universe.

    There are complications involved in using this empirical access for the purposes of science (as you pointed out in your comment about meditating on a blank wall), but they’re not insurmountable. At the risk of being obvious, reports from first-person experience combined with neurobiological experiments and measurements are how we know most of what we do know about the relation between consciousness and the nervous system.

    How can we use subjective and objective investigation to go beyond inherently unreliable intuitions and get to a true theory? To do this we have to be like good software troubleshooters and isolate consciousness from the other aspects of the physical system. This has been done in a number of clever ways, including experiments on blind-sight, binocular rivalry, etc.

    There is, in principle, plenty of room to create a very, very extensive theoretical model based on observations of first-person experience and neurobiological correlates. If a model is developed that can generate very accurate, detailed, non ad-hoc predictions about first person experience from neurobiology, and vice versa, then I think the PHP will be solved. At that point, we will have reason to take very seriously whatever nut butter statements that theory has to say about consciousness in other systems. Until then, in my view, the domain of the PHP and its solutions is in the space between the minds we can experience and the brains we can measure, and not anywhere else.

  99. Adam M Says:

    By the way: I think it sucks that IIT predicts 2D grids can be conscious. I was hoping, based on Tononi’s presentation at TSC 2014, that with IIT machine consciousness would require some new and interesting form of hardware. I’m glad, though, that I encountered your post and thereby avoided blundering forward in indefinite ignorance on this point.

  100. Mark H. Says:

    It appears that the great Scott ignores that there are studies showing that people’s actions have been predicted (using cerebral signals) before the person exercises his will.

  101. Adam M Says:

    I should mention two caveats to my somewhat pompous and categorical earlier statements:

    1) I think the exclusive relevance of experiments and observations in the mind/brain (as opposed to intuitions) applies in general to the PHP, but not so much to the HP. The Hard Problem and associated ontological and metaphysical questions can’t be dealt with by brain science directly.

    2) Certain extraordinary situations from outside the two-sided coin of our experience and brain could be relevant to the PHP. For example, if someone produced incontrovertible proof of psi phenomena or a really useful solution to the measurement problem in QM that involved consciousness, those could have bearing on the PHP.

    Bonus caveat: I don’t know enough about IIT to either defend or refute it per se. My argument is only that intuition is a shaky criterion for any PHP theory.

    Wonderful comments, well-informed participants, and civil dialogue — I feel rather privileged to get to participate!

  102. TF Says:

    @Scott #89

    “For one thing, could you please remind me which aspect of the (current, 3.0?) definition of Φ makes it 0 for feedforward networks?”

    the new Phi [is defined recursively on the basis of the old phi (as a weighted sum of all nonzero “old phis” of subgroups of elements) and] uses a modified notion of bi-partition: first retaining only connections from A to B and computing phi, then going the other way around, and taking the minimum => no bilateral connections => phi = 0.

    I think it would then be fair to say that IIT 3.0 postulates that feedforward networks cannot integrate (conceptual) information.
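
    Not the actual IIT 3.0 computation, but a schematic connectivity-counting proxy for “evaluate both directions across each cut and keep the minimum,” which shows why any feedforward wiring bottoms out at zero; the graphs and scoring rule here are invented for illustration:

        from itertools import combinations

        def bidirectional_cut_score(n, edges):
            """Toy proxy: over all bipartitions (A,B) of nodes 0..n-1, count directed
            edges A->B and B->A, take the min of the two, then minimize over cuts.
            Any feedforward (acyclic) wiring scores 0, since the cut isolating a
            source node has no incoming edges."""
            nodes = range(n)
            best = None
            for k in range(1, n):
                for subset in combinations(nodes, k):
                    A = set(subset)
                    B = set(nodes) - A
                    ab = sum(1 for u, v in edges if u in A and v in B)
                    ba = sum(1 for u, v in edges if u in B and v in A)
                    score = min(ab, ba)
                    best = score if best is None else min(best, score)
            return best

        feedforward = [(0, 2), (1, 2), (2, 3)]        # a small DAG
        recurrent = [(0, 1), (1, 2), (2, 3), (3, 0)]  # a directed cycle
        print(bidirectional_cut_score(4, feedforward), bidirectional_cut_score(4, recurrent))  # 0 1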

  103. David Chalmers Says:

    Thanks, Scott! Fair enough about the indeterminate cases of apparent consciousness. Still, finding a simple theory that fits our intuitions looks like a job that a mathematician/philosopher could do in principle from their armchair.

    Regarding verifiability and common knowledge: for most scientific results, the sense in which we have “common knowledge” of the results is that experimenters tell the rest of us the results and we believe them. Of course this also applies to phenomenology: people tell us about their states of consciousness, and we believe them, so we have common knowledge in the same sense. (This is precisely how much scientific work on consciousness works.) Of course in the ordinary scientific case people can try to replicate the experiments and thereby verify the results. But in many cases we can do this too with phenomenology: for example, one can perform a psychophysical experiment on oneself to experience a given illusion. Of course this result will concern a different person, but likewise in the scientific case, the result will concern a different particle or organism. In both the phenomenological and the scientific case, if the results replicate, a corresponding generalization will be accepted as a general principle, and if not, it will be either rejected or heavily qualified (for example, being limited to certain special conditions).

    I suppose the main difference is that in the ordinary scientific case, there is the possibility that many people can observe the same experimental result at once: we can all crowd around and look at the same oscilloscope, for example. That’s harder to do for consciousness! But of course this rarely happens even in science, and certainly in most cases the community never gets together and crowds around an oscilloscope. So testimony and replicability seem the crucial things for common knowledge in practice, and phenomenology doesn’t seem obviously worse off there. All this suggests to me that while there may be epistemological worries about phenomenological data (for example, that our primary data concern only human consciousness), an analysis in terms of common knowledge doesn’t quite capture these worries.

    JimV #91: I was attributing the objection that theories of consciousness are empirically unverifiable to Scott, not endorsing it myself (I went on to reject the claim).

    Brecht #97: I think this point that what explains our reports about consciousness should be closely connected to what explains consciousness itself is an important one that has been underestimated in the literature. It’s the main focus of my old unpublished paper “Consciousness and Cognition” (http://consc.net/papers/c-and-c.html).

  104. Endel Says:

    Brecht #97

    I would go a bit further and say that having consciousness actually is the belief in having consciousness. Theories like IIT can hardly tell us anything about that. This should be mainly a topic of sociology or social psychology. Real researchers of consciousness don’t like that idea.

  105. fred Says:

    Now that we can measure the “Consciousness” of various clumps of atoms, it should be trivial to come up with a similar measure to figure out the amount of “Love” that’s floating around any given cubic foot of space.

  106. Scott Says:

    Mark #100:

      It appears that the great Scott ignores that there are studies showing that people’s actions have been predicted (using cerebral signals) before the person exercises his will.

    Well, the Soon et al. fMRI experiment in 2008 (probably the state-of-the-art version of the 1970s Libet experiments with EEG) managed to predict which of two buttons a subject was going to press, a few seconds in advance and about 60% of the time. And while that’s better than chance, it’s far from obvious to me that you couldn’t do just as well by training a machine learning algorithm on the subject’s sequence of past button-presses, without looking inside their brain at all. In fact, I did an informal little experiment back in grad school that suggested as much! But even if you do get an improvement in predictive power by looking at the fMRI scans … well, it’s obvious that people can be predicted better than chance. Cold readers, seducers, demagogues, and advertisers have known as much for millennia. What I was talking about was something different: “machine-like predictability” (or at least probabilistic predictability), of the sort possessed by a digital computer with a random-number generator or by a hydrogen atom. While it’s conceivable that our understanding of the human brain will advance to the point where we can achieve that, not only are we far from it right now, but there might even be in-principle obstacles to getting the requisite predictability while also keeping the subject alive.
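
    Not the informal experiment mentioned above, just a generic illustration with simulated data of predicting a binary sequence purely from its own past; the alternation bias and all parameters are invented:

        from collections import defaultdict
        import random

        def predict_next(history, order=3):
            """Predict the next button (0 or 1) from the last `order` presses,
            using add-one-smoothed counts over the subject's own history."""
            counts = defaultdict(lambda: [1, 1])          # Laplace smoothing
            for i in range(order, len(history)):
                ctx = tuple(history[i - order:i])
                counts[ctx][history[i]] += 1
            c0, c1 = counts[tuple(history[-order:])]
            return 1 if c1 > c0 else 0

        # People pressing "randomly" tend to alternate too often; simulate that bias
        # and see how often a past-only predictor beats the 50% baseline.
        random.seed(0)
        presses, correct = [random.randint(0, 1) for _ in range(4)], 0
        for _ in range(500):
            nxt = 1 - presses[-1] if random.random() < 0.7 else presses[-1]
            correct += (predict_next(presses) == nxt)
            presses.append(nxt)
        print(correct / 500)   # typically well above 0.5 for such a biased presser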

    For more, see my Ghost in the Quantum Turing Machine essay (especially sections 2.12 and 5.1).

  107. Scott Says:

    David Chalmers #103: You’re right, I can’t rule out the theoretical possibility that mathematicians and philosophers could solve the Pretty-Hard Problem without leaving their armchairs. However, I’d say exactly the same about deriving the fact that heat is disordered molecular motion! In retrospect, if Democritus had only been thinking clearly enough about the “manifest facts of experience” (i.e., not the results of any subtle experiments), it’s conceivable that he could’ve derived the bulk of modern physics. But in practice, we did need experiments (and lots of them), because not even the smartest people who ever lived were smart enough to get there by pure thought. And I think the same about consciousness: even if it’s possible in principle to solve the PHP from our armchairs, in practice we’ll need all the empirical data we can get from neuroscience, psychology, and many other fields—and we’ll need to be lucky even then.

    Now, regarding common knowledge: you argue that it would be silly to ground a deep epistemological difference between phenomenological data and ordinary empirical data on the fact that only for the latter can we all “gather ’round the oscilloscope.” And I agree, it does sound silly when you put it that way. 🙂

    However, I claim that, closely related to the ability to “gather ’round the oscilloscope,” there’s another issue that’s much less silly. Namely, while everything ultimately comes down to subjective experience with both kinds of data, it’s only with ordinary empirical data that we have a strong common understanding of what experience you’re supposed to be having when you observe the results of the experiment.

    So for example, suppose we’re both staring at an oscilloscope, and you say, “aha! there’s the waveform that disproves the theory!” And I say, “which waveform? I don’t see anything.” Then you can reply, “this waveform,” and point to it. And, OK, maybe I still won’t see it—I remember plenty of experiences in high school and college when my lab partner would see whatever we were supposed to see under the microscope or in the spectrograph, and I wouldn’t see it (which experiences might have convinced me not to become an experimentalist 🙂 ). And of course, there were famous cases in the history of science that hinged on one person “seeing” things that others couldn’t (e.g., Percival Lowell’s canals on Mars). But at least it’s relatively clear, in such cases, what sort of conversation we need to have to come to agreement (“do you mean that thing?” “no.” “oh, that one?”).

    Let’s contrast this with pure phenomenological data. Suppose Martin Heidegger tells me that, if I just ruminate enough on the nature of Being, I’ll have a certain transformative inner experience, which will give me a type of data about consciousness that I couldn’t obtain otherwise. And other people swear that, indeed, it worked for them. So I try it. And suppose it doesn’t “work.” What can anyone point to and say, “this is where you should’ve been ruminating”? Worse yet, how will I know when I’ve “seen” it? Maybe eventually I convince myself that, fine, OK, I’m having the experience. How can others be sure I’m not just making it up? With the oscilloscope, at least there are followup questions you could ask me—“OK then, what’s the waveform doing now?”—that would expose me if I’d merely been pretending to see it. By contrast, if my self-report about my meditative state differs from your self-report, who are you to tell me that my self-report was “wrong”? So then how do we even know that we’ve entered the same state at all?

    I readily admit that these are all differences of degree, and not of kind. But they strike me as extremely large differences of degree.

  108. Jerry Says:

    A bit off-topic, but not really.

    “Unconditional quantum teleportation between distant solid-state quantum bits”:

    http://www.sciencemag.org/content/early/2014/05/28/science.1253512.full.pdf

  109. Jay Says:

    Scott #96

    >no

    Ok. How would you clone these devices? (let’s forget the third case, as it’s likely the same as case 2)

    >Wait, why is that a counterexample at all? The system you’ve described clearly has a conscious component, namely the artist!

    Sure, I was talking about your proposition to replace the free will question with your “Knightian freedom” question.

    In the same vein as your present post, we can’t help noticing that the artist is a paradigmatic case of a slave, irrespective of its Knightianity. From that, we are forced to conclude that whatever “Knightian freedom” means, it’s definitely not paradigm-case freedom. We can call that… let’s stick with “Knightian freedom” 😉

  110. Scott Says:

    Jay #109: I don’t think the most full-throated libertarian free-will advocate in history would deny that free will can be overridden by putting someone in chains and enslaving them…

  111. Itai Says:

    Scott,
    If you are already interested in mathematical definitions for intuitive notions (like consciousness):
    Have you heard about the principle of least action?
    It is a mathematical definition of the intuitive notion that nature “computes” in the most efficient way possible in the universe (that’s why computational models closer to nature’s fundamental laws seem more efficient; instead of wasting energy as heat like classical computers, QC aims toward reversible computation).
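
    For reference, the standard statement of the principle being invoked (strictly, the action is stationary rather than always minimal):
    \[ S[q] \;=\; \int_{t_1}^{t_2} L(q, \dot q, t)\, dt, \qquad \delta S = 0 \;\Longrightarrow\; \frac{d}{dt}\frac{\partial L}{\partial \dot q} - \frac{\partial L}{\partial q} = 0, \]
    i.e., the physical trajectory makes the action stationary, and the equations of motion follow.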

    Adiabatic QC is based on this principle, and the opposite physical connection between energy and time.

    In my opinion,
    physical action should be the physical definition of complexity (mathematical time complexity actually measures total action, as it presumes each machine step always takes the same energy and counts the steps as a function of input size).
    Maybe studying the question (which is why QC is interesting):
    “what is the minimal action needed for solving a computational problem on any physically possible machine on an input of size n?”
    is more interesting than P vs NP, or will lead to new insights about it.

  112. Jay Says:

    Scott #110: Well… that’s my point.

  113. fred Says:

    Scott #110
    “Free will” is about “will” not “action”, no?
    No amount of free will (assuming it exists) would let me “override” the laws of physics and fly like superman.
    Paraplegics have as much (or as little) free will as anyone else.
    But maybe our definitions of free will aren’t aligned…
    I picture free will as an irreducible little magic bubble, the ultimate essence of every conscious being, disconnected from the physical world and the logical world, allowing conscious beings to make decisions independently of any outside and causal influences, allowing them to ultimately be responsible for their own decisions, regardless of the good or bad cards nature gave them at birth. In other words, a contradiction, for if it existed we would all make the same good decisions. At best it’s an invention of society allowing a few lucky “winners” in the grand lottery of life to feel good about their own brilliant “choices”.
    If free will is simply the ability to make decisions without knowing why and in a manner that nothing else can predict my decisions (like it would make a difference), then I guess a rock rolling down a hill has as much free will as I do.

  114. anonymous Says:

    Thanks for your entertaining post.

    I have a small question: what does it mean (empirically) to say that something is *more* conscious than something else?

    Thank you!

  115. Michael Gogins Says:

    “More conscious” in the empirical sense means being able to respond more intentionally to stimuli, to perform appropriate actions, and to understand situations and propositions; it is closely allied to intelligence. This is basically how doctors test for it.

    After all, there is no empirical evidence of intelligence that is not conscious.

    I would argue that this link of consciousness with intelligence is why, in the Darwinian sense, we are conscious.

    In the subjective sense it means that, plus experience being more vivid: perhaps time moves more slowly, one sees and hears better, one has a sense of being closer to the essence of things; it is closely allied to religious or mystical experience.

  116. fred Says:

    Michael #115
    “After all, there is no empirical evidence of intelligence that is not conscious.”

    Although, a few decades ago, lots of people would have said that beating a chess grandmaster requires a non-trivial amount of intelligence that machines will never exhibit.

  117. fred Says:

    Is software real or is it just an illusion?
    What is the nature of software?
    Is it the software that runs the hardware, or the other way around? Or are they different interpretations of the same thing?
    Considering two machines that produce the same output given the same inputs, is it conceivable that one is running software while the other isn’t? (a zombie machine).
    Looking at a given machine (a windmill, a car, a thermostat, a chess robot), can we analyze its atoms and their patterns in order to measure the amount of software that’s controlling it? (its “softwareness”)

  118. Peli Grietzer Says:

    I think for many people the intuitive appeal of IIT comes out of having a prior intuition of the form ‘maybe all systems are conscious,’ and IIT being an attractive formalization of our intuitions about what makes something a real system vs. just being something we call a system because it’s useful to do so.

  119. David Chalmers Says:

    Scott #107: I suspect that even an ancient genius would have had a hard time coming up with the molecular theory of heat from the armchair. Our armchair genius could perhaps find many different potential mechanisms that would generate apparent heat (the sort of behavior associated with heat), of which the molecular mechanism is just one. If so they’d need to do empirical work to find out which mechanism does the job in the actual world. I take it that the problem of apparent consciousness is closer to the first stage here. The analog of the second stage (finding the molecular theory) is finding the mechanism that generates apparent consciousness in the actual world. That’s a harder empirical problem — about as hard as the other easy problems of consciousness, all of which also involve the empirical investigation of mechanisms. Of course even for the first stage it may well be that empirical work about actual mechanisms will in practice help us find potential mechanisms.

    As for phenomenological data: of course there are some aspects of phenomenology that are obscure and hard to pin down, like those associated with the spiritual traditions. But many purported external phenomena are also obscure and hard to pin down. And in both cases, there are also some phenomena that are much less obscure and easy to pin down: e.g. what is the meter reading, or are you experiencing an X in the center of your visual field? So insofar as obscurity is the main problem, we can at least have a science of the non-obscure aspects of phenomenology and worry about the rest later. We can always hope that our science of the non-obscure will help to clarify our analysis of the obscure, as has happened in the case of the ordinary science of external phenomena.

  120. John Kubie Says:

    IIT is incompatible with free will unless it has an output function; unless IIT consciousness can influence, not just reflect, neurons (brain state). The “identity” of IIT is that a specific brain state determines the conscious state. Free will says that a conscious state can give rise to another conscious state. Impossible if the transition from conscious state 1 to conscious state 2 is not accompanied by a determined brain state change. From what I can gather, IIT is a one-way street between brain state and consciousness. Christof Koch (#63) appears to support this.

  121. Juur Says:

    Just wanted to ask you, whether I understand correctly that for IIT:

    1. You start from some arbitrary “axioms” or “postulates”
    2. You make some arbitrary mathematical definition of \(\phi\) loosely based on these postulates
    3. You find some arbitrary facts that seem to correlate with this mathematical definition of \(\phi\)
    4. And then you claim that this \(\phi\) describes the amount of consciousness for an arbitrary system?

  122. James Cross Says:

    #115 Michael Gogins

    Intuitively we seem to think that intelligence and consciousness. To complicate matters, some people seem to almost use them as synonyms.

    They seem to me to be distinct but somehow complementary.

    Look at this example from Nature:

    http://www.nature.com/news/how-brainless-slime-molds-redefine-intelligence-1.11811

    It describes how slime molds “can solve mazes, mimic the layout of man-made transportation networks and choose the healthiest food from a diverse menu—and all this without a brain or nervous system.”

    Are slime molds conscious? I don’t think so. It seems to me that life has a sort of neural network (in the computer sense of the term) foundation that enables memory, decision making, and what appears to be purposive behavior. Much of this has developed through evolution and does not involve consciousness. Our own intelligence may be built on this basis, which is why I have argued that even human works of science and art derive initially from unconscious processes being brought into consciousness.

  123. James Cross Says:

    Meant to add “are linked” at end of first sentence in previous.

  124. TF Says:

    @#119

    I totally agree with most of what you said, but have to wonder if it makes sense to go into this or that content (that is, specific aspects of phenomenology) before pinning down the basics – that is, level of consciousness! We would want to start, say, with the effects of different doses of anaesthetics.

    Moreover, it is not at all clear to me why we must have a theory of specific content to feel that we understand what’s going on; our other so-called natural laws are certainly not that specific. We think we understand planetary motion pretty well; does that mean we can model EVERY movement (e.g., the three-body problem…)?

    If, for example, the difference between visual and auditory experiences derives from the geometrical properties of the associated conceptual spaces, it might be that singling out the pertinent features is beyond the tools we have, let alone explaining this or that type of sound.

    I think it would make more sense to reach a point in which we have a good understanding of the neural mechanisms of learning + a good understanding of the genetic “priors” biasing circuit structure for auditory and visual areas + a good model for natural statistics in auditory and visual scenes. What more could you want – and why? This would basically give us all the tools we need to build intelligent machines, and enhance and repair our corresponding systems.

    For any other physical phenomenon we would feel in great shape if that were the case ….

  125. Jochen Says:

    First of all, I just wanted to say what an excellent discussion this has been so far (though now that I’m entering it, that’s likely to change).

    But there are a few points regarding the notion of paradigm cases that occurred to me that I haven’t seen made (though if I missed them, I’d appreciate a pointer). In short, they are:

    (1) What we think are paradigm cases for some phenomenon may, in fact, be instances of quite different, but superficially similar phenomena that we falsely lump in together; seeking for a theory explaining all those paradigm cases then will be impossible.

    (2) We may, in fact, live in a world in which 2D-grids are conscious (while 1D grids aren’t). Restricting our theories to those in which those grids aren’t conscious then likewise would make finding the true theory of consciousness impossible.

    (3) Regarding consciousness, we really only have one paradigm case, because we really only have access to the fact that we are conscious ourselves (and the hardest of the eliminative materialists would probably deny even this). (I think David Chalmers above already pointed this out.) Higher animals may be conscious, but it could be that consciousness necessitates, e.g., sophisticated language processing, or the capacity for universal symbol-manipulation, etc. What kinds of things are conscious can really only be decided once we know what it is that makes us conscious, which is very unlike the case of, say, heat, where we have access to the fact that several things may feel hot to us, regardless of what determines this ‘heat’.

    Regarding (1), we might have paradigm cases of heat, such as, ‘the feeling I have upon touching metal that’s been sitting in the sun’, ‘the feeling I have upon touching water that’s boiling’, ‘the feeling I have close to a fire’, or ‘the feeling I have when a jalapeno touches my tongue’. Now, I invent a thermometer. It shows high values for the first three cases, but a low value for the fourth. Am I now to discard the thermometer? If I did, I would never actually succeed in constructing a working measurement device for temperature, because the effect of jalapeno on my tongue, while superficially similar to touching something hot, is entirely unrelated—I mistakenly identified it as a paradigm case of the phenomenon I’m interested in.

    Similarly, relating to (2), consider the division of animals into different taxa, which was first done using certain morphological characteristics. On this basis, several animals were once thought to be closely related, to be ‘paradigm cases’ of a certain clade, say, which now, based on more sophisticated genetic techniques, are known to be quite distantly related. But likewise, animals once thought to be hardly related at all are now known to be, in fact, very close—nobody would have thought that birds are dinosaurs, and in fact, a taxonomy proposed accordingly would likely have been met with the same scepticism you have regarding the claim of 2D grid consciousness. A tyrannosaur and a chicken don’t seem to be paradigm cases of the same clade. But in fact, by now we know that exactly that is the case. Had we held fast to the paradigm case classification, we would not have discovered this.

    Note that these don’t argue for the fact that later refinement can overturn our initial assumptions, which you had already dealt with in your post, but rather, that relying on paradigm cases at the outset of theory building may hamper finding the right theory, because we’re working from the wrong paradigm cases.

    But I think (3) is the most severe reason for scepticism regarding the aptness of paradigm cases in the case of consciousness. The inference you make from your own phenomenal experience to the phenomenal experience of other entities, it seems to me, can only be based on an argument of the form ‘I am conscious, and I have property X; property X is sufficient for consciousness; entity Y has property X; hence, entity Y is conscious’. I’m not saying you’re explicitly making this argument, but it must implicitly be present in order to substantiate any of your claims that, say, higher animals are conscious. Otherwise, I don’t see how you could justify your belief that some entity is conscious, and another one isn’t.

    But what we’re looking at with Tononi’s theory is precisely a candidate for property X, his phi. But then, the paradigm cases argument is not a reason for rejecting Tononi’s proposal independent of flatly rejecting it—you’re implicitly proposing that there’s some different X’ which accounts for conscious experience, draw up your list of paradigm cases using this X’, and find that it disagrees with the cases proposed to be conscious using Tononi’s X. But this just says that X and X’ are different, or, that you don’t believe that phi is sufficient for consciousness. But that’s just what you put into the argument by creating your list of paradigm cases!

    So this comes down to not more than flat-out not believing that phi is sufficient for consciousness. Of course, that’s a reasonable stance to take—you may buy what Tononi’s selling, or you may not—, but it’s not in and of itself an argument against the theory.

    Of course, none of this really amounts to a positive argument for Tononi’s theory, and like you, I’m somewhat sceptical regarding the consciousness of 2D graphs (or panpsychism in general, if for unrelated reasons—having read James, you’ll be familiar with the notion that twelve men thinking of a word each doesn’t amount to there being a conscious experience of the whole sentence everywhere; I know that Tononi tries to address exactly this worry, but it seems to me that there’s an explanatory gap here between integrated information and integrated consciousness). But ultimately, I don’t think disbelieving that 2D graphs can be conscious is sufficient grounds to reject it on its own.

  126. Michael Gogins Says:

    fred #116:

    Now that we have computers, “intelligence” is harder to pin down, no doubt about it.

    But intelligence requires intentionality, “being about something”, “having a purpose.” No computer program has this. You make it run, and it either runs or it breaks. It’s not resisting your will, it’s not agreeing with your purposes, it’s just a deterministic function.

    If the IBM program designed an improved version of itself because it wanted to, I would call it intelligent.

  127. fred Says:

    Michael #126
    Ok, but you’re not directly in control of the amount of “intentionality” or “purpose” you have – it’s the result of natural selection: by definition the only organisms that breed are the ones who have a “drive” towards avoiding danger, a “drive” towards food, a “drive” towards sex.
    Intelligence or potential for it is a result of the richness of the surrounding environment and its inherent selection processes.
    It’s easy to set up such an environment for programs – using genetic programming it’s been possible to “evolve” code that mutates and incrementally adapts, therefore exhibiting a “drive” towards the various objective functions of the environment.
    Also what constitutes and what doesn’t constitute a unit of “intelligence” is always arbitrary.
    You can consider the DNA as the evolving unit. Or its shell, the individual. Or the current population as a whole. Or the population and all its byproducts (civilization). Or the entire ecosystem.
    From an evolution standpoint all that potentiality was already there in the primordial earth, billions of years ago.

  128. Adam McKenty Says:

    Jochen (#125):
    Your point 3 is pretty much the point I was trying to make in #25 and #98, though perhaps with less clarity.

  129. sf Says:

    There are a couple of points that probably go back beyond William James, and if they’re not considered as axioms characterizing consciousness, I wonder if they could be used to test any mathematical characterization. First there’s the continuity of consciousness, in space and in time (the flow of consciousness), which might be related to IIT’s unity axiom (integration postulate), but I think there’s more to say. Second, there’s the notion that to make sense of perception, it has to be treated as an aspect of action, though without the motor activation. There’s some mystery in making sense of this, but for mathematical concepts, say, it means that a geometric percept/concept (like a circle or triangle) is inseparable from the capacity to put it into practice, and even from our daily sensorimotor interactions with objects. It should even apply somehow to qualia, though this maybe goes beyond anything that detailed scientific theories (say of color) can handle for now.

    But first I’d like to say I enjoyed Tononi’s book, Phi, which, like Koch’s memoir “Romantic Reductionist”, is filled with interesting cultural and historical gems, and to admit that before finishing the last part of Phi I needed to go to the technical articles to get details that I’m not yet fully clear on.

    Wondering how “continuity” or flow can fit in with IIT: typical challenges in spatial filling-in include the well-known blind-spot and change-blindness aspects of vision. Typical approaches either provide just enough information in the brain to pass any continuity test that may come up, or censor more tests than intuition usually allows for (change-blindness definitely supports the latter). Or there can be a diabolical conspiracy where each test is cunningly preceded by just the right information capture (it surely must be someone’s demon?). This applies just to spatial translation, saccade-like motion etc., but dilation, or scaling, should also be involved in the perception of continuity, and so maybe the epsilon-delta tests that calculus students love may come up again, and not just as a metaphor. On the other hand, it’s not likely that the ingredients of defining Phi can recapture this epsilon-delta continuity test; i.e., if there’s a clear notion of perception-continuity in the IIT framework, then there’s a formal problem of seeing how Phi captures it.
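
    (Just to pin down what I mean by the epsilon-delta test, here is the standard textbook formulation; nothing IIT-specific is assumed:)

    \[
    f \text{ is continuous at } x_0 \;\Longleftrightarrow\; \forall \varepsilon > 0 \;\exists \delta > 0 : \; |x - x_0| < \delta \;\Rightarrow\; |f(x) - f(x_0)| < \varepsilon.
    \]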

    As for the second point, “perception as action”, it’s a much more fundamental issue, but I’m not sure how to relate it to IIT yet.

    Also important: not to forget the time-continuity issue. Related to this, I’m a bit sceptical about the “singularity axiom” in IIT, because the apparent singularity may be just an artefact of the type of diabolical conspiracy mentioned above — every time we want to check for a duality (like checking for a discontinuity), maybe something intervenes to patch over any such impression, so it seems OK to introspection, but it needs more subtle testing. If we are simultaneously conscious over different duration-scales and capable of jumping between them, focussing as necessary, then singularity could fail, but we’d never catch it explicitly.
    Certainly this is not very clear, but I only want to see if there’s some fair room here for doubting the axioms.

  130. Matthew O Says:

    Here’s my brief summary of why Scott rejects IIT:

    Counter-intuitive:
    Scott’s counterexamples (one of which includes an expander graph that iteratively applies a Vandermonde matrix) exhibit high values of Phi, but are not (intuitively) conscious.
    IIT predicts that a 2D grid will be conscious, which can’t possibly be so, especially when compared to his paradigm cases. (A rough scaling sketch follows this summary.)

    Simplistic, Bold, Arbitrary, and NP:
    No one scalar metric is likely to solve the PHP (“the problem of which physical systems can be associated with consciousness, and of which kinds”), and yet Phi is claimed to be both necessary and sufficient for something to be conscious. Why not choose Kolmogorov complexity instead? And the mathematical definition of Phi keeps shifting.
    Computing Phi for the human brain is probably NP-hard, and not practical in any case.

    “True” (versus “apparent”) consciousness can only be verified from the inside
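
    (A rough scaling sketch, purely my own illustration and not taken from Scott’s post or the IIT papers: it counts the edges crossing the middle cut of a √n×√n grid as a crude stand-in for the partition bottleneck that caps Phi. It is not a Phi computation.)

    ```python
    import math

    def grid_edges(side):
        """Undirected edges of a side-by-side 2D grid of gates."""
        edges = []
        for r in range(side):
            for c in range(side):
                if c + 1 < side:
                    edges.append(((r, c), (r, c + 1)))
                if r + 1 < side:
                    edges.append(((r, c), (r + 1, c)))
        return edges

    def middle_cut(side):
        """Edges crossing the vertical cut between the two halves of the grid."""
        def left(node):
            return node[1] < side // 2
        return sum(1 for u, v in grid_edges(side) if left(u) != left(v))

    for side in (8, 32, 128):
        n = side * side
        print(f"n = {n:6d} gates, middle-cut edges = {middle_cut(side):4d}, sqrt(n) = {int(math.sqrt(n))}")
    ```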

  131. Alexander Says:

    I can easily imagine that a thermostat with a low-pass filter may possess a small degree of consciousness. At least it has some kind of memory and primitive pattern-recognition capabilities – two necessary conditions that I ascribe to consciousness.

    I can also imagine that a 2D grid of logic gates may possess consciousness.

    But I have serious issues with the following statement:

    • A conventional computer running a program that behaves just like a human being (think of the movie “Her”) would be unconscious, even though it may provide “reports” indistinguishable from a conscious human (Scott’s intuition seems to agree on this, but many others disagree)

    If a 2D grid of logic gates may possess consciousness, why should it be impossible for a computer program to possess consciousness?

  132. Dror Cohen Says:

    Hi all,

    Thanks Scott for opening this discussion, and to everyone for their comments. This has been highly enlightening.

    For me, the strength (or rather contribution) of IIT has come from Tononi’s attempt to unify various phenomenal aspects of consciousness into a theory, and then having the balls to define it mathematically.

    But the order is undoubtedly – Phenomenology (“facts” about consciousness) -> Postulates -> math.

    Scott has pointed out some oddities in the 2008 formulation of Phi (Virgil Griffith has also pointed out a few: http://arxiv.org/abs/1401.0978).

    All three aspects of the chain are assailable, but the original post focused on the last (math). Failures in the math do not immediately refute the postulates or (Tononi’s) phenomenology.

    I find the following statement from Scott key:

    “..my disagreement is not merely with the details, but with the entire idea of equating consciousness with “integrated information” or “complex organization” or anything of that kind. I don’t see how any notion of the latter sort could possibly provide a sufficient condition for consciousness. For measures of “complex organization,” etc. strike me as ironically way too simple. As soon as you specify exactly what you mean by integrated information or whatever, my guess is that it will inevitably be child’s-play to construct what I’d regard as counterexamples: that is, systems that have unboundedly-large values of your complexity measure, yet that clearly give off no external sign of “consciousness,” unless we stretch the definition of “consciousness” so far that it no longer has any connection to its ordinary meanings. As I wrote in the original post”

    First, (as Scott says) it’s unknown whether such a measure exists, i.e. it’s unknown whether for every measure one could construct a ‘trivial’ case of a system with large values – can progress be made on this?

    What if the measure is uncomputable? (In the sense that the goal is well defined but it’s impossible to devise a recipe to compute it – e.g. the busy beaver function, Kolmogorov complexity. Of interest: http://arxiv.org/pdf/1405.0126v1.pdf and Chaitin’s https://www.cs.auckland.ac.nz/~chaitin/sicact.pdf.)

    Second, we (clearly) have no consensus on how to decide if a machine built to have this large value is actually conscious, but I think such machines ARE possible.

    This is a challenge for the empirical evaluation of IIT or any other theory of consciousness that sets similar lofty goals.

    For me the weakest point is that currently the axioms are not fundamental enough, so they can be translated into math in a variety of ways. I find the Exclusion postulate particularly difficult.

    Lastly, I’m not sure IIT says nothing of free-will. After all, in IIT, the ‘important’ (i.e. conscious) description of the system is in the state space where the system had the most possibilities for the past and chose (to be provocative) a particular one.

    Dror

  133. Alexander Says:

    Scott #18: Yes, that is exactly the issue I have with Tononi’s reply. For me, it does not make much sense to claim that a physical structure as simple as a 2D grid of logic gates possesses consciousness, but a computer simulation of such a grid cannot possess consciousness. If that was true, what would actually be the source of that consciousness? Remember that the 2D grid will behave absolutely deterministically. No quantum effects involved here.

    Something else: I am not convinced that philosophical zombies can exist at all. As long as nobody shows how to distinguish zombies from conscious beings, it might as well be the case that appearing conscious is a sufficient condition for being conscious. If we had a computer program that appears to experience fear and that appears to feel pain, why should we assume that it does not experience fear and that it does not feel pain?

  134. Alexander Says:

    To make my point clearer: If a zombie and a conscious being cannot be distinguished based on their behavior, then consciousness would not be much more than some kind of aether from my perspective. In particular, consciousness would not have any evolutionary advantage if it was independent of the observable behavior.

  135. Jochen Says:

    Adam McKenty #128, ah yes, thanks for the pointer, your argument is indeed closely related to my point. And I must admit I don’t quite get how Scott’s (#27) rejoinder is supposed to work: it seems to me that in your example, the observable correlates of heat would be things like steam production, electric connection, switch in the ‘on’-position, etc., leading maybe to a list of paradigm cases including the kettle, the dishwasher, the coffee maker, and one of those ultrasound humidifiers. And of course, trying to find a theory of heat on this basis will be doomed, as there are both instances of hot objects that are not on this paradigm cases list, and instances of ‘paradigm cases’ that are not actually hot objects. So it seems to me that the move to observable correlates does not do any useful work.

    The difference between the cases of heat, or continuity, and consciousness seems to me to be the following: in the first two cases, we have direct access to several exemplars of the thing we are trying to capture—we know directly several hot objects, and we know directly several continuous functions. What we are trying to characterize is immediately exemplified in these paradigm cases.

    Not so in the case of consciousness, where we have direct access only to a single instance. Coming up with paradigm cases there, as Scott does, includes an intermediate step of proposing some property/observational correlate/what have you whose presence licenses us to conclude the presence of consciousness. But the list of paradigm cases so created is a very different animal from the lists of paradigm hot objects or continuous functions, in that it is determined by a hypothesis about what else is present if consciousness is. And phi is just such a proposal, so again, if you’re saying that the list of cases phi picks out is not the same as the list of cases you’ve picked out, then all you’re saying is simply that what you believe is sufficient to conclude the presence of consciousness is something other than phi. So the list of paradigm cases doesn’t do more than say ‘I don’t believe Tononi’s theory is right’.

  136. Alexander Says:

    Jochen #135:

    Not so in the case of consciousness, where we have direct access only to a single instance. Coming up with paradigm cases there, as Scott does, includes an intermediate step of proposing some property/observational correlate/what have you whose presence licenses us to conclude the presence of consciousness.

    The problem is even more severe: we are not an external observer of the one instance that we do have access to.

    I find it interesting that even very skeptical people tend to be so convinced that consciousness is more than what can be observed externally.

    Yes, o.k., I can feel pain. But is pain really more than a signal telling me that something very harmful might be happening to part of my body? O.k., it hurts, but is that really more than an interrupt telling my brain to pause any other unimportant task and try to identify the root cause of the pain in order to stop it?

    Yes, o.k., I can see colors, but is my experience of “red” really qualitatively different from color codes in RGB, HSV or HSL color models? If yes, in what sense?

  137. sf Says:

    Alexander #131
    As regards vanishing Phi for feedforward networks: regardless of exactly where it comes from in the definition of Phi, it probably boils down to the fact that Phi measures some kind of intrinsic information; information involves, in some form or other, a sender and a receiver, reaffirmed in IIT as a “difference that makes a difference”. But using backward induction from the leaves of the feedforward network, intrinsic information vanishes. (The feedforward network is not necessarily a tree, but can be “lifted” to a tree by following forward directed paths, so “leaves” still makes sense.) I.e., if the process ends in a “dead-end”, there is no intrinsic information, and no consciousness.
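
    To make the “dead-end” point concrete, here is a toy structural check (purely my own illustration; it only tests for feedback loops and does not compute Phi): a strictly feedforward circuit has no directed cycle, so no gate’s output can ever make a difference back to itself.

    ```python
    def has_feedback(circuit):
        """Return True if the directed graph (dict: node -> list of successors)
        contains a directed cycle, i.e. some feedback loop."""
        WHITE, GRAY, BLACK = 0, 1, 2
        color = {node: WHITE for node in circuit}

        def visit(node):
            color[node] = GRAY
            for succ in circuit.get(node, []):
                if color.get(succ, WHITE) == GRAY:
                    return True            # back edge: cycle found
                if color.get(succ, WHITE) == WHITE and visit(succ):
                    return True
            color[node] = BLACK
            return False

        return any(color[node] == WHITE and visit(node) for node in circuit)

    # A feedforward XOR tree: inputs feed a gate, the gate feeds one output "leaf".
    feedforward = {"in1": ["g1"], "in2": ["g1"], "g1": ["out"], "out": []}
    # The same gates with the output wired back to an input: a recurrent loop.
    recurrent = dict(feedforward, out=["in1"])

    print(has_feedback(feedforward))  # False: back-induction from the leaf empties it
    print(has_feedback(recurrent))    # True: the loop can make a difference to itself
    ```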

    The case of conventional algorithms or programs is similar; they have a fixed input/output repertoire, and they stop after output, so there’s no intrinsic information. Koch’s book discusses at length the lack of teleology or teleonomy in algorithmic approaches, the gap between goal-directed systems and deterministic systems, and IIT seems to endorse this strongly, which is what leads to these “deductions”. There are also good books by the physiologist Denis Noble on the teleology theme and systems biology; a recent and nice read is http://musicoflife.co.uk/
    and for a debate with philosophers see
    Goals, No-Goals, and Own Goals: A Debate on Goal-Directed and Intentional Behaviour
    Alan Montefiore & Denis Noble (eds.)
    Also, in http://en.wikipedia.org/wiki/Denis_Noble
    see “Ten Principles of Systems Biology”, which includes:
    There is no genetic program
    There are no programs at any other level
    There are no programs in the brain
    — he tends to question and explain things seriously, so it’s worth a look.

    More generally, it seems that IIT is based on the “incommensurability” of two types of information: physical (what they call “causal”) information, which is what one sees looking at the system from the outside as an object, and intrinsic information, which is what the system sees from within, in its role as subject. In the case of Scott’s counterexample, “incommensurability” essentially means a strengthened form of non-diagonalizability. But I guess they deny that conventional algorithms or programs have anything beyond “causal” information. On the other hand, their discussion of causal information is not so satisfying; it essentially boils down to a choice of noise model which rigs things so that the obvious types of units (neurons, assemblies) will stand out. They’re obliged to do this because they don’t want to invoke the sensorimotor loop, which is the natural basis of this “view from the outside”. This banishing of the sensorimotor loop is, of course, meant to preserve consistency with the Crick-Koch NCC viewpoint.

  138. Clive Says:

    Why does putting your hand in a fire and holding it there produce a conscious experience that is “painful”, and unpleasant? (At least for most people, apparently.)

    Some possibilities:

    1. Pain is “hardwired”.

    The painful experience is somehow intrinsic to the particular structures or kind of processing (and so on) involved in our brains as a result of holding a hand in the fire.

    No matter where it is found in the universe, that particular structure or kind of circuit or whatever combination of attributes underlies the “pain”, will always produce exactly that same “pain”.

    2. Pain is “learned” or perhaps “constructed” but everything preceding that conscious experience doesn’t really have any particular effect.

    In other words, the details of structure or type of processing, etc., are neither here nor there, but there is at least one aspect of our brains (involved with generating our inner conscious experiences) that has found a way to generate some suitable “qualia”. And by “suitable”, I mean that the nature of that experience is such that it helps us to function and survive.

    So this might be a little like a paint-by-numbers picture where “you” get to choose the mapping from numbers to colours in any way that seems to work out well for “you”. On the other hand, “I” may do it in a completely different way from you, but no matter, as we have a common language to communicate, and this allows for a common understanding of the numbers underneath our own particular palette.

    3. ?

    The point that I’m trying to make is that it seems to me (perhaps with a few exceptions) that our “qualia” are such that they have a causal role to play in our behaviour. Pain is unpleasant and we generally try to avoid behaviour that leads to pain. On the other hand, eating energy-rich food is generally a happy experience. From this I infer that consciousness seems very unlikely to be an epiphenomenon. Instead, evolution has found a way to leverage some particular “capabilities” of the universe in order to endow us with “consciousness”, and that is something useful for us, at least so far!

    What is it like to be the Universe?

  139. Scott Says:

    Jochen #135:

      Not so in the case of consciousness, where we have direct access only to a single instance. Coming up with paradigm cases there, as Scott does, includes an intermediate step of proposing some property/observational correlate/what have you whose presence licenses us to conclude the presence of consciousness. But the list of paradigm cases so created is a very different animal from the lists of paradigm hot objects or continuous functions, in that it is determined by a hypothesis about what else is present if consciousness is. And phi is just such a proposal, so again, if you’re saying that the list of cases phi picks out is not the same as the list of cases you’ve picked out, then all you’re saying is simply that what you believe is sufficient to conclude the presence of consciousness is something other than phi.

    This is an interesting objection, but one response would be to question whether we have “direct access” to the paradigm-cases even in the examples of continuous vs. discontinuous functions and hot vs. cold.

    For example, someone might say: “You might think boiling water is hot and ice is cold because that’s how they feel when you touch them. But in actual scientific fact, boiling water is intrinsically cold; it’s only the act of your touching it that makes it seem hot. Likewise, ice is intrinsically hot but your touching it makes it seem cold.”

    Or again, someone might say: “It might seem obvious to you that a parabola is continuous while a step function is not, but that’s only because you’ve implicitly assumed the ‘traditional,’ Zermelo-Fraenkel axioms for set theory. There are other, equally-valid foundations for mathematics in which the situation is reversed.”

    In both cases, someone who’s not an expert in the field might not be able to explain immediately why the claim is wrong. But even a non-expert could say that, at the very least, anyone who claims such things takes upon themselves a pretty staggering burden of evidence, which they’d better justify by whatever they say next. It’s on them to prove that the meanings of hot and cold, continuous and discontinuous ought to be stretched in the strange ways they’re claiming, not on the rest of us to prove that they shouldn’t be.

    And I’d say the analogous thing about IIT. If you tell me that a 2D grid of XOR gates has subjective experience just as I do, whereas a being behaviorally identical to me but simulated on a computer with a low Φ-value has no experiences—well, I might not be able to reject your claim as surely as 1+1=3, but I can say that you’ve assumed upon yourself a staggering burden of evidence. And the whole point of the remainder of my post—the part about arguments (2) through (4)—was that, in my view, IIT as it currently stands doesn’t come close to meeting that burden.

  140. Adam Barrett Says:

    Having published a couple of papers about integrated information theory, and as a former string theorist, here is where I stand in this debate: how I think we might move forward with the basic idea of IIT, my defence of the basic idea, and my critique of the current formulation.

    Please see my most recent paper for more details http://journal.frontiersin.org/Journal/10.3389/fpsyg.2014.00063/full

    (i) It is a beautiful idea that consciousness is in some sense intrinsic integrated information. We don’t yet have a correct mathematical formulation of this, or even a fully-fledged conceptual formulation, but I haven’t come across a reason why we shouldn’t try to build a theory from this idea.

    (ii) Information can only be intrinsic to fundamental physical entities. Existing formulations of IIT are not applicable to standard models of fundamental physical entities, but rather rely on an external observer-dependent graining of the system into discrete nodes.

    (iii) In modern physics, fields are considered fundamental. I therefore hypothesise that consciousness arises from information intrinsic to fundamental fields, and in practice it is specifically the electromagnetic field that is from a fundamental perspective the substrate of (complex) consciousness. (Neurons are a scaffolding that enable a complex electromagnetic field configuration to come into being.) I would love to see a reformulation of IIT in terms of continuous fields.

    (iv) I don’t see a problem with the positing of very small levels of consciousness in all kinds of non-complex systems. From my field perspective, the phenomenology assigned to an isolated electron in a vacuum, or even a tree, which has no complex electromagnetic field, would be very minimal. Since the only consciousness we can be certain of is our own, the positing by integrated information theories of germs of consciousness everywhere is no reason to dismiss them. A theory should stand or fall on whether or not it can elegantly and empirically describe human consciousness. That said, different people have different ideas about what the definition of consciousness should be. Perhaps some people would find it more elegant if intrinsic integrated information were equated with consciousness only when the content/structure of the intrinsic integrated information satisfied certain constraints. This could be accommodated into the theory whilst maintaining that, most fundamentally, it is intrinsic integrated information that underlies subjective experience.

    (v) Suppose that, as for PHI, a 2d lattice structure of logic gates can generate a large amount of integrated information according to a sensible definition. To me, such a structure would still not have rich phenomenology because it does not cycle through a large variety of different states over time; in other words it does not have complex dynamics. On my view, generating a large amount of intrinsic integrated information at one moment in time is not sufficient for a system to experience consciousness worthy of being compared to that experienced by an adult human with a brain that rapidly cycles through vast repertoires of different states from each second to the next. It would be hard to imagine any formulation of IIT that did predict a rich phenomenology for a non-complex system.

  141. Jochen Says:

    Scott #139:

    This is an interesting objection, but one response would be to question whether we have “direct access” to the paradigm-cases even in the examples of continuous vs. discontinuous functions and hot vs. cold.

    In my view, the paradigm cases here define what we mean by hot, or continuous, i.e. we start out with the paradigm cases, which appear naturally grouped to us according to some common property, and that’s where we get our first notion of the concept of e.g. ‘heat’ from. That’s then a ground from which we can start theorizing, and deciding edge cases, and so on. It might be plausible in a Kantian sense that the things themselves are of a totally different character, but let them be; we have access only to the phenomena anyway, and they determine our concepts.

    But that doesn’t seem to be how things work with the paradigm cases for consciousness: we get the notion of consciousness solely from our own experience of it, and to find other instances, must already form some notion of what property must be manifest in order for there to be (perhaps only plausibly) consciousness. But the paradigm cases thus found then can’t be used anymore to inform our pretheoretical notion of consciousness (or what property indicates it), since we’ve used that notion already to find the ‘paradigm cases’.

    Regarding the evidential burden faced by IIT, I don’t think I disagree much; even if the idea is right, there’s clearly a long way to go for it. I’m not sure its proponents think otherwise. But it seems perfectly conceivable that at the end of this road, if the theory should not fall prey to other obstacles, there might well be a judgment that yes, 2D XOR grids do have some form of conscious experience; it’s just that, in my view, using the counterintuitiveness of this claim as a priori grounds for rejecting the theory seems like putting Descartes before the horse (sorry).

    Alexander #136:

    I find it interesting that even very skeptical people tend to be so convinced that consciousness is more than what can be observed externally.

    Well, there’s of course many arguments attempting to demonstrate that there’s more to consciousness than what can be ascertained by monitoring external behaviour (indeed, behaviourism is by and large considered to be refuted). To my mind, they’re pretty convincing, especially the modal argument due to Kripke (of which Scott has presented a version in the argument from zombies), and the knowledge argument (defended most famously by Frank Jackson in his story of Mary the super-scientist). But I won’t recount them here, as I think that would be too much of a side-track.

    Scott’s right, I believe, in saying that what you think about the hard problem doesn’t have any immediate consequences regarding your stance towards the pretty hard problem. Perhaps the following argument clarifies this somewhat: consider your standard philosophical teleportation experiment, which, inevitably, goes wrong, leading to two copies of you, one waking up in room A, the other in room B. If all facts about consciousness can be externally ascertained (in which case the PHP would become a substantially easier problem), then the question which of the two is you should be easily settled: one would just ask.

    Both, of course, will take themselves to be the real you, and on behavioural grounds, there is no reason to disbelieve either one. But, regardless of whether you are a materialist, a qualophile, a panpsychist or a dual-aspect monist, I have a feeling that to you, the story would look decidedly different: you will, after the teleportation, open your eyes either in room A or in room B. Your subjective experience will be that of walking into the teleporter, and then finding yourself in either of the rooms. This fact, regardless of your stance on the hard problem, does not seem to me to be reducible to behaviour—again, both copies will indignantly insist on being the real you. But then, there’s facts about consciousness that can’t be gleaned merely by external observation, and there is a genuine PHP.

  142. Alexander Says:

    Adam #140:

    Since the only consciousness we can be certain of is our own, the positing by integrated information theories of germs of consciousness everywhere is no reason to dismiss them.

    In which sense can we be certain of our own consciousness? How can we exclude that appearing conscious and being conscious is actually the same? And why should consciousness require intrinsic information?

  143. Davide Says:

    Regarding the comparison with continuity, or temperature, I think you have been too aggressive with those examples. Many (useful and important) quantities in physics seem really “strange”. Especially in quantum physics, but even in classical physics, e.g. the action. What could a time integral of the *difference* (the difference?) between kinetic and potential energy possibly be good for? Maybe the sum, but the difference? Well, you know the story.

    I agree with most of the rest of what you say, and I myself am not an II theorist either. But I am glad some people are, and are passionate about it. Maybe in the future they will be able to refine the theory, make it work, and convince you and me. Or maybe not, but at least they did what they are passionate about (and yes, all this theory sounds quite a lot like philosophy, but many sciences were, once upon a time…)

  144. Scott Says:

    Davide #143: Far be it from me to stop people from doing what they’re passionate about! Well, OK, if someone’s passion is cross burnings or spree killings, then close be it to me to stop them, if possible. But I don’t want to stop anyone from making honest efforts to theorize about consciousness.

    On the other hand, as I said, it would make me happy if the IIT theorists were to scale back their claims, and consider more seriously the idea that “global integration of information” might be an extremely general property of complex systems, whereas what we’re trying to get at by “consciousness” is much more specific. I’d like to see more ideas on the table, rather than a focus on a single “mathematical measure of consciousness” whose definition has many arbitrary-looking aspects, and that I’d say doesn’t do too well at capturing what we mean by “consciousness” anyway. Let a thousand Φ’s bloom! It’s not the questioning that’s at issue here; rather it’s what strikes me as premature confidence about a single, highly-specific, not-overly-promising sort of answer.

  145. Alexander Says:

    Jochen #141:

    Well, there’s of course many arguments attempting to demonstrate that there’s more to consciousness than what can be ascertained by monitoring external behaviour (indeed, behaviourism is by and large considered to be refuted).

    I do not share the perspective of behaviorism, but rather that of cognitivism.

    Behaviorism is more than just monitoring of external behavior. Behaviorism is based on a very narrow definition of behavior, and it focused too much on classical and operant conditioning. That is the main reason why behaviorism has been replaced by cognitivism.

    But cognitivism is by no means a refutation of behaviorism. One of the most famous criticisms of cognitivism is Searle’s Chinese room thought experiment, and I have to confess that I find this thought experiment rather pointless.

    consider your standard philosophical teleportation experiment, which, inevitably, goes wrong, leading to two copies of you, one waking up in room A, the other in room B. […] Your subjective experience will be that of walking into the teleporter, and then finding yourself in either of the rooms. This fact, regardless of your stance on the hard problem, does not seem to me to be reducible to behaviour—again, both copies will indignantly insist on being the real you.

    And from my point of view, they would both be right and wrong at the same time! But one thing should be very uncontroversial: both of them would certainly be conscious beings.

  146. fred Says:

    Clive #138
    We need to consider pain from an evolutionary point of view: if an environmental situation is dangerous, natural selection will eventually make it so that individuals who
    1) recognize the situation and
    2) grow overriding circuits to avoid it at all cost
    will be the ones who make lots of happy babies.
    The keywords are “overriding” and “at all cost”; that is why pain “hurts”… Unlike circuits that merely bring up a gradual attraction to some environmental situations (food, opposite-sex genitalia), pain circuits spring into action in a brutal, unconditional way… It is the relative overriding action of various emotional circuits that defines them.
    The classification of emotional circuits isn’t necessarily permanent either; sometimes pain becomes pleasure (masochism), or pleasure becomes pain (eating disorders).
    It’s key to realize that emotions are probably as strong, if not stronger, in animals as in humans.
    Primordial qualia about colors are probably related to pain/pleasure as well… Say one type of plant is beneficial and another type is lethal; if the plants have different pigments, it’s very likely that color vision will evolve, with an association of one color with pain and the other color with pleasure.
    Colors don’t exist in a vacuum for animals; that’s why we associate red with heat, fire, blood, sex, passion, and blue with cold, ice, water, death, sadness…

  147. fred Says:

    How would IIT rank a computer that perfectly simulates a human brain?

    Or where is the flaw in the following?
    I.e., we reverse-engineer neurons, axons, cell divisions, … every chemical process in the brain. I assume that all the brain functions are “classical”, so we can reasonably well capture a snapshot state of a real brain, then reproduce it digitally, along with its I/Os, using a time step small enough compared to the speed of the fastest chemical reactions. And let the simulation run.
    The simulation might diverge from the actual brain because of classical chaotic uncertainty, but that’s not the point; we’re not trying to predict the behavior of a given brain, just to create an object that has the same states as a typical human brain – I believe that this system will enjoy as much or as little consciousness as the actual brain; it will have the same “drives”, “motivations”, and “feelings”. The computer and the brain are both made of atoms after all; they have the same inputs/outputs and exhibit the exact same patterns (albeit on different levels of abstraction), so they’re equivalent.
    The same equivalence holds if we use a computer made of water bubbles instead of electronic gates, and it holds too if the simulation is done on paper, writing calculations down with a pen and going mechanically through a rule book to derive the next state (in that case the I/O rate has to be slowed down considerably)… Yes, symbols on paper are conscious, and so are the big integers that can encode the full evolution of the brain simulation… I.e., our universe is ultimately mathematical.
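
    For what it’s worth, here is a toy version of the “small enough time step” idea (my own sketch: a few leaky integrate-and-fire model neurons stepped with Euler’s method; the constants are arbitrary ballparks, and it stands in for the real chemistry without pretending to capture it):

    ```python
    import random

    DT = 0.0001        # time step (s), chosen much smaller than the fastest dynamics modeled
    TAU = 0.02         # membrane time constant (s), a textbook ballpark
    THRESHOLD = 1.0    # firing threshold (arbitrary units)

    def step(voltages, weights, inputs):
        """Advance every model neuron by one Euler step; return new voltages and spike flags."""
        spikes = [v >= THRESHOLD for v in voltages]
        new_v = []
        for i, v in enumerate(voltages):
            if spikes[i]:
                new_v.append(0.0)                          # reset after a spike
                continue
            drive = inputs[i] + sum(w for j, w in enumerate(weights[i]) if spikes[j])
            new_v.append(v + DT * (-v / TAU + drive))      # leaky integration
        return new_v, spikes

    random.seed(0)
    # Three toy neurons, randomly coupled, driven by constant input currents.
    weights = [[random.uniform(0.0, 5.0) if i != j else 0.0 for j in range(3)] for i in range(3)]
    voltages = [0.0, 0.0, 0.0]
    for _ in range(10000):                                 # one simulated second
        voltages, spikes = step(voltages, weights, [60.0, 55.0, 50.0])
    print(voltages)
    ```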

  148. Ben Standeven Says:

    If there’s one thing computers are good at, it’s integrating information! So the phi-value of a computer simulating a human brain would be extremely high; probably higher than an actual human brain. But according to IIT, the consciousness of a system doesn’t depend on its exact state; so this simulation has no more or less consciousness than, say, a simulation of a traffic jam. Of course, the “concepts” involved in the two cases are different, so we might suppose that the two computers would have different experiences.

  149. Joe Fitzsimons Says:

    Perhaps I should keep my mouth shut here, but this debate is quite painful to watch unfold. Consciousness is clearly a word we have made up to describe our human experience, and outside of this setting it does not have an unambiguous meaning. We can apply it with some success to other settings where there is sufficient shared ground with our own experience for it to be relatively clear what we mean, but as we move further away from the usual human experience the ambiguities become overwhelming, and the word has no meaning. This is exactly why people get into debates over whether AI can be conscious or whether an ant is: because consciousness does not have a specific meaning when applied in this way. Different people will extend the definition in different ways to support their own position. Such questions are literally meaningless. I have no problem with people setting out to mathematically define some measure of complexity which they hope to correlate with certain information-processing abilities, but it seems misguided to hijack a word which already has a meaning (albeit one restricted to a certain setting). Doing so is equivalent to asserting that you know the true colour of an electron. It is a completely meaningless statement unless one buys into your extended definition of colour. But this is the best-case scenario. Another possible outcome is that the extended definition is not an extension at all, but is actually incompatible with the conventional meaning. This latter scenario seems to be exactly what is happening here since, as Scott points out, there seems to be significant tension between the conventional meaning of consciousness and the meaning ascribed to it in the context of IIT.

  150. James Cross Says:

    fred #147

    “Or where is the flaw in the following?
    I.e., we reverse-engineer neurons, axons, cell divisions, … every chemical process in the brain. I assume that all the brain functions are “classical”, so we can reasonably well capture a snapshot state of a real brain, then reproduce it digitally, along with its I/Os, using a time step small enough compared to the speed of the fastest chemical reactions. And let the simulation run.”

    Maybe not.

    The brain may be synchronized and faster in ways that cannot be explained by classical physics.

    http://archive.wired.com/medtech/drugs/magazine/16-04/ff_kurzweil_sb

    With all of the talk about integrated information, I have seen little speculation in this discussion on how the information is stored or modified. Or, for that matter, even on what the unit of information is in the brain.

  151. Alexander Says:

    http://www.rifters.com/real/Blindsight.htm

    Has anyone read this novel? I am just asking, because it claims that consciousness might just be some kind of weird evolutionary accident.

  152. Adam Barrett Says:

    @Alexander #142

    “In which sense can we be certain of our own consciousness?”

    I adopt Cartesian reasoning for being certain of my own consciousness. If you don’t want me to call my own subjective experience consciousness, then you have a different definition of consciousness from me!

    “How can we exclude that appearing conscious and being conscious is actually the same?”

    We can’t! But I would enjoy a theory that provided an elegant scientific description of exactly when things appear conscious. This kind of question applies to all science. How do I know that Newton’s laws aren’t an illusion?

    “And why should consciousness require intrinsic information?”

    For the detailed argument I would read through Tononi’s work. In short: because conscious experiences have the property of being private/subjective/not externally observable.

  153. fred Says:

    James #150
    Well, sure, let’s just assume the brain is actually an extremely sophisticated quantum mechanical system.
    But that’s just pushing the simulation from the chemical realm to the lower QM realm.
    A brain is “just” 3 lbs of carbon, oxygen, hydrogen, nitrogen.
    It all boils down to how much of the physical world can be simulated on a computer.
    And as I underlined, efficiency doesn’t even enter the picture, as we can slow down the simulation as much as we want (while scaling the I/O), unless somehow there’s something intrinsic about the speed of the “clock” of a real brain relative to the Planck time scale?
    But the point is that once you can simulate a brain, you effectively have a program that exhibits the same drives, creativity, motivations, dreams, longings, capacity to learn as an intelligent being.
    Would this program be conscious? I think so. Unless my assumption that consciousness is about organization and information is wrong. Maybe consciousness is directly physical in its nature (e.g. consciousness is the actual electromagnetic field created by the brain)?
    It seems to me it all boils down to the relation between reality and simulation (or hardware vs software). The problem is that the ultimate nature of “physical” objects will always be elusive to us.
    The only direct object we know of is consciousness itself (The “I”).
    It seems that it’s more elegant to assume that physical reality and consciousness are of the same nature (the mathematical universe hypothesis).

  154. Jay Says:

    Joe #149, would you plead we shouldn’t talk about weather when it concerns exoplanets and stars?

  155. Jochen Says:

    Alexander #145:

    And from my point of view, they would both be right and wrong at the same time!

    That may be so, but your experience won’t be that of becoming both. Suppose the teleport worked with an additional flaw, meaning that the copy waking up in room A is reconstituted with a severe heart defect leading to cardiac arrest in a matter of minutes. Or suppose that of both copies, one will be thrown, for reasons obscure to anyone but those philosophers so keen on designing this kind of thought experiment, in a pit of fiery death, while the other will receive one million dollars in gold.

    It seems to me that if you truly hold the view that consciousness is reducible to the behavioural, then, since both copies will equally be you, you’ll enter into the teleport without trepidation: after all, you’ll certainly get the million dollars. You’ll also die—and before you say that you wish to avoid the pain that tends to come part and parcel with fiery pits of death, we could arrange for a completely painless death by nitrogen asphyxiation—but after that, you’ll be as before, just a million bucks richer.

    But speaking for myself, I’d think twice about this: it seems to me that there’s a real chance that I’ll wake up in room A, only to be snuffed out in whatever way and then, nothing. So from an external, behaviour-based perspective, while there may be no difference between both copies, it may matter greatly to me (or you, I suppose) which one I end up becoming, pointing to there being a difference after all.

    But one thing should be very uncontroversial: both of them would certainly be conscious beings.

    Of course, that question is essentially where all the controversy regarding the mind-body (‘hard’) problem resides. 😉 Either of the copies—or both—might well be a zombie (or so at least those that buy into this sort of argument will argue—I take it you don’t, but the controversy manifestly exists).

  156. Richard Cleve Says:

    Scott, I agree with your point in part 1, but have a minor quibble with your illustrative example. The function x sin(1/x) looks pretty continuous to my intuition—prior to resorting to any epsilon-delta definitions. Perhaps a more illustrative example is the standard definition of a connected set, and the fact that it does not imply “path connected” (e.g., by the counterexample of the so-called “topologist’s sine curve”, http://planning.cs.uiuc.edu/node140.html)
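
    (For readers who don’t want to click through, the set in question, as it is usually defined, is
    \[
    T \;=\; \{\, (x, \sin(1/x)) : 0 < x \le 1 \,\} \;\cup\; \bigl( \{0\} \times [-1, 1] \bigr),
    \]
    which is connected but not path-connected: no continuous path within \(T\) joins the vertical segment to the oscillating part of the curve.)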

  157. Joe Fitzsimons Says:

    @Jay: Perhaps you missed it in my comment, but I did mention that we can and do use the term consciousness when the settings are close enough for the meaning to be relatively clear. Certainly there is enough commonality between earth and other planets for the word weather to be relatively meaningful. But in any case, weather isn’t a property to have or not have, so it doesn’t lead to these types of debates.

  158. Scott Says:

    Richard #156: OK, fair enough. (But is your intuition really 100% comfortable with \(x \sin(1/x)\) being continuous but undifferentiable at \(x=0\), as is \(x^2 \sin(1/x)\) “but for a different reason,” whereas \(x^3 \sin(1/x)\) is not only continuous at \(x=0\) but has a derivative of 0 there?)

  159. Alexander Says:

    Adam #152:

    I adopt Cartesian reasoning for being certain of my own consciousness.

    O.k., but then how can you be certain that your understanding of consciousness is equivalent to that of others?

    But I would enjoy a theory that provided an elegant scientific description of exactly when things appear conscious.

    From my point of view, the main difficulty is that there is no common understanding of consciousness. For some, consciousness seems to be closely related to cognitive abilities. Others would deny that view.

    Jochen #155:

    That may be so, but your experience won’t be that of becoming both. Suppose the teleport worked with an additional flaw, meaning that the copy waking up in room A is reconstituted with a severe heart defect leading to cardiac arrest in a matter of minutes.

    No, in my opinion you’re putting the cart before the horse. The question is not whether I will become both of the copies. Rather, the question is whether both of the copies are future versions of me. And yes, I would agree that this is likely to be true. We change constantly. Right now, I am not the same person that I was yesterday. However, this question seems to be more about personal identity than about consciousness.

    Either of the copies—or both—might well be a zombie (or so at least those that buy into this sort of argument will argue—I take it you don’t, but the controversy manifestly exists).

    From that perspective, anyone might be a zombie. But is there a good reason to believe that the teleported copies are more likely to be zombies than anyone else? If yes, why? My default assumption is that the people surrounding me are not zombies, unless there is evidence that contradicts this default assumption.

  160. Alexander Says:

    James #150:

    The brain may be synchronized and faster in ways that cannot be explained by classical physics.

    http://archive.wired.com/medtech/drugs/magazine/16-04/ff_kurzweil_sb

    So Hameroff claims that trillions of computations per second take place in the microtubules of a cell? O.k., I cannot refute this. But it would imply that every amoeba had the computing capacity of a small supercomputer. Seems unlikely to me, but again, I cannot refute it. Now, let’s focus on the brain: If every neuron was a small supercomputer of its own, how would all these small supercomputers be supplied with a sufficient amount of data to be processed? The synapses are most certainly not fast enough. But if there is no sufficient I/O between the neurons, then all those small supercomputers cannot really contribute much to the overall performance of the brain, even if they exist.
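
    To put rough numbers on the I/O point (very much back-of-the-envelope, using commonly cited ballpark figures of roughly \(10^4\) synapses per neuron and at most a few hundred to a thousand spikes per second per synapse, not anything from the linked article):
    \[
    10^{4}\ \text{synapses/neuron} \;\times\; 10^{2} \text{ to } 10^{3}\ \text{spikes/s} \;\approx\; 10^{6} \text{ to } 10^{7}\ \text{events/s},
    \]
    which is several orders of magnitude below the claimed \(10^{12}\) internal operations per second, so most of that hypothetical internal computation could never be communicated to other neurons anyway.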

  161. Aaron Sheldon Says:

    On a funny note… Giulio Tononi’s acceptance that expander graphs and error-correcting codes can have measurable consciousness is getting a bit close to the Masaru Emoto boundary.

    http://en.wikipedia.org/wiki/Masaru_Emoto

  162. Sniffnoy Says:

    Nitpicking, but \(x^2 \sin(1/x)\) is differentiable at 0, with a derivative of 0; it just isn’t continuously differentiable there.
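
    Spelled out, with \(f(x) = x^2 \sin(1/x)\) for \(x \ne 0\) and \(f(0) := 0\):
    \[
    f'(0) \;=\; \lim_{h \to 0} \frac{h^2 \sin(1/h) - 0}{h} \;=\; \lim_{h \to 0} h \sin(1/h) \;=\; 0,
    \]
    while for \(x \ne 0\),
    \[
    f'(x) \;=\; 2x \sin(1/x) - \cos(1/x),
    \]
    which has no limit as \(x \to 0\); so \(f'\) exists everywhere but is not continuous at 0.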

  163. Scott Says:

    Sniffnoy #162: Duhhhh, sorry, yes, thanks! Only strengthens my argument about the lack of intuitiveness, I suppose. 🙂

  164. Adam McKenty Says:

    Scott #139:

    This is an interesting objection, but one response would be to question whether we have “direct access” to the paradigm-cases even in the examples of continuous vs. discontinuous functions and hot vs. cold.

    For example, someone might say: “You might think boiling water is hot and ice is cold because that’s how they feel when you touch them. But in actual scientific fact, boiling water is intrinsically cold; it’s only the act of your touching it that makes it seem hot. Likewise, ice is intrinsically hot but your touching it makes it seem cold.”

    This seems like a restatement of your earlier scenario of a counter-intuitive (and therefore erroneous) theory of heat, and subject to exactly the same challenge: our experience of heat (direct or indirect) isn’t an accurate analogue for consciousness, because we can experience it in a multitude of different systems, with radically different structures and dynamics.

    The situation with consciousness would be this: you attempt to feel the heat in different objects, but when you touch them you can’t feel anything at all. Because of this, you’re left to make assumptions based on what goes along with heat in the one system where you can observe it. But until we have some clear understanding of consciousness in that one system, those assumptions will be biased and unreliable.

    As Jochen pointed out, if we take our assumptions about the presence or absence of consciousness as data that a PHP solution has to explain, then our task becomes arbitrary. We’re then no longer attempting to devise a theory of consciousness, but a theory of “Scott’s intuitions about consciousness,” or of “Adam’s intuitions about consciousness”, or of “Buddhist intuitions about consciousness”, or of “male North American 21st-century intellectuals’ intuitions about consciousness”.
    To look at this from the opposite side: If I’m talking to you over coffee and you say you’ve come up with a rock solid theory of consciousness, and it proves beyond a shadow of doubt that I am an unconscious zombie, I’m going to say “you may have made a theory of something, but it sure ain’t consciousness because I know that right now I’m having a subjective experience.” That’s the analogue to heat. But that analogue doesn’t apply to anything except one’s self, as far as I can tell.

    Once again, thanks for the excellent discussion. If I were about 100 km south of here, I’d offer to buy you a drink (hot or cold — but to whom?) while you’re in Vancouver…

  165. Adam McKenty Says:

    Fred #147:

    Or where is the flaw in the following?
    I.e., we reverse-engineer neurons, axons, cell divisions, … every chemical process in the brain. … And let the simulation run.

    The flaw, as I see it, is in the assumption that consciousness = computation. Is there any evidence that this is the case? To borrow an analogy from Tononi, you can compute everything that goes on in a nuclear reactor, and no radiation will be generated.

  166. James Cross Says:

    fred #153

    If there is a quantum element to consciousness (meaning it can’t be explained completely in classical terms), it might be difficult to take that snap shot of the brain you wanted to take and do anything useful with it.

    Regarding your electromagnetic field idea, you might want to look at some of the research of Michael Persinger.

    alexander #160

    I’ll rate that half true about the amoeba.

    Microtubules are core structural elements of cells and are involved in movement. Look at this video.

    http://cryoem.berkeley.edu/microtubules

    “The flexibility of tubulin and the consequent versatility of its self-assembly can hardly be an accident. We propose that the polymorphism of assembly unique to tubulin reflects an exquisite tuning mechanism for the complex interaction of different microtubule intermediates with cellular factors that need to detect or make direct use of the growing or shortening state of microtubules to play functional roles at the right time and place in the cell. ”

    I don’t find it difficult to imagine that as brains and nervous systems evolved these structures and molecules might have taken on roles in producing consciousness. Microtubules are the key structural components even in single celled organisms of cilia and flagella. Cilia are sense organelles and flagella provide the ability to move. My whole idea is that brain, nervous system, and consciousness evolved because of selective advantages relating to body control and perception.

  167. wolfgang Says:

    @Adam #165

    >> you can compute everything that goes on in a nuclear reactor,
    >> and no radiation will be generated

    But simulated radiation will be generated in the computer, and it will have the exact same effect on the simulated matter in the computer as in the real world.

    So we have to assume that simulated consciousness will be experienced by simulated people just like the real thing.
    And simulated people will assure each other that their conscious experience feels very real – otherwise the simulation would not be very good.

    So at this point you have only a few possible choices:
    i) Complex calculations generate consciousness.
    ii) There is something about real reality which physics does not describe and therefore cannot be simulated.
    iii) There is something which will always prevent us from simulating brains.

    ad iii) It could be (purely hypothetical) that human brains require entanglement with the environment to work properly and we cannot simulate this (perhaps Gil Kalai is right and a quantum computer cannot exist).

  168. Abel Says:

    Re: Christof Koch #72, this is a pretty off-topic remark, but I’d argue that if anthropic arguments count as evolutionary explanations, then some physicists do indeed look for such explanations for concepts such as electrical charge : )

  169. Alexander Says:

    Wolfgang #167:

    ad iii) It could be (purely hypothetical) that human brains require entanglement with the environment to work properly and we cannot simulate this (perhaps Gil Kalai is right and a quantum computer cannot exist).

    If this were true, it should hold for non-human brains as well. It seems quite unlikely that a fundamental mechanism like quantum entanglement suddenly comes into play at a very late point in the history of brain evolution. Thus, I make the following prediction:

    If we can simulate much simpler brains, like those of honey bees, and if the output of the simulation comes close to that of a real honey bee’s brain, then a simulation of the human brain will result in output similar to that of a human brain as well.

    Of course, this says nothing about the consciousness of the simulated brain.

  170. Alexander Vlasov Says:

    Sorry for being pedantic, but \(x \sin(1/x)\) could hardly be “intuitively continuous” at \(x=0\), because that point does not belong to the domain of definition. The continuous one is the modified version.
    I think there is a similar problem with IIT: if people use “information” in the title, they should use some basic principles of information science. But all the discussion about the consciousness of a 2D grid that supposedly cannot be reproduced on a computer looks like an unjustified attempt to reject the Church-Turing principle, and for such a simple example it looks like trying to claim \(1+1 \ne 2\).

  171. Richard Cleve Says:

    Scott #158: No, I don’t claim intuition about those functions being differentiable or not. Only intuition about the continuity of f(x) = x sin(1/x). That f(x) gets “closer and closer” to 0, as x approaches 0.

    Alexander #170: I was following the usual convention, at least for this particular function, that f(0) = 0.

  172. wolfgang Says:

    @Alexander #169

    >> we can simulate much simpler brains

    I think it is *possible* that consciousness is a side effect with little evolutionary advantage.

    As far as I know, the (classical) neural network of e.g. snails is sufficient to explain all the behavior of those animals.
    This could still be true for larger brains, however, consciousness *could* be an inevitable quantum side effect of wet computers.

    It could very well be that the (classical) neural network of human brains would explain (almost) all of our behavior; indeed most of our brain activity is most likely unconscious.

    But it is possible that the simulation of a *conscious* brain would require a quantum computer and it is *possible* that quantum computers cannot exist due to some unknown reason.

    Otherwise, we have to accept one of the other conclusions, which seem equally strange.

  173. Scott Says:

    Alexander #170: If a function has a limiting value at 0 (like the entropy function p log(1/p)), I thought the usual convention was just to set it equal to that limiting value.
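
    Concretely (with the natural log; the base only rescales things):
    \[
    \lim_{p \to 0^{+}} p \log(1/p) \;=\; \lim_{p \to 0^{+}} \frac{\log(1/p)}{1/p} \;=\; 0,
    \]
    so one sets \(0 \log(1/0) := 0\), just as one sets \(f(0) := 0\) for \(x \sin(1/x)\).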

    Richard #171: OK then, you should substitute differentiability of \(x^k \sin(1/x)\) for continuity of \(x \sin(1/x)\), and the rest of my analogy should go through as before. 🙂

  174. Gil Kalai Says:

    Scott (#46): “So for example, if you burn a book, it must be possible in principle to recover the book from the smoke and ash and emitted photons; if you scramble an egg, it must be possible in principle to unscramble it, etc. These things are “effectively” impossible, but only for the same statistical reason that you never see the gas particles in a box all collect themselves in a single corner.

    If you want to reject the above, then you need to rewrite not only QM, but essentially all of physics going back to Galileo and Newton!

    … One caveat: above, I was talking only about the evolution of isolated physical systems,..”

    It is always nice to return to these examples, and while I don’t fully understand what Scott means by “possible in principle” and yet “effectively impossible,” let me give my (tentative) take on it:

    It is impossible to recover the book from the smoke and ash and emitted photons; the reason (or principle) behind it is that the amount of information about the ambient quantum system required for such a “recovery” far exceeds the amount of information required to create the book from scratch.

    One aspect of this explanation is that it is information-theoretic, not computational-complexity-theoretic, so the principle for why the recovery is not possible extends even if we assume unlimited computational resources for the “recoverer.”

    (Again, I don’t know how my explanation is related to Scott’s.)

  175. Alexander Vlasov Says:

    Scott, Richard, it is not only my idea that one needs to be careful about x=0 for this function.

  176. Alexander Vlasov Says:

    PS. It is interesting that the word “convention” appears in such circumstances; cf. the debates about consciousness.

  177. Scott Says:

    Gil #174: No, the issue isn’t merely the amount of information. The issue is that the same information is literally there in the smoke and ash. I.e., if you took the quantum state of the smoke and ash, and ran the evolution equations of physics backwards, you would recover the book.

  178. Gil Kalai Says:

    Scott #177: so I guess we have a small disagreement. The way I see it, running the equations of physics backwards to retrieve, with good approximation, the book requires an amount of information about the “isolated physical system” which far exceeds the amount of information needed to create the book from scratch. (Of course, if you have full information about the “isolated physical system,” then the question of retrieving the book loses its content.)

  179. Alexander Says:

    wolfgang #172:

    I think it is *possible* that consciousness is a side effect with little evolutionary advantage.

    Yes, that’s certainly possible. How likely it seems depends on what you mean by consciousness. At one extreme, there is the meaning of consciousness as in the zombie thought experiments; but then there is also the meaning as in “losing consciousness.”

    This could still be true for larger brains, however, consciousness *could* be an inevitable quantum side effect of wet computers.

    Yes, that’s the Penrose/Hameroff hypothesis. Of course, we cannot refute it. But I never really understood why consciousness resulting from quantum effects should be more plausible than consciousness resulting just from the interaction of 10^11 neurons and 10^15 synapses.

  180. James Cross Says:

    wolfgang #172

    wolfgang = snail++;

    🙂

  181. Ben Standeven Says:

    @Gil Kalai:

    Huh? The amount of extra information to retrieve a book from its smoke and ashes should be equal to the entropy produced by burning the book; while the information needed to reproduce the book from scratch should be equal to the entropy of the book itself. Surely the latter amount is much greater?

    On the other hand, even if you do have complete information about all the material that went into an “isolated” black hole, predicting the form of its Hawking radiation is still an interesting task.

  182. Ben Standeven Says:

    Sorry! I meant to add that this is why complexity theory is useful for black holes, even though it has not been useful in ordinary thermodynamics.

  183. Alexander Vlasov Says:

    Scott #173, I do not think that x log(1/x) = –x log x and x sin(1/x) can be treated as “equally counterintuitive” (after all, why did you choose the second example?). The definition of continuity you used was developed more or less naturally and fits the usual intuition.
    If you want to construct something counterintuitive, you may use the topological one: a function is continuous if the pre-image of any open set is open. After that you may choose some nonstandard topology, e.g. one in which all points are open – then all functions are continuous.

  184. Scott Says:

    Alexander #183: I never said they were equally counterintuitive. I was just using x log(1/x) to illustrate a subsidiary point, about the convention for defining function values by their limiting values.

  185. Alexander Vlasov Says:

    Scott #184, your initial point sounds as if the problem with intuition is due to the definition of continuity, but that is not necessarily so; the problem may be with the function x sin(1/x) itself. The behavior of that function and of x log x near x=0 is certainly different, and it can be discussed using both intuitive and formal arguments.

  186. wolfgang Says:

    @Alexander #179

    >> why consciousness resulting from quantum effects should be more plausible

    I don’t know if it is more plausible, but I wanted to point out an important difference:

    If consciousness is associated with a classical neural network, then it can be (in principle) simulated on a classical computer.

    If it is a quantum side effect (not necessarily a quantum computer a la Penrose), then it may not be possible to simulate it, the reason being that the human brain would be entangled with the whole environment.

  187. fred Says:

    Gil #178

    “which exceeds by far the amount of information needed to create the book from scratch.”

    I’m a bit confused by the concept of “amount of information” in this context…
    On one hand, all the information “contained” in the book is defined by the position and orientation of every molecule in the book.
    But on the other hand, what if the book listed Chaitin’s constant? (something that would require nearly infinite computing power to derive)
    One can never really know the amount of information contained in a given book because information is subjective (i.e. the content might be hidden/encrypted).
    Also, is the “burning reversal” argument any different from saying that all the information contained in every book ever written by humans can be derived by considering the solar system as of 1 billion years ago (assuming it’s an isolated system) and letting things run forward?

  188. Kai Teorn Says:

    Playing devil’s advocate: maybe what Phi tries to measure is not consciousness itself but rather the ability to be conscious, given proper inputs and conditions. For example, I don’t think the Phi of my brain, as measured by neuronal connections, changes all that much the moment I go into dreamless sleep or the moment I die or “lose consciousness”. What changes at that moment is the regime, inputs, and metabolism of the brain, not its connectivity. So it might (weakly, I admit) be argued that if you take a really large 2D grid, carefully design its inputs, and interpret its outputs, then it could in some respects be comparable to what the brain does – that is, it could be argued to be barely conscious the moment it processes the inputs (but not when it just lies idle and unpowered). At least, if I were Tononi I would try pursuing this line of argument.

    That aside, I really enjoyed your thorough and thoughtful rebuttal. Theories like IIT that attempt to magically reduce something seemingly incalculable to a single number are very seductive, so it’s instructive to observe how one such theory breaks down on carefully selected counterexamples.

  189. Michael Gogins Says:

    Fred #127:

    An evolutionary algorithm is not intentional. The whole point of neo-Darwinian biology is to provide a mechanistic, purposeless theory of biology. It is pretty successful but like any other scientific theory it deliberately excludes intentionality. The same is true of evolutionary computing.

    I do not doubt that it is possible to simulate, perhaps even convincingly, intentionality. But a simulation of intentionality is not the real thing.

    If you want to assert the substantial identity of the simulation of intentionality with intentionality, that can be seen as making a sort of sense, but it also means that intentionality isn’t really intentional, because then we aren’t really free and there is no objective good to orient us.

    I’d spell out the consequences of this, which are brutal, but they can be summarized in a word: nihilism.

    By the way, I am in control of my own intentionality. I could kill myself. That would reduce it to zero.

  190. fred Says:

    Michael #189
    “but it also means that intentionality isn’t really intentional because then we aren’t really free and there is no objective good to orient us.”

    But if you’re taking intentionality/free will as some sort of magical ingredient in “real” organisms (as opposed to digital simulations of those organisms), the problem is that no matter how you slice it, we’re just made of atoms, and you have to explain where that intentionality/free will is coming from.
    If the property is somehow intimately related to matter, then the intentionality was already there in the Earth a billion years ago, before the appearance of life (and possibly all the way back to the big bang).
    If the property is related to patterns/information/organization, then I don’t quite see how simulations and the real thing wouldn’t be equivalent.
    It reminds me of a passage in Zen and the Art of Motorcycle Maintenance, where the author looks at cities built by humans and wonders whether all that incredible apparent complexity/intelligence was really contained as potential within atoms and forces.
    Maybe that goes back to Scott’s previous post about evolution of complexity in systems.

  191. Ben Standeven Says:

    @fred-187:

    The book is only finitely long, so it contains only finitely many digits of Chaitin’s constant, and the number of digits is quite small compared to the total information content of the book. These digits will be extremely hard to compute, but traditional information theory doesn’t take difficulty into account.

    I presume that in a complexity-based information theory, the Omega-analog (for the chosen complexity class) will again contribute a fixed (but small for a typical book) fraction of the book’s entropy. Omega-analogs for larger classes would behave the same way, since their special properties cannot be recognized in the class we are using.

  192. fred Says:

    Ben #191 – thanks!

    Michael #189 – “I’d spell out the consequences of this, they are brutal, but can summarized in a word: nihilism.”

    Well, a theoretical nihilism, yes.
    But, in practice, whether free will/consciousness is a gift God gave to particular lumps of atoms or an illusion doesn’t matter much – in the same way that even if the universe were perfectly deterministic (in a classical way), we still wouldn’t be able to make any practical, accurate, long-term predictions, because of chaos and the fact that we’re part of the very system we’re trying to predict (you can’t perfectly model a system from the inside).

  193. Mark Says:

    Thanks for your essay! However, I think it possible that you have misunderstood the Libet experiments, which are different from Soon’s. The Libet experiment suggests that some unconscious process is activated before the volitional act. By contrast, the Soon et al. experiments try to predict some actions by scanning EEG signals, which can be equivalent to your college experiment. Yes, it is very simple to use a machine-learning technique to predict such a signal. But the main point of Libet’s work does not appear to be the predictability of acts, but the precedence of cerebral signals before a particular action.

    The question is: “If free will is an illusion, how can it be detected?” A reasonable answer is given in the experiments of Libet and others. Of course, an exact formulation of this question (or quest?) is missing.

    http://en.wikipedia.org/wiki/Benjamin_Libet

    Perhaps IIT is the seed of a new branch of machine learning called “artificial conscience” or something.

    Scott #106:
    Well, the Soon et al. fMRI experiment in 2008 (probably the state-of-the-art version of the 1970s Libet experiments with EEG) managed to predict which of two buttons a subject was going to press, a few seconds in advance and about 60% of the time. And while that’s better than chance, it’s far from obvious to me that you couldn’t do just as well by training a machine learning algorithm on the subject’s sequence of past button-presses, and not looking inside their brain as well. In fact, I did an informal little experiment back in grad school that suggested as much! But even if you do get an improvement in predictive power by looking at the fMRI scans … well, it’s obvious that people can be predicted better than chance. Cold readers, seducers, demagogues, and advertisers have known as much for millennia. What I was talking about was something different: “machine-like predictability” (or at least probabilistic predictability), of the sort possessed by a digital computer with a random-number generator or by a hydrogen atom. While it’s conceivable that our understanding of the human brain will advance to the point where we can achieve that, not only are we far from it right now, but there might even be in-principle obstacles to getting the requisite predictability while also keeping the subject alive.

    For more, see my Ghost in the Quantum Turing Machine essay (especially sections 2.12 and 5.1).
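
    (To make the “predict from past button presses alone” baseline concrete, here is a minimal hypothetical sketch – a simple order-k frequency model with an invented, mostly-alternating toy subject; it is not the actual informal experiment mentioned above:)

    from collections import Counter, defaultdict
    import random

    def predict_next(history, k=3):
        # Guess the next press (0 or 1) from the last k presses, using the
        # majority continuation of that context seen earlier in the history.
        if len(history) < k:
            return random.randint(0, 1)          # not enough data yet: guess
        counts = defaultdict(Counter)
        for i in range(len(history) - k):
            counts[tuple(history[i:i + k])][history[i + k]] += 1
        context = tuple(history[-k:])
        if not counts[context]:
            return random.randint(0, 1)
        return counts[context].most_common(1)[0][0]

    random.seed(0)
    presses, correct = [], 0
    for _ in range(500):
        guess = predict_next(presses)
        # Toy subject: alternates buttons 70% of the time, otherwise presses at random.
        nxt = presses[-1] ^ 1 if presses and random.random() < 0.7 else random.randint(0, 1)
        correct += (guess == nxt)
        presses.append(nxt)
    print("accuracy:", correct / len(presses))   # noticeably above 0.5 for this toy subject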

  194. Adam McKenty Says:

    wolfgang #167:

    But simulated radiation will be generated in the computer and it will have the exact same effect on the simulated matter in the computer as in the real world.

    To continue in the role of naive computational curmudgeon: so what? I can simulate a massive electromagnet on my computer, and the simulated magnetic field will have the appropriate effect on the simulated matter. But, I don’t have to shield my hard drive so it doesn’t get erased, because of course there’s no actual magnetic field.

    For no other property of nature is a computer simulation expected to magically generate the thing it’s simulating. Why would this be different with consciousness?

    So at this point you have only a few possible choices:
    i) Complex calculations generate consciousness.
    ii) There is something about real reality which physics does not describe and which therefore cannot be simulated.
    iii) There is something which will always prevent us from simulating brains.

    The existence of subjective experience seems to indicate that there is something about “real reality” which current physics doesn’t describe. This applies even if consciousness turns out to be, somehow, the result of computation. It’s conceivable a computational theory could solve the PHP (the “Pretty-Hard Problem”), but that still wouldn’t tell us why there’s this other, experiential dimension to the universe in the first place.

    More generally, though, there is no reason we’re limited to these three options. If consciousness is dependent on a certain physical architecture, and the computer the brain is being simulated on doesn’t have that architecture, then no consciousness will appear, even if the simulation is accurate as far as it goes. Remember that there are, necessarily, layers of software between the simulation and the hardware, because the hardware of a computer does not operate in the same way as the hardware of a nervous system. This means that even with total knowledge of the physical brain, and a perfect simulation, we still have not recreated the conditions of consciousness, because at the level at which the calculation actually occurs, it doesn’t match what happens in a real brain.

    This is a key point in IIT (which is why you have to actually construct a physical 2D grid, rather than just simulating one on a computer), but it is relevant regardless of the theory.

  195. themgt Says:

    The idea that indistinguishable p-zombies could actually exist seems like begging the question.

    Let’s say I ask “what is it like to be conscious?” of a normal human and an otherwise identical p-zombie. If we take as a postulate that one’s conscious state is “entirely a function of the current state of the physical world”, then while the human has a physical representation of her own subjective experience of the world, which she can introspect and report back on using physical processes, the p-zombie would not have such physical structures or a way to introspect its own consciousness, and could only generate a response to the question using a completely different physical mechanism than a conscious being.

    The verbal/body-language response could in theory be indistinguishable, but looking at a sufficiently advanced scan of the brain, one would see that p-zombies cannot use the same physical mechanisms that produce responses requiring introspection in conscious beings.

  196. Scott Says:

    themgt #195: As a point of information, the claim of the zombieists is not that p-zombies could actually exist in our physical universe (maybe they can and maybe they can’t). Rather, their claim is that p-zombies are at least conceivable: i.e., that there’s some way that the universe could’ve been, such that p-zombies would have existed.

  197. Alexander Says:

    Adam #194:

    For no other property of nature is a computer simulation expected to magically generate the thing it’s simulating. Why would this be different with consciousness?

    Actually, that only seems to be valid for energy/matter, but not for information processing that takes place in the simulation.

    Let’s assume a simulated author writes a poem. How would that poem differ from a “real” poem?
    Let’s assume that a simulated filmmaker produces a movie. How would that movie differ from a “real” movie?
    Or let’s assume that a simulated computer scientist proves P != NP. How would that proof differ from a “real proof”?

    Finally, let’s assume that a simulated brain produces consciousness. How can we be sure that this consciousness differs from “real” consciousness?

  198. Adam McKenty Says:

    Alexander #197:

    For no other property of nature is a computer simulation expected to magically generate the thing it’s simulating. Why would this be different with consciousness?

    Actually, that only seems to be valid for energy/matter, but not for information processing that takes place in the simulation.

    Let’s assume a simulated author writes a poem. How would that poem differ from a “real” poem?
    Let’s assume that a simulated filmmaker produces a movie. How would that movie differ from a “real” movie?

    A poem is not a property of nature. A poem, strictly speaking, is a property of consciousness 🙂

    Seriously: what is a poem (or any other informational/symbolic product) outside of consciousness? As far as I understand, information in the physical sense is just how something is — the particular state of all the components of some (physical) system, as opposed to all other possible states. As long as the amount of matter/energy in the system remains constant (along with the dimensions of the state space of all its possible states), then so does the amount of information, since every state contains as much information as any other.

    This means that in any physically fundamental sense a book of Shakespeare’s sonnets contains no more information than the same book with the ink molecules distributed randomly among the pages, in the same way that a roll of 10 dice that all come up as sixes contains no more information than any other roll of 10 dice.

    If we take a higher level of description (which is convenient, but not fundamental), we could say that certain configurations of matter acquire “meaning” in systems that can make sense of them: a poem has meaning for a (conscious) human, machine code has meaning for a computer, a key has meaning for a lock, etc. This is convenient, but all these putative meanings can, it seems to me, be reduced to the fact that certain states alter the probability of other states in the future, in ways that are governed by very complex rules.

    Obviously there is, strictly speaking, no poem, no person, no movie, and no author in the simulation. If the simulation is on an ordinary silicon semiconductor-based computer, there are just small electrical charges moving around in little blobs of metal and silicon. The apparent meaning of the output is provided by its interpretation in (human) consciousness.

    In a brain, there are also small electrical charges moving around in complex patterns. But the nature of those charges, the patterns they form, and the physical environment they exist in are all vastly different from their counterparts in a digital computer running a simulation, even if that simulation is mapped to human-like behaviours.

    If consciousness is a result of information processing, then what is information, and what does it mean to process it, aside from 1) the physical dynamics of a specific physical system; or 2) the interpretation of physical dynamics in an already-existing consciousness?

  199. Jay Says:

    Scott #90 #96

    Could you come back to your definition of Knightian uncertainty? I gave some thought to “one idea that I’ve toyed with is (…) only systems with a chance of being conscious would be those that (…) external observers can’t even predict probabilistically”. Unfortunately, neither this definition nor anything I could find in your paper and book seems enough for me to understand what you mean by Knightian freedom. Below is a scheme in which fundamental uncertainty exists without relying on quantum properties or amplification of microscopic events.

    Consider a modified version of Newcomb’s paradox, where a computer program can choose sequentially to take box B (which will contain $0 or $1) and then to take box A or not (which always contains $1). A predictor has access to the computer code, and will put $1 in box B iff it predicts the computer program will not take box A.
    At first sight this is exactly Newcomb’s paradox, except that it is clear that the scheme is physically possible. A key difference, however, is that the computer program can choose to take A or not based on a future event: what it will collect from B. In other words, there’s a deterministic strategy that would put the predictor in trouble: “take box B and box A iff box B contains $1, otherwise take only box A”. What could the predictor predict? Nothing at all, not even probabilistically.

    What I like most about this scheme is that it shows that the same computer program can be completely deterministic or completely uncertain, based not on its intrinsic properties, but on the environment it belongs to*. I suspect this is true of consciousness as well. But that’s hardly your position, so my question: how do you define Knightian so that the above program is not Knightian-uncertain, from the predictor’s point of view?

    *It also depends on the observer. If the predictor is deterministic to you, then that’s true of the computer program. If you can only predict the predictor probabilistically, that would still be enough to remove Knightianity from your point of view.

  200. Jay Says:

    “take box B and box A iff box B contains $1, otherwise take only box A” => “…otherwise don’t take box A”
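
    (A minimal hypothetical sketch of the corrected strategy, just to make the scheme concrete – the function names and the single boolean prediction are illustrative assumptions, not part of the original comment:)

    def fill_box_b(predicts_take_A):
        # The predictor puts $1 in box B iff it predicts the program will NOT take box A.
        return 0 if predicts_take_A else 1

    def program_takes_A(box_b_contents):
        # Corrected strategy: take box B, then take box A iff box B contained $1.
        return box_b_contents == 1

    for predicts_take_A in (True, False):
        takes_A = program_takes_A(fill_box_b(predicts_take_A))
        print(predicts_take_A, "-> prediction correct?", takes_A == predicts_take_A)
    # Prints False in both cases: the program's choice depends on the contents of B,
    # so any definite prediction the predictor commits to comes out wrong.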

  201. fred Says:

    Interesting article about the “human brain project” (wasn’t aware of this)

    http://spectrum.ieee.org/tech-talk/computing/hardware/can-the-human-brain-project-succeed?utm_source=feedburner&utm_medium=feed&utm_campaign=Feed%3A+IeeeSpectrum+%28IEEE+Spectrum%29

  202. Ben Standeven Says:

    Adam McKenty #198:

    “A poem is not a property of nature. A poem, strictly speaking, is a property of consciousness :)”

    And likewise, consciousness is not a property of nature; it is a product of consciousness…

    “Obviously there is, strictly speaking, no poem, no person, no movie, and no author in the simulation.”

    And obviously, there is, strictly speaking, no such thing as a poem or movie in the real world either.

    Actually, I don’t agree that it is obvious that there is no person in the simulation. It would only be obvious if you first showed that there is no person in the simulation…

  203. Ben Standeven Says:

    Jay 199:

    I don’t see any uncertainty in the program in your scenario. Obviously, the predictor will always lose, but that’s because the predictor is unable to predict its _own_ behavior. It’s straightforward to predict the program’s behavior as a function of the predictor’s.

  204. Adam M Says:

    Ben (#202),

    I’m not sure I quite understand your comment. Strictly speaking, it’s hard to see how there is even such a thing as a thing, except 1) the distinctions made in consciousness and 2) the most fundamental components of the physical universe, whatever those turn out to be. (Mathematics could be a third category, but that’s a different question.)

    More practically, though, when I said there is no person in the simulation, I meant that there is no physical person; I trust this is obvious, though my comment might not have been very clear.

    If, as you say, consciousness is a “product of consciousness,” then we’ve jumped headlong into a horrible hamster wheel of circularity from which there clearly is no escape!

  205. Ben Standeven Says:

    What I meant is that it doesn’t make sense to distinguish “real poems” from “simulated poems”, because they have the same effect on people who read them. And we could conceivably have a “real poem” that exists only in digital form (and in the minds of people who read it, of course).

  206. Nicolas Kuske Says:

    Dear Scott

    Thank you so much for sharing your thoughts.
    I read your post on why you are not an IITist, and have now read this closing comment, and have to say that I have tears in my eyes and actually laughed a little too hard a couple of times (still in the office 😉).

    I (whatever I mean by that 😉 have few options for my future life except studying consciousness, since first of all I guess there is nothing else to study (literally ;p), and second, whenever I am not surrounded by people talking about other stuff, I naturally start contemplating that topic.
    I admit there is a little contradiction here, but since I started this paragraph with a word which I cannot define (yet, but honestly maybe never without contradiction), the truth of what is to come after that is debatable by any means (;p).

    Therefore I am glad to have read this, since we might have made a little progress regarding the “explanation” of consciousness through your discussion.
    That is because if we find some other necessary criteria for consciousness, we can actually test IIT by testing the √n thing you talked about 🙂
    (Of course, also only if both of you are right in the conclusion you draw from IIT for this particular example.)

    Keep it up 🙂

  207. Nicolas Kuske Says:

    Correction: I admit there might be a little contradiction here.

  208. Ellis D. Cooper Says:

    Tononi is not only barking up the wrong tree, he is not even in the right forest.

    The phenomenology of conscious thought is available directly to everyone, and there are ancient as well as modern disciplines to guide its study.

    It was discussed, at great length for example, by Edmund Husserl. More recently, Izchak Miller explained and extended Husserl’s mental model and microlect of temporal awareness. Tononi’s “phenomenological axioms” are not remotely cognizant of William James’ discussion of the “specious present,” Henri Bergson’s “duration,” or Husserl’s distinction between objective time and subjective time.

    Tononi’s highly crafted framework seems to have no room for the all-important phenomenology of temporal awareness. Nor does he discuss the phenomenology of memory, nor of the focus of attention. Husserl also discusses at great length the phenomenology of the perception of shape in space. But Tononi can barely get past staring at a blank wall.

    Tononi mistakes the low-level, grid-like perceptual apparatus of the brain for the high-level phenomenological apparatus of the mind, which is at the apex of a multi-level tower of virtual machinery based, of course, on physics.

    A small quibble about terminology: in Euclidean Geometry an “axiom” is taken to be what is obvious, such as the reflexivity, symmetry, and transitivity of equality, whereas “postulates” are the assumptions of a theory, such as statements about points and lines. In mathematical logic one defines a “first-order theory, with equality.”

    Tononi’s “phenomenological axioms” mention a large number of undefined terms, such as “consciousness,” “reality,” “structure,” “experience,” “information,” “possibility,” “irreducible,” “component,” “content,” “borders,” “spatial and temporal grain,” “flow,” and “resolution.” His “postulates” are actually a verbal microlect intended, on one hand, to interpret the “axioms,” and on the other, to ground a diagrammatic and mathematical microlect called IIT, which is some kind of probabilistic cellular automaton graced by pretentious terminology such as “concept” and “quale,” all reduced to a single number, Phi.

    As I said, none of this is remotely cognizant of the phenomenology of conscious thought, which is far more accessible than the relatively inaccessible abstraction called “consciousness.”

  209. Mark Gubrud Says:

    I am particularly fascinated by Tononi’s use of language like “consciousness is generated” by high-phi systems, as you might say a radio transmitter generates an RF field.

    What happens to this “consciousness” once it is “generated”? Does it radiate away, hang around and accumulate in the skull, or what? Does it act back on the system that generated it? Does it act back in some way that causes an effect which would not be observed if the system defined by the input-output relations of its components did not have this additional property of generating consciousness?

    In other words, does it induce an exception to those input-output relations, which (we believe) are those expected according to standard physics, i.e., does it induce a deviation from the predictions of standard physics, one that could be observed? If not, what effect does it have? How would we know if it is there or not, other than that IIT says so (if, indeed, even that is clear)? What is it then, an epiphenomenon? If consciousness is an epiphenomenon, unable to affect the world, how does consciousness report its own existence – e.g., all this discussion?

    It seems to me that either IIT predicts a deviation from standard physics, or it predicts nothing new at all. In the latter case, every phenomenon predicted by IIT would have an alternative account not involving IIT. The IIT description might turn out to be clarifying, efficient or useful, but it would not be a new physical mechanism or a new “state of matter” as Tegmark says.

    I have not seen that IIT is actually useful in this way; but we have seen that at best it could constitute a set of necessary conditions for a system that we would call conscious, not a sufficient set, for the reasons you have shown.

  210. Nathaniel Whitestone Says:

    I see that I am commenting after the conversation has moved on, and I have only skimmed many of the comments, so I apologize if I am duplicating another person’s remarks.

    In any case, although I find your article interesting, I also wonder at the toaster comments. I will admit that I am heavily influenced by Gregory Bateson. Bateson’s concept of Mind includes many systems which we might not intuitively consider to be conscious, but it allows us to think of Mind in various orders of magnitude, stratified by their ability to reflexively modify their own models based on “differences that make a difference”.

    With this as the background to my reading of Tononi, it seems to me that a toaster might be capable of Bateson’s Learning 0 — it is a simple cybernetic system very similar to a thermostat. If we think of Bateson’s logical levels of learning as being roughly equivalent to Tononi’s concept of Phi, then we might imagine that subunits within the cerebellum are each conscious to a degree, and that this degree is perhaps equivalent to Bateson’s Learning 2 but not to Bateson’s Learning 3. While Tononi uses Phi in an attempt to deal with a quantity of information integrated, and not with Bateson’s concept of recursive orders of context, I think the two may still be roughly equivalent. In any case, I am not put off by the idea that simple self-regulating systems are in some small way conscious, and that consciousness is a matter of degree. Once we have crossed that Rubicon, we are not left with a random equivalence between toaster, screen, rope, and wall; instead, these items can be sorted according to their capacity for modifying their internal structures in response to perceived “differences that make a difference”.

    In performing that sort, Phi may be a useful construct. We can continue to redefine our understanding of the scale, or even drastically recalibrate it, but many of your critiques of Tononi’s work rest on a denial of the value of distinguishing degrees of consciousness which extend into the world of non-living systems. If you accept that there are degrees of consciousness which vary between different living systems, why draw the line at animation, and deny the barest thread of consciousness to the simplest learning systems?

  211. Robin Says:

    A very good response, Scott, I doffs me lid to you.

    I have always thought this theory was rather strange, in that it apparently implies that there can be functional equivalents of humans that are not conscious.

    So obviously this means that consciousness can have no effect on behaviour (otherwise the zombies would not be functionally equivalent), including all of our language, including language about consciousness.

    So all of our language about consciousness was produced by a mechanism in which consciousness played no part and could therefore not possibly know what consciousness is.

    So how do we have language about consciousness produced by a mechanism that cannot know what consciousness is? A good guess?

    For example the functionally equivalent zombie Tononi could have come up with this theory about consciousness just as well as the non-zombie Tononi, but then how could it be about consciousness?

    It seems that IIT is a theory about consciousness that is either right or about consciousness – but not both.


  212. Yuri Says:

    Any argument that employs the word ‘enough’ when setting the bar for a qualitative distinction is suspect. Why should a quantitative distinction lead to a qualitative one?

  213. Carl Darkis Says:

    Good article, thanks.

  214. Cece Says:

    It’s great to have some reasoned arguments laid out against facile claims that consciousness (and free will) have finally been mapped out and their causality explained… There are other discussions, e.g. by Ned Block, Don Hoffman, and Stuart Hameroff, that bring up things to consider. I note that Ed Witten, when quizzed about consciousness in some YouTube video, commented that the mystery might be puzzling researchers for a while yet to come.

  215. Dave_C Says:

    Scott, could you elaborate on the comment you have in #55? You said:
    “… your corrected idea is still very different from IIT, which is emphatic about only counting the “integrated” information, and not any other forms of information. So for example, even if you had to analyze terabytes of data in order to predict the behavior of a system, if the data was organized (say) in a 1-dimensional array like a Turing machine tape, rather than being “integrated” in the specific way that produces a large Φ-value, then IIT would say that there’s no consciousness there.”

    Would it be correct to say that the Church-Turing thesis implies that a Universal Turing machine can perform any computation and therefore simulate any computer? If so, I assume that includes any neural network?

    If that is a valid statement, then I also assume this is why you are stating that, according to IIT, the Turing machine – despite being capable of performing any calculation that a neural network with a high Φ-value can (i.e., of behaving the same as it) – has a very low Φ-value itself and would thus be considered “not conscious” per IIT.

  216. German Says:

    You said: “But how does Giulio know that the cerebellum isn’t conscious?”… “But even if you put those examples on the same footing, still the take-home message seems clear: you can’t count it as a “success” for IIT if it predicts that the cerebellum in unconscious, while at the same time denying that it’s a “failure” for IIT if it predicts that a square mesh of XOR gates is conscious. ”

    Giulio does in fact believe that the cerebellum has a certain level of consciousness, given by its Φ value.

    He also believes that many consciousnesses can and do coexist within one human brain. To support this theory (not his own), he cites the split-brain experiment results, where individuals develop two consciousnesses, one for each brain hemisphere, after their corpus callosum has been severed.

    It is my understanding that what Giulio proposes is a form of panpsychism (i.e. the doctrine or belief that everything material, however small, has an element of individual consciousness.). However, rather than assigning consciousness to matter itself, he assigns consciousness to any “complex” (system) with the “capacity for integrated information,” the level of which is given by its Φ value. In some of his videos and literature, he even brings up the example of a binary photo-diode (a complex) having a rudimentary form of consciousness (e.g. it is only conscious of the presence or absence of something, which we refer to as light).

    So, Giulio is indeed consistent in his belief that both the cerebellum and the square mesh of XOR gates have a certain level of consciousness given by their corresponding Φ values.

    That being said, I also do find it difficult to understand the language and terms he uses in his postulates.

  217. Consciousness I | if truth exists Says:

    […] a rebuttal to Aaronson, Tononi claims that a simple 2D grid of identical logic gates contains enough integrated information to be […]