Review of “Inadequate Equilibria,” by Eliezer Yudkowsky

Inadequate Equilibria: Where and How Civilizations Get Stuck is a little gem of a book: wise, funny, and best of all useful (and just made available for free on the web).  Eliezer Yudkowsky and I haven’t always agreed about everything, but on the subject of bureaucracies and how they fail, his insights are gold.  This book is one of the finest things he’s written.  It helped me reflect on my own choices in life, and it will help you reflect on yours.

The book is a 120-page meditation on a question that’s obsessed me as much as it’s obsessed Yudkowsky.  Namely: when, if ever, is it rationally justifiable to act as if you know better than our civilization’s “leading experts”?  And if you go that route, then how do you answer the voices—not least, the voices in your own head—that call you arrogant, hubristic, even a potential crackpot?

Yudkowsky gives a nuanced answer.  To summarize, he argues that contrarianism usually won’t work if your goal is to outcompete many other actors in a free market for a scarce resource that they all want too, like money or status or fame.  In those situations, you really should ask yourself why, if your idea is so wonderful, it’s not already being implemented.  On the other hand, contrarianism can make sense when the “authoritative institutions” of a given field have screwed-up incentives that prevent them from adopting sensible policies—when even many of the actual experts might know that you’re right, but something prevents them from acting on their knowledge.  So for example, if a random blogger offers a detailed argument for why the Bank of Japan is pursuing an insane monetary policy, it’s a priori plausible that the random blogger could be right and the Bank of Japan could be wrong (as actually happened in a case Yudkowsky recounts), since even insiders who knew the blogger was right would find it difficult to act on their knowledge.  The same wouldn’t be true if the random blogger said that IBM stock was mispriced or that P≠NP is easy to prove.

The high point of the book is a 50-page dialogue between two humans and an extraterrestrial visitor.  The extraterrestrial is confused about a single point: why are thousands of babies in the United States dying every year, or suffering permanent brain damage, because (this seems actually to be true…) the FDA won’t approve an intravenous baby food with the right mix of fats in it?  Just to answer that one question, the humans end up having to take the alien on a horror tour through what’s broken all across the modern world, from politicians to voters to journalists to granting agencies, explaining Nash equilibrium after Nash equilibrium that leaves everybody worse off but that no one can unilaterally break out of.
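
For readers who want the trap spelled out: a Nash equilibrium is a profile of strategies in which no player can improve his or her own payoff by deviating unilaterally, even if every player would prefer some other profile.  Here’s a minimal sketch of my own (not from the book) that checks the classic prisoner’s-dilemma payoffs profile by profile:

```python
# A toy illustration (mine, not the book's): in the prisoner's dilemma,
# (defect, defect) is the unique Nash equilibrium even though both
# players prefer (cooperate, cooperate), which neither can reach
# unilaterally.

# payoffs[(row_action, col_action)] = (row_payoff, col_payoff)
payoffs = {
    ("cooperate", "cooperate"): (2, 2),
    ("cooperate", "defect"):    (0, 3),
    ("defect",    "cooperate"): (3, 0),
    ("defect",    "defect"):    (1, 1),
}
actions = ["cooperate", "defect"]

def is_nash(row, col):
    """A profile is a Nash equilibrium iff neither player can gain
    by unilaterally switching his or her own action."""
    row_ok = all(payoffs[(r, col)][0] <= payoffs[(row, col)][0] for r in actions)
    col_ok = all(payoffs[(row, c)][1] <= payoffs[(row, col)][1] for c in actions)
    return row_ok and col_ok

for row in actions:
    for col in actions:
        tag = "Nash equilibrium" if is_nash(row, col) else "-"
        print(f"({row}, {col}): payoffs {payoffs[(row, col)]}  {tag}")
# Only (defect, defect) is tagged, despite its (1, 1) being worse
# for both players than (2, 2).
```

Only (defect, defect) survives the check, even though both players would rather live at (cooperate, cooperate): that, in miniature, is the structure of every trap the book catalogs.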

I do have two criticisms of the book, both relatively minor compared to what I loved about it.

First, Yudkowsky is brilliant in explaining how institutions can produce terrible outcomes even when all the individuals in them are smart and well-intentioned—but he doesn’t address the question of whether we even need to invoke those mechanisms for more than a small minority of cases.  In my own experience struggling against bureaucracies that made life hellish for no reason, I’d say that about 2/3 of the time my quest for answers really did terminate at an identifiable “empty skull”: i.e., a single individual who could unilaterally solve the problem at no cost to anyone, but chose not to.  It simply wasn’t the case, I don’t think, that I would’ve been equally obstinate in the bureaucrat’s place, or that any of my friends or colleagues would’ve been.  I simply had to accept that I was now face-to-face with an alien sub-intelligence—i.e., with a mind that fetishized rules made up by not-very-thoughtful humans over demonstrable realities of the external world.

Second, I think the quality of the book noticeably declines in the last third.  Here Yudkowsky recounts conversations in which he tried to give people advice, but he redacts all the object-level details of the conversations, so the reader is left thinking that the advice would be good for some possible values of the missing details and terrible for others.  That makes it hard to take away much of value.

In more detail, Yudkowsky writes:

“If you want to use experiment to show that a certain theory or methodology fails, you need to give advocates of the theory/methodology a chance to say beforehand what they think they predict, so the prediction is on the record and neither side can move the goalposts.”

I only partly agree with this statement (which might be my first substantive disagreement in the book…).

Yes, the advocates should be given a chance to say what they think the theory predicts, but then their answer need not be taken as dispositive.  For if the advocates are taken to have ultimate say over what their theory predicts, then they have almost unlimited room to twist themselves into pretzels to explain why, yes, we all know this particular experiment will probably yield such-and-such result, but contrary to appearances it won’t affect the theory at all.  For science to work, theories need to have a certain autonomy from their creators and advocates—to be “rigid,” as David Deutsch puts it—so that anyone can see what they predict, and the advocates don’t need to be continually consulted about it.  Of course this needs to be balanced, in practice, against the fact that the advocates probably understand how to use the theory better than anyone else; but rigidity is a real consideration as well.

In one conversation, Yudkowsky presents himself as telling startup founders not to bother putting their prototype in front of users, until they have a testable hypothesis that can be confirmed or ruled out by the users’ reactions.  I confess to more sympathy here with the startup founders than with Yudkowsky.  It does seem like an excellent idea to get a product in front of users as early as possible, and to observe their reactions to it: crucially, not just a binary answer (do they like the product or not), confirming or refuting a prediction, but more importantly, reactions that you hadn’t even thought to ask about.  (E.g., that the cool features of your website never even enter into the assessment of it, because people can’t figure out how to create an account, or some such.)

More broadly, I’d stress the value of the exploratory phase in science—the phase where you just play around with your system and see what happens, without necessarily knowing yet what hypothesis you want to test.  Indeed, this phase is often what leads to formulating a testable hypothesis.

But let me step back from these quibbles, to address something more interesting: what can I, personally, take from Inadequate Equilibria?  Is academic theoretical computer science broken/inadequate in the same way a lot of other institutions are?  Well, it seems to me that we have some built-in advantages that keep us from being as broken as we might otherwise be.  For one thing, we’re overflowing with well-defined problems, which anyone, including a total outsider, can get credit for solving.  (Of course, the “outsider” might not retain that status for long.)  For another, we have no Institutional Review Boards and don’t need any expensive equipment, so the cost to enter the field is close to zero.  Still, we could clearly be doing better: why didn’t we invent Bitcoin?  Why didn’t we invent quantum computing?  (We did lay some of the intellectual foundations for both of them, but why did it take people outside TCS to go the distance?)  Do we value mathematical pyrotechnics too highly compared to simple but revolutionary insights?  It’s worth noting that a whole conference, Innovations in Theoretical Computer Science, was explicitly founded to try to address that problem—but while ITCS is a lovely conference that I’ve happily participated in, it doesn’t seem to have succeeded at changing community norms much.  Instead, ITCS itself converged to look a lot like the rest of the field.

Now for a still more pointed question: am I, personally, too conformist or status-conscious?  I think even “conformist” choices I’ve made, like staying in academia, can be defended as the right ones for what I wanted to do with my life, just as Eliezer’s non-conformist choices (e.g., dropping out of high school) can be defended as the right ones for what he wanted to do with his.  On the other hand, my acute awareness of social status, and when I lacked any—in contrast to what Eliezer calls his “status blindness,” something that I see as a tremendous gift—did indeed make my life unnecessarily miserable in all sorts of ways.

Anyway, go read Inadequate Equilibria, then venture into the world and look for some $20 bills lying on the street.  And if you find any, come back and leave a comment on this post explaining where they are, so a conformist herd can follow you.

43 Responses to “Review of “Inadequate Equilibria,” by Eliezer Yudkowsky”

  1. jonas Says:

    > Why didn’t we invent quantum computing?

    I think this one has an easy answer. There’s a large part of mathematics that physicists generally understand better than mathematicians do. This isn’t a modern phenomenon. My hero Christiaan Huygens was a physicist who solved mathematical problems that seem impossible to solve without the apparatus of calculus, before calculus was invented. Pierre-Simon Laplace was a physicist celebrated for mathematical results.

    Anyway, inventing quantum computing required familiarity with one of those areas of mathematics, so it was physicists who invented it.

  2. Ram Says:

    A $20 bill on the street is a tough ask, but I did spot a $5 bill on the street in midtown Manhattan the other day.

    I watched as people passed by without picking it up, and I asked nearby folks if they had dropped it, but all said no.

    I, too, walked away without picking it up.

  3. pku31 Says:

    This seems to suffer from Yudkowsky’s general problem, which is that while he sees others as having potentially messed-up incentives that blind them to the truth, he never applies the same suspicion to himself. He always plays the character of the alien, a complete outsider whose only incentive is knowing the truth, which gives him powers beyond ordinary folk.

    It’s also why I wouldn’t trust advice from him, in general: he’s got the double disadvantage of seeing everyone else as flawed and not seeing himself as flawed. That makes him unreliable.

  4. mjgeddes Says:

    I’ve twice found a $20 note lying on the street. In both cases, the notes were on grass verges by the sidewalk – I just happened to be staring downwards at the grass on those occasions. Also picked up 5s and 10s. Never found anything higher than 20s though.

  5. Scott Says:

    pku31 #3: There are many, many people who see themselves as disinterested truth-seekers, and everybody else as biased and irrational. For each such person who’s interesting enough to read, or get to know, there are typically at least a few cases where (as far as I can tell) they’re actually right about it. And it doesn’t take many such cases to make someone worth listening to, especially if you can predict where they’re likely to be right based on the topic. So, while this character trait hardly causes me to assent immediately to whatever a person says, it doesn’t turn me off either.

  6. James B Says:

    Having read some of Yudkowsky’s stuff back when he was blogging, I’d say he seemed to have something of a blind spot for the (majority of?) situations in which one doesn’t have a very good idea of what the sets of possible strategies or possible outcomes look like. It seems that hasn’t changed much…

  7. Mitchell Porter Says:

    Does this book say anything about simply discovering something new, as opposed to being right when the experts are wrong?

  8. Rational Feed – deluks917 Says:

    […] Review of Inadequate Equilibria by Scott Aaronson – “To summarize, he argues that contrarianism usually won’t work if your goal is to outcompete many other actors in a free market for a scarce resource that they all want too, like money or status or fame. In those situations, you really should ask yourself why, if your idea is so wonderful, it’s not already being implemented. On the other hand, contrarianism can make sense when the “authoritative institutions” of a given field have screwed-up incentives that prevent them from adopting sensible policies—when even many of the actual experts might know that you’re right, but something prevents them from acting on their knowledge.” […]

  9. Justin Says:

    pku31 #3:

    > This seems to suffer from Yudkowsky’s general problem, which is that while he sees others as having potentially messed-up incentives that blind them to the truth, he never applies the same suspicion to himself.

    Do you realize that you’re replying to a book whose sole purpose is to discuss when and how much you should be suspicious of your and other people’s reasoning? That’s a pretty damning critique: “author on book about how much to trust yourself doesn’t know how much to trust himself”.

  10. Scott Says:

    Mitchell #7:

      Does this book say anything about simply discovering something new, as opposed to being right when the experts are wrong?

    Yes. It talks a lot about discovering a new medical treatment—say, using a huge amount of artificial light to treat Seasonal Affective Disorder, way more than the published studies ever tried—and then confronting the question, “if this really works, then why wouldn’t the professionals have discovered it first?” It’s yet another instance of discovering the $20 bill lying on the ground.

  11. William Hird Says:

    Not only have I found a $20 bill lying on the ground at noontime in broad daylight in South Providence, RI (the first time a hooker ever lost money to me), I once found $200 lying on the floor of a bar at midnight in Narragansett, RI (the first time I ever got the best of a drunken fisherman?). But seriously, I think TRUTH will always take a back seat in a world ruled by money and special interests (the “tyranny of wealth”).

  12. jonathan Says:

    @Mitchell Porter #7:

    Yes, Eliezer talks a bit about that, but as he correctly points out, most of the work of a rationalist (or, if you prefer, truth-seeker) goes into deciding which existing theories to believe, not trying to create your own novel theories from scratch.

    It’s true that if you’re a professional researcher you might be very concerned with the latter. But even in that case, the great majority of your beliefs will come from outside sources. And the sort of inadequacy analysis Eliezer suggests can point you towards holes in existing theories, which is where you want to look for innovations that might have been missed.

    To use the prevailing metaphor: some of the book concerns where to look if you want to find $20 bills lying around; some concerns how to determine whether that thing on the ground really is a $20 bill; and some concerns when to believe someone claiming there’s a $20 bill over there.

  13. Milos Says:

    I’m quite interested in the specific case of a bad equilibrium that large parts of (mostly non-STEM) academia seem to be stuck in: the postmodernist dogma. What seems to be enforcing the equilibrium is the threat of severe public bullying of anyone who dares to diverge from it, and there were maybe a dozen such cases in just the last year. In this case, it doesn’t even seem to help if a high-ranking insider speaks clearly against it — Steven Pinker did for years and not much has changed, except perhaps for the worse. Clearly, if this equilibrium can be overcome, it will need to be done by some outsiders, and they will need to solve the problem of appearing like arrogant crackpots, one way or another.

  14. Tim McCormack Says:

    I’ve twice found $20 bills next to the North Station MBTA fare machines in Boston, having passed through there maybe 20 times. There are definitely some places worth keeping an eye out…

  15. Michael Arc Says:

    My wife Michele once found almost exactly $20 in Guatemalan Quetzal (a lot of money, locally), not on a sidewalk, but on the floor of the economics department of the Libertarian Universidad Francisco Marroquín in Guatemala!

  16. Michael Arc Says:

    I think that the critique about ‘empty skulls’ is important, and I’m trying to write up a more compelling version of it.

  17. Scott Says:

    Michael #15: That story takes the cake! (A cake that was just lying on the ground?)

  18. Sniffnoy Says:

    The bit about “empty skulls” reminds me of this old piece… 🙂

  19. Sniffnoy Says:

    (FWIW, the “Laws of Human Stupidity” essay is actually older than the 1987 date listed on that page; it goes back to 1976, I think.)

  20. Eric Habegger Says:

    I haven’t read Yudkowsky’s short book yet (don’t know if I will), but if your summary is mostly correct and comprehensive, then there’s a big ingredient that might be left out. Governing leaders often get to where they are because they are extroverts. There is a whole nebula of characteristics associated with being an extrovert, one of the main ones being a relative insensitivity to criticism.

    A lot of us introverts know that many of these “leaders” in their field just need more external stimulus from other people than we introverts do. There is a lot of truth in the idea that any attention is good attention to real extroverts. How could someone like that NOT have an advantage in the school of ideas, where a neutral audience observing them seems more like innocent sheep coming to the slaughter?

    I guess you can tell I have some real hostility to many notorious extrovert thought leaders.

  21. Scott Says:

    Eric #20: Even supposing that idea is true, I confess I don’t understand its relationship to the rest of Yudkowsky’s book (which is not to say there isn’t one).

  22. Eric Habegger Says:

    Scott #21: I suppose I should have defined my terms better. Yudkowsky, to me, is calling humility the essential ingredient. I would rather define it as introversion. There are many cranks in the scientific world who deviate from the standard path but are far from having humility. In a sense they immerse themselves in their own world and are blind to inconsistencies with the external world’s phenomenology.

    But the arrogance that these cranks possess is not that different from that of thought leaders who continually pump out excuses for why the mainstream theories are not conforming to observed reality and experiment. Think about the discovery of dark matter and dark energy in the past 20 years, and some of the outlandish explanations for them that the experts have offered in all the different publications.

    What I’m saying is that often introverted people know when their ideas make sense but they find that the scientific culture just doesn’t want to listen to the quiet person in the corner. That’s just the way it is nowadays.

  23. Arko Bose Says:

    I suspect that one reason he spends time explaining bad policy choices made by populations from a Nash equilibrium angle (equilibria that are now suspected to be very hard to reach efficiently; see https://www.quantamagazine.org/in-game-theory-no-clear-path-to-equilibrium-20170718/) is that the more frequent cause you mention holds little academic interest.

    After all, running into a head that seems to be a hollow chamber protected by concrete is a dead end. There is almost nothing that can be done to make such a person see reason, whereas why a particular Nash equilibrium cannot be reached is a question that rouses our interest.

  24. David Pearce Says:

    Who has the graver defect of rationality: the FDA, which hasn’t approved an intravenous baby food with the right mix of fats, or someone who doesn’t believe that pigs or human babies are conscious?
    (cf. https://rationalconspiracy.com/2015/12/16/a-debate-on-animal-consciousness/)
    Some false reasoning can be ethically catastrophic.

    I choose this example not to make fun of Eliezer – I hold some crazy beliefs myself – but rather to make a perhaps familiar point. Virtually all of us overestimate our own rationality, but especially members of the self-avowed rationalist community.

  25. Tim May Says:

    I don’t buy the “twenty dollar bill” argument at all.

    I’m now 65 and have had several “nobody expected to see it” successes over the years. In fact, it’s the main reason I retired in 1986 when I was 34. (Didn’t stop doing things, and several other successes followed.)

    It’s hard to separate the trivial stories about finding a (literal) twenty-dollar bill, or mere bragging, from the more important stories of seeing something other people (indeed, the consensus view) did not see.

    But I think seeing what other people don’t see has long been the road to wealth.

    –Tim May

  26. Douglas Knight Says:

    I found it strange in the first paragraph that you describe the book as being about “bureaucracies.” It seems to me that it is about very different kinds of systems: more about systems that know than about systems that do, like bureaucracies. And then when you talk about encountering a single “empty skull,” well, that’s exactly the kind of thing he’s not talking about, because he’s not talking about bureaucracies. He’s addressing people who say “How dare you think you can have ideas outside of the system!” not “You deserve everything the system does to you.”

    Of course, many bureaucracies do show up. NSF is a bureaucracy, but he is mainly interested in the zero-sum game of how it distributes grants, not in things it could do at no cost to anyone. And while lots of people at NSF make idiosyncratic decisions, individuals are not usually the bottleneck; indeed, NSF is not the only source of grants.

    I don’t know what the Bank of Japan was thinking, but it is merely the most extreme example of tendencies of central banks, not some idiosyncratic behavior that could be explained by an individual.

    Probably the single biggest problem with the IV baby food was the idiosyncratic decision by the company not to pursue US approval. And maybe there is an individual at the FDA blocking some fast track. Even if the FDA did approve it, there would still be the problem of getting the word out. (Pharma ads are really useful!) Most such problems are really informational. MGH makes the food, but why not hospitals in other states, especially big states like CA and TX? The FDA is the most important relevant institution, but it isn’t the bottleneck stopping them. (Could the FDA decide to make an exception to bureaucratic rules? I don’t think so. A lot is forced on it by Congress, which would respond badly to being defied.)

  27. Jacob Lagerros Says:

    Scott, thanks for the great post on an incredible book.

    I’m curious what you think about the relation between this and (bear with me here!) P≠NP. In chapter 4 section iii) Eliezer writes:

    “This brings me to the single most obvious notion that correct contrarians grasp, and that people who have vastly overestimated their own competence don’t realize: It takes far less work to identify the correct expert in a pre-existing dispute between experts, than to make an original contribution to any field that is remotely healthy.”

    This seems like the core of the book. It’s really not about when you can *outperform* experts, but rather about when you can *pick* the right experts to believe, even if they’re not mainstream. (In some special cases you might pick yourself, and then try to outperform the other experts, but that’s rare.) Would P=NP imply Eliezer is wrong in claiming the latter is significantly easier?

    PS. Hopefully I’m not misappropriating a technical theorem already over-exploited by crackpots. DS.

  28. Scott Says:

    David Pearce #24: I started reading the discussion you linked to. Didn’t finish it yet, but the parts I read were pretty interesting, with plenty to challenge people of any stance about animal consciousness. And while I didn’t agree with everything Eliezer said, his views struck me as no more abhorrent than the views of any other person who eats meat. But then, maybe I’m unusual: eating meat without even trying to justify it doesn’t strike me as somehow morally superior to eating meat and defending it with a philosophical stance about the distinction between pain and suffering. But it seems like for many other people it does.

  29. Scott Says:

    Jacob #27: Yes, the point Eliezer is making there (which I agree with) could be described as “broadly P≠NP-like.”

    If P=NP in the technical sense, then we could say: writing a computer program to efficiently generate correct contrarian ideas would be no harder than writing a program to efficiently recognize those ideas when shown them.

    (But someone who thought that even recognizing good contrarian ideas was beyond the ability of computers, could deny that the formal P vs NP problem had any relevance whatsoever to Eliezer’s point.)
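
    To spell out the textbook observation behind that claim (a sketch of mine, not anything in the book): if SAT had a polynomial-time decision procedure, you could call it repeatedly to construct a satisfying assignment, fixing one variable at a time, so efficient recognition would yield efficient generation:

    ```python
    # A sketch (mine, not the book's) of the standard search-to-decision
    # reduction: given any decider for "is this CNF formula satisfiable,
    # consistent with these fixed variables?", we can build a satisfying
    # assignment with at most two oracle calls per variable.

    from itertools import product  # used only by the brute-force toy oracle

    def toy_sat_oracle(clauses, fixed):
        """Stand-in for a hypothetical polynomial-time SAT decider.
        clauses: list of clauses; each clause is a list of signed variable
        indices (3 means x3 true, -3 means x3 false).
        fixed: dict mapping some variables to booleans.
        Returns True iff some extension of `fixed` satisfies every clause.
        (This version is exponential; under P = NP it would be replaced
        by a polynomial-time algorithm, and the search below would too.)"""
        vars_ = {abs(lit) for clause in clauses for lit in clause}
        free = sorted(vars_ - fixed.keys())
        for bits in product([False, True], repeat=len(free)):
            assignment = {**fixed, **dict(zip(free, bits))}
            if all(any(assignment[abs(lit)] == (lit > 0) for lit in clause)
                   for clause in clauses):
                return True
        return False

    def find_assignment(clauses):
        """Turn the decision oracle into a generator of witnesses:
        fix each variable to whichever value keeps the formula satisfiable."""
        fixed = {}
        for v in sorted({abs(lit) for clause in clauses for lit in clause}):
            if toy_sat_oracle(clauses, {**fixed, v: True}):
                fixed[v] = True
            elif toy_sat_oracle(clauses, {**fixed, v: False}):
                fixed[v] = False
            else:
                return None  # unsatisfiable
        return fixed

    # (x1 or x2) and (not x1 or x3) and (not x2 or not x3)
    print(find_assignment([[1, 2], [-1, 3], [-2, -3]]))
    # -> {1: True, 2: False, 3: True}
    ```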

    In any case, the whole discussion is a bit academic, since of course P≠NP in reality. 😉

  30. wolfgang Says:

    >> is it rationally justifiable to act as if you know better than our civilization’s “leading experts”?

    I think the main problem is that in most areas the experts do not all agree, if the problem is new and complicated enough.
    Let’s take the example of global warming, with experts like Richard Lindzen and Roy Spencer disagreeing with the majority.
    Obviously, this makes real-world examples much more difficult decision problems than what Eliezer and you discussed …

  31. Scott Says:

    wolfgang #30: Actually, in that example the experts are overwhelmingly on one side of the question—Lindzen represents like 3% of the field. So regardless of what you personally believe about the answer, I think Eliezer’s question applies with full force to that case.

  32. wolfgang Says:

    @Scott

    >> Lindzen represents like 3% of the field

    How did you arrive at this figure?

    If you really count experts, you could perhaps go by refereed journals (assuming you already know which journals to trust) and academic affiliation. But in order to judge a scientific article (to know whether to count it as pro or con) you usually need to understand quite a bit about the field already.

    I assume that this is not how you did it and I further assume that most people don’t have the time to do this.

    Instead most people trust some news sources (from wikipedia to the NYT and from Sci.Am. to Lubos’ blog) to do this work for them, and this is where the real problem with the debate about global warming begins for most non-experts imho.

    In other words, I still think that establishing the opinion of “the leading experts” is actually the main part of this decision problem, and it does not get easier if it is true that a large number of scientific articles cannot be reproduced in many fields …

  33. xyz Says:

    In the particular case of the IV formula, I think the various systemic incentives and barriers are something of a red herring. It’s not news that blindly following an institution’s dictates can sometimes lead to thoroughly pathological outcomes, which is why most real-life institutions include oversight positions with considerable discretionary power to override normal procedures when such cases arise.

    The real question in this case is why the people whose job descriptions include reining in the pathological tendencies of institutions seem disinclined to do so in a case like this. In normal circumstances I’d have expected either the FDA head to temporarily waive the normal approval procedure, or a court to grant an emergency injunction against the interstate export ban. Why do such things seem to have become unimaginable? It seems to me Simplicio may be far closer to the mark than the Cynical Economist on that score.

  34. Another Nymous Says:

    Thanks for posting about this! It was a very good read with many interesting points, although it’s not always clear to me what his precise point is. It could really use a summary at the end, similar to “37 Ways That Suboptimal Use Of Categories”.

    For example, I’m not sure what he was thinking of (explicitly) at the beginning of Chapter 4.III, or what “You can’t lift a ten-pound weight with one pound of force!” means there. Or what he means by “But then what do you say to the Republican?” in Chapter 6.II. It feels like I’m missing some inside jokes here.

    The prose is very interesting and makes the first reading much more enjoyable, but some concision is sacrificed (and some more direct/explicit statements are missing, even if a clearer statement would be about a nuanced view).

    For TCS, what about short/simple proofs? If something is open for a long (enough) time and then receives a short proof, it is well received. Otherwise, the reviewer might dismiss it thinking the original problem was too easy. (I think I’ve mostly encountered this as a reviewer judging the value of a result. But it could also act strongly and implicitly when I select which questions to work on.)

    Although you could say number of years unresolved is some statistical information about the original problem’s difficulty, I think we are still pretty heavily biased towards answering questions that are hard for other researchers (rather than things that tell us more about the mathematical world).

    For your question of why didn’t TCS invent bitcoin and quantum computers, I feel like this is some kind of exploration vs exploitation problem. Using the theory developed means taking time away from developing the theory further.

    But coming back to my previous point, I think what’s needed for new inventions usually isn’t “hard” theory problems. Simpler/easier question-answer pairs are more easily applicable. But these pairs still need a theoretician to explore, understand, digest and explain. (Of course, the harder problems do help map out the broader region of mathematical truth within which individual pieces of knowledge are picked for inventions.)

    There’s the Distill project in AI that goes a bit in the direction of promoting simplification (which I’d like to see more of). (Although I’m not quite sure whether they mean that only unnecessarily complicated math should be removed, or that *all* complicated math should be removed. And in the former case, how do we define unnecessary?)

  35. emiliobumachar Says:

    “why didn’t we invent Bitcoin?”

    Maybe a Theoretical Computer Scientist did. Bitcoin inventor Satoshi Nakamoto’s true identity is not public information. It could be YOU, Scott, as far as I’m concerned. As far as you’re concerned, it could be any colleague you consider an insider.

  36. ad Says:

    In my own experience struggling against bureaucracies that made life hellish for no reason, I’d say that about 2/3 of the time my quest for answers really did terminate at an identifiable “empty skull”: i.e., a single individual who could unilaterally solve the problem at no cost to anyone, but chose not to.

    Would this individual have benefited in any way from solving your problem? If not, the easiest thing for him to do would be nothing. Doing anything other than nothing would then be a cost to him.

  37. H. Says:

    > Maybe a Theoretical Computer Scientist did.

    That merely postpones the question. Why do it behind a fake name, outside of the usual venues of academic computer science?

    (Well, one reason is that it’s an idea that could have made the creator very rich and well known. That’s a good reason not to tie it to your identity: doing so would invite people to try to steal your keys to some of those early blocks and to bother you for cash, and it would prevent you from moving on from the project.)

  38. mjgeddes Says:

    Those in the know think the mystery of Nakamoto has been solved. It was Hal Finney, a transhumanist who died in 2014.

    https://en.wikipedia.org/wiki/Hal_Finney_(computer_scientist)

    It turns out that a guy named Nakamoto lived close by Finney’s address, and it seems likely that Finney made up a fictional persona based on his neighbour. Also, ‘Nakamoto’ disappeared around the time that Finney was incapacitated by ALS.

  39. More Dakka | Don't Worry About the Vase Says:

    […] excellent. I recommend reading it, if you haven’t done so. Three recent reviews are Scott Aaronson’s, Robin Hanson’s (which inspired You Have the Right to Think and a great discussion in its […]

  40. Dave Says:

    > I’d say that about 2/3 of the time my quest for answers really did terminate at an identifiable “empty skull”: i.e., a single individual who could unilaterally solve the problem at no cost to anyone

    Couldn’t you apply the same equilibrium-based “individual agents may be smart but the collective decision is stupid” logic on the nano level of the “neuroeconomic” decision-making process in the empty skull’s head?

    They may have a lot of competing thoughts, goals, and priorities; individually, some of those thoughts may agree with your point, but the overall noisy & risk-averse output from this “empty skull” may be apathy and protection of the status quo, which is what you end up observing.

  41. TheMoneyIllusion » A new NGDP prediction market Says:

    […] a review by Robin Hanson, another by Scott Alexander, and another by Scott Aaronson.  If I were made king of the world, I’d probably just turn it over to those four bloggers.  […]

  42. MIRI’s December 2017 Newsletter and Annual Fundraiser – Errau Geeks Says:

    […] out! See other recent discussion of modest epistemology and inadequacy analysis by Scott Aaronson, Robin Hanson, Abram Demski, Gregory Lewis, and Scott […]

  43. Nemesis Among the Machines | Adamas Nemesis Says:

    […] However, I think he’s off the scent by focusing on what the intellectually gifted should do with themselves. For the problem he describes is not automation, not legible signals, not guidelines as such or even bureaucracies. No, the problem is that the bureaucratic mentality is that of an automaton, unwilling to exercise any independent human judgment that we’d expect from a reasonable person. As Scott Aaronson once wrote: […]