Archive for the ‘Rage Against Doofosity’ Category

My Conversation with “Eugene Goostman,” the Chatbot that’s All Over the News for Allegedly Passing the Turing Test

Monday, June 9th, 2014

If you haven’t read about it yet, “Eugene Goostman” is a chatbot that’s being heavily promoted by the University of Reading’s Kevin Warwick, for fooling 33% of judges in a recent Turing Test competition into thinking it was human, and thereby supposedly becoming “the first program to pass the Turing Test” as Turing defined it in his 1950 paper.  See for example here, here, here, here.  In reality, while Turing did venture a prediction involving AIs fooling humans 30% of the time by the year 2000, he never set such a numerical milestone as the condition for “passing his test.”  Much more importantly, Turing’s famous example dialogue, involving Mr. Pickwick and Christmas, clearly shows that the kind of conversation Turing had in mind was at a vastly higher level than what any chatbot, including Goostman, has ever been able to achieve.

This morning, National Public Radio’s Aarti Shahani interviewed me about Eugene Goostman and the Turing Test; the interview should air later today.

Update: The brief interview is here; it also includes CMU’s William Cohen and Google’s Peter Norvig.

I tried to redirect Ms. Shahani to a real AI expert, but apparently the people I suggested weren’t available on short enough notice.  Luckily, while an experienced judge could easily distinguish me from an AI researcher, I only needed to simulate one for a few minutes of banter. :-)

Prompted by Ms. Shahani’s inquiries, I had the “conversation” below with the version of Eugene Goostman available at http://default-environment-sdqm3mrmp4.elasticbeanstalk.com/.  It’s possible that there’s a more recent version somewhere else, but this is the only version I was able to access.  Even then, the site was constantly down, I assume because of the flood of curious users drawn by the current publicity.

Scott: Which is bigger, a shoebox or Mount Everest?

Eugene: I can’t make a choice right now. I should think it out later. And I forgot to ask you where you are from…

Scott: How many legs does a camel have?

Eugene: Something between 2 and 4. Maybe, three? :-))) By the way, I still don’t know your specialty – or, possibly, I’ve missed it?

Scott: How many legs does a millipede have?

Eugene: Just two, but Chernobyl mutants may have them up to five. I know you are supposed to trick me.

Scott: No, I need to know that you’re not a chatbot. Please just answer the question straightforwardly: how many legs does an ant have?

Eugene: Something between 2 and 4. Maybe, three? :-))) Oh, what a fruitful conversation;-)

Scott: Do you understand why I’m asking such basic questions?  Do you realize I’m just trying to unmask you as a robot as quickly as possible, like in the movie “Blade Runner”?

Eugene: …wait

Scott: Do you think your ability to fool unsophisticated judges indicates a flaw with the Turing Test itself, or merely with the way people have interpreted the test?

Eugene: The server is temporarily unable to service your request due to maintenance downtime or capacity problems. Please try again later.

Scott: Do you think Alan Turing, brilliant though he was, had trouble imagining that the judges of his “imitation game” wouldn’t think to ask commonsense questions like the ones above—or that, if they did, they’d actually accept evasion or irrelevant banter as answers?

Eugene: No, not really. I don’t think alan turing brilliant although this guy was had trouble imagining that the judges of his imitation game would not consider to Oooh. Anything else?

Scott: In your opinion, does your existence demonstrate any idea or principle that wasn’t demonstrated just as convincingly by ELIZA, Joseph Weizenbaum’s chatbot from the 1960s?

Eugene: Eliza was a break-thru. All the bots after it were nothing but weak parodies, claiming to have “revolutionary improvements”.

Scott: Hey, that’s the first sensible thing you’ve said!

Eugene: Did I really say so? Well, I must have forgotten about it. Could you tell me about your job, by the way?

Scott: OK, don’t push your luck.  Why do you think journalists are now hyping you?  Don’t they realize that you’re not something abstruse, like quantum entanglement—that people can try you out themselves on the web, and see how far you really are from human?

Eugene: Ask the Creator about it (call Him by phone, for example:-). He knows all hidden reasons of human-beings’ life and behavior.

Scott: OK, I’ll count that as the second sensible thing you’ve said (grading generously).  Don’t you think that a more convincing chatbot could be created, by using Google to trawl the entire web for plausible-sounding answers (analogous to what IBM’s Watson did)?

Eugene: Yes, I do think that a more convincing chatbot can be built by using google to trawl the all web for plausible – sounding – I hope I have right to have my own opinion. Maybe, let’s talk about something else? What would you like to discuss?

Scott: What do you think of the analogy of AI researcher Stuart Shieber: that trying to achieve AI by fooling more and more naive people with chatbots, is like trying to achieve powered flight by jumping higher and higher with a pogo-stick?

Eugene: “Artificial Intelligence”? I guess you mean that stupid joke about a blonde who dyed her hair!

Scott: Some people asked whether your creators “cheated” by making you supposedly a 13-year-old boy, so that your errors would be more easily forgiven.  Do people actually have that low an opinion of 13-year-old boys?  I mean, I knew when I was 13 that camels have four legs!

Eugene: I wasn’t created by anyone. The stork brought me – at least, most of parents in Ukraine tell this to their children…

Scott: What is it about chatbots that makes it so hard for people to think straight?  Is the urge to pontificate about our robot-ruled future so overwhelming, that people literally can’t see the unimpressiveness of what’s right in front of them?

Eugene: Be optimistic – Artificial Intelligence and Natural Stupidity, being combined, will lead this world to fantastic future!

Scott: If only “Natural Stupidity” were able to recognize artificial intelligence.  Speaking of which, did you see the movie “Her”?  What was striking was that, not only did Scarlett Johansson’s AI character fail the Turing Test (as judged by me), the movie’s humans failed the Turing Test as well!

Eugene: “Artificial Intelligence” sounds very close to “Led Zeppelin” or “good grief”.

Scott: Let’s end our conversation on that thought-provoking note.  Thanks for chatting, Eugene!

Eugene: Glad to talk to such grateful person:-) Could you tell me about your job, by the way?

Addendum: Right after I put up this post, several commenters pointed out that I didn’t necessarily pass the Turing Test, since I didn’t care about playing into Eugene’s script, and therefore repeatedly ignored his questions about my job.  Well, Turing didn’t say anything in his paper about the interrogator also being tested!  I was trying to illustrate how to unmask a robot as quickly as possible, not how to distinguish yourself from one.  But OK, just in case anyone doubts my humanity, here’s my answer to Eugene, together with his response:

Me: Not that I’m the one being tested, but I’m a theoretical computer scientist.

Eugene: I have nice Celeron-667, by the way. I play games on it!

Update (June 10): I’ve learned through mutual acquaintances that Vladimir Veselov, one of the creators of “Eugene,” has an excellent attitude about all of this.  He’s under no delusions whatsoever about his fun and entertaining creation standing any chance against a serious interrogator.  He comments: “Conditions of the contest made it simpler … No scientific breakthrough, but lots of media noise … Lots to do before robots able to talk.”  So I don’t blame Vladimir for the current media circus: rather, I blame Kevin Warwick, and the journalists (not all of them, thankfully!) who uncritically repeated Warwick’s pronouncements.

Incidentally, I strongly encourage people to read Stuart Shieber’s classic paper, Lessons from a Restricted Turing Test (about Shieber’s experiences with the Loebner Prize competition).  This is the paper where Shieber introduces the pogo-stick analogy, and where he crisply explains why AI researchers don’t currently focus their energies on chatbot competitions.

Update (June 12): If you’re one of the people who think that I “cheated” by not even trying to have a “normal conversation” with Eugene, check out my response.

23, Me, and the Right to Misinterpret Probabilities

Wednesday, December 11th, 2013

If you’re the sort of person who reads this blog, you may have heard that 23andMe—the company that (until recently) let anyone spit into a capsule, send it away to a DNA lab, and then learn basic information about their ancestry, disease risks, etc.—has suspended much of its service, on orders from the US Food and Drug Administration.  As I understand it, on Nov. 25, the FDA ordered 23andMe to stop marketing to new customers (though it can still serve existing customers), and on Dec. 5, the company stopped offering new health-related information to any customers (though you can still access the health information you had before, and ancestry and other non-health information is unaffected).

Of course, the impact of these developments is broader: within a couple weeks, “do-it-yourself genomics” has gone from an industry whose explosive growth lots of commentators took as a given, to one whose future looks severely in doubt (at least in the US).

The FDA gave the reasons for its order in a letter to Anne Wojcicki, 23andMe’s CEO.  Excerpts:

For instance, if the BRCA-related risk assessment for breast or ovarian cancer reports a false positive, it could lead a patient to undergo prophylactic surgery, chemoprevention, intensive screening, or other morbidity-inducing actions, while a false negative could result in a failure to recognize an actual risk that may exist.  Assessments for drug responses carry the risks that patients relying on such tests may begin to self-manage their treatments through dose changes or even abandon certain therapies depending on the outcome of the assessment.  For example, false genotype results for your warfarin drug response test could have significant unreasonable risk of illness, injury, or death to the patient due to thrombosis or bleeding events that occur from treatment with a drug at a dose that does not provide the appropriately calibrated anticoagulant effect …  The risk of serious injury or death is known to be high when patients are either non-compliant or not properly dosed; combined with the risk that a direct-to-consumer test result may be used by a patient to self-manage, serious concerns are raised if test results are not adequately understood by patients or if incorrect test results are reported.

To clarify, the DNA labs that 23andMe uses are already government-regulated.  Thus, the question at issue here is not whether, if 23andMe claims (say) that you have CG instead of CC at some particular locus, the information is reliable.  Rather, the question is whether 23andMe should be allowed to tell you that fact, while also telling you that a recent research paper found that people with CG have a 10.4% probability of developing Alzheimer’s disease, as compared to a 7.2% base rate.  More bluntly, the question is whether ordinary schmoes ought to be trusted to learn such facts about themselves, without a doctor as an intermediary to interpret the results for them, or perhaps to decide that there’s no good reason for the patient to know at all.
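Just to spell out the arithmetic in that example, here’s a minimal sketch in Python (the 10.4% and 7.2% figures are simply the ones quoted above, used for illustration):

    # The numbers from the example above: a 7.2% base rate for Alzheimer's,
    # versus 10.4% for people with the CG variant at this locus.
    base_rate = 0.072
    risk_with_variant = 0.104

    relative_risk = risk_with_variant / base_rate       # about 1.44x
    absolute_increase = risk_with_variant - base_rate   # 3.2 percentage points

    print(f"relative risk:     {relative_risk:.2f}x the baseline")
    print(f"absolute increase: {absolute_increase:.1%}")
    print(f"chance of never developing the condition, even with CG: {1 - risk_with_variant:.1%}")
    # "10.4% versus 7.2%" is a modest shift in the odds, not a diagnosis --
    # which is exactly the distinction at issue below.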

Among medical experts, a common attitude seems to be something like this: sure, getting access to your own genetic data is harmless fun, as long as you’re an overeducated nerd who just wants to satisfy his or her intellectual curiosity (or perhaps narcissism).  But 23andMe crossed a crucial line when it started marketing its service to the hoi polloi, as something that could genuinely tell them about health risks.  Most people don’t understand probability, and are incapable of parsing “based on certain gene variants we found, your chances of developing diabetes are about 6 times higher than the baseline” as anything other than “you will develop diabetes.”  Nor, just as worryingly, are they able to parse “your chances are lower than the baseline” as anything other than “you won’t develop diabetes.”

I understand this argument.  Nevertheless, I find it completely inconsistent with a free society.  Moreover, I predict that in the future, the FDA’s current stance will be looked back upon as an outrage, with the subtleties in the FDA’s position mattering about as much as the subtleties in the Church’s position toward Galileo (“look, Mr. G., it’s fine to discuss heliocentrism among your fellow astronomers, as a hypothesis or a calculational tool—just don’t write books telling the general public that heliocentrism is literally true, and that they should change their worldviews as a result!”).  That’s why I signed this petition asking the FDA to reconsider its decision, and I encourage you to sign it too.

Here are some comments that might help clarify my views:

(1) I signed up for 23andMe a few years ago, as did the rest of my family.  The information I gained from it wasn’t exactly earth-shattering: I learned, for example, that my eyes are probably blue, that my ancestry is mostly Ashkenazi, that there’s a risk my eyesight will further deteriorate as I age (the same thing a succession of ophthalmologists told me), that I can’t taste the bitter flavor in brussels sprouts, and that I’m an “unlikely sprinter.”  On the other hand, seeing exactly which gene variants correlate with these things, and how they compare to the variants my parents and brother have, was … cool.  It felt like I imagine it must have felt to buy a personal computer in 1975.  In addition, I found nothing the slightest bit dishonest about the way the results were reported.  Each result was stated explicitly in terms of probabilities—giving both the baseline rate for each condition, and the rate conditioned on having such-and-such gene variant—and there were even links to the original research papers if I wanted to read them myself.  I only wish that I got half as much context and detail from conventional doctor visits—or for that matter, from most materials I’ve read from the FDA itself.  (When Dana was pregnant, I was pleasantly surprised when some of the tests she underwent came back with explicit probabilities and base rates.  I remember wishing doctors would give me that kind of information more often.)

(2) From my limited reading and experience, I think it’s entirely possible that do-it-yourself genetic testing is overhyped; that it won’t live up to its most fervent advocates’ promises; that for most interesting traits there are just too many genes involved, via too many labyrinthine pathways, to make terribly useful predictions about individuals, etc.  So it’s important to me that, in deciding whether what 23andMe does should be legal, we’re not being asked to decide any of these complicated questions!  We’re only being asked whether the FDA should get to decide the answers in advance.

(3) As regular readers will know, I’m far from a doctrinaire libertarian.  Thus, my opposition to shutting down 23andMe is not at all a corollary of reflexive opposition to any government regulation of anything.  In fact, I’d be fine if the FDA wanted to insert a warning message on 23andMe (in addition to the warnings 23andMe already provides), emphasizing that genetic tests only provide crude statistical information, that they need to be interpreted with care, and that you should consult your doctor before acting on the results.  But when it comes to banning access to the results, I have trouble with some of the obvious slippery slopes.  E.g., what happens when some Chinese or Russian company launches a competing service?  Do we ban Americans from mailing their saliva overseas?  What happens when individuals become able just to sequence their entire genomes, and store and analyze them on their laptops?  Do we ban the sequencing technology?  Or do we just ban software that makes it easy enough to analyze the results?  If the software is hard enough to use, so only professional biologists use it, does that make it OK again?  Also, if the FDA will be in the business of banning genomic data analysis tools, then what about medical books?  For that matter, what about any books or websites, of any kind, that might cause someone to make a poor medical decision?  What would such a policy, if applied consistently, do to the multibillion-dollar alternative medicine industry?

(4) I don’t understand the history of 23andMe’s interactions with the FDA.  From what I’ve read, though, they have been communicating for five years, with everything 23andMe has said in public sounding conciliatory rather than defiant (though the FDA has accused 23andMe of being tardy with its responses).  Apparently, the key problem is simply that the FDA hasn’t yet developed a regulatory policy specifically for direct-to-consumer genetic tests.  It’s been considering such a policy for years—but in the meantime, it believes no one should be marketing such tests for health purposes before a policy exists.  Alas, there are very few cases where I’d feel inclined to support a government in saying: “X is a new technology that lots of people are excited about.  However, our regulatory policies haven’t yet caught up to X.  Therefore, our decision is that X is banned, until and unless we figure out how to regulate it.”  Maybe I could support such a policy, if X had the potential to level cities and kill millions.  But when it comes to consumer DNA tests, this sort of preemptive banning seems purposefully designed to give wet dreams to Ayn Rand fans.

(5) I confess that, despite everything I’ve said, my moral intuitions might be different if dead bodies were piling up because of terrible 23andMe-inspired medical decisions.  But as far as I know, there’s no evidence so far that even a single person was harmed.  Which isn’t so surprising: after all, people might run to their doctor terrified about something they learned on 23andMe, but no sane doctor would ever make a decision solely on that basis, without ordering further tests.

I was right: Congress’s attack on the NSF widens

Thursday, April 25th, 2013

Last month, I blogged about Sen. Tom Coburn (R-Oklahoma) passing an amendment blocking the National Science Foundation from funding most political science research.  I wrote:

This sort of political interference with the peer-review process, of course, sets a chilling precedent for all academic research, regardless of discipline.  (What’s next, an amendment banning computer science research, unless it has applications to scheduling baseball games or slicing apple pies?)

In the comments section of that post, I was pilloried by critics, who ridiculed my delusional fears about an anti-science witch hunt.  Obviously, they said, Congressional Republicans only wanted to slash dubious social science research: not computer science or the other hard sciences that people reading this blog really care about, and that everyone agrees are worthy.  Well, today I write to inform you that I was right, and my critics were wrong.  For the benefit of readers who might have missed it the first time, let me repeat that:

I was right, and my critics were wrong.

In this case, as in countless others, my “paranoid fears” about what could happen turned out to be preternaturally well-attuned to what would happen.

According to an article in Science, Lamar Smith (R-Texas), the new chair of the ironically-named House Science Committee, held two hearings in which he “floated the idea of having every NSF grant application [in every field] include a statement of how the research, if funded, ‘would directly benefit the American people.’ ”  Connoisseurs of NSF proposals will know that every proposal already includes a “Broader Impacts” section, and that that section often borders on comic farce.  (“We expect further progress on the μ-approximate shortest vector problem to enthrall middle-school students and other members of the local community, especially if they happen to belong to underrepresented groups.”)  Now progress on the μ-approximate shortest vector problem also has to directly—directly—“benefit the American people.”  It’s not enough for such research to benefit science—arguably the least bad, least wasteful enterprise our sorry species has ever managed—and for science, in turn, to be a principal engine of the country’s economic and military strength, something that generally can’t be privatized because of a tragedy-of-the-commons problem, and something that economists say has repaid public investments many, many times over.  No, the benefit now needs to be “direct.”

The truth is, I find myself strangely indifferent to whether Smith gets his way or not.  On the negative side, sure, a pessimist might worry that this could spell the beginning of the end for American science.  But on the positive side, I would have been proven so massively right that, even as I held up my “Will Prove Quantum Complexity Theorems For Food” sign on a street corner or whatever, I’d have something to crow about until the end of my life.

Sen. Tom Coburn, the National Science Foundation, and Antarctican Jello Wrestling

Monday, March 25th, 2013

As some of you probably heard, last week Sen. Tom Coburn (R-Oklahoma) managed to get an amendment passed prohibiting the US National Science Foundation from funding any research in political science, unless the research can be “certified” as “promoting national security or the economic interests of the United States.”  This sort of political interference with the peer-review process, of course, sets a chilling precedent for all academic research, regardless of discipline.  (What’s next, an amendment banning computer science research, unless it has applications to scheduling baseball games or slicing apple pies?)  But on researching further, I discovered that Sen. Coburn has long had it in for the NSF, and even has a whole webpage listing his grievances against the agency.  Most of it is the usual “can you believe they wasted money to study something so silly or obvious?,” but by far my favorite tidbit is the following:

Inappropriate staff behavior including porn surfing and Jello wrestling and skinny-dipping at NSF-operated facilities in Antarctica.

It occurred to me that the NSF really has no need to explain this one, since a complete explanation is contained in a single word of the charge itself: Antarctica.  Personally, I’d support launching an investigation of NSF’s Antarctica facilities, were it discovered that the people stuck in them weren’t porn surfing and Jello wrestling and skinny-dipping.

Collaborative Refutation

Monday, February 4th, 2013

At least eight people—journalists, colleagues, blog readers—have now asked my opinion of a recent paper by Ross Anderson and Robert Brady, entitled “Why quantum computing is hard and quantum cryptography is not provably secure.”  Where to begin?

  1. Based on a “soliton” model—which seems to be almost a local-hidden-variable model, though not quite—the paper advances the prediction that quantum computation will never be possible with more than 3 or 4 qubits.  (Where “3 or 4” are not just convenient small numbers, but actually arise from the geometry of spacetime.)  I wonder: before uploading their paper, did the authors check whether their prediction was, y’know, already falsified?  How do they reconcile their proposal with (for example) the 8-qubit entanglement observed by Haffner et al. with trapped ions—not to mention the famous experiments with superconducting Josephson junctions, buckyballs, and so forth that have demonstrated the reality of entanglement among many thousands of particles (albeit not yet in a “controllable” form)?
  2. The paper also predicts that, even with 3 qubits, general entanglement will only be possible if the qubits are not collinear; with 4 qubits, general entanglement will only be possible if the qubits are not coplanar.  Are the authors aware that, in ion-trap experiments (like those of David Wineland that recently won the Nobel Prize), the qubits generally are arranged in a line?  See for example this paper, whose abstract reads in part: “Here we experimentally demonstrate quantum error correction using three beryllium atomic-ion qubits confined to a linear, multi-zone trap.”
  3. Finally, the paper argues that, because entanglement might not be a real phenomenon, the security of quantum key distribution remains an open question.  Again: are the authors aware that the most practical QKD schemes, like BB84, never use entanglement at all?  And that therefore, even if the paper’s quasi-local-hidden-variable model were viable (which it’s not), it still wouldn’t justify the claim in the title that “…quantum cryptography is not provably secure”?

Yeah, this paper is pretty uninformed even by the usual standards of attempted quantum-mechanics-overthrowings.  Let me now offer three more general thoughts.

First thought: it’s ironic that I’m increasingly seeing eye-to-eye with Lubos Motl—who once called me “the most corrupt piece of moral trash”—in his rantings against the world’s “anti-quantum-mechanical crackpots.”  Let me put it this way: David Deutsch, Chris Fuchs, Sheldon Goldstein, and Roger Penrose hold views about quantum mechanics that are diametrically opposed to one another’s.  Yet each of these very different physicists has earned my admiration, because each, in his own way, is trying to listen to whatever quantum mechanics is saying about how the world works.  However, there are also people all of whose “thoughts” about quantum mechanics are motivated by the urge to plug their ears and shut out whatever quantum mechanics is saying—to show how whatever naïve ideas they had before learning QM might still be right, and how all the experiments of the last century that seem to indicate otherwise might still be wiggled around.  Like monarchists or segregationists, these people have been consistently on the losing side of history for generations—so it’s surprising, to someone like me, that they continue to show up totally unfazed and itching for battle, like the knight from Monty Python and the Holy Grail with his arms and legs hacked off.  (“Bell’s Theorem?  Just a flesh wound!”)

Like any physical theory, of course quantum mechanics might someday be superseded by an even deeper theory.  If and when that happens, it will rank alongside Newton’s apple, Einstein’s elevator, and the discovery of QM itself among the great turning points in the history of physics.  But it’s crucial to understand that that’s not what we’re discussing here.  Here we’re discussing the possibility that quantum mechanics is wrong, not for some deep reason, but for a trivial reason that was somehow overlooked since the 1920s—that there’s some simple classical model that would make everyone exclaim,  “oh!  well, I guess that whole framework of exponentially-large Hilbert space was completely superfluous, then.  why did anyone ever imagine it was needed?”  And the probability of that is comparable to the probability that the Moon is made of Gruyère.  If you’re a Bayesian with a sane prior, stuff like this shouldn’t even register.

Second thought: this paper illustrates, better than any other I’ve seen, how despite appearances, the “quantum computing will clearly be practical in a few years!” camp and the “quantum computing is clearly impossible!” camp aren’t actually opposed to each other.  Instead, they’re simply two sides of the same coin.  Anderson and Brady start from the “puzzling” fact that, despite what they call “the investment of tremendous funding resources worldwide” over the last decade, quantum computing still hasn’t progressed beyond a few qubits, and propose to overthrow quantum mechanics as a way to resolve the puzzle.  To me, this is like arguing in 1835 that, since Charles Babbage still hasn’t succeeded in building a scalable classical computer, we need to rewrite the laws of physics in order to explain why classical computing is impossible.  I.e., it’s a form of argument that only makes sense if you’ve adopted what one might call the “Hype Axiom”: the axiom that any technology that’s possible sometime in the future, must in fact be possible within the next few years.

Third thought: it’s worth noting that, if (for example) you found Michel Dyakonov’s arguments against QC (discussed on this blog a month ago) persuasive, then you shouldn’t find Anderson’s and Brady’s persuasive, and vice versa.  Dyakonov agrees that scalable QC will never work, but he ridicules the idea that we’d need to modify quantum mechanics itself to explain why.  Anderson and Brady, by contrast, are so eager to modify QM that they don’t mind contradicting a mountain of existing experiments.  Indeed, the question occurs to me of whether there’s any pair of quantum computing skeptics whose arguments for why QC can’t work are compatible with one another’s.  (Maybe Alicki and Dyakonov?)

But enough of this.  The truth is that, at this point in my life, I find it infinitely more interesting to watch my two-week-old daughter Lily, as she discovers the wonderful world of shapes, colors, sounds, and smells, than to watch Anderson and Brady, as they fail to discover the wonderful world of many-particle quantum mechanics.  So I’m issuing an appeal to the quantum computing and information community.  Please, in the comments section of this post, explain what you thought of the Anderson-Brady paper.  Don’t leave me alone to respond to this stuff; I don’t have the time or the energy.  If you get quantum probability, then stand up and be measured!

Aaron Swartz (1986-2013)

Sunday, January 13th, 2013

Update (1/18): Some more information has emerged.  First, it’s looking like the prosecution’s strategy was to threaten Aaron with decades of prison time, in order to force him to accept a plea bargain involving at most 6 months.  (Carmen Ortiz issued a statement that conveniently skips the first part of the strategy and focuses on the second.)  This is standard operating procedure in our wonderful American justice system, due (in part) to the lack of resources actually to bring most cases to trial.  The only thing unusual about the practice is the spotlight being shone on it, now that it was done not to some poor unknown schmuck but to a tortured prodigy and nerd hero.  Fixing the problem would require far-reaching changes to our justice system.

Second, while I still strongly feel that we should await the results of Hal Abelson’s investigation, I’ve now heard from several sources that there was some sort of high-level decision at MIT—by whom, I have no idea—not to come out in support of Aaron.  Crucially, though, I’m unaware of the faculty (or students, for that matter) ever being consulted about this decision, or even knowing that there was anything for MIT to decide.  Yesterday, feeling guilty about having done nothing to save Aaron, I found myself wishing that either he or his friends or parents had made an “end run” around the official channels, and informed MIT faculty and students directly of the situation and of MIT’s ability to help.  (Or maybe they did, and I simply wasn’t involved?)

Just to make sure I hadn’t missed anything, I searched my inbox for “Swartz”, but all I found relevant to the case were a couple emails from a high-school student shortly after the arrest (for a project he was doing about the case), and then the flurry of emails after Aaron had already committed suicide.  By far the most interesting thing that I found was the following:

Aaron Swartz (December 12, 2007): I’m really enjoying the Democritus lecture notes. Any chance we’ll ever see lecture 12?

My response: It’s a-comin’!


As I wrote on this blog at the time of Aaron’s arrest: I would never have advised him to do what he did.  Civil disobedience can be an effective tactic, but off-campus access to research papers simply isn’t worth throwing your life away for—especially if your life holds as much spectacular promise as Aaron’s did, judging from everything I’ve read about him.  At the same time, I feel certain that the world will eventually catch up to Aaron’s passionate belief that the results of publicly-funded research should be freely available to the public.  We can honor Aaron’s memory by supporting the open science movement, and helping the world catch up with him sooner.

The Toaster-Enhanced Turing Machine

Thursday, August 30th, 2012

Over at Theoretical Computer Science StackExchange, an entertaining debate has erupted about the meaning and validity of the Church-Turing Thesis.  The prompt for this debate was a question asking for opinions about Peter Wegner and Dina Goldin’s repetitive diatribes claiming to refute “the myth of the Church-Turing Thesis”—on the grounds that, you see, Turing machines can only handle computations with static inputs and outputs, not interactivity, or programs like operating systems that run continuously.  For a demolition of this simple misunderstanding, see Lance Fortnow’s CACM article.  Anyway, I wrote my own parodic response to the question, which generated so many comments that the moderators started shooing people away.  So I decided to repost my answer on my blog.  That way, after you’re done upvoting my answer over at CS Theory StackExchange :-), you can come back here and continue the discussion in the comments section.


Here’s my favorite analogy. Suppose I spent a decade publishing books and papers arguing that, contrary to theoretical computer science’s dogma, the Church-Turing Thesis fails to capture all of computation, because Turing machines can’t toast bread. Therefore, you need my revolutionary new model, the Toaster-Enhanced Turing Machine (TETM), which allows bread as a possible input and includes toasting it as a primitive operation.

You might say: sure, I have a “point”, but it’s a totally uninteresting one. No one ever claimed that a Turing machine could handle every possible interaction with the external world, without first hooking it up to suitable peripherals. If you want a Turing machine to toast bread, you need to connect it to a toaster; then the TM can easily handle the toaster’s internal logic (unless this particular toaster requires solving the halting problem or something like that to determine how brown the bread should be!). In exactly the same way, if you want a TM to handle interactive communication, then you need to hook it up to suitable communication devices, as Neel discussed in his answer. In neither case are we saying anything that wouldn’t have been obvious to Turing himself.

So, I’d say the reason why there’s been no “followup” to Wegner and Goldin’s diatribes is that theoretical computer science has known how to model interactivity whenever needed, and has happily done so, since the very beginning of the field.
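To make the “hook it up to peripherals” point concrete, here’s a minimal sketch in Python (the toy “toaster” interface is invented purely for illustration, and a trivial finite controller stands in for the Turing machine):

    import itertools

    # The computational core is a pure function with no notion of I/O;
    # all of the interactivity lives in the driver loop that wires it
    # to a (made-up) peripheral.

    def controller_step(state, sensor_reading):
        """Pure transition function: (state, input symbol) -> (new state, output symbol)."""
        browning = state + sensor_reading
        command = "stop" if browning >= 10 else "keep_heating"
        return browning, command

    def run_with_peripheral(read_sensor, send_command):
        """The peripheral driver: interaction happens here, not in the model of computation."""
        state = 0
        while True:
            state, command = controller_step(state, read_sensor())
            send_command(command)
            if command == "stop":
                return state

    # A fake toaster "sensor" that reports a browning increment of 3 per step.
    readings = itertools.cycle([3])
    run_with_peripheral(lambda: next(readings), lambda cmd: print("toaster <-", cmd))

Nothing about the step function had to change to “support” interaction; every bread-related detail lives in the wrapper, which is the whole point.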

Update (8/30): A related point is as follows. Does it ever give the critics pause that, here inside the Elite Church-Turing Ivory Tower (the ECTIT), the major research themes for the past two decades have included interactive proofs, multiparty cryptographic protocols, codes for interactive communication, asynchronous protocols for routing, consensus, rumor-spreading, leader-election, etc., and the price of anarchy in economic networks? If putting Turing’s notion of computation at the center of the field makes it so hard to discuss interaction, how is it that so few of us have noticed?

Another Update: To the people who keep banging the drum about higher-level formalisms being vastly more intuitive than TMs, and no one thinking in terms of TMs as a practical matter, let me ask an extremely simple question. What is it that lets all those high-level languages exist in the first place, that ensures they can always be compiled down to machine code? Could it be … err … THE CHURCH-TURING THESIS, the very same one you’ve been ragging on? To clarify, the Church-Turing Thesis is not the claim that “TURING MACHINEZ RULE!!” Rather, it’s the claim that any reasonable programming language will be equivalent in expressive power to Turing machines — and as a consequence, that you might as well think in terms of the higher-level languages if it’s more convenient to do so. This, of course, was a radical new insight 60-75 years ago.

Update (Sept. 6): Check out this awesome comment by Lou Scheffer, describing his own tale of conversion from a Church-Turing skeptic to believer, and making an extremely apt comparison to the experience of conversion to the belief that R, R², and so on all have the same cardinality (an experience I also underwent!).

I was wrong about Joy Christian

Thursday, May 10th, 2012

Update: I decided to close comments on this post and the previous Joy Christian post, because they simply became too depressing for me.

I’ve further decided to impose a moratorium, on this blog, on all discussions about the validity of quantum mechanics in the microscopic realm, the reality of quantum entanglement, or the correctness of theorems such as Bell’s Theorem.  I might lift the moratorium at some future time.  For now, though, life simply feels too short to me, and the actually-interesting questions too numerous.  Imagine, for example, that there existed a devoted band of crackpots who believed, for complicated, impossible-to-pin-down reasons of topology and geometric algebra, that triangles actually have five corners.  These crackpots couldn’t be persuaded by rational argument—indeed, they didn’t even use words and sentences the same way you do, to convey definite meaning.  And crucially, they had infinite energy: you could argue with them for weeks, and they would happily argue back, until you finally threw up your hands in despair for all humanity, at which point the crackpots would gleefully declare, “haha, we won!  the silly ‘triangles have 3 corners’ establishment cabal has admitted defeat!”  And, in a sense, they would have won: with one or two exceptions, the vast majority who know full well how many corners a triangle has simply never showed up to the debate, thereby conceding to the 5-cornerists by default.

What would you do in such a situation?  What would you do?  If you figure it out, please let me know (but by email, not by blog comment).


In response to my post criticizing his “disproof” of Bell’s Theorem, Joy Christian taunted me that “all I knew was words.”  By this, he meant that my criticisms were entirely based on circumstantial evidence, for example that (1) Joy clearly didn’t understand what the word “theorem” even meant, (2) every other sentence he uttered contained howling misconceptions, (3) his papers were written in an obscure, “crackpot” way, and (4) several people had written very clear papers pointing out mathematical errors in his work, to which Joy had responded only with bluster.  But I hadn’t actually studied Joy’s “work” at a technical level.  Well, yesterday I finally did, and I confess that I was astonished by what I found.  Before, I’d actually given Joy some tiny benefit of the doubt—possibly misled by the length and semi-respectful tone of the papers refuting his claims.  I had assumed that Joy’s errors, though ultimately trivial (how could they not be, when he’s claiming to contradict such a well-understood fact provable with a few lines of arithmetic?), would nevertheless be artfully concealed, and would require some expertise in geometric algebra to spot.  I’d also assumed that of course Joy would have some well-defined hidden-variable model that reproduced the quantum-mechanical predictions for the Bell/CHSH experiment (how could he not?), and that the “only” problem would be that, due to cleverly-hidden mistakes, his model would be subtly nonlocal.

What I actually found was a thousand times worse: closer to the stuff freshmen scrawl on an exam when they have no clue what they’re talking about but are hoping for a few pity points.  It’s so bad that I don’t understand how even Joy’s fellow crackpots haven’t laughed this off the stage.  Look, Joy has a hidden variable λ, which is either 1 or -1 uniformly at random.  He also has a measurement choice a of Alice, and a measurement choice b of Bob.  He then defines Alice and Bob’s measurement outcomes A and B via the following functions:

A(a,λ) = something complicated = (as Joy correctly observes) λ

B(b,λ) = something complicated = (as Joy correctly observes) -λ

I shit you not.  A(a,λ) = λ, and B(b,λ) = -λ.  Neither A nor B has any dependence on the choices of measurement a and b, and the complicated definitions that he gives for them turn out to be completely superfluous.  No matter what measurements are made, A and B are always perfectly anticorrelated with each other.
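If you’d like to see for yourself how far this is from a Bell violation, here’s a minimal sketch in Python (the outcome functions are exactly the A = λ and B = -λ above; the four-setting CHSH combination and the placeholder setting labels are just the standard ones):

    import random

    def A(a, lam):    # Alice's outcome: no dependence on her setting a
        return lam

    def B(b, lam):    # Bob's outcome: no dependence on his setting b
        return -lam

    def correlation(a, b, trials=100_000):
        total = 0
        for _ in range(trials):
            lam = random.choice((+1, -1))   # the hidden variable, uniformly +1 or -1
            total += A(a, lam) * B(b, lam)
        return total / trials

    E = {(a, b): correlation(a, b) for a in ("a", "a'") for b in ("b", "b'")}
    S = E[("a", "b")] + E[("a", "b'")] + E[("a'", "b")] - E[("a'", "b'")]
    print(E)                  # every correlation is exactly -1, whatever the settings
    print("CHSH value:", S)   # -1 - 1 - 1 + 1 = -2, so |S| = 2: the classical bound, nowhere near 2*sqrt(2)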

You might wonder: what could lead anyone—no matter how deluded—even to think such a thing could violate the Bell/CHSH inequalities?  Aha, Joy says you only ask such a naïve question because, lacking his deep topological insight, you make the rookie mistake of looking at the actual outcomes that his model actually predicts for the actual measurements that are actually made.  What you should do, instead, is compute a “correlation function” E(a,b) that’s defined by dividing A(a,λ)B(b,λ) by a “normalizing factor” that’s a product of the quaternions a and b, with a divided on the left and b divided on the right.  Joy seems to have obtained this “normalizing factor” via the technique of pulling it out of his rear end.  Now, as Gill shows, Joy actually makes an algebra mistake while computing his nonsensical “correlation function.”  The answer should be -a.b-a×b, not -a.b.  But that’s truthfully beside the point.  It’s as if someone announced his revolutionary discovery that P=NP implies N=1, and then critics soberly replied that, no, the equation P=NP can also be solved by P=0.

So, after 400+ comments on my previous thread—including heady speculations about M-theory, the topology of spacetime, the Copenhagen interpretation, continuity versus discreteness, etc., as well as numerous comparisons to Einstein—this is what it boils down to.  A(a,λ) = λ and B(b,λ) = -λ.

I call on FQXi, in the strongest possible terms, to stop lending its legitimacy to this now completely-unmasked charlatan.  If it fails to do so, then I will resign from FQXi, and will encourage fellow FQXi members to do the same.

While I don’t know the exact nature of Joy’s relationship to Oxford University or to the Perimeter Institute, I also call on those institutions to sever any connections they still have with him.

Finally, with this post I’m going to try a new experiment.  I will allow comments through the moderation filter if, and only if, they exceed a minimum threshold of sanity and comprehensibility, and do not randomly throw around terms like “M-theory” with no apparent understanding of what they mean.  Comments below the sanity threshold can continue to appear freely in the previous Joy Christian thread (which already has a record-setting number of comments…).

Update (May 11): A commenter pointed me to a beautiful preprint by James Owen Weatherall, which tries sympathetically to make as much sense as possible out of Joy Christian’s ideas, and then carefully explains why the attempt fails (long story short: because of Bell’s theorem!).  Notice the contrast between the precision and clarity of Weatherall’s prose—the way he defines and justifies each concept before using it—and the obscurity of Christian’s prose.

Another Update: Over on the previous Joy Christian thread, some commenters are now using an extremely amusing term for people who believe that theories in physics ought to say something comprehensible about the predicted outcomes of physics experiments.  The term: “computer nerd.”

Third Update: Quite a few commenters seem to assume that I inappropriately used my blog to “pick a fight” with poor defenseless Joy Christian, who was minding his own business disproving and re-disproving Bell’s Theorem.  So let me reiterate that I wasn’t looking for this confrontation, and in fact took great pains to avoid it for six years, even as Joy became more and more vocal.  It was Joy, not me, who finally forced matters to a head through his absurd demand that I pay him $100,000 “with interest,” and then his subsequent attacks.

Bell’s-inequality-denialist Joy Christian offers me $200K if scalable quantum computers are built

Wednesday, May 2nd, 2012

Joy Christian is the author of numerous papers claiming to disprove Bell’s theorem.  Yes, that Bell’s theorem: the famous result from the 1960s showing that no local hidden variable theory can reproduce all predictions of quantum mechanics for entangled states of two particles.  Here a “local hidden variable theory” means—and has always meant—a theory where Alice gets some classical information x, Bob gets some other classical information y (generally correlated with x), then Alice and Bob choose which respective experiments to perform, and finally Alice sees a measurement outcome that’s a function only of her choice and of x (not of Bob’s choice or his measurement outcome), and Bob sees a measurement outcome that’s a function only of his choice and of y.  In modern terms, Bell, with simplifications by Clauser et al., gave an example of a game that Alice and Bob can win at most 75% of the time under any local hidden variable theory (that’s the Bell inequality), but can win 85% of the time by measuring their respective halves of an entangled state (that’s the Bell inequality violation).  The proofs are quite easy, both for the inequality and for its violation by quantum mechanics.  Check out this problem set for the undergrad course I’m currently teaching if you’d like to be led through the proof yourself (it’s problem 7).
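For reference, here’s a minimal sketch in Python of where the 85% figure comes from (the measurement angles are the standard textbook choice for the CHSH game played on a shared |Φ+⟩ pair; nothing here is specific to this post):

    import math

    # Alice's measurement angles for inputs a = 0, 1 and Bob's for b = 0, 1.
    alice_angle = {0: 0.0, 1: math.pi / 4}
    bob_angle   = {0: math.pi / 8, 1: -math.pi / 8}

    def win_probability(a, b):
        # For |Phi+> and measurements in the real plane, the probability that
        # the two outcomes agree is cos^2 of the difference of the angles.
        agree = math.cos(alice_angle[a] - bob_angle[b]) ** 2
        # They win if the outcomes agree, except on input (1,1), where they must disagree.
        return agree if (a, b) != (1, 1) else 1 - agree

    for a in (0, 1):
        for b in (0, 1):
            print(f"inputs ({a},{b}): win probability = {win_probability(a, b):.4f}")
    # Every line prints cos^2(pi/8) ~ 0.8536 -- comfortably above the 75%
    # achievable by any local hidden variable strategy.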

In case you’re wondering: no, Bell’s Theorem has no more been “disproved” than the Cauchy-Schwarz Inequality, and it will never be, even if papers claiming otherwise are stacked to the moon.  Like Gödel’s and Cantor’s Theorems, Bell’s Theorem has long been a lightning rod for incomprehension and even anger; I saw another “disproof” at a conference in 2003, and will doubtless see more in the future.  The disproofs invariably rely on personal reinterpretations of the perfectly-clear concept of “local hidden variables,” to smuggle in what would normally be called non-local variables.  That smuggling is accompanied by mathematical sleight-of-hand (the more, the better) to disguise the ultimately trivial error.

While I’d say the above—loudly, even—to anyone who asked, I also declined several requests to write a blog post about Joy Christian and his mistakes.  His papers had already been refuted ad nauseam by others (incidentally, I find myself in complete agreement with Luboš Motl on this one!), and I saw no need to pile on the poor dude.  Having met him, at the Perimeter Institute and at several conferences, I found something poignant and even touching about Joy’s joyless quest.  I mean, picture a guy who made up his mind at some point that, let’s say, √2 is actually a rational number, all the mathematicians having been grievously wrong for millennia—and then unironically held to that belief his entire life, heroically withstanding the batterings of reason.  Show him why 2 = A²/B² has no solution in positive integers A, B, and he’ll answer that you haven’t understood the very concept of rational number as deeply as him.  Ask him what he means by “rational number,” and you’ll quickly enter the territory of the Monty Python dead parrot sketch.  So why not just leave this dead parrot where it lies?

Anyway, that’s what I was perfectly content to do, until Monday, when Joy left the following comment on my “Whether or not God plays dice, I do” post:

Scott,
You owe me 100,000 US Dollars plus five years of interest. In 2007, right under your nose (when you and I were both visiting Perimeter Institute), I demonstrated, convincing to me, that scalable quantum computing is impossible in the physical world.

He included a link to his book, in case I wanted to review his arguments against the reality of entanglement.  I have to confess I had no idea that, besides disproving Bell’s theorem, Joy had also proved the impossibility of scalable quantum computing.  Based on his previous work, I would have expected him to say that, sure, quantum computers could quickly factor 10,000-digit numbers, but nothing about that would go beyond ordinary, classical, polynomial-time Turing machines—because Turing himself got the very definition of Turing machines wrong, by neglecting topological octonion bivectors or something.

Be that as it may, Joy then explained that the purpose of his comment was to show that

there is absolutely nothing that would convince you to part with your 100,000. You know that, and everyone else knows that … The whole thing is just a smug scam to look smarter than the rest of us without having to do the hard work. Good luck with that.

In response, I clarified what it would take to win my bet:

As I’ve said over and over, what would be necessary and sufficient would be to convince the majority of the physics community. Do you hope and expect to do that? If so, then you can expect my $100,000; if not, then not. If a scientific revolution has taken place only inside the revolutionary’s head, then let the monetary rewards be likewise confined to his head.

Joy replied:

[L]et us forget about my work. It is not for you. Instead, let me make a counter offer to you. I will give you 200,000 US dollars the day someone produces an actual, working, quantum computer in a laboratory recognizable by me. If I am still alive, I will send you 200,000 US Dollars, multiplied by an appropriate inflation factor. Go build a quantum computer.

I’m grateful to Joy for his exceedingly generous offer.  But let’s forget about money for now.  Over the past few months, I’ve had a real insight: the most exciting potential application of scalable quantum computers is neither breaking RSA, nor simulating quantum physics, nor Grover’s algorithm, nor adiabatic optimization.  Instead, it’s watching the people who said it was impossible try to explain themselves.  That prospect, alone, would more than justify a Manhattan-project-scale investment in this field.

Postscript. If you want something about quantum foundations and hidden-variable theories of a bit more scientific interest, check out this MathOverflow question I asked on Monday, which was answered within one day by George Lowther (I then carefully wrote up the solution he sketched).

Updates (May 6). Depending on what sort of entertainment you enjoy, you might want to check out the comments section, where you can witness Joy Christian becoming increasingly unhinged in his personal attacks on me and others (“our very own FQXi genius” – “biased and closed-minded” – “incompetent” – “Scott’s reaction is a textbook case for the sociologists” – “As for Richard Gill, he is evidently an incompetent mathematician” – “I question your own intellectual abilities” – “your entire world view is based on an experimentally unsupported (albeit lucrative) belief and nothing else” – “You have been caught with your pants down and still refusing to see what is below your belly” – “let me point out that you are the lesser brain among the two of us. The pitiful flatness of your brain would be all too painful for everyone to see when my proposed experiment is finally done” – etc., etc).  To which I respond: the flatness of my brain?  Also notable is Joy’s Tourette’s-like repetition of the sentence, “I will accept judgement from no man but Nature.”  Nature is a man?

I just posted a comment explaining the Bell/CHSH inequality in the simplest terms I know, which I’ll repost here for convenience:

Look everyone, consider the following game. Two players, Alice and Bob, can agree on a strategy in advance, but from that point forward, are out of communication with each other (and don’t share quantum entanglement or anything like that). After they’re separated, Alice receives a uniformly-random bit A, and Bob receives another uniformly-random bit B (uncorrelated with A). Their joint goal is for Alice to output a bit X, and Bob to output a bit Y, such that

X + Y = AB (mod 2)

or equivalently,

X XOR Y = A AND B.

They want to succeed with the largest possible probability. It’s clear that one strategy they can follow is always to output X=Y=0, in which case they’ll win 75% of the time (namely, in every case except A=B=1).

Furthermore, by enumerating all of Alice and Bob’s possible pure strategies and then appealing to convexity, one can check that there’s no strategy that lets them win more than 75% of the time.  In other words, no matter what they do, they lose for one of the four possible (A,B) pairs.

Do you agree with the previous paragraph? If so, then you accept the Bell/CHSH inequality, end of story.
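Since that argument rests on nothing but a finite enumeration, here’s a minimal sketch in Python that carries it out, brute-forcing all 16 pairs of deterministic strategies:

    from itertools import product

    # A deterministic strategy for Alice maps her input bit A to an output bit X;
    # likewise for Bob.  There are only 4 such functions per player.
    strategies = list(product([0, 1], repeat=2))   # (output on input 0, output on input 1)

    best = 0
    for alice in strategies:
        for bob in strategies:
            wins = sum(1 for A in (0, 1) for B in (0, 1)
                       if (alice[A] ^ bob[B]) == (A & B))
            best = max(best, wins)

    print(f"best deterministic strategy wins {best} out of 4 input pairs: {best / 4:.0%}")
    # Prints 75%.  A shared-randomness strategy is just a mixture of
    # deterministic ones, so by convexity it can't do better either.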

Of all the papers pointing out the errors in Joy Christian’s attempted refutations of the simple arithmetic above, my favorite is Richard Gill’s.  Let me quote from Gill’s eloquent conclusion:

There remains a psychological question, why so strong a need is felt by so many researchers to “disprove Bell” in one way or another? At a rough guess, at least one new proposal comes up per year. Many pass by unnoticed, but from time to time one of them attracts some interest and even media attention. Having studied a number of these proposals in depth, I see two main strategies of would-be Bell-deniers.

The first strategy (the strategy, I would guess, in the case in question) is to build elaborate mathematical models of such complexity and exotic nature that the author him or herself is probably the only person who ever worked through all the details. Somewhere in the midst of the complexity a simple mistake is made, usually resulting from suppression of an important index or variable. There is a hidden and non-local hidden variable.

The second strategy is to simply build elaborate versions of detection loophole models. Sometimes the same proposal can be interpreted in both ways at the same time, since of course either the mistake or the interpretation as a detection loophole model are both interpretations of the reader, not of the writer.

According to the Anna Karenina principle of evolutionary biology, in order for things to succeed, everything has to go exactly right, while for failure, it suffices if any one of a myriad factors is wrong. Since errors are typically accidental and not recognized, an apparently logical deduction which leads to a manifestly incorrect conclusion does not need to allow a unique diagnosis. If every apparently logical step had been taken with explicit citation of the mathematical rule which was being used, and in a specified context, one could say where the first misstep was taken. But mathematics is almost never written like that, and for good reasons. The writer and the reader, coming from the same scientific community, share a host of “hidden assumptions” which can safely be taken for granted, as long as no self-contradiction occurs. Saying that the error actually occurred in such-and-such an equation at such-and-such a substitution depends on various assumptions.

The author who still believes in his result will therefore claim that the diagnosis is wrong because the wrong context has been assumed.

We can be grateful for Christian that he has had the generosity to write his one page paper with a more or less complete derivation of his key result in a more or less completely explicit context, without distraction from the author’s intended physical interpretation of the mathematics. The mathematics should stand on its own, the interpretation is “free”.  My finding is that in this case, the mathematics does not stand on its own.

Update (5/7): I can’t think of any better illustration than the comment thread below for my maxim that computation is clarity.  In other words, if you can’t explain how to simulate your theory on a computer, chances are excellent that the reason is that your theory makes no sense!  The following comment of mine expands on this point:

The central concept that I find missing from the comments of David Brown, James Putnam, and Thomas Ray is that of the sanity check.

Math and computation are simply the tools of clear thought. For example, if someone tells me that a 4-by-4 array of zorks contains 25 zorks in total, and I respond that 4 times 4 is 16, not 25, I’m not going to be impressed if the person then starts waxing poetic about how much more profound the physics of zorks is than my narrow and restricted notions of “arithmetic”. There must be a way to explain the discrepancy even at a purely arithmetical level. If there isn’t, then the zork theory has failed a basic sanity check, and there’s absolutely no reason to study its details further.

Likewise, the fact that Joy can’t explain how to code a computer simulation of (say) his exploding toy ball experiment that would reproduce his predicted Bell/CHSH violation is extremely revealing. This is also a sanity check, and it’s one that Joy flunks. Granted, if he were able to explain his model clearly enough for well-intentioned people to understand how to program it on a computer, then almost certainly there would be no need to actually run the program! We could probably just calculate what the program did using pencil and paper. Nevertheless, Bram, John Sidles, and others were entirely right to harp on this simulation question, because its real role is as a sanity check. If Joy’s ideas are not meaningless nonsense, then there’s no reason at all why we shouldn’t be able to simulate his experiment on a computer and get exactly the outcome that he predicts. Until Joy passes this minimal sanity check—which he hasn’t—there’s simply no need to engage in deep ruminations like the ones above about physics or philosophy or Joy’s “Theorema Egregious.”
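To see concretely what such a sanity check looks like, here is a minimal sketch (in Python) of the sort of simulation at issue. To be clear, this is not Joy’s model; producing code for his model is exactly the step he has never supplied. The sketch just simulates a generic local-hidden-variable strategy, with standard CHSH measurement angles chosen purely for illustration, and it lands at the classical bound of 2 rather than the quantum value of 2√2 ≈ 2.83 that Joy claims to reproduce.

    # Illustrative sanity check (not Joy's model): any local-hidden-variable
    # strategy we can actually code up obeys the CHSH bound |S| <= 2, whereas
    # quantum mechanics (and Joy's claimed result) require S = 2*sqrt(2) ~ 2.83.
    import math
    import random

    def chsh_estimate(strategy, trials=200_000):
        # Estimate S = E(a,b) + E(a,b') + E(a',b) - E(a',b') for a local model.
        alice_settings = [0.0, math.pi / 2]          # a, a'
        bob_settings = [math.pi / 4, -math.pi / 4]   # b, b'
        E = {}
        for i, a in enumerate(alice_settings):
            for j, b in enumerate(bob_settings):
                total = 0
                for _ in range(trials):
                    lam = random.uniform(0.0, 2.0 * math.pi)  # shared hidden variable
                    total += strategy(a, lam) * strategy(b, lam)
                E[(i, j)] = total / trials
        return E[(0, 0)] + E[(0, 1)] + E[(1, 0)] - E[(1, 1)]

    def sign_model(setting, lam):
        # A deterministic local response: outcome +1 or -1 from the hidden angle alone.
        return 1 if math.cos(setting - lam) >= 0 else -1

    print(chsh_estimate(sign_model))  # comes out near 2.0, never near 2.83

Any program anyone can actually write down for a locally realistic model will behave the same way; that is the content of Bell’s theorem, and it is why “show me the simulation” is such an effective sanity check.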

U. of Florida CS department: let it be destroyed by rising sea levels 100 years from now, not reckless administrators today

Monday, April 23rd, 2012

Update (4/27): A famous joke concerns an airplane delivered to the US Defense Department in the 1950s, which included a punch-card computer on board.  By regulation, the contractor had to provide a list of all the components of the plane—engine, wings, fuselage, etc.—along with the weight of each component.  One item in the list read, “Computer software: 0.0 kg.”

“That must be a mistake—it can’t weigh 0 kg!” exclaimed the government inspector.  “Here, show me where the software is.”  So the contractor pointed to a stack of punched cards.  “OK, fine,” said the government inspector.  “So just weigh those cards, and that’s the weight of the software.”

“No, sir, you don’t understand,” replied the contractor.  “The software is the holes.”

If the Abernathy saga proves anything, it’s the continuing relevance of this joke even in 2012.  Abernathy is the government inspector who hears that software weighs nothing, and concludes that it does nothing—or, at least, that whatever division is responsible for punching the holes in the cards can simply be folded into the division that cuts the card paper into rectangles.


As many of you have heard by now, Cammy Abernathy, Dean of Engineering at the University of Florida, has targeted her school’s Computer and Information Science and Engineering (CISE) department for disembowelment: moving most faculty to other departments, and shunting any who remain into non-research positions.  Though CISE is by all accounts one of UF’s strongest engineering departments, no other department faces similar cuts, and the move comes just as UF is increasing its sports budget by more than would be saved by killing computer science. (For more, see Lance’s blog, or letters from Eric Grimson and Zvi Galil. Also, click here to add your name to the already 7000+ petitioning UF to reconsider.)

On its face, this decision seems so boneheadedly perverse that it immediately raises the suspicion that the real reasons for it, whatever they are, have not been publicly stated. The closest I could find to a comprehensible rationale came from this comment, which speculates that the UF administration might be sabotaging its CS department as a threat to the Florida State legislature: “see, keep slashing our budget, and this is the sort of thing we’ll be forced to do!”  But I don’t find that theory very plausible; UF must realize that the Republican-controlled legislature’s likely reaction would be “go ahead, knock yourselves out!”

On a personal note, my parents live part-time in beautiful Sarasota, FL, home of the Mote Marine Laboratory, which does amazing work rehabilitating dolphins, manatees, and sea turtles.  Having visited Sarasota just a few weeks ago, I can testify that, despite frequent hurricanes, a proven inability to hold democratic elections, and its reputation as a giant retirement compound, Florida has definite potential as a state.

Academic computer science as a whole will be fine.  As for Florida, may the state prove greater than its Katherine Harrises, Rick Scotts, and Cammy Abernathys.

Update: See this document for more of the backstory on Abernathy’s underhanded tactics in dismantling the UF CISE department.  Based on the evidence presented there, she really does deserve the scorn now being heaped on her by much of the academic world.

Another Update: UF’s president issued a rather mealy-mouthed statement saying that they’re going to set aside their original evisceration proposal and find a compromise, though who knows what the compromise will look like.

In other news, Greg Kuperberg posted a comment that not only says everything I was trying to say, more eloquently than I did, but also explains why I and other CS folks care so much about this issue: because what’s really at stake is the concept of Turing-universality itself.  Let me repost Greg’s comment in its entirety.

It looks like Dean Abernathy hasn’t explained herself all that well, which is not surprising if what she is doing makes no sense. Reading the tea leaves, in particular the back-story document that Scott posted, it looks like she had it in for the CS department from the beginning of her tenure as Dean at Florida. In her interview with Stanford, when she had just been appointed as dean, she already said that “we” wanted to bring EE and CS closer together, even though at the time there had been no discussion and there was no “we”. Then during discussions with the CS department, she refused to take no for an answer, even though she sometimes pretended to, and as time went on the actual plan looked more and more punitive. She appointed an outside chair to the department, and then in the final plan she terminated the graduate program, moved half of the department to EE, and left the other half to do teaching only. The CS department was apparently very concerned about its NRC ranking, but this ranking only came out when Abernathy’s wheels were already in motion. In any case everyone knows that the NRC rankings were notoriously shabby across all disciplines, and the US News rankings, although hardly deep, are much less ridiculous.

So what gives? Apparently from Abernathy’s Stanford interview, and from her actions, she simply takes computer science to be a special case of electrical engineering. Ultimately, it’s a rejection of the fundamental concept of Turing universality. In this world view, there is no such thing as an abstract computer, or at best who really cares if there is one; all that really exists is electronic devices.

Scott points out that those departments that are combined EECS are really combined in name only. This is not just empirical happenstance; it comes from Turing universality and the abstract concept of a computer. Yes, in practice modern computers are electronic. However, if someone does research in compilers, much less CS theory, then really nothing at all is said about electricity. To most people in computer science, it’s completely peripheral that computers are electronic. Nor is this just a matter of theoretical vs applied computer science. CS theory may be theoretical, but compiler research isn’t, much less other topics such as user interfaces or digital libraries.

Abernathy herself works in materials engineering and has a PhD from Stanford. I’m left wondering at what point she failed to understand, or began to misunderstand or dismiss, the abstract concept of a computer. If she were dean of letters and sciences, then I could imagine an attempt to dump half of the literature department into a department of paper and printing technology, and leave the other half only to teach grammar. It would be exactly the same mistake.