## Archive for the ‘Nerd Interest’ Category

### Aaron Swartz (1986-2013)

Sunday, January 13th, 2013

Update (1/18): Some more information has emerged.  First, it’s looking like the prosecution’s strategy was to threaten Aaron with decades of prison time, in order to force him to accept a plea bargain involving at most 6 months.  (Carmen Ortiz issued a statement that conveniently skips the first part of the strategy and focuses on the second.)  This is standard operating procedure in our wonderful American justice system, due (in part) to the lack of resources actually to bring most cases to trial.  The only thing unusual about the practice is the spotlight being shone on it, now that it was done not to some poor unknown schmuck but to a tortured prodigy and nerd hero.  Fixing the problem would require far-reaching changes to our justice system.

Second, while I still strongly feel that we should await the results of Hal Abelson’s investigation, I’ve now heard from several sources that there was some sort of high-level decision at MIT—by whom, I have no idea—not to come out in support of Aaron.  Crucially, though, I’m unaware of the faculty (or students, for that matter) ever being consulted about this decision, or even knowing that there was anything for MIT to decide.  Yesterday, feeling guilty about having done nothing to save Aaron, I found myself wishing that either he or his friends or parents had made an “end run” around the official channels, and informed MIT faculty and students directly of the situation and of MIT’s ability to help.  (Or maybe they did, and I simply wasn’t involved?)

Just to make sure I hadn’t missed anything, I searched my inbox for “Swartz”, but all I found relevant to the case were a couple emails from a high-school student shortly after the arrest (for a project he was doing about the case), and then the flurry of emails after Aaron had already committed suicide.  By far the most interesting thing that I found was the following:

Aaron Swartz (December 12, 2007): I’m really enjoying the Democritus lecture notes. Any chance we’ll ever see lecture 12?

My response: It’s a-comin’!

As I wrote on this blog at the time of Aaron’s arrest: I would never have advised him to do what he did.  Civil disobedience can be an effective tactic, but off-campus access to research papers simply isn’t worth throwing your life away for—especially if your life holds as much spectacular promise as Aaron’s did, judging from everything I’ve read about him.  At the same time, I feel certain that the world will eventually catch up to Aaron’s passionate belief that the results of publicly-funded research should be freely available to the public.  We can honor Aaron’s memory by supporting the open science movement, and helping the world catch up with him sooner.

### Lincoln Blogs

Friday, December 28th, 2012

Sorry for the terrible pun.  Today’s post started out as a comment on a review of the movie Lincoln on Sean Carroll’s blog, but it quickly became too long, so I made it into a post on my own blog.  Apparently I lack Abe’s gift for concision.

I just saw Lincoln — largely inspired by Sean’s review — and loved it.  It struck me as the movie that Lincoln might have wanted to be made about himself: it doesn’t show any of his evolution, but at least it shows the final result of that evolution, and efficiently conveys the stories, parables, and insight into human nature that he had accumulated by the end of his life.

Interestingly, the Wikipedia page says that Spielberg commissioned, but then ultimately rejected, two earlier scripts that would have covered the whole Civil War period, and (one can assume) Lincoln’s process of evolution.  I think that also could have been a great movie, but I can sort-of understand why Spielberg and Tony Kushner made the unusual choice they did: at the level of detail they wanted, it seems like it would be impossible to do justice to Lincoln’s whole life, or even the last five years of it, in anything less than a miniseries.

I agree with the many people who pointed out that the movie could have given more credit to those who were committed antislavery crusaders from the beginning—rather than those like Lincoln, who eventually came around to the positions we now associate with him after a lot of toying with ideas like blacks self-deporting to Liberia.  But in a way, the movie didn’t need to dole out such credit: today, we know (for example) that Thaddeus Stevens had history and justice 3000% on his side, so the movie is free to show him as the nutty radical that he seemed to most others at the time.  And there’s even a larger point: never the most diligent student of history, I (to take one embarrassing example) had only the vaguest idea who Thaddeus Stevens even was before seeing the movie.  Now I’ve spent hours reading about him, as well as about Charles Sumner, and being moved by their stories.

(At least I knew about the great Frederick Douglass, having studied his Narrative in freshman English class.  Douglass and I have something in common: just as a single sentence he wrote, “I would unite with anybody to do right and with nobody to do wrong,” will reverberate through the ages, so too, I predict, will a single sentence I wrote: “Australian actresses are plagiarizing my quantum mechanics lecture to sell printers.”)

More broadly, I think it’s easy for history buffs to overestimate how much people already know about this stuff.  Indeed, I can easily imagine that millions of Americans who know Lincoln mostly as “the dude on the $5 bill (who freed some slaves, wore a top hat, used the word ‘fourscore,’ and got shot)” will walk out of the cineplex with a new and ~85% accurate appreciation for what Lincoln did to merit all that fuss, and why his choices weren’t obvious to everyone else at the time.

Truthfully, though, nothing made me appreciate the movie more than coming home and reading countless comments on movie review sites denouncing Abraham Lincoln as a bloodthirsty war criminal, and the movie as yet more propaganda by the victors rewriting history.  Even on Sean’s blog we find this, by a commenter named Tony:

I’m not one who believes we have to go to war to solve every problem we come across, I can’t believe that Lincoln couldn’t have found a solution to states rights and slavery in a more peaceful course of action. It seems from the American Revolutionary war to the present it has been one war after another … The loss of life of all wars is simply staggering, what a waste of humanity.

Well look, successive presidential administrations did spend decades trying to find a peaceful solution to the “states rights and slavery” issue; the massive failure of their efforts might make one suspect that a peaceful solution didn’t exist.  Indeed, even if Lincoln had simply let the South secede, my reading of history is that issues like the return of fugitive slaves, or competition over Western territories, would have eventually led to a war anyway.  I’m skeptical that, in the limit t→∞, free and slave civilizations could coexist on the same continent, no matter how you juggled their political organization.  I’ll go further: it even seems possible to me that the Civil War ended too early, with the South not decimated enough.

After World War II, Japan and Germany were successfully dissuaded even from “lite” versions of their previous plans, and rebuilt themselves on very different principles.  By contrast, as we all know, the American South basically refused for the next century to admit it had lost: it didn’t try to secede again, but it did use every means available to it to reinstate de facto slavery or something as close to that as possible.  All the civil-rights ideals of the 1960s had already been clearly articulated in the 1860s, but it took another hundred years for them to get implemented.  Even today, with a black President, the intellectual heirs of the Confederacy remain a force to be reckoned with in the US, trying (for example) to depress minority voter turnout through ID laws, gerrymandering, and anything else they think they can possibly get away with.  The irony, of course, is that the neo-Confederates now constitute a nontrivial fraction of what they proudly call “the party of Lincoln.”  (Look at the map of blue vs. red states, and compare it to the Mason-Dixon line.  Even the purple states correspond reasonably well to the vacillating border states of 1861.)

So that’s why it seems important to have a movie every once in a while that shows the moral courage of people like Lincoln and Thaddeus Stevens, and that names and shames the enthusiastic defenders of slavery—because while the abolitionists won the battle, on some fronts we’re still fighting the war.

### The $10 billion voter

Monday, November 5th, 2012

Update (Nov. 8): Slate’s pundit scoreboard.

Update (Nov. 6): In crucial election news, a Florida woman wearing an MIT T-shirt was barred from voting, because the election supervisor thought her shirt was advertising Mitt Romney.

At the time of writing, Nate Silver is giving Obama an 86.3% chance.  I accept his estimate, while vividly remembering various admittedly-cruder forecasts the night of November 7, 2000, which gave Gore an 80% chance.  (Of course, those forecasts need not have been “wrong”; an event with 20% probability really does happen 20% of the time.)  For me, the main uncertainties concern turnout and the effects of various voter-suppression tactics.
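The parenthetical point, that an event with 20% probability really does happen 20% of the time, can be made concrete with a quick simulation.  (The stream of 80% forecasts below is a hypothetical stand-in for illustration, not anything from Silver’s actual model.)

```python
import random

# Hypothetical illustration: simulate many races in which a calibrated
# forecaster gives the favorite an 80% chance of winning.
random.seed(2000)  # arbitrary seed, for reproducibility

N = 100_000
favorite_wins = sum(random.random() < 0.80 for _ in range(N))

# A well-calibrated 80% forecast means the favorite still loses about
# one race in five, so a single upset doesn't show the forecast was "wrong".
print(f"Favorite won {favorite_wins / N:.1%} of the simulated races")
```

In other words, a few upsets are evidence against a forecaster only if the favorites lose far more often than 20% of the 80% calls.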

In the meantime, I wanted to call the attention of any American citizens reading this blog to the wonderful Election FAQ of Peter Norvig, director of research at Google and a person well-known for being right about pretty much everything.  The following passage in particular is worth quoting.

## Is it rational to vote?

Yes. Voting for president is one of the most cost-effective actions any patriotic American can take.

Let me explain what the question means. For your vote to have an effect on the outcome of the election, you would have to live in a decisive state, meaning a state that would give one candidate or the other the required 270th electoral vote. More importantly, your vote would have to break an exact tie in your state (or, more likely, shift the way that the lawyers and judges will sort out how to count and recount the votes). With 100 million voters nationwide, what are the chances of that? If the chance is so small, why bother voting at all?

Historically, most voters either didn’t worry about this problem, or figured they would vote despite the fact that they weren’t likely to change the outcome, or vote because they want to register the degree of support for their candidate (even a vote that is not decisive is a vote that helps establish whether or not the winner has a “mandate”). But then the 2000 Florida election changed all that, with its slim 537 vote (0.009%) margin.

What is the probability that there will be a decisive state with a very close vote total, where a single vote could make a difference? Statistician Andrew Gelman of Columbia University says about one in 10 million.

That’s a small chance, but what is the value of getting to break the tie? We can estimate the total monetary value by noting that President George W. Bush presided over a $3 trillion war and at least a $1 trillion economic melt-down. Senator Sheldon Whitehouse (D-RI) estimated the cost of the Bush presidency at $7.7 trillion. Let’s compromise and call it $6 trillion, and assume that the other candidate would have been revenue neutral, so the net difference of the presidential choice is $6 trillion. The value of not voting is that you save, say, an hour of your time. If you’re an average American wage-earner, that’s about $20. In contrast, the value of voting is the probability that your vote will decide the election (1 in 10 million if you live in a swing state) times the cost difference (potentially $6 trillion). That means the expected value of your vote (in that election) was $600,000. What else have you ever done in your life with an expected value of $600,000 per hour? Not even Warren Buffett makes that much. (One caveat: you need to be certain that your contribution is positive, not negative. If you vote for a candidate who makes things worse, then you have a negative expected value. So do your homework before voting. If you haven’t already done that, then you’ll need to add maybe 100 hours to the cost of voting, and the expected value goes down to $6,000 per hour.)

I’d like to embellish Norvig’s analysis with one further thought experiment.  While I favor a higher figure, for argument’s sake let’s accept Norvig’s estimate that the cost George W. Bush inflicted on the country was something like $6 trillion. Now, imagine that a delegation of concerned citizens from 2012 were able to go back in time to November 5, 2000, round up 538 lazy Gore supporters in Florida who otherwise would have stayed home, and bribe them to go to the polls. Set aside the illegality of the time-travelers’ action: they’re already violating the laws of space, time, and causality, which are well-known to be considerably more reliable than Florida state election law! Set aside all the other interventions that also would’ve swayed the 2000 election outcome, and the 20/20 nature of hindsight, and the insanity of Florida’s recount process. Instead, let’s simply ask: how much should each of those 538 lazy Floridian Gore supporters have been paid, in order for the delegation from the future to have gotten its money’s worth? The answer is a mind-boggling ~$10 billion per voter.  Think about that: just for peeling their backsides off the couch, heading to the local library or school gymnasium, and punching a few chads (all the way through, hopefully), each of those 538 voters would have instantly received the sort of wealth normally associated with Saudi princes or founders of Google or Facebook.  And the country and the world would have benefited from that bargain.
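Both Norvig’s $600,000 figure and the ~$10 billion figure above come down to two lines of arithmetic, which can be sanity-checked directly (a sketch using only the numbers quoted in the post):

```python
# Sanity check of the arithmetic above, using only figures quoted in the post.

P_DECISIVE = 1 / 10_000_000   # Gelman's estimate for a swing-state voter
COST_DIFFERENCE = 6e12        # the assumed $6 trillion gap between candidates

# Norvig's expected value of a single swing-state vote:
expected_value = P_DECISIVE * COST_DIFFERENCE
print(f"Expected value of one vote: ${expected_value:,.0f}")  # → $600,000

# The time-travel thought experiment: split the $6 trillion among the
# 538 extra Gore voters needed to flip the 537-vote Florida margin.
per_voter = COST_DIFFERENCE / 538
print(f"Payment per roused Floridian: ${per_voter:,.0f}")  # ≈ $11.2 billion
```

The second figure comes out slightly above $11 billion, consistent with the “~$10 billion per voter” order of magnitude in the post.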

No, this isn’t really a decisive argument for anything (I’ll leave it to the commenters to point out the many possible objections).  All it is, is an image worth keeping in mind the next time someone knowingly explains to you why voting is a waste of time.

### Silver lining

Monday, October 29th, 2012

Update (10/31): While I continue to engage in surreal arguments in the comments section—Scott, I’m profoundly disappointed that a scientist like you, who surely knows better, would be so sloppy as to assert without any real proof that just because it has tusks and a trunk, and looks and sounds like an elephant, and is the size of an elephant, that it therefore is an elephant, completely ignoring the blah blah blah blah blah—while I do that, there are a few glimmerings that the rest of the world is finally starting to get it.  A new story from The Onion, which I regard as almost the only real newspaper left:

## Nation Suddenly Realizes This Just Going To Be A Thing That Happens From Now On

Update (11/1): OK, and this morning from Nicholas Kristof, who’s long been one of the rare non-Onion practitioners of journalism: Will Climate Get Some Respect Now?

I’m writing from the abstract, hypothetical future that climate-change alarmists talk about—the one where huge tropical storms batter the northeastern US, coastal cities are flooded, hundreds of thousands are evacuated from their homes, etc.  I always imagined that, when this future finally showed up, at least I’d have the satisfaction of seeing the deniers admit they were grievously wrong, and that I and those who think similarly were right.  Which, for an academic, is a satisfaction that has to be balanced carefully against the possible destruction of the world.  I don’t think I had the imagination to foresee that the prophesied future would actually arrive, and that climate change would simultaneously disappear as a political issue—with the forces of know-nothingism bolder than ever, pressing their advantage into questions like whether or not raped women can get pregnant, as the President weakly pleads that he too favors more oil drilling.  I should have known from years of blogging that, if you hope for the consolation of seeing those who are wrong admit to being wrong, you hope for a form of happiness all but unattainable in this world.

Yet, if the transformation of the eastern seaboard into something out of the Jurassic hasn’t brought me that satisfaction, it has brought a different, completely unanticipated benefit.  Trapped in my apartment, with the campus closed and all meetings cancelled, I’ve found, for the first time in months, that I actually have some time to write papers.  (And, well, blog posts.)  Because of this, part of me wishes that the hurricane would continue all week, even a month or two (minus, of course, the power outages, evacuations, and other nasty side effects).  I could learn to like this future.

At this point in the post, I was going to transition cleverly into an almost (but not completely) unrelated question about the nature of causality.  But I now realize that the mention of hurricanes and (especially) climate change will overshadow anything I have to say about more abstract matters.  So I’ll save the causality stuff for tomorrow or Wednesday.  Hopefully the hurricane will still be here, and I’ll have time to write.

### The right to bear ICBMs

Tuesday, August 7th, 2012

(Note for non-US readers: This will be another one of my America-centric posts.  But don’t worry, it’s probably one you’ll agree with.)

There’s one argument in favor of gun control that’s always seemed to me to trump all others.

In your opinion, should private citizens be allowed to own thermonuclear warheads together with state-of-the-art delivery systems?  Does the Second Amendment give them the right to purchase ICBMs on the open market, maybe after a brief cooling-off period?  No?  Why not?

OK, whatever grounds you just gave, I’d give precisely the same grounds for saying that private citizens shouldn’t be allowed to own assault weapons, and that the Second Amendment shouldn’t be construed as giving them that right.  (Personally, I’d ban all guns except for the bare minimum used for sport-shooting, and even that I’d regulate pretty tightly.)

Now, it might be replied that the above argument can be turned on its head: “Should private citizens be allowed to own pocket knives?  Yes, they should?  OK then, whatever grounds you gave for that, I’d give precisely the same grounds for saying that they should be allowed to own assault weapons.”

But crucially, I claim that’s a losing argument for the gun-rights crowd.  For as soon as we’re anywhere on the slippery slope—that is, as soon as it’s conceded that the question hinges, not on absolute rights, but on actual tradeoffs in actual empirical reality—then the facts make it blindingly obvious that letting possibly-deranged private citizens buy assault weapons is only marginally less crazy than letting them buy ICBMs.

[Related Onion story]

### Ten reasons why the Olympics suck

Sunday, August 5th, 2012

1. The 1936 Berlin Olympics, in which American participation was ensured by the racist, sexist, antisemitic, Nazi-sympathizing future decades-long IOC president Avery Brundage (also, the IOC’s subsequent failure to accept responsibility for its role in legitimizing Hitler).

2. The 1972 Munich Olympics (and the IOC’s subsequent refusal even to memorialize the victims, apparently for fear of antagonizing those Olympic countries that still celebrate the murder of the 11 Israeli athletes).

3. Even after you leave out 1936 and 1972, the repeated granting of unearned legitimacy to the world’s murderous dictatorships—as well as “glory” to those countries most able to coerce their children into lives of athletic near-slavery (or, in the case of more “civilized” countries, outspend their rivals).

4. The sanctimonious fiction that, after all this, we need the Olympics because of their contributions to world peace and brotherhood (a claim about which we now arguably have a century of empirical data).

5. The double-standard that holds “winning a medal is everything” to be a perfectly-reasonable life philosophy for a gymnast, yet would denounce the same attitude if expressed by a scientist or mathematician.

6. The increasingly-convoluted nature of what it is that the athletes are supposed to be optimizing (“run the fastest, but having taken at most these performance-enhancing substances and not those, unless of course you’re a woman with unusually-high testosterone, in which case you must artificially decrease your testosterone before competing in order to even things out”).

7. The IOC’s notorious corruption, and the fact that hosting the Olympics is nevertheless considered such a wonderful honor and goal for any aspiring city.

8. The IOC’s farcical attempts to control others’ use of five interlocked rings and of the word “Olympics.”

9. The fact that swimmers have to use a particular stroke, rather than whichever stroke will propel them through the water the fastest (alright, while the “freestyle” rules still seem weird to me, I’m taking this one out given the amount of flak it’s gotten).

10. The fact that someone like me, who knows all the above, and who has less interest in sports than almost anyone on earth, is still able to watch an Olympic event and care about its outcome.

### Enough with Bell’s Theorem. New topic: Psychopathic killer robots!

Friday, May 25th, 2012

A few days ago, a writer named John Rico emailed me the following question, which he’s kindly given me permission to share.

If a computer, or robot, was able to achieve true Artificial Intelligence, but it did not have a parallel programming or capacity for empathy, would that then necessarily make the computer psychopathic?  And if so, would it then follow the rule devised by forensic psychologists that it would necessarily then become predatory?  This then moves us into territory covered by science-fiction films like “The Terminator.”  Would this psychopathic computer decide to kill us?  (Or would that merely be a rational logical decision that wouldn’t require psychopathy?)

See, now this is precisely why I became a CS professor: so that if anyone asked, I could give not merely my opinion, but my professional, expert opinion, on the question of whether psychopathic Terminators will kill us all.

My response (slightly edited) is below.

Dear John,

I fear that your question presupposes way too much anthropomorphizing of an AI machine—that is, imagining that it would even be understandable in terms of human categories like “empathetic” versus “psychopathic.”  Sure, an AI might be understandable in those sorts of terms, but only if it had been programmed to act like a human.  In that case, though, I personally find it no easier or harder to imagine an “empathetic” humanoid robot than a “psychopathic” robot!  (If you want a rich imagining of “empathetic robots” in science fiction, of course you need look no further than Isaac Asimov.)

On the other hand, I personally also think it’s possible, even likely, that an AI would pursue its goals (whatever they happened to be) in a way so different from what humans are used to that the AI couldn’t be usefully compared to any particular type of human, even a human psychopath.  To drive home this point, the AI visionary Eliezer Yudkowsky likes to use the example of the “paperclip maximizer.”  This is an AI whose programming would cause it to use its unimaginably-vast intelligence in the service of one goal only: namely, converting as much matter as it possibly can into paperclips!

Now, if such an AI were created, it would indeed likely spell doom for humanity, since the AI would think nothing of destroying the entire Earth to get more iron for paperclips.  But terrible though it was, would you really want to describe such an entity as a “psychopath,” any more than you’d describe (say) a nuclear weapon as a “psychopath”?  The word “psychopath” connotes some sort of deviation from the human norm, but human norms were never applicable to the paperclip maximizer in the first place … all that was ever relevant was the paperclip norm!

Motivated by these sorts of observations, Yudkowsky has thought and written a great deal about the question of how to create a “friendly AI,” by which he means one that would use its vast intelligence to improve human welfare, instead of maximizing some other arbitrary objective, like the total number of paperclips in existence, that might be at odds with our welfare.  While I don’t always agree with him—for example, I don’t think AI has a single “key,” and I certainly don’t think such a key will be discovered anytime soon—I’m sure you’d find his writings at yudkowsky.net, lesswrong.com, and overcomingbias.com to be of interest to you.

I should mention, in passing, that “parallel programming” has nothing at all to do with your other (fun) questions.  You could perfectly well have a murderous robot with parallel programming, or a kind, loving robot with serial programming only.

Hope that helps,
Scott

### U. of Florida CS department: let it be destroyed by rising sea levels 100 years from now, not reckless administrators today

Monday, April 23rd, 2012

Update (4/27): A famous joke concerns an airplane delivered to the US Defense Department in the 1950s, which included a punch-card computer on board.  By regulation, the contractor had to provide a list of all the components of the plane—engine, wings, fuselage, etc.—along with the weight of each component.  One item in the list read, “Computer software: 0.0 kg.”

“That must be a mistake—it can’t weigh 0 kg!” exclaimed the government inspector.  “Here, show me where the software is.”  So the contractor pointed to a stack of punched cards.  “OK, fine,” said the government inspector.  “So just weigh those cards, and that’s the weight of the software.”

“No, sir, you don’t understand,” replied the contractor.  “The software is the holes.”

If the Abernathy saga proves anything, it’s the continuing relevance of this joke even in 2012.  Abernathy is the government inspector who hears that software weighs nothing, and concludes that it does nothing—or, at least, that whatever division is responsible for punching the holes in the cards can simply be folded into the division that cuts the card paper into rectangles.

As many of you have heard by now, Cammy Abernathy, Dean of Engineering at the University of Florida, has targeted her school’s Computer and Information Science and Engineering (CISE) department for disembowelment: moving most faculty to other departments, and shunting any who remain into non-research positions.  Though CISE is by all accounts one of UF’s strongest engineering departments, no other department faces similar cuts, and the move comes just as UF is increasing its sports budget by more than would be saved by killing computer science. (For more, see Lance’s blog, or letters from Eric Grimson and Zvi Galil. Also, click here to add your name to the 7000+ already petitioning UF to reconsider.)

On its face, this decision seems so boneheadedly perverse that it immediately raises the suspicion that the real reasons for it, whatever they are, have not been publicly stated. The closest I could find to a comprehensible rationale came from this comment, which speculates that the UF administration might be sabotaging its CS department as a threat to the Florida State legislature: “see, keep slashing our budget, and this is the sort of thing we’ll be forced to do!”  But I don’t find that theory very plausible; UF must realize that the Republican-controlled legislature’s likely reaction would be “go ahead, knock yourselves out!”

On a personal note, my parents live part-time in beautiful Sarasota, FL, home of the Mote Marine Laboratory, which does amazing work rehabilitating dolphins, manatees, and sea turtles.  Having visited Sarasota just a few weeks ago, I can testify that, despite frequent hurricanes, a proven inability to hold democratic elections, and its reputation as a giant retirement compound, Florida has definite potential as a state.

Academic computer science as a whole will be fine.  As for Florida, may the state prove greater than its Katherine Harrises, Rick Scotts, and Cammy Abernathys.

Update: See this document for more of the backstory on Abernathy’s underhanded tactics in dismantling the UF CISE department.  Based on the evidence presented there, she really does deserve the scorn now being heaped on her by much of the academic world.

Another Update: UF’s president issued a rather mealy-mouthed statement saying that they’re going to set aside their original evisceration proposal and find a compromise, though who knows what the compromise will look like.

In other news, Greg Kuperberg posted a comment that not only says everything I was trying to say, but more eloquently, and also explains why I and other CS folks care so much about this issue: because what’s really at stake is the concept of Turing-universality itself.  Let me repost Greg’s comment in its entirety.

It looks like Dean Abernathy hasn’t explained herself all that well, which is not surprising if what she is doing makes no sense. Reading the tea leaves, in particular the back-story document that Scott posted, it looks like she had it in for the CS department from the beginning of her tenure as Dean at Florida. In her interview with Stanford when she had just been appointed as dean, she already said then that “we” wanted to bring EE and CS closer together, even though at the time, there had been no discussion and there was no “we”. Then during discussions with the CS department, she refused to take no for an answer, even though she sometimes pretended to, and as time went on the actual plan looked more and more punitive. She appointed an outside chair to the department, and then in the final plan she terminated the graduate program, moved half of the department to EE, and left the other half to do teaching only. The CS department was apparently very concerned about its NRC ranking, but this ranking only came out when Abernathy’s wheels were already in motion. In any case everyone knows that the NRC rankings were notoriously shabby across all disciplines and the US News rankings, although hardly deep, are much less ridiculous.

So what gives? Apparently from Abernathy’s Stanford interview, and from her actions, she simply takes computer science to be a special case of electrical engineering. Ultimately, it’s a rejection of the fundamental concept of Turing universality. In this world view, there is no such thing as an abstract computer, or at best who really cares if there is one; all that really exists is electronic devices.

Scott points out that those departments that are combined EECS are really combined in name only. This is not just empirical happenstance; it comes from Turing universality and the abstract concept of a computer. Yes, in practice modern computers are electronic. However, if someone does research in compilers, much less CS theory, then really nothing at all is said about electricity. To most people in computer science, it’s completely peripheral that computers are electronic. Nor is this just a matter of theoretical vs applied computer science. CS theory may be theoretical, but compiler research isn’t, much less other topics such as user interfaces or digital libraries.

Abernathy herself works in materials engineering and has a PhD from Stanford. I’m left wondering at what point she failed to understand, or began to misunderstand or dismiss, the abstract concept of a computer. If she were dean of letters of sciences, then I could imagine an attempt to dump half of the literature department into a department of paper and printing technology, and leave the other half only to teach grammar. It would be exactly the same mistake.

### Tell President Obama to support the Federal Research Public Access Act

Tuesday, February 28th, 2012

If you’re tired of blog posts about open science, sorry dude—but it feels great to be part of a group of blogging nerds who, for once, are actually having a nonzero (and positive, I think!) impact on the political process.  Yesterday, Elsevier, which had been the biggest supporter of the noxious Research Works Act, announced, under pressure from the “Cost of Knowledge” movement, that it was dropping its support for RWA.  Only hours later, Elsevier’s paid cheerleaders in Congress, Darrell Issa (R-CA) and Carolyn Maloney (D-NY), announced that they were shelving the RWA for now.  See this hilarious post by physicist John Baez, which translates Issa and Maloney’s statement on why they’re letting the RWA die into ordinary English sentence-by-sentence.

But it gets better: Representative Mike Doyle (D-PA) has introduced a sort of anti-RWA, the Federal Research Public Access Act (or easily-pronounced FRPAA), which would require federal agencies with budgets of over $100 million to make the research they sponsor freely available within 6 months of its publication in a peer-reviewed journal (thereby expanding the NIH’s successful open-access policy).  If you’re a US citizen, and you care about the results of taxpayer-funded medical and other research being accessible to the public, then please sign this petition telling President Obama you support the FRPAA.  Tell your coworker, husband, wife, grandmother, etc. to sign it too.  Apparently the President will personally review it if it gets to 25,000 signatures by March 9.

And if you’re not a US citizen: that’s cool too!  Support open-access initiatives in your country.  (Or, if you live someplace like Syria, support the prerequisite “not-getting-shot” initiatives.)  Just don’t have a cow about my blogging about American issues from time to time, like this easily-offended Aussie did over on Cosmic Variance.

### The battle against Elsevier gains momentum

Wednesday, February 8th, 2012

Check out this statement on “The Cost of Knowledge” released today, which (besides your humble blogger) has been signed by Ingrid Daubechies (President of the International Mathematical Union), Timothy Gowers, Terence Tao, László Lovász, and 29 others.  The statement carefully explains the rationale for the current Elsevier boycott, and answers common questions like “why single out Elsevier?” and “what comes next?”

Also check out Timothy Gowers’ blog post announcing the statement.  The post includes a hilarious report by investment firm Exane Paribas, explaining that the current boycott has caused Reed Elsevier’s stock price to fall, but presenting that as a great investment opportunity, since they fully expect the price to rebound once this boycott fails like all the previous ones.  I ask you: does that not make you want to boycott Elsevier, for no other reason than to see the people who follow Exane Paribas’ cynical advice lose their money?

In related news, the boycott petition now has 4600+ signatures and counting.  If you’ve already signed, great!  If you haven’t, why not?

Update (Feb. 9): There’s now a great editorial by Gareth Cook in the Boston Globe supporting the Elsevier boycott (and analogizing it to both the Tahrir Square uprising and the Boston Tea Party!).