As those of you in American academia have probably heard by now, this week the National Research Council finally released its 2005 rankings of American PhD programs, only five years behind schedule. This time, the rankings have been made 80% more scientific by the addition of error bars. Among the startling findings:
- In electrical and computer engineering, UCLA and Purdue are ahead of Carnegie Mellon.
- In computer science, UNC Chapel Hill is ahead of the University of Washington.
- In statistics, Iowa State is ahead of Berkeley.
However, before you base any major decisions on these findings, you should know that a few … irregularities have emerged in the data used to generate them.
- According to the NRC data set, 0% of graduates of the University of Washington’s Computer Science and Engineering Department had “academic plans” for 2001-2005. (In reality, 40% of their graduates took faculty positions during that period.) NRC also reports that UW CSE has 91 faculty (the real number is about 40). Most of the illusory “faculty” turned out to be industrial colleagues who don’t supervise students, and whose inclusion drastically and artificially lowered the department’s average number of students supervised per faculty member. See here and here for more from UW itself.
- According to the NRC, 0% of MIT electrical engineering faculty engage in interdisciplinary work. NRC also reports that 24.62% of MIT computer science PhDs found academic employment; the actual number is twice that (49%).
- The more foreign PhD students a department had, the higher it scored. This had the perverse effect of punishing the top departments for succeeding at recruiting domestic students, who are the ones in much shorter supply these days.
- The complicated regression analysis used to generate the scoring formula led to the percentage of female faculty in a given department actually counting against that department’s reputation score (!).
Ever since the NRC data were released from the parallel universe in which they were gathered, bloggers have been having a field day with them—see for example Dave Bacon and Peter Woit, and especially Sariel Har-Peled’s Computer Science Deranker (which ranks CS departments by a combined formula, consisting of 0% the NRC scores and 100% a random permutation of departments).
Yet despite the fact that many MIT departments (for some reason not CS) took a drubbing, I actually heard some of my colleagues defend the rankings, on the following grounds:
- A committee of good people put a lot of hard work into generating them.
- The NRC is a prestigious body that can’t be dismissed out of hand.
- Now that the rankings are out, everyone should just be quiet and deal with them.
But while the Forces of Doofosity usually win, my guess is that they’re going to lose this round. Deans and department heads—and even the Computing Research Association—have been livid enough about the NRC rankings that they’ve denounced them with unusual candor, and the rankings have already been thoroughly eviscerated elsewhere on the web.
Look: if I really needed to know what (say) the best-regarded PhD programs in computer science were, I could post my question to a site like MathOverflow—and in the half hour before the question was closed for being off-topic, I’d get vastly more reliable answers than the ones the NRC took fifteen years and more than four million dollars to generate.
So the interesting questions here have nothing to do with the “rankings” themselves, and everything to do with the process and organization that produced them. How does Charlotte Kuh, study director of the NRC’s Assessment of Research Doctorate Programs, defend the study against what now looks like overwhelming evidence of Three-Stooges-level incompetence? How will the NRC recover from this massive embarrassment, and in what form should it continue to exist?
The NRC, as I had to look up, is an outfit jointly overseen by the National Academy of Sciences (NAS), the National Academy of Engineering (NAE), and the Institute of Medicine (IOM). Which reminded me of the celebrated story about Richard Feynman resigning his membership in the NAS. When asked why, Feynman explained that, when he was in high school, there was an “honor club” whose only significant activity was debating who was worthy of joining the honor club. After years in the NAS, he decided it was no different.
Now that I write that, though, an alternative explanation for the hilarious problems with the NRC study occurs to me, inspired by this striking sentence from an Inside Higher Ed article:
When one of the reporters on a telephone briefing about the rankings asked Ostriker [the chairman of the NRC project committee] and his fellow panelists if any of them would “defend the rankings,” none did so.
So, were these joke rankings an elaborate ruse by the NRC, meant to discredit the whole idea of a strict linear order on departments and universities? If so, then I applaud the NRC for its deviousness and ingenuity in performing a much-needed public service.