Sen. Tom Coburn, the National Science Foundation, and Antarctican Jello Wrestling

As some of you probably heard, last week Sen. Tom Coburn (R-Oklahoma) managed to get an amendment passed prohibiting the US National Science Foundation from funding any research in political science, unless the research can be “certified” as “promoting national security or the economic interests of the United States.”  This sort of political interference with the peer-review process, of course, sets a chilling precedent for all academic research, regardless of discipline.  (What’s next, an amendment banning computer science research, unless it has applications to scheduling baseball games or slicing apple pies?)  But on researching further, I discovered that Sen. Coburn has long had it in for the NSF, and even has a whole webpage listing his grievances against the agency.  Most of it is the usual “can you believe they wasted money to study something so silly or obvious?,” but by far my favorite tidbit is the following:

Inappropriate staff behavior including porn surfing and Jello wrestling and skinny-dipping at NSF-operated facilities in Antarctica.

It occurred to me that the NSF really has no need to explain this one, since a complete explanation is contained in a single word of the charge itself: Antarctica.  Personally, I’d support launching an investigation of NSF’s Antarctica facilities, were it discovered that the people stuck in them weren’t porn surfing and Jello wrestling and skinny-dipping.

55 Responses to “Sen. Tom Coburn, the National Science Foundation, and Antarctican Jello Wrestling”

  1. wolfgang Says:

    >> Jello wrestling and skinny-dipping
    outdoors?

  2. X Says:

    I look forward to receiving the list of Amerikanische Physik that we’ll be allowed to study next year.

  3. Scott Says:

    wolfgang: I wondered the same thing! But given the Antarctican traditions I know about, like the “300 Club” (one joins it by sitting in a sauna heated to 200 F, then running nude around the ceremonial pole marking the South Pole in -100 F weather, for a 300 F temperature difference), it wouldn’t shock me if it were outdoors.

  4. John Says:

    Peer review does not determine the funding balance between computer science and political science. That is determined by NSF administrators. Meddling in individual grants is stupid, but I don’t see why politicians should not get involved in large-scale funding decisions.

  5. Scott Says:

    John #4: First, the funding balance between disciplines is supposed to be determined at least partly by “proposal pressure.” Second, I believe this does count as politically-motivated meddling with individual grants.

    But, yes, I do accept the point that there can exist valid reasons for politicians to get involved in large-scale science funding decisions. I would add that, in my opinion, Sen. Coburn has no such valid reason. :-)

  6. Evan Says:

    Perhaps the senator should go on a fact-finding mission and spend a winter at the south pole research station.

  7. Carl Says:

    If you’re interested in Antarctica antics, read bigdeadplace.com. As you’d expect, overwintering isn’t good for sanity.

  8. Rahul Says:

    Comp. Sci. and Political Sci. are not close analogs. Poli. Sci. work can often be dangerously close to ideology. Perhaps it is partisan work that he intends to throttle?

    To be fair, did anyone look at the list of NSF grants he vilifies on his website? Some of them (if true) indeed make me cringe. e.g. “• $581,000 on whether online dating site users are racist.” “$80,000 study on why the same teams always dominate March Madness;”

    Besides, Scott, he isn’t interfering with the “peer-review process” as you write. Only the funding process.

    Which is a very different thing. He’s not proposing an embargo on irrelevant, redundant Poli. Sci. studies (nor stupid Farmville studies). Go ahead, peer-review and publish too. Just not at NSF expense.

  9. Rahul Says:

    Notwithstanding my comment #8 above, Sen. Tom Coburn (R-Oklahoma) is possibly a complete idiot.

    All I want to say is:
    (1) NSF is not above criticism
    (2) Scientists shouldn’t rail at all external oversight
    (3) Fund allocation is not an activity as “pure” and academic as peer-review.

  10. Attila Szasz Says:

    meanwhile in Hungary: one of the guys from our governing party has tried to defend their dubious constitutional change using Gödel’s theorems! the thing not only includes basically any antidemocratic stuff you can think of, but actually instructs the constitutional court to decide the new law’s legality strictly in its _own context_. so Mr. Kósa mentioned this self-referencing issue, took a very “serious and professional” voice and started on gödel :D (erroneously of course, and missing the point about which formal systems the theorem applies to) then concluded that this deep result has never occurred to anyone complaining on a daily hatred-politics basis, and implicitly that everyone opposing them (which includes most of the EU authorities by now) is dumb.

  11. Scott Says:

    Rahul: I actually don’t doubt that plenty of political science research is ideological and of low quality. The heart of the matter is that I see zero reason to believe that Congressional Republicans like Coburn are only interested in killing low-quality research. Rather, I think they’d like to slash almost all research that doesn’t have immediate applications in defense, medicine, agribusiness, or maybe a few other areas. The more so, of course, to whatever extent the research is uncomfortable for them religiously or ideologically. In other words: it seems to me that normalizing the sort of thing Coburn did would be like letting a frothy-mouthed, ax-wielding maniac loose in your neighborhood, on the theory that he’ll only target the criminals and not the law-abiding citizens.

    Two other points:

    (1) While it’s well-known that the NSF awards some stupid and wasteful grants (how could any granting agency avoid that??), it’s also well-known that the “summaries” produced by politicians trying to ridicule scientific research tend to be staggeringly dishonest. A good example is Sean Hannity’s idiotic attack on Rob Wood’s amazing RoboBees project—ironically, a project that could have enormous military applications. Indeed, reading over Hannity’s hit-list, the following rule of thumb occurs to me: if you can’t misdescribe a research project in a way that makes it sound like an absurd waste of money, then it probably isn’t worth doing!

    (2) What I meant by “peer review” was the science funding system established by Vannevar Bush in the aftermath of World War II—a system where granting decisions are left, as much as possible, to experts in the relevant fields. That system has worked extremely well for 60+ years—and while I’m happy if people experiment with alternatives to it (like Robin Hanson’s prediction-market system), in the meantime I see absolutely no reason to dismantle something that works.

  12. Curious Wavefunction Says:

    The problem is that “political science” can be very loosely construed and the restrictions could spin out of control down a slippery slope. How about research investigating the neuroscientific underpinnings of voter preferences? How about work investigating how economic circumstances affect political beliefs and vice versa? I can see a ban on funding for “political science” potentially encroaching on many other disciplines including economics, psychology and neuroscience.

  13. Michał Kotowski Says:

    @Attila: can you provide a link? This sounds hilarious ;).

  14. Rahul Says:

    Scott:

    I agree with you about the senator’s ulterior motives.

    The allocation-by-experts model is fine for grants within a field. But how can it decide which subject gets how much funding? Those priorities will have to come from somewhere external, in some sense.

    Finally, I totally disagree with your “proposal pressure” model of grant disbursement. That’s a bug not a feature. All it encourages is group-think and a wave following the latest fad. Fundamentally it provides positive feedback to every wave of hype and bubble. There’s a reason we have a panel of experts doing allocation and not a lottery.

  15. John Sidles Says:

    Scott asserts (#11)  “What I meant by “peer review” was the science funding system established by Vannevar Bush in the aftermath of World War II — a system where granting decisions are left, as much as possible, to ~~experts in the relevant fields~~ broad coalitions of stakeholders. That system ~~has worked extremely well~~ has evolved to be more specialized and less efficacious for 60+ years.”

    Scott, the edits to your assertion are prompted by James R. Killian’s illuminating memoir Sputnik, Scientists, and Eisenhower: a Memoir of the First Special Assistant to the President for Science and Technology (MIT Press, 1977).

    Shtetl Optimized readers are particularly encouraged to contrast Killian’s account of von Neumann’s Strategic Missile Evaluation Committee (SMEC, 1953) with the impact of the QIST Roadmap (2002–4). The 1953 roadmap accurately foresaw advances that were transformative for every citizen of every nation of the globe … the 2004 roadmap, not so much, eh?

    If Senator Coburn’s critique is not as forceful as it might have been, surely this does not relieve the STEM community of responsibilities in this regard.

    Summary  The superficial Panglossian characterization of “peer review” in (#11) departs substantially from historical, contemporary, and foreseeable STEM realities.

  16. Attila Szasz Says:

    @Michał
    http://www.youtube.com/watch?v=akuBkYY2rHA

    (2:11)

    “i dont remember anything like the constitutional court was meant to inspect whether the constitution itself was non-constitutional: simply doesnt make sense, because youknow, a text can be inconsistent, but”

    (2:26 he pauses, changes the style)

    “its a very far-reaching theorem of IT and maths theory that er..so you can apply the gödel theorem to the constitution..i’ve never heard anything like that but ..i just want to say that obviously the knowledge of humankind is rich enough to be familiar with the problem the gödel theorem has already phrased and put into mathematical terms; it basically states the impossibility of having a system in which you couldn’t find consistent statements with the rest nevertheless pointing out of the system where they can’t be proven either true or false. but youknow, its a very complicated theory, so it means there is a theoretical possibility that the constitution is not constitutional, but anyway thats totally implausible.. such a tremendous philosophical question it didn’t bother anyone [here]; these complaints were phrased along primitive -i can honestly say- primitive daily political interests.”
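    For reference (this restatement is mine, not from the thread), what the first incompleteness theorem actually says is much narrower than the MP’s version, and the restriction is exactly the applicability point Attila mentions:

    ```latex
    % Gödel's first incompleteness theorem, standard statement:
    \textbf{Theorem (Gödel, 1931).}
    Let $T$ be a consistent, effectively axiomatizable formal theory that
    interprets elementary arithmetic. Then there exists a sentence $G_T$ in
    the language of $T$ such that
    \[
      T \nvdash G_T
      \qquad\text{and}\qquad
      T \nvdash \neg G_T .
    \]
    ```

    The hypotheses matter: the theorem applies only to formal systems strong enough to encode arithmetic, so it says nothing at all about whether a legal constitution can pass judgment on itself.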

  17. JimV Says:

    In Kim Stanley Robinson’s (NSF-funded*) novel “Antarctica”, he describes the “200 Club” as consisting of those who have gone from a 150F sauna to -50F outside McMurdo station. That seems more believable to me.

    I agree about Senator Coburn, of course.

    * The novel was heavily influenced by Robinson’s 1995 stay in Antarctica as part of the National Science Foundation’s Antarctic Artists and Writers Program — wikipedia

  18. Rahul Says:

    @John Sidles #15:

    Nice summary. Coburn might be the wrong person to drive the change, but to pretend that nothing’s broken with the NSF system is wrong. There are serious troubles.

    Whether the NSF system works “extremely well” is debatable.

    In these times of budget cuts I’d add: “Only when the tide goes out do you discover who’s been swimming naked……”

  19. Anonymous Says:

    Scott, do you realize how silly your comments above sound once they are combined together:

    You don’t approve of the NSF eliminating political science research, even though you don’t doubt that much of this research isn’t worth funding (“ideological and of low quality” in your words), because you fear that the author of the legislation might later propose legislation that would cut research you think is useful.

    If a pro-hard-science democrat you support had introduced the same legislation would you still find it objectionable?

  20. Jay Says:

    Oh Scott thanks for the good laugh. :-)

  21. Scott Says:

    Anonymous #19:

    1. If you want the research that changes the world (while helping your country economically and militarily), then you need to have a system for funding research. And if you have a system for funding research, then you’re also going to fund some crap, because no funding system is perfect. Indeed, the more breakthroughs you want, the more crap (and just solid but unexciting work) you also need to fund. That’s a simple, obvious truth, much as some people seem to dislike hearing it. The percentage of crap might be somewhat higher in the social sciences than in the hard sciences, but the basic principle is exactly the same.

    2. I don’t see anything particularly absurd about wanting not to empower further the forces of regression—Coburn was a terrible Senator, and had done plenty of damage (climate denial, etc.), long before this particular amendment. “First they came for the political scientists, and I didn’t speak up because I wasn’t a political scientist…”

    3. The circumstances under which a “pro-hard-science democrat” would introduce this sort of legislation would be so bizarre that I’m not sure how to answer your hypothetical—maybe I’d go to battle against that dastardly democrat atop a winged pig?

  22. Scott Says:

    John Sidles #15, Rahul #18: What’s your actual evidence that the peer-review system isn’t working well? What are your ideas for how to improve it?

    From my personal experience on NSF panels, I’m acutely aware of at least one problem with the current system: namely, the available funds are often so meager in comparison to the pool of talented researchers that the majority of great research goes unfunded, and scientists are forced to waste huge amounts of their time applying and reapplying, fighting over the few grants available. But I’m guessing that’s not the problem you had in mind. :-)

  23. Rahul Says:

    Scott #22:

    Oftentimes in my research area you had the same small cohort of academics wearing various hats: Writing proposals, serving on NSF committees granting funds, later on panels reviewing projects. Even on journal editorial boards, conference chairs etc.

    Two things get rife:

    (1) The I scratch your back and then you scratch mine syndrome
    (2) A severe conflict of interest in ensuring survival of my research area even at the cost of other more deserving areas not getting funded

    How brave and noble would you have to be to admit at an NSF funding round that “None of the QC proposals seem good. Let’s not hand out these funds”?

    To clarify: I am not advocating breaking apart the entire existing system. But changes, yes.

  24. Rahul Says:

    Scott:

    I’m trying to think whether your (a)”the majority of great research goes unfunded” can be simultaneously consistent with my (b)”The majority of research that gets funded is not great”

    To me that’s great evidence of a broken system. :)

  25. Scott Says:

    Rahul #24: Well, there’s zero logical problem with both statements being simultaneously true. Some fields probably lean more toward (a), and others more toward (b), but there’s a “Heisenberg uncertainty relation” that prevents you from solving both problems simultaneously. If you want to fund a lot of great research, then you also have to fund a lot of research that isn’t so great, simply because no one can predict which research will be great above some given accuracy. On the other hand, when you factor in that the great research has often changed the basic conditions of human civilization, having also paid for some not-so-great research doesn’t seem like an impossibly steep price!
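    The tradeoff Scott is describing can be seen in a toy Monte Carlo (my own construction, with made-up numbers, not anything from NSF data): if reviewers observe only a noisy proxy of true quality, then funding the top slice by review score unavoidably funds some weak projects and leaves some great ones unfunded.

    ```python
    # Toy model: reviewers see only a noisy proxy of quality, so funding the
    # top slice by score both funds crap and misses greatness. All numbers
    # here are illustrative assumptions.
    import random

    random.seed(0)
    N = 100_000
    true_quality = [random.gauss(0, 1) for _ in range(N)]   # unobservable
    score = [q + random.gauss(0, 1) for q in true_quality]  # what panels see

    top_by_score = sorted(range(N), key=lambda i: score[i], reverse=True)
    top_by_quality = sorted(range(N), key=lambda i: true_quality[i], reverse=True)

    funded = set(top_by_score[: N // 5])    # fund the top 20% by review score
    great = set(top_by_quality[: N // 10])  # "great" = top 10% in true quality

    precision = len(funded & great) / len(funded)  # share of funded that is great
    recall = len(funded & great) / len(great)      # share of great that is funded
    print(f"great among funded: {precision:.0%}; great that got funded: {recall:.0%}")
    ```

    With reviewer noise comparable to the quality signal itself, most funded projects are not great and a sizable chunk of the great projects go unfunded; no choice of cutoff fixes both at once, only better prediction would.
    
    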

  26. Rahul Says:

    Scott #25

    I think we are far from this Heisenberg-like limit as far as research funding efficiency is concerned.

    It’d be like the Italian rail service using Heisenberg to make excuses for tardiness. :)

  27. John Sidles Says:

    Scott asks (#22) “What’s [the] evidence that the peer-review system isn’t working well?”

    Evidence that Senators (or Senatorial aides) can readily appreciate is readily found by perusing the Jefferson Library’s bound volumes of Time magazine (Monday, April 29, 1957), which is a theme issue on “Electronics: the New Age”, and perusing also Scientific American (any 1957 issue) with special attention to the job ads of the latter.

    The contrast with today’s Time magazine (extinct) and today’s Scientific American job ads (also extinct) points to a common-sense conclusion: public-sector STEM review cannot work well when private-sector STEM careers are economically non-viable.

    The remaining analysis is left as an exercise for Shtetl Optimized readers! :)

  28. Scott Says:

    Rahul #26: If we’re serious about evaluating the NSF’s efficiency, then the right comparison is not to some hypothetical ideal that only exists in someone’s head. Rather, the right comparison is to other science funding systems that actually exist or have existed: NIH, DoE, DARPA, granting agencies of other countries, maybe systems from earlier historical epochs … so, who has actually done a better job than NSF, and what lessons should NSF take from them?

  29. Lou Scheffer Says:

    I don’t know about NSF, but people have compared the NIH model (grants specify what is to be done) with the Howard Hughes Medical Institute (HHMI) model (support good people and check back in 5 years). There is some evidence that the you get more breakthroughs from the HHMI model. Academic study here: http://pazoulay.scripts.mit.edu/docs/hhmi.pdf and more popular account here: http://www.slate.com/articles/business/the_dismal_science/2010/01/how_to_make_america_more_innovative.html

    On the other hand you could argue these are both peer review, just different kinds. Disclaimer: I work for HHMI.

  30. Rahul Says:

    Scott #28:

    Even if, for argument’s sake, NSF were the best funding agency out there, that’s no reason not to want to fix its flaws.

    One (simplistic?) idea I have is that NSF funding / review committees should be more diverse. Bring in more industrial guys, academics from other related fields. More people who don’t have a direct finger in the pie. More people who could see the big picture.

  31. Anonymous Says:

    Scott #21:

    “And if you have a system for funding research, then you’re also going to fund some crap, because no funding system is perfect.”

    I agree that it is impossible to avoid funding crap. I have no issue with funding projects that fail or, in retrospect, turn out to be crap. What we shouldn’t do is look at projects that are clearly identifiable as crap before they are started and throw up our hands and say ‘oh well, we knew some of these projects would be crap, so what’s the point in not funding these.’

  32. Scott Says:

    Good, now we have two actual ideas on the table for how to improve the NSF!

    I love the idea of Lou Scheffer #29 to focus more on good people—were I put in charge of NSF and given a free hand, I would implement that idea tomorrow. In my experience, the best NSF panelists often try to approximate that system in practice: “Who cares what exactly the PI is proposing, or whether she followed the right formatting rules in proposing it! Just look at her goddamn track record!!”

    Given my experience, I’m less enamored of Rahul #30’s idea. Giving straight theoretical computer science proposals to reviewers who aren’t theoretical computer scientists is indeed done sometimes, and turns out to be an excellent way to get doofus-ignorant reviews (both ignorantly negative and ignorantly positive). I’m all for interdisciplinarity—for example, a quantum computing proposal should often be evaluated by both computer scientists and physicists, and a bioinformatics proposal by both computer scientists and biologists. And I also agree that someone from a different field can often provide a “big-picture perspective” that supplements the experts’ perspective: for example, if the theoretical crypto experts are vacillating on a proposal, it can be incredibly useful for a systems security person to come in and say “hey, I find this exciting!” But no proposal should ever not be reviewed by the people who are most qualified to evaluate it technically.

  33. John Sidles Says:

    Scott asks ( #28): who has actually done a better job than NSF, and what lessons should NSF take from them?

    Surely there’s been no shortage of concrete suggestions, Scott. E.g., we have Craig Venter’s remedy (excerpted)

    We’re not short on science funding in this country. We have incredible amounts of science funding. We just don’t spend it very well. Peer-review funding is risk-averse. …

    The academics might not like it, but peer review is like the prisoners running the prison. They’re not going to vote for change. Universities like this system because it helps support the universities. We have to change it so that 25%, 30%, 40% of the money is set aside for true risk research with independent parties to do that. That’s going to disrupt a lot of things.

    I argue that the American public should be outraged that there’s not 10 times to 100 times more breakthroughs in medicine every year over what we’re getting, particularly for the money that’s being spent.

    Hmmm  perhaps the American public should be comparably outraged that (in Venter’s phrase) “there’s not 10 times to 100 times more breakthroughs in ~~medical research~~ quantum research every year over what we’re getting”?

    Venter-style STEM reforms are of course inherently conservative, in that Senator Coburn and his Congressional colleagues can implement these reforms via immediate purely administrative measures, e.g., by boosting the fraction of research funding devoted to SBIR/STTR programs from the present 5% level to a level of (say) 15%.

    Problem solved! Uhhhh … or at least, problem transformed. :)

    Radical research reforms also are conceivable, but are less susceptible to administrative implementation. Wendell Berry’s now-celebrated NKU Commencement Address of 2009 (YouTube Part I and YouTube Part II) is a refreshingly acerbic example.

    Regarding the merits of Venter-style and/or Berry-style STEM reforms, reasonable opinions of course may vary. Yet especially for young STEM researchers, surely it is a grave mistake to remain ignorant of the broad spectrum of remedies that are being proposed.

  34. mkatkov Says:

    Old Russian joke.

    “Mom, was the Russian revolution designed by scientists or by politicians?”

    “Politicians, of course; scientists first test their models on animals.”

  35. v Says:

    >> I love the idea of Lou Scheffer #29 to focus more on good people—were I put in charge of NSF and given a free hand, I would implement that idea tomorrow. In my experience, the best NSF panelists often try to approximate that system in practice: “Who cares what exactly the PI is proposing, or whether she followed the right formatting rules in proposing it! Just look at her goddamn track record!!”

    But isn’t this going to result in a large scale rich-get-richer phenomenon?
    Many universities reduce faculty teaching loads based on the grant money they bring in. So someone who starts with a good track record and gets their initial grant now has more time on hand to improve their research track record than someone stuck teaching 4 or 6 (or however many) classes a year in addition to doing research. So of course this puts them in a much better position to get the next grant, which further improves their position etc. This is probably already the case to some extent (especially in fields like Math, I hear), but changing the system so people are funded based on their track record without particular reference to their proposed projects is only going to exacerbate the disparity.

    But perhaps you want to argue that this system, though possibly worse for the individual researchers (at least the unfortunate ones), is still better for science as a whole. I wonder if that is true.

  36. Scott Says:

    v #35: In your scenario, the most obvious problem seems to lie with the universities that have a policy of reducing your teaching load depending on how much grant money you bring in! I’m glad MIT has nothing like that—it sounds like a bad policy that would set up all sorts of perverse incentives.

    Having said that, yes, there is a more general “rich-get-richer” problem. If you give grants only to the people with the best track records, then they’ll use those grants to accumulate even better track records, etc., while everyone else gets left out. The best way I know of to correct for that is to bias the granting toward scientists at the beginning of their careers (fresh out of grad school), who show promise despite having only a short track record. And NSF does try to do exactly that through the CAREER awards, which I think there should be a lot more of than there are now (but try arguing for that in today’s budget climate!).

  37. Rahul Says:

    Model a: Give smart scientists money and time and they’ll come up with something good.

    Model b: Select smart scientists and pay them to work on a specific project or towards a specific goal.

    I think both models have a role to play and any good funding agency ought to split its money across the two modes.

    One problem with the “best track records” idea is that track records these days are significantly decided by how much grant money you brought in. The risk is bad grants begetting more bad grants.

  38. Rahul Says:

    Just for kicks, I searched Google Scholar for the phrases “quantum computing” and “was supported by NSF” for the period 2009-2013.

    The list is here:

    http://scholar.google.com/scholar?as_ylo=2009&q=quantum+computing+%22was+supported+by+NSF%22&hl=en&as_sdt=1,5&as_vis=1

    I’m no QC expert, but looking at the papers, what’s people’s gut reaction? What percentage of these projects look like a good use of NSF money?

    I don’t know. But I’d like to know.

  39. Scott Says:

    Rahul #38: I just looked through the first 10 pages of hits on the link you provided. I found at least ten papers that I know to be good (some extremely good), and only one paper that I know to be bad. (Most of them were chemistry/physics papers that I can’t evaluate.) My gut reaction is that there’s certainly no obvious problem here.

    Meanwhile, how many worthy projects went unfunded due to constraints in the NSF budget—whose total size is roughly equal to what the US spends each year on counterproductive corn subsidies that make us all fatter, and a drop in the bucket compared to fighter jets built in politically-influential states that the Pentagon doesn’t want? I don’t know. But I’d like to know.

  40. Rahul Says:

    “Adiabatic Quantum Algorithm for Search Engine Ranking”

    Did that one fall under “good use of NSF money”? Or cannot judge?

    http://prl.aps.org/abstract/PRL/v108/i23/e230506

  41. John Sidles Says:

    Startling (to me) are the 7,000+ “quantum computer” patents that Google Search finds.

    Nominally each of these patents is an enabling disclosure … so how is it that quantum computing devices are not ubiquitous? Why is no burgeoning quantum computer industry clamoring to hire QIT graduates?

    Whether or not this class of questions lacks good answers, it’s evident that it lacks consensus answers.

    A Modest Postulate  Quantum computing research may have created more private-sector jobs for patent agents than for all other STEM disciplines put together.

  42. Anon Says:

    Scott 39:

    Money for worthy non-funded QC projects can be found on Dr. Coburn’s webpage that you linked:

    • $1.7 billion in unspent funds sitting in expired, undisbursed grant accounts

  43. Scott Says:

    Anon #42: If you’d read the news accounts, you would’ve learned that that “$1.7 billion in unspent funds”—the biggest substantive accusation in Coburn’s report—was based on a simple accounting misunderstanding! The $1.7 billion is set aside for multiyear grants, and is being used for that purpose exactly as intended.

  44. Rahul Says:

    @John #41:

    I find the patents less distressing assuming they were filed with private money.

    OTOH, I find it worrying that public money gets spent on developing remote QC “frills” (e.g. QC search engine algos. ) before even the fundamental matters get thrashed out.

    This might yet be a house of cards. I’d rather funding was concentrated on thrashing out the substantive issues and getting a scalable QC working.

  45. Scott Says:

    Rahul #44: Oh, all right. I didn’t want to say so in public, but that search engine paper was the one paper in the first ten pages of hits that I knew I disliked, or at least that I thought should have been written differently (instead of “PageRank”, they could just as well have said “principal eigenvector”). And even then, if you read the paper, it does have serious technical content, and it’s by authors who have done many other things that are great and that I’d be happy to fund. Meanwhile, in the same list, we’ve got the important paper of Ambainis et al. on AND/OR trees, awesome work by Boneh, Zhandry, et al. on quantum-resistant cryptography, the Jordan-Lee-Preskill paper on simulation of quantum field theories, work by Bravyi, Hastings, et al. on topological order, a beautiful paper on the adversary method and the Haar wavelet transform by Shelby Kimmel (full disclosure: she wrote it for a project in my class), … In summary, if your link in comment #38 is your evidence of systemic problems in the field of quantum computing, then your evidence SUCKS!

  46. Rahul Says:

    [I]nstead of “PageRank”, they could just as well have said “principal eigenvector”

    Isn’t there a limit to how many fancy/elegant algorithms we will allow ourselves to develop that will only work on one special kind of machine, a machine that itself has a fair chance of turning out to be entirely hypothetical, or at least far enough in the future that talents might be usefully directed elsewhere (perhaps elsewhere within QC itself)?

    It’s like spending money to develop Cruise Control or Assisted Parking before rolling out the first primitive car from the assembly line.

    This isn’t a criticism of QC research in general; but isn’t there leeway to more usefully re-balance the portfolio of current QC research?

  47. Scott Says:

    Rahul #46: I have a feeling we’ve discussed this before, but the majority of QC funding does go toward implementation-oriented work (in large part because theory is cheaper). Meanwhile, I’ve never, ever justified my work, to funding agencies or the public, primarily in terms of the practical applications in a future where QCs become practical (though if there turn out to be some applications, that’s great). For me, the real point has always been to understand the ultimate limits on what’s knowable and computable in the physical universe—a quest that strikes me as every bit as basic as the quest to understand the ultimate constituents of matter, or how the mind works, or anything else in science. If one accepts that, then the question simply reduces to the old one of whether society wants to support fundamental science at all.

  48. Lou Scheffer Says:

    Scott #45 said: “I thought should have been written differently (instead of “PageRank”, they could just as well have said “principal eigenvector”).”

    I think the opposite – PageRank is better than eigenvector. Consider 3 types of readers, of increasing sophistication.

    For the first, consider any scientifically interested person who last used eigenvectors in their college linear algebra course – think biologist, chemist, and so on. To them, “principal eigenvector” gives no clue what this might be useful for. I believe any paper, wherever possible, ought to explain at least the motivation as clearly and generally as possible. PageRank, or Search Engine Ranking, clearly wins here.

    Second, consider a somewhat more sophisticated reader who knows that QC algorithms are known for some problems but not others – say a typical engineer or physicist. If you title the paper “A QC algorithm for principal eigenvectors of large sparse matrices”, then they’ll probably mentally file it under “specialized problems QC can attack”. (If they know right away this is PageRank, then they’re an expert – see below). If you call it PageRank, then they’ll file it under possible practical uses for QC, such as factoring.

    Finally, to the expert it makes little difference. If you write “principal eigenvector of sparse matrices” they’ll know this can be used for page ranking. Likewise, they understand that if you have a good solution to PageRank, it can perhaps be used for other principal eigenvector problems.

    And if you think there is any chance it will be reviewed by anyone who is not an expert, as Rahul above suggests, then PageRank is better as well.

    So in general, “PageRank” helps the non-expert, and does not hurt the expert at all.
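    For readers outside the field, the equivalence both commenters are invoking is easy to see in code. A minimal sketch (the 4-page link graph and damping factor are my own toy choices, not from the paper under discussion) showing that PageRank, computed by power iteration, is the principal eigenvector of the “Google matrix”:

    ```python
    # PageRank as the principal eigenvector of the Google matrix.
    import numpy as np

    links = {0: [1, 2], 1: [2], 2: [0], 3: [0, 2]}  # page -> pages it links to
    n, d = 4, 0.85                                  # d = damping factor

    # Column-stochastic link matrix: M[j, i] = 1/outdegree(i) when i links to j.
    M = np.zeros((n, n))
    for i, outs in links.items():
        for j in outs:
            M[j, i] = 1.0 / len(outs)

    G = d * M + (1 - d) / n * np.ones((n, n))  # Google matrix (column-stochastic)

    # PageRank via power iteration: repeatedly apply G to a uniform start vector.
    r = np.ones(n) / n
    for _ in range(200):
        r = G @ r

    # The same vector, computed as the principal eigenvector (eigenvalue 1) of G.
    vals, vecs = np.linalg.eig(G)
    principal = np.real(vecs[:, np.argmax(np.real(vals))])
    principal = principal / principal.sum()

    assert np.allclose(r, principal, atol=1e-8)  # identical up to normalization
    ```

    So the two titles really do describe one computation; the argument above is only about which name serves which reader.
    
    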

  49. Nilima Says:

- The Canadian equivalent of the NSF is called NSERC. It has programs which fund people rather than projects, in the sense that grants are 5 years in duration and cover a program of research (rather than just a specific project). So if one’s interested in comparing a different way of doing things, this may be data to compare against.

- The NSF actually *does* get experts from ‘outside’ the system to help adjudicate proposals. By this I mean scientists from other countries can, and do, serve on panels. For example, I have served on several in areas where I have some expertise. I don’t apply for grants in the US, and NSF grantees don’t serve on NSERC panels (we don’t have the same system). So the mutual back-scratching fear is negligible.

    I have to say, as an outsider, the system seems to work reasonably well. Still, I see far too many good ideas not get funded, and then see the same proposals come around again. What a waste of time.

    As for your esteemed Republican Senator, he’s taking science in an interesting direction. Saddle up and bring along your beef jerky, theoretical STEM cowgirls. http://boingboing.net/2013/03/13/how-ocean-science-saves-money.html

  50. John Sidles Says:

    Today’s April Fools essay on Gödel’s Lost Letter and P=NP — titled “Interstellar Quantum Computation” — has a comment
    Why Gil Kalai is Right: an open letter from the Leprechaun King to a candid world
    which argues that Senator Tom Coburn’s critique of NSF priorities rests upon mathematical/scientific foundations that can be rationally embraced by young QIT/STEM researchers.

  51. Jim Van Zandt Says:

    In your blogroll, you might adjust the “Quantum Quandries” link to point to Matthew Leifer’s blog: http://feeds.feedburner.com/MattLeifer . (Sorry for the off-topic comment :-).

  52. Rahul Says:

    Is “n-Category Café” listed twice in the blogroll because it is so good? :)

  53. milkshaken Says:

    a decade-old but good account of US Antarctic station goofiness:

    http://www.moderndrunkardmagazine.com/issues/08-04/08-04-southpole.htm

  54. Shtetl-Optimized » Blog Archive » I was right: Congress’s attack on the NSF widens Says:

    [...] month, I blogged about Sen. Tom Coburn (R-Oklahoma) passing an amendment blocking the National Science Foundation from [...]

  55. Lawyers and the Politics of Science | Hot and Cold Fusion Says:

    [...] seriously impact a broad swath of science.  My favorite Quantum Computing blogger warned about it here and followed up with a sober [...]

Leave a Reply