Archive for the ‘Nerd Interest’ Category

Happy Second Birthday Lily

Wednesday, January 21st, 2015

[photo: a cat]

Two years ago, I blogged when Lily was born.  Today I can blog that she runs, climbs, swims (sort of), constructs 3-word sentences, demands chocolate cake, counts to 10 in both English and Hebrew, and knows colors, letters, shapes, animals, friends, relatives, the sun, and the moon.  To all external appearances she’s now as conscious as you and I are (and considerably more so than the cat in the photo).

But the most impressive thing Lily does—the thing that puts her far beyond where her parents were at the same age, in a few areas—is her use of the iPad.  There she does phonics exercises, plays puzzle games that aren’t always trivial for me to win, and watches educational videos on YouTube (skipping past the ads, and complaining if the Internet connection goes down).  She chooses the apps and videos herself, easily switching between them when she gets bored.  It’s a sight to behold, and definitely something to try with your own toddler if you have one.  (There’s a movement these days that encourages parents to ban kids from using touch-screen devices, fearful that too much screen time will distract them from the real world.  To which I reply: for better or worse, this is the real world that our kids will grow up into.)

People often ask whether Dana and I will steer Lily into becoming a theoretical computer scientist like us.  My answer is “hell no”: I’ll support Lily in whatever she wants to do, whether that means logic, combinatorics, algebraic geometry, or even something further afield like theoretical neuroscience or physics.

As recent events illustrated, the world is not always the kindest place for nerds (male or female), with our normal ways of thinking, talking, and interacting sometimes misunderstood by others in the cruelest ways imaginable.  Yet despite everything, nerds do sometimes manage to meet, get married, and even produce offspring with nerd potential of their own.  We’re here, we’re sometimes inappropriately clear, and we’re not going anywhere.

So to life!  And happy birthday Lily!

What I believe

Tuesday, December 30th, 2014

Two weeks ago, prompted by a commenter named Amy, I wrote by far the most personal thing I’ve ever made public—what’s now being referred to in some places as just “comment 171.”  My thinking was: I’m giving up a privacy that I won’t regain for as long as I live, opening myself to ridicule, doing the blog equivalent of a queen-and-two-rook sacrifice.  But at least—and this is what matters—no one will ever again be able to question the depth of my feminist ideals.  Not after they understand how I clung to those ideals through a decade when I wanted to die.  And any teenage male nerds who read this blog, and who find themselves in a similar hole, will know that they too can get out without giving up on feminism. Surely that’s a message any decent person could get behind?

Alas, I was overoptimistic.  Twitter is now abuzz with people accusing me of holding precisely the barbaric attitudes that my story was all about resisting, defeating, and escaping, even when life throws you into those nasty attitudes’ gravity well, even when it tests you as most of your critics will never be tested.  Many of the tweets are full of the courageous clucks of those who speak for justice as long as they’re pretty sure their friends will agree with them: wow just wow, so sad how he totes doesn’t get it, expletives in place of arguments.  This whole affair makes me despair of the power of language to convey human reality—or at least, of my own ability to use language for that end.  I took the most dramatic, almost self-immolating step I could to get people to see me as I was, rather than according to some preexisting mental template of a “privileged, entitled, elite male scientist.”  And many responded by pressing down the template all the more firmly, twisting my words until they fit, and then congratulating each other for their bravery in doing so.

Here, of course, these twitterers (and redditors and facebookers) inadvertently helped make my argument for me.  Does anyone still not understand the sort of paralyzing fear that I endured as a teenager, that millions of other nerds endure, and that I tried to explain in the comment—the fear that civilized people will condemn you as soon as they find out who you really are (even if the truth seems far from uncommonly bad), that your only escape is to hide or lie?

Thankfully, not everyone responded with snarls.  Throughout the past two weeks, I’ve been getting regular emails from shy nerds who thanked me profusely for sharing as I did, for giving them hope for their own lives, and for articulating a life-crushing problem that anyone who’s spent a day among STEM nerds knows perfectly well, but that no one acknowledges in polite company.  I owe the writers of those emails more than they owe me, since they’re the ones who convinced me that on balance, I did the right thing.

I’m equally grateful to have gotten interesting, compassionate responses from feminist women.  The most striking was that of Laurie Penny in the New Statesman—a response that others of Penny’s views should study, if they want to understand how to win hearts and change minds.

I do not intend for a moment to minimise Aaronson’s suffering. Having been a lonely, anxious, horny young person who hated herself and was bullied I can categorically say that it is an awful place to be. I have seen responses to nerd anti-feminism along the lines of ‘being bullied at school doesn’t make you oppressed.’ Maybe it’s not a vector of oppression in the same way, but it’s not nothing. It burns. It takes a long time to heal.

Feminism, however, is not to blame for making life hell for ‘shy, nerdy men.’ Patriarchy is to blame for that. It is a real shame that Aaronson picked up Dworkin rather than any of the many feminist theorists and writers who manage to combine raw rage with refusal to resort to sexual shame as an instructive tool. Weaponised shame- male, female or other- has no place in any feminism I subscribe to. Ironically, Aronson [sic] actually writes a lot like Dworkin- he writes from pain felt and relived and wrenched from the intimate core of himself, and because of that his writing is powerfully honest, but also flawed …

What fascinates me about Aaronson’s piece, in which there was such raw, honest suffering, was that there was not one mention of women in any respect other than how they might relieve him from his pain by taking pity, or educating him differently. And Aaronson is not a misogynist. Aaronson is obviously a compassionate, well-meaning and highly intelligent man [damn straight—SA]

I’ll have more to say about Penny’s arguments in a later post—where I agree and where I part ways with her—but there’s one factual point I should clear up now.  When I started writing comment 171, I filled it with anecdotes from the happier part of my life (roughly, from age 24 onward): the part where I finally became able to ask; where women, with a frequency that I couldn’t have imagined as a teenager, actually answered ‘yes’; and where I got to learn about their own fears and insecurities and quirks.  In the earlier draft, I also wrote about my wife’s experiences as a woman in computer science, which differed from Amy’s in some crucial ways.  But then I removed it all, for a simple reason: because while I have the right to bare my own soul on my blog, I don’t have the right to bare other people’s unless they want me to.

Without further ado, and for the benefit of the world’s Twitterariat, I’m now just going to state nine of my core beliefs.

1. I believe that women are authors of their own stories, that they don’t exist merely to please men, that they are not homogeneous, that they’re not slot machines that ‘pay out’ but only if you say the right things.  I don’t want my two-year-old daughter to grow up to be anyone else’s property, and I’m happy that she won’t.  And I’d hope all this would no more need to be said, than (say) that Gentiles shouldn’t be slaughtered to use their blood in making matzo.

2. I believe everyone’s story should be listened to—and concretely, that everyone should feel 300% welcome to participate in my comments section.  I don’t promise to agree with you, but I promise to try to engage your ideas thoughtfully, whether you’re a man, woman, child, AI-bot, or unusually-bright keyboard-pecking chicken.  Indeed, I spend a nontrivial fraction of my life doing exactly that (well, not so much with chickens).

3. I believe no one has the right to anyone else’s sexual affections.  I believe establishing this principle was one of the triumphs of modern civilization.

4. I believe women who go into male-dominated fields like math, CS, and physics deserve praise, encouragement, and support.  But that’s putting the point too tepidly: if I get to pick 100 people (unrelated to me) to put onto a spaceship as the earth is being destroyed, I start thinking immediately about six or seven of my female colleagues in complexity and quantum computing.  And no, Twitter: not because being female, they could help repopulate the species.  Just because they’re great people.

5. I believe there still exist men who think women are inferior, that they have no business in science, that they’re good only for sandwich-making and sex.  Though I don’t consider it legally practicable, as a moral matter I’d be fine if every such man were thrown in prison for life.

6. I believe that even if they don’t hold views anything like the above (as, overwhelmingly, they don’t), there might be nerdy males who unintentionally behave in ways that tend to drive some women away from science.  I believe this is a complicated problem best approached with charity: we want win-win solutions, where no one is made to feel despised because of who they are.  Toward that end, I believe open, honest communication (as I’ve been trying to foster on this blog) is essential.

7. I believe that no one should be ashamed of inborn sexual desires: not straight men, not straight women, not gays, not lesbians, not even pedophiles (though in the last case, there might really be no moral solution other than a lifetime of unfulfilled longing).  Indeed, I’ve always felt a special kinship with gays and lesbians, precisely because the sense of having to hide from the world, of being hissed at for a sexual makeup that you never chose, is one that I can relate to on a visceral level.  This is one reason why I’ve staunchly supported gay marriage since adolescence, when it was still radical.  It’s also why the tragedy of Alan Turing, of his court-ordered chemical castration and subsequent suicide, was one of the formative influences of my life.

8. I believe that “the problem of the nerdy heterosexual male” is surely one of the worst social problems today that you can’t even acknowledge as being a problem—the more so, if you weight the problems by how likely academics like me are to know the sufferers and to feel a personal stake in helping them. How to help all the young male nerds I meet who suffer from this problem, in a way that passes feminist muster, and that triggers the world’s sympathy rather than outrage, is a problem that interests me as much as P vs. NP, and that right now seems about equally hard.

9. I believe that, just as there are shy, nerdy men, there are also shy, nerdy women, who likewise suffer from feeling unwanted, sexually invisible, or ashamed to express their desires.  On top of that, these women also have additional difficulties that come with being women!  At the same time, I also think there are crucial differences between the two cases—at least in the world as it currently exists—which might make the shy-nerdy-male problem vastly harder to solve than the shy-nerdy-female one.  Those differences, and my advice for shy nerdy females, will be the subject of another post.  (That’s the thing about blogging: in for a penny, in for a post.)


Update (Dec. 31): I struggle always to be ready to change my views in light of new arguments and evidence. After reflecting on the many thoughtful comments here, there are two concessions that I’m now willing to make.

The first concession is that, as Laurie Penny maintained, my problems weren’t caused by feminism, but rather by the Patriarchy. One thing I’ve learned these last few days is that, as many people use it, the notion of “Patriarchy” is sufficiently elastic as to encompass almost anything about the relations between the sexes that is, or has ever been, bad or messed up—regardless of who benefits, who’s hurt, or who instigated it. So if you tell such a person that your problem was not caused by the Patriarchy, it’s as if you’ve told a pious person that a certain evil wasn’t the Devil’s handiwork: the person has trouble even parsing what you said, since within her framework, “evil” and “Devil-caused” are close to synonymous. If you want to be understood, far better just to agree that it was Beelzebub and be done with it. This might sound facetious, but it’s really not: I believe in the principle of always adopting the other side’s terms of reference, whenever doing so will facilitate understanding and not sacrifice what actually matters to you.

Smash the Patriarchy!

The second concession is that, all my life, I’ve benefited from male privilege, white privilege, and straight privilege. I would only add that, for some time, I was about as miserable as it’s possible for a person to be, so that in an instant, I would’ve traded all three privileges for the privilege of not being miserable. And if, as some suggested, there are many women, blacks, and gays who would’ve gladly accepted the other side of that trade—well then, so much the better for all of us, I guess. “Privilege” simply struck me as a pompous, cumbersome way to describe such situations: why not just say that person A’s life stinks in this way, and person B’s stinks in that way? If they’re not actively bothering each other, then why do we also need to spread person A’s stink over to person B and vice versa, by claiming they’re each “privileged” by not having the other one’s?

However, I now understand why so many people became so attached to that word: if I won’t use it, they take that to mean I think sexism, racism, and homophobia don’t exist, rather than merely that I think people have fixated on a really bad way to talk about these problems.


Update (Jan. 1): Yesterday I gave a seminar at the Hebrew University of Jerusalem. Since I’d been spending all my time dealing with comment-171-gate, I showed up with no slides, no notes, no anything—just me and the whiteboard. But for an hour and a half, I got to forget entirely about the thousands of people on the Internet I’d never met who were now calling me an asshole because of wild, “postmodernist” misreadings of a blog comment, which twisted what I said (and meant) into its exact opposite, building up a fake-Scott-Aaronson onto whom the ax-grinders could project all of their own bogeymen. For 90 minutes I got to forget all that, and just throw myself into separations between randomized and quantum query complexity. It was the most cathartic lecture of my life. And in the near future, I’d like more such catharses. Someday I’ll say more about the inexhaustibly-fascinating topic of nerds and sex—and in particular, I’ll write the promised post about shy female nerds—but not now. This will be my last post on the subject for a while.

On balance, I don’t regret having shared my story—because it prompted an epic discussion; because I learned so much from the dozens of other nerd coming-of-age stories that it drew out, similar to mine but also different; because what I learned will change the way I talk about these issues in the future; and most of all, because so many people, men and women, emailed me to say how my speaking out gave them hope for their own lives. But I do regret a few rhetorical flourishes, which I should have known might be misread maliciously, though I could never have guessed how maliciously. I never meant to minimize the suffering of other people, nor to deny that many others have had things as bad or worse than I did (again, how does one even compare?). I meant only that, if we’re going to discuss how to change the culture of STEM fields, or design sexual-conduct policies to minimize suffering, then I request a seat at the table not as the “white male powerful oppressor figure,” but as someone who also suffered something atypically extreme, overcame it, and gained relevant knowledge that way. I never meant to suggest that anyone else should leave the table.

To the people who tweeted that female MIT students should now be afraid to take classes with me: please check out the beautiful blog post by Yan, a female student who did take 6.045 with me. See also this by Lisa Danz and this by Chelsea Voss.

More broadly: thank you to everyone who sent me messages of support, but especially to all the female mathematicians and scientists who did so.  I take great solace from the fact that, of all the women and men whose contributions to the world I had respected beforehand, not one (to my knowledge) reacted to this affair in a mean-spirited way.

Happy New Year, everyone. May 2015 be a year of compassion and understanding.


Update (Jan. 2): If you’ve been following this at all, then please, please, please read Scott Alexander’s tour-de-force post. To understand what it was like for me to read this, after all I’ve been through the past few days, try to imagine Galileo’s Dialogue Concerning the Two Chief World Systems, the American Declaration of Independence, John Stuart Mill’s The Subjection of Women, and Clarence Darrow’s closing arguments in the Scopes trial all rolled into one, except with you as the protagonist. Reason and emotion are traditionally imagined as opposites, but that’s never seemed entirely right to me: while, yes, part of reason is learning how to separate out emotion, I never experience such intense emotion as when, like with Alexander’s piece, I see reason finally taking a stand, reason used to face down a thousand bullies and as a fulcrum to move the world.


Update (Jan. 13): Please check out this beautiful Quora answer by Jean Yang, a PhD student in MIT CSAIL. She’s answering the question: “What do you think of Scott Aaronson’s comment #171 and the subsequent posts?”

More generally, I’ve been thrilled by the almost-unanimously positive reactions that I’ve been getting these past two weeks from women in STEM fields, even as so many people outside STEM have responded with incomprehension and cruelty.  Witnessing that pattern has—if possible—made me even more of a supporter and admirer of STEM women than I was before this thing started.

The Turing movie

Tuesday, December 16th, 2014

Last week I finally saw The Imitation Game, the movie with Benedict Cumberbatch as Alan Turing.

OK, so for those who haven’t yet seen it: should you?  Here’s my one paragraph summary: imagine that you told the story of Alan Turing—one of the greatest triumphs and tragedies of human history, needing no embellishment whatsoever—to someone who only sort-of understood it, and who filled in the gaps with weird fabrications and Hollywood clichés.  And imagine that person retold the story to a second person, who understood even less, and that that person retold it to a third, who understood least of all, but who was charged with making the movie that would bring Turing’s story before the largest audience it’s ever had.  And yet, imagine that enough of the enormity of the original story made it through this noisy channel, that the final product was still pretty good.  (Except, imagine how much better it could’ve been!)

The fabrications were especially frustrating to me, because we know it’s possible to bring Alan Turing’s story to life in a way that fully honors the true science and history.  We know that, because Hugh Whitemore’s 1986 play Breaking the Code did it.  The producers of The Imitation Game would’ve done better just to junk their script, and remake Breaking the Code into a Hollywood blockbuster.  (Note that there is a 1996 BBC adaptation of Breaking the Code, with Derek Jacobi as Turing.)

Anyway, the movie focuses mostly on Turing’s codebreaking work at Bletchley Park, but also jumps around in time to his childhood at Sherborne School, and to his arrest for “homosexual indecency” and its aftermath.  Turing’s two world-changing papers—On Computable Numbers and Computing Machinery and Intelligence—are both mentioned, though strangely, his paper about computing zeroes of the Riemann zeta function is entirely overlooked.

Here are my miscellaneous comments:

  • The boastful, trash-talking, humor-impaired badass-nerd of the movie seems a lot closer to The Big Bang Theory‘s Sheldon Cooper, or to some other Hollywood concept of “why smart people are so annoying,” than to the historical Alan Turing.  (At least in Sheldon’s case, the archetype is used for laughs, not drama or veracity.)  As portrayed in the definitive biography (Andrew Hodges’ Alan Turing: The Enigma), Turing was eccentric, sure, and fiercely individualistic (e.g., holding up his pants with pieces of string), but he didn’t get off on insulting the intelligence of the people around him.
  • In the movie, Turing is pretty much singlehandedly responsible for designing, building, and operating the Bombes (the codebreaking machines), which he does over the strenuous objections of his superiors.  This, of course, is absurd: Bletchley employed about 10,000 people at its height.  Turing may have been the single most important cog in the operation, but he was still a cog.  And by November 1942, the operation was already running smoothly enough that Turing could set sail for the US (in waters that were now much safer, thanks to Bletchley!), to consult on other cryptographic projects at Bell Labs.
  • But perhaps the movie’s zaniest conceit is that Turing was also in charge of deciding what to do with Bletchley’s intelligence (!).  In the movie, it falls to him, not the military, to decide which ship convoys will be saved, and which sacrificed to prevent spilling Bletchley’s secret.  If that had any historicity to it, it would surely be the most military and political power ever entrusted to a mathematician (update: see the comments section for potential counterexamples).
  • It’s true that Turing (along with three other codebreakers) wrote a letter directly to Winston Churchill, pleading for more funding for Bletchley Park—and that Churchill saw the letter, and ordered “Action this day! Make sure they have all they want on extreme priority.”  However, the letter was not a power play to elevate Turing over Hugh Alexander and his other colleagues: in fact, Alexander co-signed the letter.  More broadly, the fierce infighting between Turing and everyone else at Bletchley Park, central to the movie’s plot, seems to have been almost entirely invented for dramatic purposes.
  • The movie actually deserves a lot of credit for getting right that the major technical problem of Bletchley Park was how to get the Bombes to search through keys fast enough—and that speeding things up is where Turing made a central contribution.  As a result, The Imitation Game might be the first Hollywood movie ever made whose plot revolves around computational efficiency.  (Counterexamples, anyone?)  Unfortunately, the movie presents Turing’s great insight as being that one can speed up the search by guessing common phrases, like “HEIL HITLER,” that are likely to be in the plaintext.  That was, I believe, obvious to everyone from the beginning; what a crib really buys you is a way to prune the search before the machines start, as the short sketch after this list illustrates.
  • Turing never built a computer in his own home, and he never named a computer “Christopher,” after his childhood crush Christopher Morcom.  (On the other hand, Christopher Morcom existed, and his early death from tuberculosis really did devastate Turing, sending him into morbid-yet-prescient ruminations about whether a mind could exist separately from a brain.)
  • I found it ironic that The Imitation Game, produced in 2014, is far more squeamish about on-screen homosexuality than Breaking the Code, produced in 1986.  Turing talks about being gay (which is an improvement over 2001’s Enigma, which made Turing straight!), but is never shown embracing another man.  However, the more important problem is that the movie botches the story of the burglary of Turing’s house (i.e., the event that led to Turing’s arrest and conviction for homosexual indecency), omitting the role of Turing’s own naiveté in revealing his homosexuality to the police, and substituting some cloak-and-dagger spy stuff.  Once again, Breaking the Code handled this perfectly.
  • In one scene, Euler is pronounced “Yooler.”
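For readers curious about the mechanics: a crib (a guessed plaintext phrase) helps mainly by eliminating impossible alignments before any machine time is spent.  One famous constraint is that the Enigma, because of its reflector, could never encrypt a letter to itself, so any way of lining the crib up against the intercept in which some letter matches can be ruled out immediately.  Here is a minimal Python sketch of that “crib dragging” step; the intercept string below is made up purely for illustration:

```python
# "Crib dragging": slide a guessed plaintext phrase along an intercepted
# ciphertext, discarding alignments the Enigma could not have produced.
# Because of its reflector, an Enigma machine never encrypted a letter
# to itself, so any offset where crib and ciphertext agree is impossible.

def feasible_crib_positions(ciphertext, crib):
    """Return the offsets at which the crib could align with the ciphertext."""
    feasible = []
    for i in range(len(ciphertext) - len(crib) + 1):
        window = ciphertext[i:i + len(crib)]
        if all(c != p for c, p in zip(window, crib)):
            feasible.append(i)
    return feasible

intercept = "QFZWRWIVTYRESXBFOGKUHQBAISE"  # made-up intercept, for illustration
print(feasible_crib_positions(intercept, "WETTERBERICHT"))  # "weather report," a classic crib
```

Only the surviving offsets would then need to be tested against candidate rotor settings, which is how a good crib shrinks the keyspace before the mechanical search even begins.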

For more, see an excellent piece in Slate, How Accurate Is The Imitation Game?.  And for other science bloggers’ reactions, see this review by Christos Papadimitriou (which I thought was extremely kind, though it focuses more on Turing himself than on the movie), this reaction by Peter Woit, which largely echoes mine, and this by Clifford Johnson.

Walter Lewin

Wednesday, December 10th, 2014

Yesterday I heard the sad news that Prof. Walter Lewin, age 78—perhaps the most celebrated physics teacher in MIT’s history—has been stripped of his emeritus status and barred from campus, and all of his physics lectures removed from OpenCourseWare, because an internal investigation found that he had been sexually harassing students online.  I don’t know anything about what happened beyond the terse public announcements, but those who do know tell me that the charges were extremely serious, and that “this wasn’t a borderline case.”

I’m someone who feels that sexual harassment must never be tolerated, neither here nor anywhere else.  But I also feel that, if a public figure is going to be publicly brought down like this (yes, even by a private university), then the detailed findings of the investigation should likewise be made public, regardless of how embarrassing they are.  I know others differ, but I think the need of the world to see that justice was done overrides MIT’s internal administrative needs, and even Prof. Lewin’s privacy (the names of any victims could, of course, be kept secret).

More importantly, I wish to register that I disagree in the strongest possible terms with MIT’s decision to remove Prof. Lewin’s lectures from OpenCourseWare—thereby forcing the tens of thousands of students around the world who were watching these legendary lectures to hunt for ripped copies on BitTorrent.  (Imagine that: physics lectures as prized contraband!)  By all means, punish Prof. Lewin as harshly as he deserves, but—as students have been pleading on Reddit, in the MIT Tech comments section, and elsewhere—don’t also punish the countless students of both sexes who continue to benefit from his work.  (For godsakes, I’d regard taking down the lectures as a tough call if Prof. Lewin had gone on a murder spree.)  Doing this sends the wrong message about MIT’s values, and is a gift to those who like to compare modern American college campuses to the Soviet Union.

Update: For those who are interested, while the comment section starts out with a discussion of whether Walter Lewin’s physics lectures should’ve been removed from OCW, it’s now broadened to include essentially all aspects of the human condition.

What does the NSA think of academic cryptographers? Recently-declassified document provides clues

Sunday, November 16th, 2014

Brighten Godfrey was one of my officemates when we were grad students at Berkeley.  He’s now a highly-successful computer networking professor at the University of Illinois Urbana-Champaign, where he studies the wonderful question of how we could get the latency of the Internet down to the physical limit imposed by the finiteness of the speed of light.  (Right now, we’re away from that limit by a factor of about 50.)
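To make that factor of 50 concrete, here is a back-of-the-envelope sketch in Python: it computes the speed-of-light floor on round-trip time between two cities, then applies the quoted 50x gap to show the scale.  (The endpoints and coordinates are merely illustrative, not part of Godfrey’s measurements.)

```python
# Back-of-the-envelope: the speed-of-light floor on round-trip time
# between two cities, versus an Internet that is 50x slower (the gap
# cited above).  The endpoints are illustrative.
import math

def great_circle_km(lat1, lon1, lat2, lon2, radius_km=6371.0):
    """Haversine distance between two points on the earth's surface."""
    p1, p2 = math.radians(lat1), math.radians(lat2)
    a = (math.sin((p2 - p1) / 2) ** 2
         + math.cos(p1) * math.cos(p2) * math.sin(math.radians(lon2 - lon1) / 2) ** 2)
    return 2 * radius_km * math.asin(math.sqrt(a))

C_KM_PER_MS = 299_792.458 / 1000  # speed of light in vacuum, km per millisecond

dist_km = great_circle_km(42.36, -71.06, 51.51, -0.13)  # Boston to London (approx.)
floor_ms = 2 * dist_km / C_KM_PER_MS  # physical lower bound on round-trip time

print(f"distance: {dist_km:,.0f} km")
print(f"c-limit round trip: {floor_ms:.1f} ms")
print(f"at 50x the c-limit: {50 * floor_ms:,.0f} ms")
```

For Boston to London this gives a floor of roughly 35 ms round trip; multiplying by 50 yields latencies measured in seconds, which is the scale of gap the factor-of-50 figure describes.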

Last week, Brighten brought to my attention a remarkable document: a 1994 issue of CryptoLog, an NSA internal newsletter, which was recently declassified with a few redactions.  The most interesting thing in the newsletter is a trip report (pages 12-19 in the newsletter, 15-22 in the PDF file) by an unnamed NSA cryptographer, who attended the 1992 EuroCrypt conference, and who details his opinions on just about every talk.  If you’re interested in crypto, you really need to read this thing all the way through, but here’s a small sampling of the zingers:

  • Three of the last four sessions were of no value whatever, and indeed there was almost nothing at Eurocrypt to interest us (this is good news!). The scholarship was actually extremely good; it’s just that the directions which external cryptologic researchers have taken are remarkably far from our own lines of interest.
  • There were no proposals of cryptosystems, no novel cryptanalysis of old designs, even very little on hardware design. I really don’t see how things could have been any better for our purposes. We can hope that the absentee cryptologists stayed away because they had no new ideas, or even that they’ve taken an interest in other areas of research.
  • Alfredo DeSantis … spoke on “Graph decompositions and secret-sharing schemes,” a silly topic which brings joy to combinatorists and yawns to everyone else.
  • Perhaps it is beneficial to be attacked, for you can easily augment your publication list by offering a modification.
  • This result has no cryptanalytic application, but it serves to answer a question which someone with nothing else to think about might have asked.
  • I think I have hammered home my point often enough that I shall regard it as proved (by emphatic enunciation): the tendency at IACR meetings is for academic scientists (mathematicians, computer scientists, engineers, and philosophers masquerading as theoretical computer scientists) to present commendable research papers (in their own areas) which might affect cryptology at some future time or (more likely) in some other world. Naturally this is not anathema to us.
  • The next four sessions were given over to philosophical matters. Complexity theorists are quite happy to define concepts and then to discuss them even though they have no examples of them.
  • Don Beaver (Penn State), in another era, would have been a spellbinding charismatic preacher; young, dashing (he still wears a pony-tail), self-confident and glib, he has captured from Silvio Micali the leadership of the philosophic wing of the U.S. East Coast cryptanalytic community.
  • Those of you who know my prejudice against the “zero-knowledge” wing of the philosophical camp will be surprised to hear that I enjoyed the three talks of the session better than any of that ilk that I had previously endured. The reason is simple: I took along some interesting reading material and ignored the speakers. That technique served to advantage again for three more snoozers, Thursday’s “digital signature and electronic cash” session, but the final session, also on complexity theory, provided some sensible listening.
  • But it is refreshing to find a complexity theory talk which actually addresses an important problem!
  • The other two talks again avoided anything of substance.  [The authors of one paper] thought it worthwhile, in dealing [with] the general discrete logarithm problem, to prove that the problem is contained in the complexity classes NP and co-AM, but is unlikely to be in co-NP.
  • And Ueli Maurer, again dazzling us with his brilliance, felt compelled, in “Factoring with an Oracle” to arm himself with an Oracle (essentially an Omniscient Being that complexity theorists like to turn to when they can’t solve a problem) while factoring. He’s calculating the time it would take him (and his Friend) to factor, and would like also to demonstrate his independence by consulting his Partner as seldom as possible. The next time you find yourself similarly equipped, you will perhaps want to refer to his paper.
  • The conference again offered an interesting view into the thought processes of the world’s leading “cryptologists.” It is indeed remarkable how far the Agency has strayed from the True Path.

Of course, it would be wise not to read too much into this: it’s not some official NSA policy statement, but the griping of a single, opinionated individual somewhere within the NSA, who was probably bored and trying to amuse his colleagues.  All the same, it’s a fascinating document, not only for its zingers about people who are still very much active on the cryptographic scene, but also for its candid insights into what the NSA cares about and why, and for its look into the subculture within cryptography that would lead, years later, to Neal Koblitz’s widely-discussed anti-provable-security manifestos.

Reading this document drove home for me that the “provable security wars” are a very simple matter of the collision of two communities with different intellectual goals, not of one being right and the other being wrong.  Here’s a fun exercise: try reading this trip report while remembering that, in the 1980s—i.e., the decade immediately preceding the maligned EuroCrypt conference—the “philosophic wing” of cryptography that the writer lampoons actually succeeded in introducing revolutionary concepts (interactive proofs, zero-knowledge, cryptographic pseudorandomness, etc.) that transformed the field, concepts that have now been recognized with no fewer than three Turing Awards (to Yao, Goldwasser, and Micali).  On the other hand, it’s undoubtedly true that this progress was of no immediate interest to the NSA.  On the third hand, the “philosophers” might reply that helping the NSA wasn’t their goal.  The best interests of the NSA don’t necessarily coincide with the best interests of scientific advancement (not to mention the best interests of humanity—but that’s a separate debate).

Interstellar’s dangling wormholes

Monday, November 10th, 2014

Update (Nov. 15): A third of my confusions addressed by reading Kip Thorne’s book! Details at the bottom of this post.


On Saturday Dana and I saw Interstellar, the sci-fi blockbuster co-produced by the famous theoretical physicist Kip Thorne (who told me about his work on this movie when I met him eight years ago).  We had the rare privilege of seeing the movie on the same day that we got to hang out with a real astronaut, Dan Barry, who flew three shuttle missions and did four spacewalks in the 1990s.  (As the end result of a project that Dan’s roboticist daughter, Jenny Barry, did for my graduate course on quantum complexity theory, I’m now the coauthor with both Barrys on a paper in Physical Review A, about uncomputability in quantum partially-observable Markov decision processes.)

Before talking about the movie, let me say a little about the astronaut.  Besides being an inspirational example of someone who’s achieved more dreams in life than most of us—seeing the curvature of the earth while floating in orbit around it, appearing on Survivor, and publishing a Phys. Rev. A paper—Dan is also a passionate advocate of humanity’s colonizing other worlds.  When I asked him whether there was any future for humans in space, he answered firmly that the only future for humans was in space, and then proceeded to tell me about the technical viability of getting humans to Mars with limited radiation exposure, the abundant water there, the romantic appeal that would inspire people to sign up for the one-way trip, and the extinction risk for any species confined to a single planet.  Hearing all this from someone who’d actually been to space gave Interstellar, with its theme of humans needing to leave Earth to survive (and its subsidiary theme of the death of NASA’s manned space program meaning the death of humanity), a special vividness for me.  Granted, I remain skeptical about several points: the feasibility of a human colony on Mars in the foreseeable future (a self-sufficient human colony in Antarctica, or under the ocean, strikes me as plenty hard enough for the next few centuries); whether a space colony, even if feasible, cracks the list of the top twenty things we ought to be doing to mitigate the risk of human extinction; and whether there’s anything more to be learned, at this point in history, by sending humans to space that couldn’t be learned a hundred times more cheaply by sending robots.  On the other hand, if there is a case for continuing to send humans to space, then I’d say it’s certainly the case that Dan Barry makes.

OK, but enough about the real-life space traveler: what did I think about the movie?  Interstellar is a work of staggering ambition, grappling with some of the grandest themes of which sci-fi is capable: the deterioration of the earth’s climate; the future of life in the universe; the emotional consequences of extreme relativistic time dilation; whether “our” survival would be ensured by hatching human embryos in a faraway world, while sacrificing almost all the humans currently alive; to what extent humans can place the good of the species above family and self; the malleability of space and time; the paradoxes of time travel.  It’s also an imperfect movie, one with many “dangling wormholes” and unbalanced parentheses that are still generating compile-time errors in my brain.  And it’s full of stilted dialogue that made me giggle—particularly when the characters discussed jumping into a black hole to retrieve its “quantum data.”  Also, despite Kip Thorne’s involvement, I didn’t find the movie’s science spectacularly plausible or coherent (more about that below).  On the other hand, if you just wanted a movie that scrupulously obeyed the laws of physics, rather than intelligently probing their implications and limits, you could watch any romantic comedy.  So sure, Interstellar might make you cringe, but if you like science fiction at all, then it will also make you ponder, stare awestruck, and argue with friends for days afterward—and enough of the latter to make it more than worth your while.  Just one tip: if you’re prone to headaches, do not sit near the front of the theater, especially if you’re seeing it in IMAX.

For other science bloggers’ takes, see John Preskill (who was at a meeting with Steven Spielberg to brainstorm the movie in 2006), Sean Carroll, Clifford Johnson, and Peter Woit.

In the rest of this post, I’m going to list the questions about Interstellar that I still don’t understand the answers to (yes, the ones still not answered by the Interstellar FAQ).  No doubt some of these are answered by Thorne’s book The Science of Interstellar, which I’ve ordered (it hasn’t arrived yet), but since my confusions are more about plot than science, I’m guessing that others are not.

SPOILER ALERT: My questions give away basically the entire plot—so if you’re planning to see the movie, please don’t read any further.  After you’ve seen it, though, come back and see if you can help with any of my questions.


1. What’s causing the blight, and the poisoning of the earth’s atmosphere?  The movie is never clear about this.  Is it a freak occurrence, or is it human-caused climate change?  If the latter, then wouldn’t it be worth some effort to try to reverse the damage and salvage the earth, rather than escaping through a wormhole to another galaxy?

2. What’s with the drone?  Who sent it?  Why are Cooper and Murph able to control it with their laptop?  Most important of all, what does it have to do with the rest of the movie?

3. If NASA wanted Cooper that badly—if he was the best pilot they’d ever had and NASA knew it—then why couldn’t they just call him up?  Why did they have to wait for beings from the fifth dimension to send a coded message to his daughter revealing their coordinates?  Once he did show up, did they just kind of decide opportunistically that it would be a good idea to recruit him?

4. What was with Cooper’s crash in his previous NASA career?  If he was their best pilot, how and why did the crash happen?  If this was such a defining, traumatic incident in his life, why is it never brought up for the rest of the movie?

5. How is NASA funded in this dystopian future?  If official ideology holds that the Apollo missions were faked, and that growing crops is the only thing that matters, then why have the craven politicians been secretly funneling what must be trillions of dollars to a shadow-NASA, over a period of fifty years?

6. Why couldn’t NASA have reconnoitered the planets using robots—especially since this is a future where very impressive robots exist?  Yes, yes, I know, Matt Damon explains in the movie that humans remain more versatile than robots, because of their “survival instinct.”  But the crew arrives at the planets missing extremely basic information about them, like whether they’re inhospitable to human life because of freezing temperatures or mile-high tidal waves.  This is information that robotic probes, even of the sort we have today, could have easily provided.

7. Why are the people who scouted out the 12 planets so limited in the data they can send back?  If they can send anything, then why not data that would make Cooper’s mission completely redundant (excepting, of course, the case of the lying Dr. Mann)?  Does the wormhole limit their transmissions to 1 bit per decade or something?

8. Rather than wasting precious decades waiting for Cooper’s mission to return, while (presumably) billions of people die of starvation on a fading earth, wouldn’t it make more sense for NASA to start colonizing the planets now?  They could simply start trial colonies on all the planets, even if they think most of the colonies will fail.  Yes, this plan involves sacrificing individuals for the greater good of humanity, but NASA is already doing that anyway, with its slower, riskier, stupider reconnaissance plan.  The point becomes even stronger when we remember that, in Professor Brand’s mind, the only feasible plan is “Plan B” (the one involving the frozen human embryos).  Frozen embryos are (relatively) cheap: why not just spray them all over the place?  And why wait for “Plan A” to fail before starting that?

9. The movie involves a planet, Miller, that’s so close to the black hole Gargantua that every hour spent there corresponds to seven years on earth.  There was an amusing exchange on Slate, where Phil Plait made the commonsense point that a planet that deep in a black hole’s gravity well would presumably get ripped apart by tidal forces.  Plait later had to issue an apology, since, in conceiving this movie, Kip Thorne had made sure that Gargantua was a rapidly rotating black hole—and it turns out that the physics of rotating black holes is sufficiently different from that of non-rotating ones to allow such a planet in principle.  Alas, this clever explanation still leaves me unsatisfied.  Physicists, please help: even if such a planet existed, wouldn’t safely landing a spacecraft on it, and getting it out again, require a staggering amount of energy—well beyond what the humans shown in the movie can produce?  (If they could produce that much acceleration and deceleration, then why couldn’t they have traveled from Earth to Saturn in days rather than years?)  If one could land on Miller and then get off of it using the relatively conventional spacecraft shown in the movie, then the amusing thought suggests itself that one could get factor-of-60,000 computational speedups (7 years is about 61,000 hours), “free of charge,” by simply leaving one’s computer in space while one spent some time on the planet.  (And indeed, something like that happens in the movie: after Cooper and Anne Hathaway return from Miller, Romilly—the character who stayed behind—has had 23 years to think about physics.)  For the arithmetic, see the short sketch after this list.

10. Why does Cooper decide to go into the black hole?  Surely he could jettison enough weight to escape the black hole’s gravity by sending his capsule into the hole, while he himself shared Anne Hathaway’s capsule?

11. Speaking of which, does Cooper go into the black hole?  I.e., is the “tesseract” something he encounters before or after he crosses the event horizon?  (Or maybe it should be thought of as at the event horizon—like a friendlier version of the AMPS firewall?)

12. Why is Cooper able to send messages back in time—but only by jostling books around, moving the hands of a watch, and creating patterns of dust in one particular room of one particular house?  (Does this have something to do with love and gravity being the only two forces in the universe that transcend space and time?)

13. Why does Cooper desperately send the message “STAY” to his former self?  By this point in the movie, isn’t it clear that staying on Earth means the death of all humans, including Murph?  If Cooper thought that a message could get through at all, then why not a message like: “go, and go directly to Edmunds’ planet, since that’s the best one”?  Also, given that Cooper now exists outside of time, why does he feel such desperate urgency?  Doesn’t he get, like, infinitely many chances?

14. Why is Cooper only able to send “quantum data” that saves the world to the older Murph—the one who lives when (presumably) billions of people are already dying of starvation?  Why can’t he send the “quantum data” back to the 10-year-old Murph, for example?  Even if she can’t yet understand it, surely she could hand it over to Professor Brand.  And even if this plan would be unlikely to succeed: again, Cooper now exists outside of time.  So can’t he just keep going back to the 10-year-old Murph, rattling those books over and over until the message gets through?

15. What exactly is the “quantum data” needed for, anyway?  I gather it has something to do with building a propulsion system that can get the entire human population out of the earth’s gravity well at a reasonable cost?  (Incidentally, what about all the animals?  If the writers of the Old Testament noticed that issue, surely the writers of Interstellar could.)

16. How does Cooper ever make it out of the black hole?  (Maybe it was explained and I missed it: once he entered the black hole, things got extremely confusing.)  Do the fifth-dimensional beings create a new copy of Cooper outside the black hole?  Do they postselect on a branch of the wavefunction where he never entered the black hole in the first place?  Does Murph use the “quantum data” to get him out?

17. At his tearful reunion with the elderly Murph, why is Cooper totally uninterested in meeting his grandchildren and great-grandchildren, who are in the same room?  And why are they uninterested in meeting him?  I mean, seeing Murph again has been Cooper’s overriding motivation during his journey across the universe, and has repeatedly been weighed against the survival of the entire human race, including Murph herself.  But seeing Murph’s kids—his grandkids—isn’t even worth five minutes?

18. Speaking of which, when did Murph ever find time to get married and have kids?  Since she’s such a major character, why don’t we learn anything about this?

19. Also, why is Murph an old woman by the time Cooper gets back?  Yes, Cooper lost a few decades because of the time dilation on Miller’s planet.  I guess he lost the additional decades while entering and leaving Gargantua?  If the five-dimensional beings were able to use their time-travel / causality-warping powers to get Cooper out of the black hole, couldn’t they have re-synced his clock with Murph’s while they were at it?

20. Why does Cooper need to steal a spaceship to get to Anne Hathaway’s planet?  Isn’t Murph, like, the one in charge?  Can’t she order that a spaceship be provided for Cooper?

21. Astute readers will note that I haven’t yet said anything about the movie’s central paradox, the one that dwarfs all the others.  Namely, if humans were going to go extinct without a “wormhole assist” from the humans of the far future, then how were there any humans in the far future to provide the wormhole assist?  And conversely, if the humans of the far future find themselves already existing, then why do they go to the trouble to put the wormhole in their past (which now seems superfluous, except maybe for tidying up the story of their own origins)?  The reason I didn’t ask about this is that I realize it’s supposed to be paradoxical; we’re supposed to feel vertigo thinking about it.  (And also, it’s not entirely unrelated to how PSPACE-complete problems get solved with polynomial resources, in my and John Watrous’s paper on computation with closed timelike curves.)  My problem is a different one: if the fifth-dimensional, far-future humans have the power to mold their own past to make sure everything turned out OK, then what they actually do seems pathetic compared to what they could do.  For example, why don’t they send a coded message to the 21st-century humans (similar to the coded messages that Cooper sends to Murph), telling them how to avoid the blight that destroys their crops?  Or just telling them that Edmunds’ planet is the right one to colonize?  Like the God of theodicy arguments, do the future humans want to use their superpowers only to give us a little boost here and there, while still leaving us a character-forming struggle?  Even if this reticence means that billions of innocent people—ones who had nothing to do with the character-forming struggle—will die horrible deaths?  If so, then I don’t understand these supposedly transcendently-evolved humans any better than I understand the theodical God.
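As promised in question 9, here is the quick arithmetic behind the factor-of-60,000 aside; it uses nothing beyond the movie’s stated ratio of one hour on Miller’s planet to seven earth years:

```python
# Arithmetic behind question 9's "factor of 60,000" aside: the movie
# states that 1 hour on Miller's planet corresponds to 7 earth years.
HOURS_PER_YEAR = 365.25 * 24  # about 8,766 hours in a year
dilation = 7 * HOURS_PER_YEAR  # earth-hours elapsed per hour spent on Miller
print(f"time-dilation factor: {dilation:,.0f}")  # ~61,000

# So a computer left in orbit while its owner spends one day on the
# planet racks up ~61,000 days (roughly 168 years) of runtime, as
# measured by the owner.
```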


Anyway, rather than ending on that note of cosmic pessimism, I guess I could rejoice that we’re living through what must be the single biggest month in the history of nerd cinema—what with a sci-fi film co-produced by a great theoretical physicist, a Stephen Hawking biopic, and the Alan Turing movie coming out in a few weeks.  I haven’t yet seen the latter two.  But it looks like the time might be ripe to pitch my own decades-old film ideas, like “Radical: The Story of Évariste Galois.”


Update (Nov. 15): I just finished reading Kip Thorne’s interesting book The Science of Interstellar.  I’d say that it addresses (doesn’t always clear up, but at least addresses) 7 of my 21 confusions: 1, 4, 9, 10, 11, 15, and 19.  Briefly:

1. Thorne correctly notes that the movie is vague about what’s causing the blight and the change to the earth’s atmosphere, but he discusses a bunch of possibilities, which are more in the “freak disaster” than the “manmade” category.

4. Cooper’s crash was supposed to have been caused by a gravitational anomaly, as the bulk beings of the far future were figuring out how to communicate with 21st-century humans.  It was another foreshadowing of those bulk beings.

9. Thorne notices the problem of the astronomical amount of energy needed to safely land on Miller’s planet and then get off of it—given that this planet is deep inside the gravity well of the black hole Gargantua, and orbiting Gargantua at a large fraction of the speed of light.  Thorne offers a solution that can only be called creative: namely, while nothing about this was said in the movie (since Christopher Nolan thought it would confuse people), it turns out that the crew accelerated to relativistic speed and then decelerated using a gravitational slingshot around a second, intermediate-mass black hole, which just happened to be in the vicinity of Gargantua at precisely the right times for this.  Thorne again appeals to slingshots around unmentioned but strategically-placed intermediate-mass black holes several more times in the book, to explain other implausible accelerations and decelerations that I hadn’t even noticed.

10. Thorne acknowledges that Cooper didn’t really need to jump into Gargantua in order to jettison the mass of his body (which is trivial compared to the mass of the spacecraft).  Cooper’s real reason for jumping, he says, was the desperate hope that he could somehow find the quantum data there needed to save the humans on Earth, and then somehow get it out of the black hole and back to the humans.  (This being a movie, it of course turns out that Cooper was right.)

11. Yes, Cooper encounters the tesseract while inside the black hole.  Indeed, he hits it while flying into a singularity that’s behind the event horizon, but that isn’t the black hole’s “main” singularity—it’s a different, milder singularity.

15. While this wasn’t made clear in the movie, the purpose of the quantum data was indeed to learn how to manipulate the gravitational anomalies in order to decrease Newton’s constant G in the vicinity of the earth—destroying the earth but also allowing all the humans to escape its gravity with the rocket fuel that’s available.  (Again, nothing said about the poor animals.)

19. Yes, Cooper lost the additional decades while entering Gargantua.  (Furthermore, while Thorne doesn’t discuss this, I guess he must have lost them only when he was still with Anne Hathaway, not after he separates from her.  For otherwise, Anne Hathaway would also be an old woman by the time Cooper reaches her on Edmunds’ planet, contrary to what’s shown in the movie.)

Microsoft SVC

Tuesday, September 23rd, 2014

By now, the news that Microsoft abruptly closed its Silicon Valley research lab—leaving dozens of stellar computer scientists jobless—has already been all over the theoretical computer science blogosphere: see, e.g., Lance, Luca, Omer Reingold, Michael Mitzenmacher.  I never made a real visit to Microsoft SVC (only went there once IIRC, for a workshop, while a grad student at Berkeley); now of course I won’t have the chance.

The theoretical computer science community, in the Bay Area and elsewhere, is now mobilizing to offer visiting positions to the “refugees” from Microsoft SVC, until they’re able to find more permanent employment.  I was happy to learn, this week, that MIT’s theory group will likely play a small part in that effort.

Like many others, I confess to bafflement about Microsoft’s reasons for doing this.  Won’t the severe damage to MSR’s painstakingly-built reputation, to its hiring and retention of the best people, outweigh the comparatively small amount of money Microsoft will save?  Did they at least ask Mr. Gates, to see whether he’d chip in the proverbial change under his couch cushions to keep the lab open?  Most of all, why the suddenness?  Why not wind the lab down over a year, giving the scientists time to apply for new jobs in the academic hiring cycle?  It’s not like Microsoft is in a financial crisis, lacking the cash to keep the lights on.

Yet one could also view this announcement as a lesson in why academia exists and is necessary.  Yes, one should applaud those companies that choose to invest a portion of their revenue in basic research—like IBM, the old AT&T, or Microsoft itself (which continues to operate great research outfits in Redmond, Santa Barbara, both Cambridges, Beijing, Bangalore, Munich, Cairo, and Herzliya).  And yes, one should acknowledge the countless times when academia falls short of its ideals, when it too places the short term above the long.  All the same, it seems essential that our civilization maintain institutions for which the pursuit and dissemination of knowledge are not just accoutrements for when financial times are good and the Board of Directors is sympathetic, but are the institution’s entire reasons for being: those activities that the institution has explicitly committed to support for as long as it exists.

Steven Pinker’s inflammatory proposal: universities should prioritize academics

Thursday, September 11th, 2014

If you haven’t yet, I urge you to read Steven Pinker’s brilliant piece in The New Republic about what’s broken with America’s “elite” colleges and how to fix it.  The piece starts out as an evisceration of an earlier New Republic article on the same subject by William Deresiewicz.  Pinker agrees with Deresiewicz that something is wrong, but finds Deresiewicz’s diagnosis of the problem to be lacking.  The rest of Pinker’s article sets out his own vision, which involves America’s top universities taking the radical step of focusing on academics, and returning extracurricular activities like sports to their rightful place as extras: ways for students to unwind, rather than a university’s primary reason for existing, or a central criterion for undergraduate admissions.  Most controversially, this would mean that the admissions process at US universities would become more like that in virtually every other advanced country: a relatively-straightforward matter of academic performance, rather than an exercise in peering into the applicants’ souls to find out whether they have a special je ne sais quoi, and the students (and their parents) desperately gaming the intentionally-opaque system, by paying consultants tens of thousands of dollars to develop souls for them.

(Incidentally, readers who haven’t experienced it firsthand might not be able to understand, or believe, just how strange the undergraduate admissions process in the US has become, although Pinker’s anecdotes give some idea.  I imagine anthropologists centuries from now studying American elite university admissions, and the parenting practices that have grown up around them, alongside cannibalism, kamikaze piloting, and other historical extremes of the human condition.)

Pinker points out that a way to assess students’ ability to do college coursework—much more quickly and accurately than by relying on the soul-detecting skills of admissions officers—has existed for a century.  It’s called the standardized test.  But unlike in the rest of the world (even in ultraliberal Western Europe), standardized tests are politically toxic in the US, seen as instruments of racism, classism, and oppression.  Pinker reminds us of the immense irony here: standardized tests were invented as a radical democratizing tool, as a way to give kids from poor and immigrant families the chance to attend colleges that had previously only been open to the children of the elite.  They succeeded at that goal—too well for some people’s comfort.

We now know that the Ivies’ current emphasis on sports, “character,” “well-roundedness,” and geographic diversity in undergraduate admissions was consciously designed (read that again) in the 1920s, by the presidents of Harvard, Princeton, and Yale, as a tactic to limit the enrollment of Jews.  Nowadays, of course, the Ivies’ “holistic” admissions process no longer fulfills that original purpose, in part because American Jews learned to play the “well-roundedness” game as well as anyone, shuttling their teenage kids between sports, band practice, and faux charity work, while hiring professionals to ghostwrite application essays that speak searingly from the heart.  Today, a major effect of “holistic” admissions is instead to limit the enrollment of Asian-Americans (especially recent immigrants), who tend disproportionately to have superb SAT scores, but to be deficient in life’s more meaningful dimensions, such as lacrosse, student government, and marching band.  More generally—again, pause to wallow in the irony—our “progressive” admissions process works strongly in favor of the upper-middle-class families who know how to navigate it, and against the poor and working-class families who don’t.

Defenders of the status quo have missed this reality on the ground, it seems to me, because they’re obsessed with the notion that standardized tests are “reductive”: that is, that they reduce a human being to a number.  Aren’t there geniuses who bomb standardized tests, they ask, as well as unimaginative grinds who ace them?  And if you make test scores a major factor in admissions, then won’t students and teachers train for the tests, and won’t that pervert open-ended intellectual curiosity?  The answer to both questions, I think, is clearly “yes.”  But the status-quo-defenders never seem to take the next step, of examining the alternatives to standardized testing, to see whether they’re even worse.

I’d say the truth is this: spots at the top universities are so coveted, and so much scarcer than the demand for them, that no matter what you use as your admissions criterion, that thing will instantly get fetishized and turned into a commodity by students, parents, and companies eager to profit from their anxiety.  If it’s grades, you’ll get a grades fetish; if sports, you’ll get a sports fetish; if community involvement, you’ll get soup kitchens sprouting up for the sole purpose of giving ambitious 17-year-olds something to write about in their application essays.  If Harvard and Princeton announced that from now on, they only wanted the most laid-back, unambitious kids, the ones who spent their summers lazily skipping stones in a lake, rather than organizing their whole lives around getting into Harvard and Princeton, tens of thousands of parents in the New York metropolitan area would immediately enroll their kids in relaxation and stone-skipping prep courses.  So, given that reality, why not at least make the fetishized criterion one that’s uniform, explicit, predictively valid, relatively hard to game, and relevant to universities’ core intellectual mission?

(Here, I’m ignoring criticisms specific to the SAT: for example, that it fails to differentiate students at the extreme right end of the bell curve, thereby forcing the top schools to use other criteria.  Even if those criticisms are true, they could easily be fixed by switching to other tests.)

I admit that my views on this matter might be colored by my strange (though as I’ve learned, not at all unique) experience, of getting rejected from almost every “top” college in the United States, and then, ten years later, getting recruited for faculty jobs by the very same institutions that had rejected me as a teenager.  Once you understand how undergraduate admissions work, the rejections were unsurprising: I was a 15-year-old with perfect SATs and a published research paper, but not only was I young and immature, with spotty grades and a weird academic trajectory; I also had no sports, no music, no diverse leadership experiences.  I was a narrow, linear, A-to-B thinker who lacked depth and emotional intelligence: the exact opposite of what Harvard and Princeton were looking for in every way.  The real miracle is that despite these massive strikes against me, two schools—Cornell and Carnegie Mellon—were nice enough to give me a chance.  (I ended up going to Cornell, where I got a great education.)

Some people would say: so then what’s the big deal?  If Harvard or MIT reject some students that maybe they should have admitted, those students will simply go elsewhere, where—if they’re really that good—they’ll do every bit as well as they would’ve done at the so-called “top” schools.  But to me, that’s uncomfortably close to saying: there are millions of people who go on to succeed in life despite childhoods of neglect and poverty.  Indeed, some of those people succeed partly because of their rough childhoods, which served as the crucibles of their character and resolve.  Ergo, let’s neglect our own children, so that they too can have the privilege of learning from the school of hard knocks just like we did.  The fact that many people turn out fine despite unfairness and adversity doesn’t mean that we should inflict unfairness if we can avoid it.

Let me end with an important clarification.  Am I saying that, if I had dictatorial control over a university (ha!), I would base undergraduate admissions solely on standardized test scores?  Actually, no.  Here’s what I would do: I would admit the majority of students mostly based on test scores.  A minority, I would admit because of something special about them that wasn’t captured by test scores, whether that something was musical or artistic talent, volunteer work in Africa, a bestselling smartphone app they’d written, a childhood as an orphaned war refugee, or membership in an underrepresented minority.  Crucially, though, the special something would need to be special.  What I wouldn’t do is what’s done today: namely, to turn “specialness” and “well-roundedness” into commodities that the great mass of applicants have to manufacture before they can even be considered.

Other than that, I would barely look at high-school grades, regarding them as too variable from one school to another.  And, while conceding it might be impossible, I would try hard to keep my university in good enough financial shape that it didn’t need any legacy or development admits at all.


Update (Sep. 14): For those who feel I’m exaggerating the situation, please read the story of commenter Jon, about a homeschooled 15-year-old doing graduate-level work in math who, three years ago, was refused undergraduate admission to both Berkeley and Caltech, with the math faculty powerless to influence the admissions officers. See also my response.

Do theoretical computer scientists despise practitioners? (Answer: no, that’s crazy)

Thursday, August 28th, 2014

A roboticist and Shtetl-Optimized fan named Jon Groff recently emailed me the following suggestion for a blog entry:

I think a great idea for an entry would be the way that in fields like particle physics the theoreticians and experimentalists get along quite well but in computer science and robotics in particular there seems to be a great disdain for the people that actually do things from the people that like to think about them. Just thought I’d toss that out there in case you are looking for some subject matter.

After I replied (among other things, raising my virtual eyebrows over his rosy view of the current state of theoretician/experimentalist interaction in particle physics), Jon elaborated on his concerns in a subsequent email:

[T]here seems to be this attitude in CS that getting your hands dirty is unacceptable. You haven’t seen it because you sit a lofty heights and I tend to think you always have. I have been pounding out code since ferrite cores. Yes, Honeywell 1648A, so I have been looking up the posterior of this issue rather than from the forehead as it were. I guess my challenge would be to find a noteworthy computer theoretician somewhere and ask him:
1) What complete, working, currently functioning systems have you designed?
2) How much of the working code did you contribute?
3) Which of these systems is still operational and in what capacity?
Or say, if the person was a famous robotics professor or something you may ask:
1) Have you ever actually ‘built’ a ‘robot’?
2) Could you, if called upon, design and build an easily tasked robot safe for home use using currently available materials and code?

So I wrote a second reply, which Jon encouraged me to turn into a blog post (kindly giving me permission to quote him).  In case it’s of interest to anyone else, my reply is below.


Dear Jon,

For whatever it’s worth, when I was an undergrad, I spent two years working as a coder for Cornell’s RoboCup robot soccer team, handling things like the goalie.  (That was an extremely valuable experience, one reason being that it taught me how badly I sucked at meeting deadlines, documenting my code, and getting my code to work with other people’s code.)  Even before that, I wrote shareware games with my friend Alex Halderman (now a famous computer security expert at U. of Michigan); we made almost $30 selling them.  And I spent several summers working on applied projects at Bell Labs, back when that was still a thing.  And by my count, I’ve written four papers that involved code I personally wrote and experiments I did (one on hypertext, one on stylometric clustering, one on Boolean function query properties, one on improved simulation of stabilizer circuits—for the last of these, the code is actually still used by others).  While this is all from the period 1994-2004 (these days, if I need any coding done, I use the extremely high-level programming language called “undergrad”), I don’t think it’s entirely true to say that I “never got my hands dirty.”
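(An aside, for readers wondering what “simulation of stabilizer circuits” even means: here’s a minimal illustrative sketch in Python of the tableau idea, tracking how n Pauli generators transform under the Clifford gates H, S, and CNOT.  To be clear, this is a toy written for this post, not the actual CHP code, and it omits measurement, which is where most of the real work lies.)

    # Toy tableau-based stabilizer simulator (Gottesman-Knill style sketch).
    # Each of the n generators is stored as x-bits, z-bits, and a sign bit;
    # the Clifford gates act by simple bitwise update rules.  Measurement,
    # which needs Gaussian elimination over GF(2), is omitted.

    class Tableau:
        def __init__(self, n):
            self.n = n
            # Start with the stabilizers of |0...0>: row i is the generator Z_i.
            self.x = [[0] * n for _ in range(n)]
            self.z = [[1 if j == i else 0 for j in range(n)] for i in range(n)]
            self.r = [0] * n  # sign bit of each generator

        def hadamard(self, q):
            for i in range(self.n):
                self.r[i] ^= self.x[i][q] & self.z[i][q]  # Y picks up a sign
                self.x[i][q], self.z[i][q] = self.z[i][q], self.x[i][q]

        def phase(self, q):  # the S gate
            for i in range(self.n):
                self.r[i] ^= self.x[i][q] & self.z[i][q]
                self.z[i][q] ^= self.x[i][q]

        def cnot(self, a, b):  # control a, target b
            for i in range(self.n):
                self.r[i] ^= (self.x[i][a] & self.z[i][b]
                              & (self.x[i][b] ^ self.z[i][a] ^ 1))
                self.x[i][b] ^= self.x[i][a]
                self.z[i][a] ^= self.z[i][b]

    # Example: prepare a Bell pair; the stabilizer group becomes <+XX, +ZZ>.
    t = Tableau(2)
    t.hadamard(0)
    t.cnot(0, 1)
    for i in range(2):
        paulis = "".join("IXZY"[t.x[i][q] + 2 * t.z[i][q]] for q in range(2))
        print(("-" if t.r[i] else "+") + paulis)

Running it prints +XX and +ZZ, the stabilizer generators of a Bell pair, computed in time polynomial in the number of qubits rather than exponential, which is the whole point of simulating stabilizer circuits this way.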

But even if I hadn’t had any of those experiences, or other theoretical computer scientists hadn’t had analogous ones, your questions still strike me as unfair.  They’re no more fair than cornering a star coder or other practical person with questions like, “Have you ever proved a theorem?  A nontrivial theorem?  Why is BPP contained in P/poly?  What’s the cardinality of the set of Turing-degrees?”  If the coder can’t easily answer these questions, would you say it means that she has “disdain for theorists”?  (I was expecting some discussion of this converse question in your email, and was amused when I didn’t find any.)
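(For readers who’d like to check their own answers to those two quiz questions, here’s a compressed sketch of the standard arguments, written from memory, so treat it as a reminder rather than a careful proof.)

    % Adleman's theorem, and the cardinality of the Turing degrees.
    \textbf{Claim (Adleman).} $\mathsf{BPP} \subseteq \mathsf{P/poly}$.
    \emph{Sketch.} Given $L \in \mathsf{BPP}$, repeat the machine $O(n)$ times
    on each input and take a majority vote, so that on any fixed input
    $x \in \{0,1\}^n$ the error probability falls below $2^{-(n+1)}$.  By the
    union bound, a uniformly random string $r$ errs on \emph{some} length-$n$
    input with probability at most $2^n \cdot 2^{-(n+1)} = 1/2 < 1$.  Hence
    some single string $r_n$ succeeds on \emph{all} inputs of length $n$;
    hardwire $r_n$ as the polynomial-size advice.

    \textbf{Turing degrees.} There are $2^{\aleph_0}$ subsets of $\mathbb{N}$,
    and each degree contains at most countably many sets, since a fixed set
    computes only countably many others (one per oracle machine).  So there
    are exactly $2^{\aleph_0}$ degrees: the cardinality of the continuum.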

Personally, I’d say “of course not”: maybe the coder is great at coding, doesn’t need theory very much on a day-to-day basis and doesn’t have much free time to learn it, but (all else equal) would be happy to know more.  Maybe the coder likes theory as an outsider, even has friends from her student days who are theorists, and who she’d go to if she ever did need their knowledge for her work.  Or maybe not.  Maybe she’s an asshole who looks down on anyone who doesn’t have the exact same skill-set that she does.  But I certainly couldn’t conclude that from her inability to answer basic theory questions.

I’d say just the same about theorists.  If they don’t have as much experience building robots as they should have, don’t know as much about large software projects as they should know, etc., then those are all defects to add to the long list of their other, unrelated defects.  But it would be a mistake to assume that they failed to acquire this knowledge because of disdain for practical people, rather than for mundane reasons like busyness or laziness.

Indeed, it’s also possible that they respect practical people all the more, because they tried to do the things the practical people are good at, and discovered for themselves how hard they were.  Maybe they became theorists partly because of that self-discovery—that was certainly true in my case.  Maybe they’d be happy to talk to or learn from a practical roboticist like yourself, but are too shy or too nerdy to initiate the conversation.

Speaking of which: yes, let’s let bloom a thousand collaborations between theorists and practitioners!  Those are the lifeblood of science.  On the other hand, based on personal experience, I’m also sensitive to the effect where, because of pressures from funding agencies, theorists have to pretend their work is “practically relevant” when they’re really just trying to discover something cool, while practitioners, meanwhile, have to pretend their work is theoretically novel or deep, when really, they’re just trying to write software that people will want to use.  I’d love to see both groups freed from this distorting influence, so that they can collaborate for real reasons rather than fake ones.

(I’ve also often remarked that, if I hadn’t gravitated to the extreme theoretical end of computer science, I think I might have gone instead to the extreme practical end, rather than to any of the points in between.  That’s because I hate the above-mentioned distorting influence: if I’m going to try to understand the ultimate limits of computation, then I should pursue that wherever it leads, even if it means studying computational models that won’t be practical for a million years.  And conversely, if I’m going to write useful software, I should throw myself 100% into that, even if it means picking an approach that’s well-understood, clunky, and reliable over an approach that’s new, interesting, elegant, and likely to fail.)

Best,
Scott

US State Department: Let in cryptographers and other scientists

Saturday, July 26th, 2014

Predictably, my last post attracted plenty of outrage (some of it too vile to let through), along with the odd commenter who actually agreed with what I consider my fairly middle-of-the-road, liberal Zionist stance.  But since the outrage came from both sides of the issue, and the two sides were outraged about the opposite things, I guess I should feel OK about it.

Still, it’s hard not to smart from the burns of vituperation, so today I’d like to blog about a very different political issue: one where hopefully almost all Shtetl-Optimized readers will actually agree with me (!).

I’ve learned from colleagues that, over the past year, foreign-born scientists have been having enormously more trouble getting visas to enter the US than they used to.  The problem, I’m told, is particularly severe for cryptographers: embassy clerks are now instructed to ask specifically whether computer scientists seeking to enter the US work in cryptography.  If an applicant answers “yes,” it triggers a special process where the applicant hears nothing back for months, and very likely misses the workshop in the US that he or she had planned to attend.  The root of the problem, it seems, is something called the Technology Alert List (TAL), which has been around for a while—the State Department beefed it up in response to the 9/11 attacks—but which, for some unknown reason, is only now being rigorously enforced.  (Being marked as working in one of the sensitive fields on this list is apparently called “getting TAL’d.”)

The issue reached a comical extreme last October, when Adi Shamir, the “S” in RSA, Turing Award winner, and foreign member of the US National Academy of Sciences, was prevented from entering the US to speak at a “History of Cryptology” conference sponsored by the National Security Agency.  According to Shamir’s open letter detailing the incident, not even his friends at the NSA, or the president of the NAS, were able to grease the bureaucracy at the State Department for him.

It should be obvious to everyone that a crackdown on academic cryptographers serves no national security purpose whatsoever, and if anything harms American security and economic competitiveness, by diverting scientific talent to other countries.  (As Shamir delicately puts it, “the number of terrorists among the members of the US National Academy of Science is rather small.”)  So:

  1. Any readers who have more facts about what’s going on, or personal experiences, are strongly encouraged to share them in the comments section.
  2. Any readers who might have any levers of influence to pull on this issue—a Congressperson to write to, a phone call to make, an Executive Order to issue (I’m talking to you, Barack), etc.—are strongly encouraged to pull them.