My Quora session

May 19th, 2016

Here it is.  Enjoy!  (But sorry, no new questions right now.)

Three announcements

May 9th, 2016

(-3) Bonus Announcement of May 30: As a joint effort by Yuri Matiyasevich, Stefan O’Rear, and myself, and using the Not-Quite-Laconic language that Stefan adapted from Adam Yedidia’s Laconic, we now have a 744-state TM that halts iff there’s a counterexample to the Riemann Hypothesis.

(-2) Today’s Bonus Announcement: Stefan O’Rear says that his Turing machine to search for contradictions in ZFC is now down to 1919 states.  If verified, this is an important milestone: our upper bound on the number of Busy Beaver values that are knowable in standard mathematics is now less than the number of years since the birth of Christ (indeed, even since the generally-accepted dates for the writing of the Gospels).

Stefan also says that his Not-Quite-Laconic system has yielded a 1008-state Turing machine to search for counterexamples to the Riemann Hypothesis, improving on our 5372 states.

(-1) Another Bonus Announcement: Great news, everyone!  Using a modified version of Adam Yedidia’s Laconic language (which he calls NQL, for Not Quite Laconic), Stefan O’Rear has now constructed a 5349-state Turing machine that directly searches for contradictions in ZFC (or rather in Metamath, which is known to be equivalent to ZFC), and whose behavior is therefore unprovable in ZFC, assuming ZFC is consistent.  This, of course, improves on my and Adam’s state count by 2561 states—but it also fixes the technical issue with needing to assume a large cardinal axiom (SRP) in order to prove that the TM runs forever.  Stefan promises further state reductions in the near future.

In other news, Adam has now verified the 43-state Turing machine by Jared S that halts iff there’s a counterexample to Goldbach’s Conjecture.  The 27-state machine by code golf addict is still being verified.

(0) Bonus Announcement: I’ve had half a dozen “Ask Me Anything” sessions on this blog, but today I’m trying something different: a Q&A session on Quora.  The way it works is that you vote for your favorite questions; then on Tuesday, I’ll start with the top-voted questions and keep going down the list until I get tired.  Fire away!  (And thanks to Shreyes Seshasai at Quora for suggesting this.)

(1) When you announce a new result, the worst that can happen is that the result turns out to be wrong, trivial, or already known.  The best that can happen is that the result quickly becomes obsolete, as other people race to improve it.  With my and Adam Yedidia’s work on small Turing machines that elude set theory, we seem to be heading for that best case.  Stefan O’Rear wrote a not-quite-Laconic program that just searches directly for contradictions in a system equivalent to ZFC.  If we could get his program to compile, it would likely yield a Turing machine with somewhere around 6,000-7,000 states whose behavior was independent of ZFC, and would also fix the technical problem with my and Adam’s machine Z, where one needed to assume a large-cardinal axiom called SRP to prove that Z runs forever.  While it would require a redesign from the ground up, a 1,000-state machine whose behavior eludes ZFC also seems potentially within reach using Stefan’s ideas.  Meanwhile, our 4,888-state machine for Goldbach’s conjecture seems to have been completely blown out of the water: first, a commenter named Jared S says he’s directly built a 73-state machine for Goldbach (now down to 43 states); second, a commenter named “code golf addict” claims to have improved on that with a mere 31 states (now down to 27 states).  These machines are now publicly posted, but still await detailed verification.

(2) My good friend Jonah Sinick cofounded Signal Data Science, a data-science summer school that will be running for the second time this summer.  They operate on an extremely interesting model, which I’m guessing might spread more widely: tuition is free, but you pay 10% of your first year’s salary after finding a job in the tech sector.  He asked me to advertise them, so—here!

(3) I was sad to read the news that Uber and Lyft will be suspending all service in Austin, because the city passed an ordinance requiring their drivers to get fingerprint background checks, and imposing other regulations that Uber and Lyft argue are incompatible with their model of part-time drivers.  The companies, of course, are also trying to send a clear message to other cities about what will happen if they don’t get the regulatory environment they want.  To me, the truth of the matter is that Uber/Lyft are like the web, Google, or smartphones: clear, once-per-decade quality-of-life advances that you try once, and then no longer understand how you survived without.  So if Austin wants to maintain a reputation as a serious, modern city, it has no choice but to figure out some way to bring these companies back to the negotiating table.  On the other hand, I’d also say to Uber and Lyft that, even if they needed to raise fares to taxi levels to comply with the new regulations, I expect they’d still do a brisk business!

For me, the “value proposition” of Uber has almost nothing to do with the lower fares, even though they’re lower.  For me, it’s simply about being able to get from one place to another without needing to drive and park, and also without needing desperately to explain where you are, over and over, to a taxi dispatcher who sounds angry that you called and who doesn’t understand you because of a combination of language barriers and poor cellphone reception and your own inability to articulate your location.  And then wondering when and if your taxi will ever show up, because the dispatcher couldn’t promise a specific time, or hung up on you before you could ask them.  And then embarking on a second struggle, to explain to the driver where you’re going, or at least convince them to follow the Google Maps directions.  And then dealing with the fact that the driver has no change, you only have twenties and fifties, and their little machine that prints receipts is out of paper so you can’t submit your trip for reimbursement either.

So yes, I really hope Uber, Lyft, and the city of Austin manage to sort this out before Dana and I move there!  On the other hand, I should say that there’s another part of the new ordinance—namely, requiring Uber and Lyft cars to be labeled—that strikes me as an unalloyed good.  For if there’s one way in which Uber is less convenient than taxis, it’s that you can never figure out which car is your Uber, among all the cars stopping or slowing down near you that look vaguely like the one in the app.

The 8000th Busy Beaver number eludes ZF set theory: new paper by Adam Yedidia and me

May 3rd, 2016

I’ve supervised a lot of great student projects in my nine years at MIT, but my inner nerdy teenager has never been as personally delighted by a project as it is right now.  Today, I’m proud to announce that Adam Yedidia, a PhD student at MIT (but an MEng student when he did most of this work), has explicitly constructed a one-tape, two-symbol Turing machine with 7,918 states, whose behavior (when run on a blank tape) can never be proven from the usual axioms of set theory, under reasonable consistency hypotheses.  Adam has also constructed a 4,888-state Turing machine that halts iff there’s a counterexample to Goldbach’s Conjecture, and a 5,372-state machine that halts iff there’s a counterexample to the Riemann Hypothesis.  In all three cases, this is the first time we’ve had a reasonable explicit upper bound on how many states you need in a Turing machine before you can see the behavior in question.

Here’s our research paper, on which Adam generously included me as a coauthor, even though he did the heavy lifting.  Also, here’s a github repository where you can download all the code Adam used to generate these Turing machines, and even use it to build your own small Turing machines that encode interesting mathematical statements.  Finally, here’s a YouTube video where Adam walks you through how to use his tools.

A more precise statement of our main result is this: we give a 7,918-state Turing machine, called Z (and actually explicitly listed in our paper!), such that:

  1. Z runs forever, assuming the consistency of a large-cardinal theory called SRP (Stationary Ramsey Property), but
  2. Z can’t be proved to run forever in ZFC (Zermelo-Fraenkel set theory with the Axiom of Choice, the usual foundation for mathematics), assuming that ZFC is consistent.

A bit of background: it follows, as an immediate consequence of Gödel’s Incompleteness Theorem, that there’s some computer program, of some length, that eludes the power of ordinary mathematics to prove what it does, when it’s run with an unlimited amount of memory.  So for example, such a program could simply enumerate all the possible consequences of the ZFC axioms, one after another, and halt if it ever found a contradiction (e.g., a proof of 1+1=3).  Assuming ZFC is consistent, this program must run forever.  But again assuming ZFC is consistent, ZFC can’t prove that the program runs forever, since if it did, then it would prove its own consistency, thereby violating the Second Incompleteness Theorem!
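To make that argument concrete, here is a minimal Python sketch of such a program. The helper enumerate_zfc_theorems() is hypothetical: it stands in for a routine that systematically applies the inference rules of first-order logic to the ZFC axioms, eventually yielding every theorem.

    def zfc_consistency_checker():
        for theorem in enumerate_zfc_theorems():  # hypothetical enumerator
            if theorem == "0 = 1":                # an explicit contradiction
                return                            # halt: ZFC is inconsistent
        # The enumeration never ends, so if ZFC is consistent this function
        # runs forever; and, by the Second Incompleteness Theorem, ZFC
        # cannot prove that it does.

All of the real complexity hides inside that one hypothetical enumerator, which is exactly the issue taken up next.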

Alas, this argument still leaves us in the dark about where, in the space of computer programs, the “Gödelian gremlin” rears its undecidable head.  A program that searches for an inconsistency in ZFC is a fairly complicated animal: it needs to encode not only the ZFC axiom schema, but also the language and inference rules of first-order logic.  Such a program might be thousands of lines long if written in a standard programming language like C, or millions of instructions if compiled down to bare-bones machine code.  You’d certainly never run across such a program by chance—not even if you had a computer the size of the observable universe, trying one random program after another for billions of years in a “primordial soup”!

So the question stands—a question that strikes me as obviously important, even though as far as I know, only one or two people ever asked the question before us; see here for example.  Namely: do the axioms of set theory suffice to analyze the behavior of every computer program that’s at most, let’s say, 50 machine instructions long?  Or are there super-short programs that already exhibit “Gödelian behavior”?

Theoretical computer scientists might object that this is “merely a question of constants.”  Well yes, OK, but the origin of life in our universe—a not entirely unrelated puzzle—is also “merely a question of constants”!  In more detail, we know that it’s possible with our laws of physics to build a self-replicating machine: say, DNA or RNA and their associated paraphernalia.  We also know that tiny molecules like H2O and CO2 are not self-replicating.  But we don’t know how small the smallest self-replicating molecule can be—and that’s an issue that influences whether we should expect to find ourselves alone in the universe or find it teeming with life.

Some people might also object that what we’re asking about has already been studied, in the half-century quest to design the smallest universal Turing machine (the subject of Stephen Wolfram’s $25,000 prize in 2007, to which I responded with my own $25.00 prize).  But I see that as fundamentally different, for the following reason.  A universal Turing machine—that is, a machine that simulates any other machine that’s described to it on its input tape—has the privilege of offloading almost all of its complexity onto the description format for the input machine.  So indeed, that’s exactly what all known tiny universal machines do!  But a program that checks (say) Goldbach’s Conjecture, or the Riemann Hypothesis, or the consistency of set theory, on an initially blank tape, has no such liberty.  For such machines, the number of states really does seem like an intrinsic measure of complexity, because the complexity can’t be shoehorned anywhere else.

One can also phrase what we’re asking in terms of the infamous Busy Beaver function.  Recall that BB(n), or the nth Busy Beaver number, is defined to be the maximum number of steps that any n-state Turing machine takes when run on an initially blank tape, assuming that the machine eventually halts. The Busy Beaver function was the centerpiece of my 1998 essay Who Can Name the Bigger Number?, which might still attract more readers than anything else I’ve written since. As I stressed there, if you’re in a biggest-number-naming contest, and you write “BB(10000),” you’ll destroy any opponent—however otherwise mathematically literate they are—who’s innocent of computability theory.  For BB(n) grows faster than any computable sequence of integers: indeed, if it didn’t, then one could use that fact to solve the halting problem, contradicting Turing’s theorem.
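To spell out that last step: if BB(n) were bounded above by any computable function f, then the following sketch (where f and simulate() are both hypothetical helpers, with f assumed computable and f(n) ≥ BB(n)) would decide the halting problem for machines started on blank tapes, contradicting Turing's theorem.

    def halts_on_blank_tape(machine, n_states):
        bound = f(n_states)  # hypothetical computable bound, f(n) >= BB(n)
        # By the definition of BB, an n-state machine still running after
        # BB(n) steps will never halt; so running for f(n) steps decides it.
        return simulate(machine, steps=bound)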

But the BB function has a second amazing property: namely, it’s a perfectly well-defined integer function, and yet once you fix the axioms of mathematics, only finitely many values of the function can ever be proved, even in principle.  To see why, consider again a Turing machine M that halts if and only if there’s a contradiction in ZF set theory.  Clearly such a machine could be built, with some finite number of states k.  But then ZF set theory can’t possibly determine the value of BB(k) (or BB(k+1), BB(k+2), etc.), unless ZF is inconsistent!  For to do so, ZF would need to prove that M ran forever, and therefore prove its own consistency, and therefore be inconsistent by Gödel’s Theorem.

OK, but we can now ask a quantitative question: how many values of the BB function is it possible for us to know?  Where exactly is the precipice at which this function “departs the realm of mortals and enters the realm of God”: is it closer to n=10 or to n=10,000,000?  In practice, four values of BB have been determined so far:

  • BB(1)=1
  • BB(2)=6
  • BB(3)=21 (Lin and Rado 1965)
  • BB(4)=107 (Brady 1975)

We also know some lower bounds:

  • BB(5) ≥ 47,176,870 (Marxen and Buntrock 1990)
  • BB(6) ≥ 7.4 × 10^36534 (Kropitz 2010)

See Heiner Marxen’s page or the Googology Wiki (which somehow I only learned about today) for more information.

Some Busy Beaver enthusiasts have opined that even BB(6) will never be known exactly.  On the other hand, the abstract argument from before tells us only that, if we confine ourselves to (say) ZF set theory, then there’s some k—possibly in the tens of millions or higher—such that the values of BB(k), BB(k+1), BB(k+2), and so on can never be proven.  So again: is the number of knowable values of the BB function more like 10, or more like a million?

This is the question that Adam and I (but mostly Adam) have finally addressed.

It’s hopeless to design a Turing machine by hand for all but the simplest tasks, so as a first step, Adam created a new programming language, called Laconic, specifically for writing programs that compile down to small Turing machines.  Laconic programs actually compile to an intermediary language called TMD (Turing Machine Descriptor), and from there to Turing machines.

Even then, we estimate that a direct attempt to write a Laconic program that searched for a contradiction in ZFC would lead to a Turing machine with millions of states.  There were three ideas needed to get the state count down to something reasonable.

The first was to take advantage of the work of Harvey Friedman, who’s one of the one or two people I mentioned earlier who’ve written about these problems before.  In particular, Friedman has been laboring since the 1960s to find “natural” arithmetical statements that are provably independent of ZFC or other strong set theories.  (See this AMS Notices piece by Martin Davis for a discussion of Friedman’s progress as of 2006.)  Not only does Friedman’s quest continue, but some of his most important progress has come only within the last year.  His statements—typically involving objects called “order-invariant graphs”—strike me as alien, and as far removed from anything I’d personally have independent reasons to think about (but is that just a sign of my limited perspective?).  Be that as it may, Friedman’s statements still seem a lot easier to encode as short computer programs than the full apparatus of first-order logic and set theory!  So that’s what we started with; our work wouldn’t have been possible without Friedman (whom we consulted by email throughout the project).

The second idea was something we called “on-tape processing.”  Basically, instead of compiling directly from Laconic down to a Turing machine, Adam wrote an interpreter directly as a Turing machine (which took about 4000 states—a single, fixed cost), and then had the final Turing machine first write a higher-level program onto its tape and then interpret that program.  Instead of the compilation process producing a huge multiplicative overhead in the number of Turing machine states (and a repetitive machine), this approach gives us only an additive overhead.  We found that this one idea decreased the number of states by roughly an order of magnitude.
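As a toy cost model of the difference (the ~4000-state interpreter figure is the real one from above; the multiplicative blowup factor is a made-up illustration):

    INTERPRETER_STATES = 4000  # fixed, one-time cost of the on-tape interpreter

    def states_direct(program_size, blowup=20):
        # Direct compilation: each unit of the program expands into its own
        # block of states, so the overhead is multiplicative.
        return blowup * program_size

    def states_on_tape(program_size):
        # On-tape processing: roughly one state per unit to write the program
        # onto the tape, plus the interpreter once: an additive overhead.
        return program_size + INTERPRETER_STATES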

The third idea was first suggested in 2002 by Ben-Amram and Petersen (and refined for us by Luke Schaeffer); we call it “introspective encoding.”  When we write the program to be interpreted onto the Turing machine tape, the naïve approach would use one Turing machine state per bit.  But that’s clearly wasteful, since in an n-state Turing machine, every state contains ~log(n) bits of information (because of the other states it needs to point to).  A better approach tries to exploit as many of those bits as it can; doing that gave us up to a factor-of-5 additional savings in the number of states.
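For a sense of the information-theoretic ceiling here: with n = 7,918 states, each state can name any of ~7,918 successor states, so it carries about 13 bits. One-bit-per-state thus wastes a potential factor of ~13, and the factor of 5 we actually achieved sits comfortably within that bound.

    from math import log2

    n = 7918         # number of states in the machine Z
    print(log2(n))   # ~12.95 bits of information carried per state
    # Ideal introspective encoding could pack ~13 program bits per state,
    # versus 1 bit per state for the naive approach.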

For Goldbach’s Conjecture and the Riemann Hypothesis, we paid the same 4000-state overhead for the interpreter, but then the program to be interpreted was simpler, giving a smaller overall machine.  Incidentally, it’s not intuitively obvious that the Riemann Hypothesis is equivalent to the statement that some particular computer program runs forever, but it is—that follows, for example, from work by Lagarias and by Davis, Matijasevich, and Robinson (we used the latter; an earlier version of this post incorrectly stated that we used the Lagarias result).
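For concreteness, here is a sketch of such a program, based on Lagarias’s elementary reformulation of RH (for all n ≥ 1, σ(n) ≤ H_n + e^{H_n} ln H_n, with equality only at n = 1). Note that this is not the Davis-Matijasevich-Robinson encoding we actually used, and that it cheats by using floating point where a proof-grade version would need exact arithmetic.

    from itertools import count
    from math import exp, log

    def sigma(n):
        # Sum of the positive divisors of n (slow, but that is in the
        # spirit of Laconic machines).
        return sum(d for d in range(1, n + 1) if n % d == 0)

    def rh_counterexample_search():
        # Halts iff the Riemann Hypothesis is false, by Lagarias's criterion.
        H = 0.0
        for n in count(1):
            H += 1.0 / n  # the harmonic number H_n
            if n > 1 and sigma(n) > H + exp(H) * log(H):
                return n  # a counterexample to RH: halt!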

To preempt the inevitable question in the comments section: yes, we did run these Turing machines for a while, and no, none of them had halted after a day or so.  But before you interpret that as evidence in favor of Goldbach, Riemann, and the consistency of ZFC, you should probably know that a Turing machine to test whether all perfect squares are less than 5, produced using Laconic, needed to run for more than an hour before it found the first counterexample (namely, 3^2 = 9) and halted.  Laconic Turing machines are optimized only for the number of states, not for speed, to put it mildly.

Of course, three orders of magnitude still remain between the largest value of n (namely, 4) for which BB(n) is known to be knowable in ZFC-based mathematics, and the smallest value of n (namely, 7,918) for which BB(n) is known to be unknowable.  I’m optimistic that further improvements are possible to the machine Z—whether that means simplifications to Friedman’s statement, a redesigned interpreter (possibly using lambda calculus?), or a “multi-stage rocket model” where a bare-bones interpreter would be used to unpack a second, richer interpreter which would be used to unpack a third, etc., until you got to the actual program you cared about.  But I’d be shocked if anyone in my lifetime determined the value of BB(10), for example, or proved the value independent of set theory.  Even after the Singularity happens, I imagine that our robot overlords would find the determination of BB(10) quite a challenge.

In an early Shtetl-Optimized post, I described theoretical computer science as “quantitative epistemology.”  Constructing small Turing machines whose behavior eludes set theory is not conventional theoretical computer science by any stretch of the imagination: it’s closer in practice to programming languages or computer architecture, or even the recreational practice known as code-golfing.  On the other hand, I’ve never been involved with any other project that was so clearly, explicitly about pinning down the quantitative boundary between the knowable and the unknowable.

Comments on our paper are welcome.

Addendum: Some people might wonder “why Turing machines,” as opposed to a more reasonable programming language like C or Python.  Well, first of all, we needed a language that could address an unlimited amount of memory.  Also, the BB function is traditionally defined in terms of Turing machines.  But the most important issue is that we wanted there to be no suspicion whatsoever that our choice of programming language was artificially helping to make our machine small.  And hopefully everyone can agree that one-tape, two-symbol Turing machines aren’t designed for anyone’s convenience!

“Largely just men doing sums”: My review of the excellent Ramanujan film

May 1st, 2016

[Warning: This movie review contains spoilers, as well as a continued fraction expansion.]

These days, it takes an extraordinary occasion for me and Dana to arrange the complicated, rocket-launch-like babysitting logistics involved in going out for a night at the movies.  One such occasion was an opening-weekend screening of The Man Who Knew Infinity—the new movie about Srinivasa Ramanujan and his relationship with G. H. Hardy—followed by a Q&A with Matthew Brown (who wrote and directed the film), Robert Kanigel (who wrote the biography on which the film was based), and Fields Medalist Manjul Bhargava (who consulted on the film).

I read Kanigel’s The Man Who Knew Infinity in the early nineties; it was a major influence on my life.  There were equations in that book to stop a nerdy 13-year-old’s pulse, like

$$1+9\left( \frac{1}{4}\right) ^{4}+17\left( \frac{1\cdot5}{4\cdot8}\right) ^{4}+25\left( \frac{1\cdot5\cdot9}{4\cdot8\cdot12}\right) ^{4}+\cdots =\frac{2^{3/2}}{\pi^{1/2}\Gamma\left( 3/4\right) ^{2}}$$

$$\frac{1}{1+\frac{e^{-2\pi}}{1+\frac{e^{-4\pi}}{1+\frac{e^{-6\pi}}{1+\cdots}}}}=\left( \sqrt{\frac{5+\sqrt{5}}{2}}-\frac{\sqrt{5}+1}{2}\right) \sqrt[5]{e^{2\pi}}$$

A thousand pages of exposition about Ramanujan’s mysterious self-taught mathematical style, the effect his work had on Hardy and Littlewood, his impact on the later development of analysis, etc., could never replace the experience of just staring at these things!  Popularizers are constantly trying to “explain” mathematical beauty by comparing it to art, music, or poetry, but I can best understand art, music, and poetry if I assume other people experience them like the above identities.  Across all the years and cultures and continents, can’t you feel Ramanujan himself leaping off your screen, still trying to make you see this bizarre aspect of the architecture of reality that the goddess Namagiri showed him in a dream?

Reading Kanigel’s book, I was also entranced by the culture of early-twentieth-century Cambridge mathematics: the Tripos, Wranglers, High Table.  I asked, why was I here and not there?  And even though I was (and remain) at most 1729^(-1729) of a Ramanujan, I could strongly identify with his story, because I knew that I, too, was about to embark on the journey from total scientific nobody to someone who the experts might at least take seriously enough to try to prove him wrong.

Anyway, a couple years after reading Kanigel’s biography, I went to the wonderful Canada/USA MathCamp, and there met Richard K. Guy, who’d actually known Hardy.  I couldn’t have been more impressed had Guy visited Platonic heaven and met π and e there.  To put it mildly, no one in my high school had known G. H. Hardy.

I often fantasized—this was the nineties—about writing the screenplay myself for a Ramanujan movie, so that millions of moviegoers could experience the story as I did.  Incidentally, I also fantasized about writing screenplays for Alan Turing and John Nash movies.  I do have a few mathematical biopic ideas that haven’t yet been taken, and for which any potential buyers should get in touch with me:

  • Radical: The Story of Évariste Galois
  • Give Me a Place to Stand: Archimedes’ Final Days
  • Mathématicienne: Sophie Germain In Her Prime
  • The Prime Power of Ludwig Sylow
    (OK, this last one would be more of a limited-market release)

But enough digressions; how was the Ramanujan movie?

Just as Ramanujan himself wasn’t an infallible oracle (many of his claims, e.g. his formula for the prime counting function, turned out to be wrong), so The Man Who Knew Infinity isn’t a perfect movie.  Even so, there’s no question that this is one of the best and truest movies ever made about mathematics and mathematicians, if not the best and truest.  If you’re the kind of person who reads this blog, go see it now.  Don’t wait!  As they stressed at the Q&A, the number of tickets sold in the first couple weeks is what determines whether or not the movie will see a wider release.

More than A Beautiful Mind or Good Will Hunting or The Imitation Game, or the play Proof, or the TV series NUMB3RS, the Ramanujan movie seems to me to respect math as a thing-in-itself, rather than just a tool or symbol for something else that interests the director much more.  The background to the opening credits—and what better choice could there be?—is just page after page from Ramanujan’s notebooks.  Later in the film, there’s a correct explanation of what the partition function P(n) is, and of one of Ramanujan’s and Hardy’s central achievements, which was to give an asymptotic formula for P(n), namely $$ P(n) \approx \frac{e^{\pi \sqrt{2n/3}}}{4\sqrt{3}\,n}, $$ and to prove the formula’s correctness.
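For anyone who wants to check that formula against exact values, here is a short sketch using Euler’s pentagonal-number recurrence; already at n = 100 the asymptotic is good to within about 5%.

    from functools import lru_cache
    from math import exp, pi, sqrt

    @lru_cache(maxsize=None)
    def partitions(n):
        # Exact P(n) via Euler's pentagonal-number recurrence.
        if n == 0:
            return 1
        if n < 0:
            return 0
        total, k = 0, 1
        while k * (3 * k - 1) // 2 <= n:
            sign = 1 if k % 2 else -1
            total += sign * partitions(n - k * (3 * k - 1) // 2)
            total += sign * partitions(n - k * (3 * k + 1) // 2)
            k += 1
        return total

    def hardy_ramanujan(n):
        # The asymptotic formula quoted above.
        return exp(pi * sqrt(2 * n / 3)) / (4 * sqrt(3) * n)

    print(partitions(100))       # 190569292
    print(hardy_ramanujan(100))  # ~1.99e8, about 4.5% high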

The film also makes crystal-clear that pure mathematicians do what they do not because of applications to physics or anything else, but simply because they feel compelled to: for the devout Ramanujan, math was literally about writing down “the thoughts of God,” while for the atheist Hardy, math was a religion-substitute.  Notably, the movie explores the tension between Ramanujan’s untrained intuition and Hardy’s demands for rigor in a way that does them both justice, resisting the Hollywood urge to make intuition 100% victorious and rigor just a stodgy punching bag to be defeated.

For my taste, the movie could’ve gone even further in the direction of “letting the math speak”: for example, it could’ve explained just one of Ramanujan’s infinite series.  Audiences might even have liked some more T&A (theorems and asymptotic bounds).  During the Q&A that I attended, I was impressed to see moviegoers repeatedly pressing a somewhat-coy Manjul Bhargava to explain Ramanujan’s actual mathematics (e.g., what exactly were the discoveries in his first letter to Hardy?  what was in Ramanujan’s Lost Notebook that turned out to be so important?).  Then again, this was Cambridge, MA, so the possibility should at least be entertained that what I witnessed was unrepresentative of American ticket-buyers.

From what I’ve read, the movie is also true to South Indian dress, music, religion, and culture.  Yes, the Indian characters speak to each other in English rather than Tamil, but Brown explained that as a necessary compromise (not only for the audience’s sake, but also because Dev Patel and the other Indian actors didn’t speak Tamil).

Some reviews have mentioned issues with casting and characterization.  For example, Hardy is portrayed by Jeremy Irons, who’s superb but also decades older than Hardy was at the time he knew Ramanujan.  Meanwhile Ramanujan’s wife, Janaki, is played by a fully-grown Devika Bhise; the real Janaki was nine (!) when she married Ramanujan, and fourteen when Ramanujan left for England.  J. E. Littlewood is played as almost a comic-relief buffoon, so much so that it feels incongruous when, near the end of the film, Irons-as-Hardy utters the following real-life line:

I still say to myself when I am depressed and find myself forced to listen to pompous and tiresome people, “Well, I have done one thing you could never have done, and that is to have collaborated with Littlewood and Ramanujan on something like equal terms.”

Finally, a young, mustachioed Bertrand Russell is a recurring character.  Russell and Hardy really were friends and fellow WWI pacifists, but Hardy seeking out Bertie’s advice about each Ramanujan-related development seems like almost certainly just an irresistible plot device.

But none of that matters.  What bothered me more were the dramatizations of the prejudice Ramanujan endured in England.  Ramanujan is shown getting knocked to the ground, punched, and kicked by British soldiers barking anti-Indian slurs at him; he then shows up for his next meeting with Hardy covered in bruises, which Hardy (being aloof) neglects to ask about.  Ramanujan is also depicted getting shoved, screamed at, and told never to return by a math professor who he humiliates during a lecture.  I understand why Brown made these cinematic choices: there’s no question that Ramanujan experienced prejudice and snobbery in Cambridge, and that he often felt lonely and unwelcome there.  And it’s surely easier to show Ramanujan literally getting beaten up by racist bigots, than to depict his alienation from Cambridge society as the subtler matter that it most likely was.  To me, though, that’s precisely why the latter choice would’ve been even more impressive, had the film managed to pull it off.

Similarly, during World War I, the film shows not only Trinity College converted into a military hospital, and many promising students marched off to their deaths (all true), but also a shell exploding on campus near Ramanujan, after which Ramanujan gazes in horror at the bleeding dead bodies.  Like, isn’t the truth here dramatic enough?

One other thing: the movie leaves you with the impression that Ramanujan died of tuberculosis.  More recent analysis concluded that it was probably hepatic amoebiasis that he brought with him from India—something that could’ve been cured with the medicine of the time, had anyone correctly diagnosed it.  (Incidentally, the film completely omits Ramanujan’s final year, back in India, when he suffered a relapse of his illness and slowly withered away, yet with Janaki by his side, continued to do world-class research and exchanged letters with Hardy until the very last days.  Everyone I read commented that this was “the right dramatic choice,” but … I dunno, I would’ve shown it!)

But enough!  I fear that to harp on these defects is to hold the film to impossibly-high, Platonic standards, rather than standards that engage with the reality of Hollywood.  An anecdote that Brown related at the end of the Q&A session brought this point home for me.  Apparently, Brown struggled for an entire decade to attract funding for a film about a turn-of-the-century South Indian mathematician visiting Trinity College, Cambridge, whose work had no commercial or military value whatsoever.  At one point, Brown was actually told that he could get the movie funded, if he’d agree to make Ramanujan fall in love with a white nurse, so that a British starlet who would sell tickets could be cast as his love interest.  One can only imagine what a battle it must have been to get a correct explanation of the partition function onto the screen.

In the end, though, nothing made me appreciate The Man Who Knew Infinity more than reading negative reviews of it, like this one by Olly Richards:

Watching someone balancing algorithms or messing about with multivariate polynomials just isn’t conducive to urgently shovelling popcorn into your face.  Difficult to dislike, given its unwavering affection for its subject, The Man Who Knew Infinity is nevertheless hamstrung by the dryness of its subject … Sturdy performances and lovely scenery abound, but it’s still largely just men doing sums; important sums as it turns out, but that isn’t conveyed to the audience until the coda [which mentions black holes] tells us of the major scientific advances they aided.

On behalf of mathematics, on behalf of my childhood self, I’m grateful that Brown fought this fight, and that he won as much as he did.  Whether you walk, run, board a steamship, or take taxi #1729, go see this film.

Addendum: See also this review by Peter Woit, and this in Notices of the AMS by Ramanujan expert George Andrews.

Me interviewed by John Horgan (the author of “The End of Science”)

April 21st, 2016

You can read it here.

It’s long (~12,000 words).  Rather than listing what this interview covers, it would be easier to list what it doesn’t cover.  (My favorite soda flavors?)

If you read this blog, much of what I say there will be old hat, but some of it will be new.  I predict that you’ll enjoy the interview iff you enjoy the blog.  Comments welcome.

Grading Trudeau on quantum computing

April 17th, 2016

Update (4/19): Inspired by Trudeau’s performance (which they clocked at 35 seconds), Maclean’s magazine asked seven quantum computing researchers—me, Krysta Svore, Aephraim Steinberg, Barry Sanders, Davide Venturelli, Martin Laforest, and Murray Thom—to also explain quantum computing in 35 seconds or fewer.  You can see all the results here (here’s the audio from my entry).


The emails started hitting me like … a hail of maple syrup from the icy north.  Had I seen the news?  Justin Trudeau, the dreamy young Prime Minister of Canada, visited the Perimeter Institute for Theoretical Physics in Waterloo, one of my favorite old haunts.  At a news conference at PI, as Trudeau stood in front of a math-filled blackboard, a reporter said to him: “I was going to ask you to explain quantum computing, but — when do you expect Canada’s ISIL mission to begin again, and are we not doing anything in the interim?”

Rather than answering immediately about ISIL, Trudeau took the opportunity to explain quantum computing:

“Okay, very simply, normal computers work, uh, by [laughter, applause] … no no no, don’t interrupt me.  When you walk out of here, you will know more … no, some of you will know far less about quantum computing, but most of you … normal computers work, either there’s power going through a wire, or not.  It’s 1, or a 0, they’re binary systems.  Uh, what quantum states allow for is much more complex information to be encoded into a single bit.  Regular computer bit is either a 1 or a 0, on or off.  A quantum state can be much more complex than that, because as we know [speeding up dramatically] things can be both particle and wave at the same times and the uncertainty around quantum states [laughter] allows us to encode more information into a much smaller computer.  So, that’s what exciting about quantum computing and that’s… [huge applause] don’t get me going on this or we’ll be here all day, trust me.”

What marks does Trudeau get for this?  On the one hand, the widespread praise for this reply surely says more about how low the usual standards for politicians are, and about Trudeau’s fine comic delivery, than about anything intrinsic to what he said.  Trudeau doesn’t really assert much here: basically, he just says that normal computers work using 1’s and 0’s, and that quantum computers are more complicated than that in some hard-to-explain way.  He gestures toward the uncertainty principle and wave/particle duality, but he doesn’t say anything about the aspects of QM most directly relevant to quantum computing—superposition or interference or the exponential size of Hilbert space—nor does he mention what quantum computers would or wouldn’t be used for.

On the other hand, I’d grade Trudeau’s explanation as substantially more accurate than what you’d get from a typical popular article.  For pay close attention to what the Prime Minister never says: he never says that a qubit would be “both 0 and 1 at the same time,” or any equivalent formulation.  (He does say that quantum states would let us “encode more information into a much smaller computer,” but while Holevo’s Theorem says that’s false for a common interpretation of “information,” it’s true for other reasonable interpretations.)  The humorous speeding up as he mentions particle/wave duality and the uncertainty principle clearly suggests that he knows it’s more subtle than just “0 and 1 at the same time,” and he also knows that he doesn’t really get it and that the journalists in the audience don’t either.  When I’m grading exams, I always give generous partial credit for honest admissions of ignorance.  B+.

Anyway, I’d be curious to know who at PI prepped Trudeau for this, and what they said.  Those with inside info, feel free to share in the comments (anonymously if you want!).

(One could also compare against Obama’s 2008 answer about bubblesort, which was just a mention of a keyword by comparison.)

Update: See also a Motherboard article where Romain Alléaume, Amr Helmy, Michele Mosca, and Aephraim Steinberg rate Trudeau’s answer, giving it 7/10, no score, 9/10, and 7/10 respectively.

A postdoc post

March 19th, 2016

I apologize that this announcement is late in this year’s hiring season, but here goes.  I’m seeking postdocs in computational complexity and/or quantum information science to join me at UT Austin starting in Fall of 2016.  As I mentioned before, there’s a wonderful CS theory group at UT that you can work with and benefit from, including Adam Klivans, David Zuckerman, Anna Gal, Vijaya Ramachandran, Brent Waters, Eric Price, Greg Plaxton, and of course my wife Dana Moshkovitz, who will be joining UT as well.  If you’re interested, please email me a CV and a short cover letter, and ask your PhD adviser and one or two others to email me recommendation letters.  The postdoc would be for two years by default.

    Update (March 26): If you want to be considered for next year, please get your application to me by March 31st.

    Another Update: I’m very honored, along with fourteen others, to have received a 2016 US National Security Science and Engineering Faculty Fellowship (NSSEFF), which supports unclassified basic research related in some way to DoD interests. My project is called “Paths to Quantum Supremacy.” Now that my Waterman award has basically been spent down, this is where much of the funding for quantum computing initiatives at UT Austin will come from for the next five years.

Quantum. Crypto. Things happen. I blog.

March 6th, 2016

1. A bunch of people emailed me to ask about the paper “Realization of a scalable Shor algorithm”: a joint effort by the groups of my MIT colleague Ike Chuang and of Innsbruck’s Rainer Blatt.  The paper has been on the arXiv since July, but last week everyone suddenly noticed it because it appeared in Science.  See also the articles in MIT News and IEEE Spectrum.

Briefly, the new work uses Kitaev’s version of Shor’s factoring algorithm, running on an ion-trap quantum computer with five calcium ions, to prove that, with at least 90% confidence, 15 equals 3×5.  Now, one might object that the “15=3×5 theorem” has by now been demonstrated many times using quantum computing (indeed, Chuang himself was involved in the historic first such demonstration, with Neil Gershenfeld in 1997).  Furthermore, if one counts demonstrations not based on quantum computing, some people have claimed even earlier precedents for that theorem.

Nevertheless, as far as I can tell, the new work is a genuine milestone in experimental QC, because it dispenses with most of the precompilation tricks that previous demonstrations of Shor’s algorithm used.  “Precompilation tricks” are a fancier term for “cheating”: i.e., optimizing a quantum circuit in ways that would only make sense if you already assumed that 15 was, indeed, 3×5.  So, what’s new is that a QC has now factored 15 “scalably”: that is, with much less cheating than before.

Of course, as I’m sure the authors would acknowledge, the word “scalable” in their title admits multiple interpretations, rather like the word “possible.”  (It’s possible to buy strawberry Mentos, and it’s also possible to convert the Sun into computronium, but for different senses of “possible.”)  As I wrote in the comments section of my last post:

There are still all the difficulties of integrating a huge number of qubits—which, in ion-trap implementations, would almost certainly mean having many traps that can communicate with each other using gate teleportation—as well as implementing quantum fault-tolerance (meaning: doing 2-qubit gates at the fault-tolerance threshold, moving qubits around to the right places, pumping in fresh qubits, pumping out dirty ones, etc).  Those all remain major engineering problems for the future.

See also this comment by Vaughan Pratt, who remarks: “the MIT press release … would appear to have translated [‘scalable’] to mean that RSA was now approaching its best-by date, although the paper itself makes no such claim.”

In any case, regardless of how long it takes until we can factor enormous numbers like 91, congratulations to the MIT and Innsbruck groups on what’s certainly progress toward scalable ion-trap QC!

2. Other people wrote to ask about a striking recent preprint of Kaplan, Leurent, Leverrier, and Naya-Plasencia, which points out how Simon’s algorithm—i.e., the forerunner of Shor’s algorithm—can be used to break all sorts of practical private-key authentication schemes in quantum polynomial time, assuming the adversary can query the scheme being attacked on a coherent superposition of inputs.  In practice, this assumption is unlikely to hold, unless the adversary gets the actual obfuscated code of the scheme being attacked (in which case it holds).  Also, this is not the first time Simon’s algorithm has been used to attack cryptography; previous work in the same spirit by Kuwakado and Morii showed how to use Simon’s algorithm to break the 3-round Feistel scheme and the Even-Mansour scheme, again if we assume superposition queries.
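For readers who have never seen Simon’s algorithm in action, here is a toy classical simulation of the information it extracts.  Each run of the quantum subroutine returns a uniformly random string y orthogonal mod 2 to the hidden mask s; enough samples pin s down.  (The brute-force recovery step is just to keep the sketch short; real attacks use linear algebra over GF(2).)

    import random

    def simon_sample(s, n):
        # One simulated run of Simon's subroutine: a uniformly random n-bit y
        # with y . s = 0 (mod 2). A quantum computer gets each such y from a
        # single query to f on a superposition of all inputs.
        while True:
            y = random.getrandbits(n)
            if bin(y & s).count("1") % 2 == 0:
                return y

    def recover_mask(s, n):
        # With high probability the samples span the whole subspace
        # orthogonal to s, leaving s as the unique nonzero candidate.
        samples = [simon_sample(s, n) for _ in range(10 * n)]
        return [c for c in range(1, 2 ** n)
                if all(bin(c & y).count("1") % 2 == 0 for y in samples)]

    print(recover_mask(0b101101, 6))  # -> [45], i.e. the hidden mask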

Even so, Kaplan et al. seem to pretty dramatically expand the range of “practical” cryptosystems that are known to be vulnerable to Simon attacks in the superposed-query model.  I suspect this will force a revision in how we talk about Simon’s algorithm: from “useless, but theoretically important, and historically important because it led to Shor’s algorithm” to “actually maybe not that useless.”  (See here for a previous attempt of mine to give an interesting “explicit” problem that Simon’s algorithm solves in polynomial time, but that’s classically hard.  Alas, my candidate problem turned out to be classically easy.)  This is analogous to the revision that “Einstein-certified randomness” and the RUV theorem recently forced in how we talk about Bell’s inequality: we can no longer tell students that Bell’s work was important because of the conceptual point it proved about local hidden variables, and because of all the other stuff it led to, even though it obviously has no applications in and of itself.  Now it does have applications in and of itself.

To a quantum complexity theorist like me, who doesn’t know nearly as much applied crypto as he should, the real news in the Kaplan et al. paper is not that Simon’s algorithm can break the sorts of systems they study.  Rather, it’s that so many systems that are vulnerable to Simon attack exist and are used in the first place!  Once people understand the problem, I doubt it will be hard to design schemes of similar efficiency that remain quantum-secure even in the superposed-query model (under some plausible assumption, like that an underlying one-way function is quantum-secure).  Indeed, recent work of Boneh and Zhandry, among others, has already taken significant steps in that direction.  So the situation doesn’t seem “as bad” as it was with public-key crypto, where once Shor’s algorithm comes along, the plausibly quantum-secure alternatives that we currently know (like lattice-based crypto and quantum key distribution) are either much less efficient than RSA and Diffie-Hellman, or else require new hardware.  Still, the new observations about Simon’s algorithm show us how the history of quantum computing could have unfolded differently: rather than Simon → Shor → everyone gets excited (because their crypto is now vulnerable), people could’ve gotten cryptographically excited immediately after Simon.

3. Speaking of Diffie-Hellman, belated congratulations to Whitfield Diffie and Martin Hellman for an extremely well-deserved Turing Award!

4. At MIT’s weekly quantum information group meeting, Aram Harrow spoke about his new paper with Ed Farhi, “Quantum Supremacy through the Quantum Approximate Optimization Algorithm.”  Using the same arguments developed around 2010 by me and Alex Arkhipov, and (independently) by Bremner, Jozsa, and Shepherd, this paper shows that, even though the recently-developed QAOA/Quinoa quantum optimization algorithm turns out not to beat the best classical algorithms on the Max E3LIN2 problem (see here and here)—still, whatever that algorithm does do, at least there’s no polynomial-time classical algorithm that samples from the same distribution over outputs, unless the polynomial hierarchy collapses.

In other words: even if the algorithm fails at its original goal, it’s still hard for a classical computer to reproduce its exact pattern of failure!  Hence: Quantum Supremacy.

A secondary goal of Aram and Eddie’s paper is to make the Aaronson-Arkhipov and Bremner et al. arguments more accessible to physicists, by decreasing the amount of “weird complexity theory” invoked.  (I suppose I’ve asked for this—for physicists to de-complexify complexity theory—by telling everyone for years how easy quantum mechanics becomes once you take away the physics!)  I’ll leave it to physicists to judge how well Aram and Eddie succeed at their pedagogical goal, but I’m thrilled by any such effort to communicate across fields.  Aram’s talk would surely have served that same educational purpose, had it not gotten derailed partway through by Donald Trump jokes from the audience.  (My contribution: “Aram, will you disavow support from quantum supremacists?”)


Unrelated Update: Some people might be interested in this brief interview with Michael Cerullo, who read The Ghost in the Quantum Turing Machine and wanted to ask me about “the relevance of quantum mechanics to brain preservation, uploading, and identity.”

From Boston to Austin

February 28th, 2016

I have some big news—well, not for the world, but for me personally.  Starting this summer, I’ll be leaving MIT, and starting a new phase of my life, as David J. Bruton Jr. Centennial Professor of Computer Science at the University of Texas at Austin.  I’ll also be the founding director of UT Austin’s new quantum computing center, whose motto will be “Hook ’em Hadamards“, and whose logo will depict a fierce longhorn bull, whose horns are bra and ket signs enclosing an inner product between two quantum states.  My brilliant and talented wife, Dana Moshkovitz Aaronson, will also be joining UT Austin, as a tenured Associate Professor of Computer Science.  Our current PhD students will remain enrolled at MIT, while also spending as much time as they like in Austin.

I’ll deeply miss MIT and Boston.  More than anything else, I’ll miss the phenomenal students at MIT, who I’ve had the immense privilege to teach and learn from for nine years.  Go Beavers!  I’m grateful as well to my many friends and colleagues who made my years at MIT so rewarding: a time of personal growth, in which I developed from a skinny, insecure 26-year-old nerd, blogging and trying to prove oracle separations, into a pot-bellied, tenured, 34-year-old married-father nerd, still blogging and trying to prove the same oracle separations (but no longer as diligently).

To nip an otherwise-inevitable rumor: I wasn’t forced to leave MIT over anything here on Shtetl-Optimized.  I feel the need to say this because, within the last year, I’ve spent hundreds of miserable hours scrolling through social media threads wherein stranger after stranger proclaimed me basically the world’s worst scum (tied, perhaps, with the other Scott A.), and even called on MIT to fire me.  Given that experience, it was repeatedly jarring for me to reenter reality and discover how irrelevant this all was, either to MIT or to any of the universities that recruited me and Dana.  Bizarre as it sounds, CS departments mostly cared about what actual research we were doing and could bring to them!  So students and faculty afraid to debate anything controversial online under their real names, however politely, should know that even in 2016, the banner of academic freedom yet waves.

Without further ado, let me list ten things that are awesome about Austin and that helped attract me and Dana there.

  1. One of the strongest CS departments and theory groups in the world.  From 1984 until his death in 2002, UT Austin was home to Edsger Dijkstra, who not only discovered Dijkstra’s algorithm but also penned the immortal words that might as well be tattooed on my stomach: computer science is no more about computers than astronomy is about telescopes.  Today, Austin’s CS department is rapidly expanding, and just within theory, is home to David Zuckerman, Anna Gal, Vijaya Ramachandran, Brent Waters, Eric Price, and Greg Plaxton.  With me and Dana there as well, I can say with all due modesty that we intend to compete against any CS theory program anywhere in the world.
  2. Adam Klivans.  The closest I’ve had to a mentor in the exceedingly narrow field of theoretical computer science humor.
  3. An outstanding recent track record with CS theory PhD students.  Since the turn of the century, UT Austin has produced Sasha Sherstov, Anup Rao, Allison Bishop Lewko, Seth Pettie, Vladimir Trifonov, Raghu Meka, and other stars of the CS theory world.  That record lets me without the slightest hesitation tell hotshot undergrads who want to do classical and/or quantum complexity theory to apply to Austin for grad school.
  4. The opportunity to build—or rather, rebuild—a UT presence in quantum computing.  While I’m excited to help build a new group—and I feel like it’s the right time in my career to do that—I can’t say that this is the first time UT Austin will have a significant presence in quantum computing.  Way back in the late 70s and early 80s, UT was home to most of the (proto) “quantum computing research” that existed on earth.  It’s there that John Archibald Wheeler philosophized about “It from Bit,” that Bryce deWitt popularized the Many-Worlds Interpretation and Hugh Everett gave his only public lecture on the subject, that David Deutsch did a four-year postdoc in which he formed the seeds of the idea of quantum computing, and that Wojciech Zurek, William Wootters, and Benjamin Schumacher (who between them, founded decoherence theory, proved the No-Cloning Theorem, and coined the term “qubit”) did their PhDs.  I’m honored to tread in their footsteps.
  5. Money.  Texas, as it turns out, has a lot of it.  Now, the conventional wisdom would be that Texas’ wealth is mostly irrelevant to academic scientists, because it’s controlled by reactionary oilmen for whom curiosity-driven research is not exactly the top priority.  That might have been true about the administrations of George W. Bush or Rick Perry.  But Texas’ current governor, Greg Abbott, while still a right-wing Republican, also pushed through an aggressive $4-billion measure called the Governor’s University Research Initiative, one of whose central goals is to recruit leading scientists to Texas.
  6. Weather.  To a first approximation, Austin is lovely and pleasant during the academic year (even as the planet warms, this should remain true for at least a few more decades)—and while I’d sort of vaguely imagined all of Texas as a giant desert, Austin turns out to be lush and green and full of swimming holes.  The summers, of course, are hot enough to fuse hydrogen.  But for academics like me and Dana, it turns out that there’s an elegant solution to that, one unavailable for dealing with New England winters.  That solution is to leave town, to use June, July, and August for miscellaneous academic travel.
  7. Quality of life.  If we’re being honest, I’m not someone likely to spend much time at indie-rock festivals, or whatever quirky cultural stuff it is that’s made Austin the fastest-growing city in the US.  But here’s something I do care about: even though highway traffic in Austin is bad and getting worse, that need not affect my life too much.  Research indicates that, for roughly the price of our current 2-bedroom condo in Cambridge, we could get a lovely 4-bedroom with a yard that’s in walking distance to the UT campus, as well as to stores, restaurants, good schools, and parks.
  8. Schools.  I had a pretty miserable experience growing up.  I don’t know if Lily (or any future children Dana and I have) will be anything like I was, but given that she’s in an “at-risk population” for nerdiness, I’d love to live in a place with nerd education options that don’t stink.  Austin, happily, has two renowned math/science magnet schools—Kealing Middle School and LASA High School—which admit based on test scores.  (By contrast, in most parts of the US, such programs either don’t exist or admit purely by lottery.)  Austin also has the only elementary school, anywhere, whose admissions director told me that sure, they’d let a student skip a grade if it made sense academically.
  9. Steven Weinberg.  I confess: it probably affected me more than it should that arguably the greatest scientist now walking the earth, a legend of physics who could go wherever the hell he wanted, has chosen to spend the past thirty-plus years at UT Austin.  On our last visit there, Dana, my parents, and I had the honor of having dinner with Weinberg.  After we’d exchanged stories about Telluride House at Cornell, where Weinberg and I both lived as undergrads (and which apparently changed little between the 1950s and the 1990s), Weinberg sung the praises of Austin for hours.  (Admittedly, part of why Weinberg enjoys Austin so much is that there it’s easy to be on a first-name basis with the mayor, tech billionaires, and all the other leaders of the city—an advantage that might be Nobel-laureate-specific!)
  10. Adventure.  By background and temperament, I’m probably one of the “least Texan” Americans imaginable: a nerdy east-coast Jewish intellectual who enjoys snow, can’t much tolerate spicy food, is bored by cowboy movies and fears physical confrontation.  Indeed, until I actually visited the place, my only real associations with Texas were tumbleweeds blowing across a parched desert hellscape, oilmen with giant belt buckles guffawing about so-called global warming, crazed football hooligans filling city-sized stadiums, shotguns, rattlesnakes, and George W. Bush.  But then, the contrast between the over-the-top image and the pleasant reality of Austin had the paradoxical effect of making moving to Texas feel like an adventure—an adventure with an acceptable risk profile.  Like, if I’m going to uproot at all, why not to a place that’s strange and different and interesting?

Even given the above, some people will ask about things they’d consider obvious dealbreakers for moving to Texas.  In particular, what about the infamous new law that essentially forces UT Austin to let students carry concealed firearms to class?  Well, I oppose that law.  Indeed, when I haven’t been angering the social-justice left, I’ve been angering the right by (for example) blogging about my strong support for gun control.  To me, it seems like a terrible idea for the Texas state legislature, which provides only 14% of the UT system’s budget, to force on UT a gun policy that its faculty and students overwhelmingly hate.  And I admired Steven Weinberg’s announcement that he intends to defy the law in his classroom, and fight it out in court if necessary.  (Weinberg also gave, as one reason to oppose the law, how much harder it will make it for UT to recruit faculty.)

But at the same time … Dana is Israeli.  For her, it’s perfectly normal to go outside and see 18-year-old girls chatting and laughing with huge-ass machine guns slung over their shoulders.  Having spent a month of each year in Tel Aviv, seeing passersby with guns has become, if not exactly normal to me, then not something I fear 2% as much as I fear crashing my car.  And indeed, if one takes a statistical approach to risk, Austin has a much lower per-capita violent crime rate than Boston does.

And yes, I know, the US and Israel have completely different gun cultures: in Israel, for example, the only people carrying around semiautomatics are trained and monitored conscripts; there’s no concept of a private “right” to such a weapon.  And yes, the principle matters.  But if one is unwilling to move to any place that has any laws one disagrees with, one should probably look into faculty positions on offshore barges or Jupiter.

Austin itself, of course, is only slightly less liberal than Portland, the blueberry in the tomato soup as Rick Perry so memorably put it.  Even so, the maps insist that Austin is in Texas, which means that while there one will probably encounter Texans.  (A friend, on hearing that Dana took a quick liking to Austin when she visited, quipped that it was probably because Austin reminded her of Israel: “hot and surrounded by hostile territory.”)

Now, the actual Texans who I’ve met so far have been frighteningly warm and hospitable.  But the question stands: what will I do if, while living there, I meet (let’s suppose) some sun-calloused cattle ranchers who consider me an arrogant, effete coastal liberal who patronizes them in blog posts like this one?  What if they tell me to scram, head back east, and never mess with Texas again?

Well, I’ve already decided what I’d like to do in this hypothetical situation.  I’d like to invite the ranchers over to my place for some barbecued beers and ice-cold steaks, or whatever it is you eat in Texas, and tell them all about quantum query algorithms, and ask them about cattle feed, and try to find common ground, just like I tried to find common ground with the other end of the political spectrum—with the folks who called me a clueless, patriarchal, entitled white male douchebro who silenced their already-marginalized voices by not agreeing with everything they said.  For I’ve increasingly come to the conviction that, while you might fail to find common ground with someone, you’ve got to try, you’ve got to steelman their argument and learn whatever you can from it.  I once, for example, thought about the Religious Right as purely contemptible, deserving only unthinking snark, and I was completely wrong.  Even when I was right on the underlying issues, I was wrong on the epistemology.  In Texas, hopefully I’ll have a chance to do better.

In summary:

  1. Wherever you’re at in your career, if you’d like to do quantum information and/or theoretical computer science research on the wild frontier—if QMA, QCMA, and BQP/qpoly strike you as little more than wayward calves to be lassoed in—then please consider joining us at UT Austin.  To be concrete: we’ll be looking for distinguished senior faculty to hire under the Governor’s University Research Initiative, we’ll be looking for tenure-track junior faculty in both CS and ECE, we’ll be looking for postdocs, we’ll be looking for grad students, and finally we’ll be looking for undergrads who’d like to join the innovative Turing Scholars honors program.
  2. If you’d just like to come for a week and give a seminar, we’ll have money for that too.
  3. Hook ’em Hadamards!

Totally Unrelated Update (Feb. 29): Michael Mitzenmacher has asked me to announce that nominations are open for the SIGACT Distinguished Service Prize. More information is available here.

The universe has a high (but not infinite) Sleep Number

February 12th, 2016

As everyone knows, this was a momentous week in the history of science.  And I don’t need to tell you why: the STOC and CCC accepted paper lists finally came out.

Haha, kidding!  I meant, we learned this week that gravitational waves were directly detected for the first time, a hundred years after Einstein first predicted them (he then reneged on the prediction, then reinstated it, then reneged again, then reinstated it a second time—see Daniel Kennefick’s article for some of the fascinating story).

By now, we all know some of the basic parameters here: a merger of two black holes, ~1.3 billion light-years away, weighing ~36 and ~29 solar masses respectively, which (when they merged) gave off 3 solar masses’ worth of energy in the form of gravitational waves—in those brief 0.2 seconds, radiating more watts of power than all the stars in the observable universe combined.  By the time the waves reached earth, they were only stretching and compressing space by 1 part in 4×10^21—thus, changing the lengths of the 4-kilometer arms of LIGO by 10^-18 meters (1/1000 the diameter of a proton).  But this was detected, in possibly the highest-precision measurement ever made.

As I read the historic news, there’s one question that kept gnawing at me: how close would you need to have been to the merging black holes before you could, you know, feel the distortion of space?  I made a guess, assuming the strength of gravitational waves fell off with distance as 1/r^2.  Then I checked Wikipedia and learned that the strength falls off only as 1/r, which completely changes the situation, and implies that the answer to my question is: you’d need to be very close.  Even if you were only as far from the black-hole cataclysm as the earth is from the sun, I get that you’d be stretched and squished by a mere ~50 nanometers (this interview with Jennifer Ouellette and Amber Stuver says 165 nanometers, but as a theoretical computer scientist, I try not to sweat factors of 3).  Even if you were 3000 miles from the black holes—New-York/LA distance—I get that the gravitational waves would only stretch and squish you by around a millimeter.  Would you feel that?  Not sure.  At 300 miles, it would be maybe a centimeter—though presumably the linearized approximation is breaking down by that point.  (See also this Physics StackExchange answer, which reaches similar conclusions, though again off from mine by factors of 3 or 4.)  Now, the black holes themselves were orbiting about 200 miles from each other before they merged.  So, the distance at which you could safely feel their gravitational waves, isn’t too far from the distance at which they’d rip you to shreds and swallow you!
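Here is the arithmetic behind those estimates, using the figures above and the 1/r scaling (so, trustworthy to the same factor of 3 or so):

    LY_M = 9.461e15              # meters per light-year
    AU_M = 1.496e11              # meters per astronomical unit
    MILE_M = 1609.34             # meters per mile

    h_earth = 1 / 4e21           # strain measured at Earth
    r_source = 1.3e9 * LY_M      # distance to the merger

    def stretch(r, body=2.0):
        # Length change of a ~2-meter person at distance r from the merger,
        # assuming the strain falls off as 1/r.
        return h_earth * (r_source / r) * body

    print(stretch(AU_M))           # ~4e-8 m: tens of nanometers at 1 AU
    print(stretch(3000 * MILE_M))  # ~1.3e-3 m: about a millimeter
    print(stretch(300 * MILE_M))   # ~1.3e-2 m: about a centimeter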

In summary, to stretch and squeeze spacetime by just a few hundred nanometers per meter, along the surface of a sphere whose radius equals our orbit around the sun, requires more watts of power than all the stars in the observable universe give off as starlight.  People often say that the message of general relativity is that matter bends spacetime “as if it were a mattress.”  But they should add that the reason it took so long for humans to notice this, is that it’s a really friggin’ firm mattress, one that you need to bounce up and down on unbelievably hard before it quivers, and would probably never want to sleep on.

As if I needed to say it, this post is an invitation for experts to correct whatever I got wrong.  Public humiliation, I’ve found, is a very fast and effective way to learn an unfamiliar field.