A postdoc post

March 19th, 2016

I apologize that this announcement is late in this year’s hiring season, but here goes.  I’m seeking postdocs in computational complexity and/or quantum information science to join me at UT Austin starting in Fall of 2016.  As I mentioned before, there’s a wonderful CS theory group at UT that you can work with and benefit from, including Adam Klivans, David Zuckerman, Anna Gal, Vijaya Ramachandran, Brent Waters, Eric Price, Greg Plaxton, and of course my wife Dana Moshkovitz, who will be joining UT as well.  If you’re interested, please email me a CV and a short cover letter, and ask your PhD adviser and one or two others to email me recommendation letters.  The postdoc would be for two years by default.

    Update (March 26): If you want to be considered for next year, please get your application to me by March 31st.

    Another Update: I’m very honored, along with fourteen others, to have received a 2016 US National Security Science and Engineering Faculty Fellowship (NSSEFF), which supports unclassified basic research related in some way to DoD interests. My project is called “Paths to Quantum Supremacy.” Now that my Waterman award has basically been spent down, this is where much of the funding for quantum computing initiatives at UT Austin will come from for the next five years.

Quantum. Crypto. Things happen. I blog.

March 6th, 2016

1. A bunch of people emailed me to ask about the paper “Realization of a scalable Shor algorithm”: a joint effort by the groups of my MIT colleague Ike Chuang and of Innsbruck’s Rainer Blatt.  The paper has been on the arXiv since July, but last week everyone suddenly noticed it because it appeared in Science.  See also the articles in MIT News and IEEE Spectrum.

Briefly, the new work uses Kitaev’s version of Shor’s factoring algorithm, running on an ion-trap quantum computer with five calcium ions, to prove that, with at least 90% confidence, 15 equals 3×5.  Now, one might object that the “15=3×5 theorem” has by now been demonstrated many times using quantum computing (indeed, Chuang himself was involved in the historic first such demonstration, with Neil Gershenfeld in 1997).  Furthermore, if one counts demonstrations not based on quantum computing, some people have claimed even earlier precedents for that theorem.

Nevertheless, as far as I can tell, the new work is a genuine milestone in experimental QC, because it dispenses with most of the precompilation tricks that previous demonstrations of Shor’s algorithm used.  “Precompilation tricks” are a fancier term for “cheating”: i.e., optimizing a quantum circuit in ways that would only make sense if you already assumed that 15 was, indeed, 3×5.  So, what’s new is that a QC has now factored 15 “scalably”: that is, with much less cheating than before.
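For concreteness, here’s a toy Python sketch (my own illustration, not anything from the paper) of the classical shell of Shor’s algorithm for N=15, with the order-finding core, the only step that actually needs a quantum computer and the step whose circuit the ion trap ran, simulated by brute force.  The “precompilation” shortcuts amount to baking the answer to that core step into the circuit ahead of time.

    from math import gcd

    def shor_shell(N=15, a=7):
        """Toy sketch: the classical shell of Shor's algorithm, with the
        order-finding core (the only quantum step) simulated by brute force."""
        assert gcd(a, N) == 1                 # otherwise gcd(a, N) is already a factor

        r = 1                                 # "quantum" step: find the order of a mod N
        while pow(a, r, N) != 1:
            r += 1

        # Classical post-processing: for even r with a^(r/2) != -1 (mod N),
        # the two gcds below give nontrivial factors of N.
        if r % 2 == 0 and pow(a, r // 2, N) != N - 1:
            return gcd(pow(a, r // 2) - 1, N), gcd(pow(a, r // 2) + 1, N)
        return None                           # unlucky base a; pick another and retry

    print(shor_shell())                       # (3, 5)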

Of course, as I’m sure the authors would acknowledge, the word “scalable” in their title admits multiple interpretations, rather like the word “possible.”  (It’s possible to buy strawberry Mentos, and it’s also possible to convert the Sun into computronium, but for different senses of “possible.”)  As I wrote in the comments section of my last post:

There are still all the difficulties of integrating a huge number of qubits—which, in ion-trap implementations, would almost certainly mean having many traps that can communicate with each other using gate teleportation—as well as implementing quantum fault-tolerance (meaning: doing 2-qubit gates at the fault-tolerance threshold, moving qubits around to the right places, pumping in fresh qubits, pumping out dirty ones, etc).  Those all remain major engineering problems for the future.

See also this comment by Vaughan Pratt, who remarks: “the MIT press release … would appear to have translated [‘scalable’] to mean that RSA was now approaching its best-by date, although the paper itself makes no such claim.”

In any case, regardless of how long it takes until we can factor enormous numbers like 91, congratulations to the MIT and Innsbruck groups on what’s certainly progress toward scalable ion-trap QC!

2. Other people wrote to ask about a striking recent preprint of Kaplan, Leurent, Leverrier, and Naya-Plasencia, which points out how Simon’s algorithm—i.e., the forerunner of Shor’s algorithm—can be used to break all sorts of practical private-key authentication schemes in quantum polynomial time, assuming the adversary can query the scheme being attacked on a coherent superposition of inputs.  In practice, this assumption is unlikely to hold, unless the adversary gets the actual obfuscated code of the scheme being attacked (in which case it can simply run that code on its own quantum computer, making superposition queries possible).  Also, this is not the first time Simon’s algorithm has been used to attack cryptography; previous work in the same spirit by Kuwakado and Morii showed how to use Simon’s algorithm to break the 3-round Feistel scheme and the Even-Mansour scheme, again if we assume superposition queries.
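For readers who’ve forgotten what Simon’s algorithm even does, here’s a toy Python sketch (my own illustration, not the Kaplan et al. attacks themselves): the oracle hides an XOR mask s with f(x) = f(x⊕s); each run of the quantum subroutine returns a uniformly random y with y·s = 0 (mod 2), and a little linear algebra over GF(2) then recovers s.  Roughly speaking, the attacks build such an f out of the scheme being queried, so that the hidden mask reveals key material.

    import random

    def simon_toy(n=4, s=0b1011):
        """Toy Simon's problem: a 2-to-1 function f with f(x) == f(x ^ s)
        for a hidden mask s, plus the classical post-processing."""
        table = {}
        def f(x):                                   # canonical label for the pair {x, x^s}
            return table.setdefault(min(x, x ^ s), len(table))

        assert all(f(x) == f(x ^ s) for x in range(2 ** n))   # the Simon promise

        def dot(a, b):                              # inner product mod 2
            return bin(a & b).count("1") % 2

        # Each run of the quantum subroutine yields a random y with y.s = 0 (mod 2);
        # here we fake that sampling classically, then filter candidates over GF(2).
        orthogonal = [y for y in range(2 ** n) if dot(y, s) == 0]
        candidates = list(range(1, 2 ** n))
        while len(candidates) > 1:
            y = random.choice(orthogonal)
            candidates = [t for t in candidates if dot(y, t) == 0]
        return candidates[0]

    print(bin(simon_toy()))                         # 0b1011: the hidden mask, recovered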

Even so, Kaplan et al. seem to pretty dramatically expand the range of “practical” cryptosystems that are known to be vulnerable to Simon attacks in the superposed-query model.  I suspect this will force a revision in how we talk about Simon’s algorithm: from “useless, but theoretically important, and historically important because it led to Shor’s algorithm” to “actually maybe not that useless.”  (See here for a previous attempt of mine to give an interesting “explicit” problem that Simon’s algorithm solves in polynomial time, but that’s classically hard.  Alas, my candidate problem turned out to be classically easy.)  This is analogous to the revision that “Einstein-certified randomness” and the RUV theorem recently forced in how we talk about Bell’s inequality: we can no longer tell students that Bell’s work was important because of the conceptual point it proved about local hidden variables, and because of all the other stuff it led to, even though it obviously has no applications in and of itself.  Now it does have applications in and of itself.

To a quantum complexity theorist like me, who doesn’t know nearly as much applied crypto as he should, the real news in the Kaplan et al. paper is not that Simon’s algorithm can break the sorts of systems they study.  Rather, it’s that so many systems that are vulnerable to Simon attack exist and are used in the first place!  Once people understand the problem, I doubt it will be hard to design schemes of similar efficiency that remain quantum-secure even in the superposed-query model (under some plausible assumption, like that an underlying one-way function is quantum-secure).  Indeed, recent work of Boneh and Zhandry, among others, has already taken significant steps in that direction.  So the situation doesn’t seem “as bad” as it was with public-key crypto, where once Shor’s algorithm comes along, the plausibly quantum-secure alternatives that we currently know (like lattice-based crypto and quantum key distribution) are either much less efficient than RSA and Diffie-Hellman, or else require new hardware.  Still, the new observations about Simon’s algorithm show us how the history of quantum computing could have unfolded differently: rather than Simon → Shor → everyone gets excited (because their crypto is now vulnerable), people could’ve gotten cryptographically excited immediately after Simon.

3. Speaking of Diffie-Hellman, belated congratulations to Whitfield Diffie and Martin Hellman for an extremely well-deserved Turing Award!

4. At MIT’s weekly quantum information group meeting, Aram Harrow spoke about his new paper with Ed Farhi, “Quantum Supremacy through the Quantum Approximate Optimization Algorithm.”  Using the same arguments developed around 2010 by me and Alex Arkhipov, and (independently) by Bremner, Jozsa, and Shepherd, this paper shows that, even though the recently-developed QAOA/Quinoa quantum optimization algorithm turns out not to beat the best classical algorithms on the Max E3LIN2 problem (see here and here)—still, whatever that algorithm does do, at least there’s no polynomial-time classical algorithm that samples from the same distribution over outputs, unless the polynomial hierarchy collapses.

In other words: even if the algorithm fails at its original goal, it’s still hard for a classical computer to reproduce its exact pattern of failure!  Hence: Quantum Supremacy.

A secondary goal of Aram and Eddie’s paper is to make the Aaronson-Arkhipov and Bremner et al. arguments more accessible to physicists, by decreasing the amount of “weird complexity theory” invoked.  (I suppose I’ve asked for this—for physicists to de-complexify complexity theory—by telling everyone for years how easy quantum mechanics becomes once you take away the physics!)  I’ll leave it to physicists to judge how well Aram and Eddie succeed at their pedagogical goal, but I’m thrilled by any such effort to communicate across fields.  Aram’s talk would surely have served that same educational purpose, had it not gotten derailed partway through by Donald Trump jokes from the audience.  (My contribution: “Aram, will you disavow support from quantum supremacists?”)


Unrelated Update: Some people might be interested in this brief interview with Michael Cerullo, who read The Ghost in the Quantum Turing Machine and wanted to ask me about “the relevance of quantum mechanics to brain preservation, uploading, and identity.”

From Boston to Austin

February 28th, 2016

I have some big news—well, not for the world, but for me personally.  Starting this summer, I’ll be leaving MIT, and starting a new phase of my life, as David J. Bruton Jr. Centennial Professor of Computer Science at the University of Texas at Austin.  I’ll also be the founding director of UT Austin’s new quantum computing center, whose motto will be “Hook ’em Hadamards“, and whose logo will depict a fierce longhorn bull, whose horns are bra and ket signs enclosing an inner product between two quantum states.  My brilliant and talented wife, Dana Moshkovitz Aaronson, will also be joining UT Austin, as a tenured Associate Professor of Computer Science.  Our current PhD students will remain enrolled at MIT, while also spending as much time as they like in Austin.

I’ll deeply miss MIT and Boston.  More than anything else, I’ll miss the phenomenal students at MIT, who I’ve had the immense privilege to teach and learn from for nine years.  Go Beavers!  I’m grateful as well to my many friends and colleagues who made my years at MIT so rewarding: a time of personal growth, in which I developed from a skinny, insecure 26-year-old nerd, blogging and trying to prove oracle separations, into a pot-bellied, tenured, 34-year-old married-father nerd, still blogging and trying to prove the same oracle separations (but no longer as diligently).

To nip an otherwise-inevitable rumor: I wasn’t forced to leave MIT over anything here on Shtetl-Optimized.  I feel the need to say this because, within the last year, I’ve spent hundreds of miserable hours scrolling through social media threads wherein stranger after stranger proclaimed me basically the world’s worst scum (tied, perhaps, with the other Scott A.), and even called on MIT to fire me.  Given that experience, it was repeatedly jarring for me to reenter reality and discover how irrelevant this all was, either to MIT or to any of the universities that recruited me and Dana.  Bizarre as it sounds, CS departments mostly cared about what actual research we were doing and could bring to them!  So students and faculty afraid to debate anything controversial online under their real names, however politely, should know that even in 2016, the banner of academic freedom yet waves.

Without further ado, let me list ten things that are awesome about Austin and that helped attract me and Dana there.

  1. One of the strongest CS departments and theory groups in the world.  From 1984 until his death in 2002, UT Austin was home to Edsger Dijkstra, who not only discovered Dijkstra’s algorithm but also penned the immortal words that might as well be tattooed on my stomach: computer science is no more about computers than astronomy is about telescopes.  Today, Austin’s CS department is rapidly expanding, and just within theory, is home to David Zuckerman, Anna Gal, Vijaya Ramachandran, Brent Waters, Eric Price, and Greg Plaxton.  With me and Dana there as well, I can say with all due modesty that we intend to compete against any CS theory program anywhere in the world.
  2. Adam Klivans.  The closest I’ve had to a mentor in the exceedingly narrow field of theoretical computer science humor.
  3. An outstanding recent track record with CS theory PhD students.  Since the turn of the century, UT Austin has produced Sasha Sherstov, Anup Rao, Allison Bishop Lewko, Seth Pettie, Vladimir Trifonov, Raghu Meka, and other stars of the CS theory world.  That record lets me without the slightest hesitation tell hotshot undergrads who want to do classical and/or quantum complexity theory to apply to Austin for grad school.
  4. The opportunity to build—or rather, rebuild—a UT presence in quantum computing.  While I’m excited to help build a new group—and I feel like it’s the right time in my career to do that—I can’t say that this is the first time UT Austin will have a significant presence in quantum computing.  Way back in the late 70s and early 80s, UT was home to most of the (proto) “quantum computing research” that existed on earth.  It’s there that John Archibald Wheeler philosophized about “It from Bit,” that Bryce deWitt popularized the Many-Worlds Interpretation and Hugh Everett gave his only public lecture on the subject, that David Deutsch did a four-year postdoc in which he formed the seeds of the idea of quantum computing, and that Wojciech Zurek, William Wootters, and Benjamin Schumacher (who between them, founded decoherence theory, proved the No-Cloning Theorem, and coined the term “qubit”) did their PhDs.  I’m honored to tread in their footsteps.
  5. Money.  Texas, as it turns out, has a lot of it.  Now, the conventional wisdom would be that Texas’ wealth is mostly irrelevant to academic scientists, because it’s controlled by reactionary oilmen for whom curiosity-driven research is not exactly the top priority.  That might have been true about the administrations of George W. Bush or Rick Perry.  But Texas’ current governor, Greg Abbott, while still a right-wing Republican, also pushed through an aggressive $4-billion measure called the Governor’s University Research Initiative, one of whose central goals is to recruit leading scientists to Texas.
  6. Weather.  To a first approximation, Austin is lovely and pleasant during the academic year (even as the planet warms, this should remain true for at least a few more decades)—and while I’d sort of vaguely imagined all of Texas as a giant desert, Austin turns out to be lush and green and full of swimming holes.  The summers, of course, are hot enough to fuse hydrogen.  But for academics like me and Dana, it turns out that there’s an elegant solution to that, one unavailable for dealing with New England winters.  That solution is to leave town, to use June, July, and August for miscellaneous academic travel.
  7. Quality of life.  If we’re being honest, I’m not someone likely to spend much time at indie-rock festivals, or whatever quirky cultural stuff it is that’s made Austin the fastest-growing city in the US.  But here’s something I do care about: even though highway traffic in Austin is bad and getting worse, that need not affect my life too much.  Research indicates that, for roughly the price of our current 2-bedroom condo in Cambridge, we could get a lovely 4-bedroom with a yard that’s in walking distance to the UT campus, as well as to stores, restaurants, good schools, and parks.
  8. Schools.  I had a pretty miserable experience growing up.  I don’t know if Lily (or any future children Dana and I have) will be anything like I was, but given that she’s in an “at-risk population” for nerdiness, I’d love to live in a place with nerd education options that don’t stink.  Austin, happily, has two renowned math/science magnet schools—Kealing Middle School and LASA High School—which admit based on test scores.  (By contrast, in most parts of the US, such programs either don’t exist or admit purely by lottery.)  Austin also has the only elementary school, anywhere, whose admissions director told me that sure, they’d let a student skip a grade if it made sense academically.
  9. Steven Weinberg.  I confess: it probably affected me more than it should that arguably the greatest scientist now walking the earth, a legend of physics who could go wherever the hell he wanted, has chosen to spend the past thirty-plus years at UT Austin.  On our last visit there, Dana, my parents, and I had the honor of having dinner with Weinberg.  After we’d exchanged stories about Telluride House at Cornell, where Weinberg and I both lived as undergrads (and which apparently changed little between the 1950s and the 1990s), Weinberg sang the praises of Austin for hours.  (Admittedly, part of why Weinberg enjoys Austin so much is that there it’s easy to be on a first-name basis with the mayor, tech billionaires, and all the other leaders of the city—an advantage that might be Nobel-laureate-specific!)
  10. Adventure.  By background and temperament, I’m probably one of the “least Texan” Americans imaginable: a nerdy east-coast Jewish intellectual who enjoys snow, can’t much tolerate spicy food, is bored by cowboy movies and fears physical confrontation.  Indeed, until I actually visited the place, my only real associations with Texas were tumbleweeds blowing across a parched desert hellscape, oilmen with giant belt buckles guffawing about so-called global warming, crazed football hooligans filling city-sized stadiums, shotguns, rattlesnakes, and George W. Bush.  But then, the contrast between the over-the-top image and the pleasant reality of Austin had the paradoxical effect of making moving to Texas feel like an adventure—an adventure with an acceptable risk profile.  Like, if I’m going to uproot at all, why not to a place that’s strange and different and interesting?

Even given the above, some people will ask about things they’d consider obvious dealbreakers for moving to Texas.  In particular, what about the infamous new law that essentially forces UT Austin to let students carry concealed firearms to class?  Well, I oppose that law.  Indeed, when I haven’t been angering the social-justice left, I’ve been angering the right by (for example) blogging about my strong support for gun control.  To me, it seems like a terrible idea for the Texas state legislature, which provides only 14% of the UT system’s budget, to force on UT a gun policy that its faculty and students overwhelmingly hate.  And I admired Steven Weinberg’s announcement that he intends to defy the law in his classroom, and fight it out in court if necessary.  (Weinberg also gave, as one reason to oppose the law, how much harder it will make it for UT to recruit faculty.)

But at the same time … Dana is Israeli.  For her, it’s perfectly normal to go outside and see 18-year-old girls chatting and laughing with huge-ass machine guns slung over their shoulders.  Having spent a month of each year in Tel Aviv, seeing passersby with guns has become, if not exactly normal to me, then not something I fear 2% as much as I fear crashing my car.  And indeed, if one takes a statistical approach to risk, Austin has a much lower per-capita violent crime rate than Boston does.

And yes, I know, the US and Israel have completely different gun cultures: in Israel, for example, the only people carrying around semiautomatics are trained and monitored conscripts; there’s no concept of a private “right” to such a weapon.  And yes, the principle matters.  But if one is unwilling to move to any place that has any laws one disagrees with, one should probably look into faculty positions on offshore barges or Jupiter.

Austin itself, of course, is only slightly less liberal than Portland, the blueberry in the tomato soup as Rick Perry so memorably put it.  Even so, the maps insist that Austin is in Texas, which means that while there one will probably encounter Texans.  (A friend, on hearing that Dana took a quick liking to Austin when she visited, quipped that it was probably because Austin reminded her of Israel: “hot and surrounded by hostile territory.”)

Now, the actual Texans who I’ve met so far have been frighteningly warm and hospitable.  But the question stands: what will I do if, while living there, I meet (let’s suppose) some sun-calloused cattle ranchers who consider me an arrogant, effete coastal liberal who patronizes them in blog posts like this one?  What if they tell me to scram, head back east, and never mess with Texas again?

Well, I’ve already decided what I’d like to do in this hypothetical situation.  I’d like to invite the ranchers over to my place for some barbecued beers and ice-cold steaks, or whatever it is you eat in Texas, and tell them all about quantum query algorithms, and ask them about cattle feed, and try to find common ground, just like I tried to find common ground with the other end of the political spectrum—with the folks who called me a clueless, patriarchal, entitled white male douchebro who silenced their already-marginalized voices by not agreeing with everything they said.  For I’ve increasingly come to the conviction that, while you might fail to find common ground with someone, you’ve got to try, you’ve got to steelman their argument and learn whatever you can from it.  I once, for example, thought about the Religious Right as purely contemptible, deserving only unthinking snark, and I was completely wrong.  Even when I was right on the underlying issues, I was wrong on the epistemology.  In Texas, hopefully I’ll have a chance to do better.

In summary:

  1. Wherever you’re at in your career, if you’d like to do quantum information and/or theoretical computer science research on the wild frontier—if QMA, QCMA, and BQP/qpoly strike you as little more than wayward calves to be lassoed in—then please consider joining us at UT Austin.  To be concrete: we’ll be looking for distinguished senior faculty to hire under the Governor’s University Research Initiative, we’ll be looking for tenure-track junior faculty in both CS and ECE, we’ll be looking for postdocs, we’ll be looking for grad students, and finally we’ll be looking for undergrads who’d like to join the innovative Turing Scholars honors program.
  2. If you’d just like to come for a week and give a seminar, we’ll have money for that too.
  3. Hook ’em Hadamards!

Totally Unrelated Update (Feb. 29): Michael Mitzenmacher has asked me to announce that nominations are open for the SIGACT Distinguished Service Prize. More information is available here.

The universe has a high (but not infinite) Sleep Number

February 12th, 2016

As everyone knows, this was a momentous week in the history of science.  And I don’t need to tell you why: the STOC and CCC accepted paper lists finally came out.

Haha, kidding!  I meant, we learned this week that gravitational waves were directly detected for the first time, a hundred years after Einstein first predicted them (he then reneged on the prediction, then reinstated it, then reneged again, then reinstated it a second time—see Daniel Kennefick’s article for some of the fascinating story).

By now, we all know some of the basic parameters here: a merger of two black holes, ~1.3 billion light-years away, weighing ~36 and ~29 solar masses respectively, which (when they merged) gave off 3 solar masses’ worth of energy in the form of gravitational waves—in those brief 0.2 seconds, radiating more watts of power than all the stars in the observable universe combined.  By the time the waves reached earth, they were only stretching and compressing space by 1 part in 4×10²¹—thus, changing the lengths of the 4-kilometer arms of LIGO by 10⁻¹⁸ meters (1/1000 the diameter of a proton).  But this was detected, in possibly the highest-precision measurement ever made.

As I read the historic news, there’s one question that kept gnawing at me: how close would you need to have been to the merging black holes before you could, you know, feel the distortion of space?  I made a guess, assuming the strength of gravitational waves fell off with distance as 1/r².  Then I checked Wikipedia and learned that the strength falls off only as 1/r, which completely changes the situation, and implies that the answer to my question is: you’d need to be very close.  Even if you were only as far from the black-hole cataclysm as the earth is from the sun, I get that you’d be stretched and squished by a mere ~50 nanometers (this interview with Jennifer Ouellette and Amber Stuver says 165 nanometers, but as a theoretical computer scientist, I try not to sweat factors of 3).  Even if you were 3000 miles from the black holes—New York/LA distance—I get that the gravitational waves would only stretch and squish you by around a millimeter.  Would you feel that?  Not sure.  At 300 miles, it would be maybe a centimeter—though presumably the linearized approximation is breaking down by that point.  (See also this Physics StackExchange answer, which reaches similar conclusions, though again off from mine by factors of 3 or 4.)  Now, the black holes themselves were orbiting about 200 miles from each other before they merged.  So, the distance at which you could safely feel their gravitational waves isn’t too far from the distance at which they’d rip you to shreds and swallow you!
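For anyone who wants to redo the arithmetic, here’s the back-of-the-envelope 1/r scaling in Python (all the inputs are rounded, so treat the outputs as order-of-magnitude only):

    # Rough 1/r scaling of the strain; numbers are rounded, expect factor-of-a-few slop.
    ly  = 9.46e15                      # meters per light-year
    D   = 1.3e9 * ly                   # distance to the merger, in meters
    h0  = 1 / 4e21                     # strain at Earth: 1 part in 4x10^21
    you = 1.8                          # a person's height, in meters

    for label, r in [("1 AU", 1.5e11), ("3000 miles", 4.8e6), ("300 miles", 4.8e5)]:
        h = h0 * D / r                 # strain grows like 1/r as you approach the source
        print(f"{label:>11}: strain ~ {h:.0e}, stretch of a person ~ {h * you:.0e} m")

    # Roughly: tens of nanometers at 1 AU, a millimeter at 3000 miles,
    # a centimeter at 300 miles -- consistent with the estimates above.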

In summary, to stretch and squeeze spacetime by just a few hundred nanometers per meter, along the surface of a sphere whose radius equals our orbit around the sun, requires more watts of power than all the stars in the observable universe give off as starlight.  People often say that the message of general relativity is that matter bends spacetime “as if it were a mattress.”  But they should add that the reason it took so long for humans to notice this, is that it’s a really friggin’ firm mattress, one that you need to bounce up and down on unbelievably hard before it quivers, and would probably never want to sleep on.

As if I needed to say it, this post is an invitation for experts to correct whatever I got wrong.  Public humiliation, I’ve found, is a very fast and effective way to learn an unfamiliar field.

“Why does the universe exist?” … finally answered (or dissolved) in this blog post!

February 6th, 2016

In my previous post, I linked to seven Closer to Truth videos of me spouting about free will, Gödel’s Theorem, black holes, etc. etc.  I also mentioned that there was a segment of me talking about why the universe exists that for some reason they didn’t put up.  Commenter mjgeddes wrote, “Would have liked to hear your views on the existence of the universe question,” so I answered in another comment.

But then I thought about it some more, and it seemed inappropriate to me that my considered statement about why the universe exists should only be available as part of a comment thread on my blog.  At the very least, I thought, such a thing ought to be a top-level post.

So, without further ado:

My view is that, if we want to make mental peace with the “Why does the universe exist?” question, the key thing we need to do is forget about the universe for a while, and just focus on the meaning of the word “why.”  I.e., when we ask a why-question, what kind of answer are we looking for, what kind of answer would make us happy?

Notice, in particular, that there are hundreds of other why-questions, not nearly as prestigious as the universe one, yet that seem just as vertiginously unanswerable.  E.g., why is 5 a prime number?  Why does “cat” have 3 letters?

Now, the best account of “why”—and of explanation and causality—that I know about is the interventionist account, as developed for example in Judea Pearl’s work.  In that account, to ask “Why is X true?” is simply to ask: “What could we have changed in order to make X false?”  I.e., in the causal network of reality, what are the levers that turn X on or off?
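To make the interventionist reading concrete, here’s a minimal toy sketch in Python (the rain/sprinkler example is the standard textbook one; the code is purely illustrative):

    # A toy "causal network": asking why wet_grass is True means asking which
    # upstream lever you could set, i.e. an intervention do(X := value), to make it False.
    def wet_grass(rain=True, sprinkler=False):
        return rain or sprinkler

    print(wet_grass())                               # True
    print(wet_grass(rain=False))                     # False: "because it rained" answers the why
    print(wet_grass(rain=False, sprinkler=True))     # True again: a different lever restores it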

This question can sometimes make sense even in pure math.  For example: “Why is this theorem true?” “It’s true only because we’re working over the complex numbers.  The analogous statement about real numbers is false.”  A perfectly good interventionist answer.

On the other hand, in the case of “Why is 5 prime?,” all the levers you could pull to make 5 composite involve significantly more advanced machinery than is needed to pose the question in the first place.  E.g., “5 is prime because we’re working over the ring of integers.  Over other rings, like Z[√5], it admits nontrivial factorizations.”  Not really an explanation that would satisfy a four-year-old (or me, for that matter).

And then we come to the question of why anything exists.  For an interventionist, this translates into: what causal lever could have been pulled in order to make nothing exist?  Well, whatever lever it was, presumably the lever itself was something—and so you see the problem right there.

Admittedly, suppose there were a giant red button, somewhere within the universe, that when pushed would cause the entire universe (including the button itself) to blink out of existence. In that case, we could say: the reason why the universe continues to exist is that no one has pushed the button yet. But even then, that still wouldn’t explain why the universe had existed.

Here’s some video of me spouting about Deep Questions

February 4th, 2016

In January 2014, I attended an FQXi conference on Vieques island in Puerto Rico.  While there, Robert Lawrence Kuhn interviewed me for his TV program Closer to Truth, which deals with science and religion and philosophy and you get the idea.  Alas, my interview was at the very end of the conference, and we lost track of the time—so unbeknownst to me, a plane full of theorists was literally sitting on the runway waiting for me to finish philosophizing!  This was the second time Kuhn interviewed me for his show; the first time was on a cruise ship near Norway in 2011.  (Thankless hero that I am, there’s nowhere I won’t travel for the sake of truth.)

Anyway, after a two-year wait, the videos from Puerto Rico are finally available online.  While my vignettes cover what, for most readers of this blog, will be very basic stuff, I’m sort of happy with how they turned out: I still stutter and rock back and forth, but not as much as usual.  For your viewing convenience, here are the new videos:

I had one other vignette, about why the universe exists, but they seem to have cut that one.  Alas, if I knew why the universe existed in January 2014, I can’t remember any more.

One embarrassing goof: I referred to the inventor of Newcomb’s Paradox as “Simon Newcomb.”  Actually it was William Newcomb: a distant relative of Simon Newcomb, the 19th-century astronomer who measured the speed of light.

At their website, you can also see my older 2011 videos, and videos from others who might be known to readers of this blog, like Marvin Minsky, Roger Penrose, Rebecca Newberger Goldstein, David Chalmers, Sean Carroll, Max Tegmark, David Deutsch, Raphael Bousso, Freeman Dyson, Nick Bostrom, Ray Kurzweil, Rodney Brooks, Stephen Wolfram, Greg Chaitin, Garrett Lisi, Seth Lloyd, Lenny Susskind, Lee Smolin, Steven Weinberg, Wojciech Zurek, Fotini Markopoulou, Juan Maldacena, Don Page, and David Albert.  (No, I haven’t yet watched most of these, but now that I linked to them, maybe I will!)

Thanks very much to Robert Lawrence Kuhn and Closer to Truth (and my previous self, I guess?) for providing Shtetl-Optimized content so I don’t have to.


Update: Andrew Critch of CFAR asked me to post the following announcement.

We’re seeking a full time salesperson for the Center for Applied Rationality in Berkeley, California. We’ve streamlined operations to handle large volume in workshop admissions, and now we need that volume to pour in. Your role would be to fill our workshops, events, and alumni community with people. Last year we had 167 total new alumni. This year we want 120 per month. Click here to find out more.

Marvin Minsky

January 26th, 2016

Yesterday brought the sad news that Marvin Minsky passed away at age 88.  I never met Minsky (I wish I had); I just had one email exchange with him back in 2002, about Stephen Wolfram’s book.  But Minsky was my academic great-grandfather (through Manuel Blum and Umesh Vazirani), and he influenced me in many other ways.  For example, in his and Papert’s 1969 book Perceptrons—notorious for “killing neural net research for a decade,” because of its mis- or over-interpreted theorems about the representational limitations of single-layer neural nets—the way Minsky and Papert proved those theorems was by translating questions about computation into questions about the existence or nonexistence of low-degree polynomials with various properties, and then answering the latter questions using MATH.  Their “polynomial method” is now a mainstay of quantum algorithms research (having been brought to the subject by Beals et al.), and in particular, has been a mainstay of my own career.  Hardly Minsky’s best-known contribution to human knowledge, but that even such a relatively minor part of his oeuvre could have legs half a century later is a testament to his impact.
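As a tiny illustration of the flavor of that method (my own toy example, nothing from Perceptrons itself): every Boolean function has a unique multilinear real polynomial agreeing with it on {0,1}^n, and questions about computing the function become questions about that polynomial, for instance its degree.  Here’s a sketch that computes the representation of OR and checks that its exact degree is the full n:

    from itertools import product

    def multilinear_coefficients(f, n):
        """Coefficients of the unique multilinear polynomial agreeing with f
        on {0,1}^n, via Mobius inversion over subsets of variables."""
        coeffs = {}
        for S in product((0, 1), repeat=n):
            total = 0
            for T in product((0, 1), repeat=n):
                if all(t <= s for s, t in zip(S, T)):      # T "below" S coordinatewise
                    total += (-1) ** (sum(S) - sum(T)) * f(T)
            coeffs[S] = total
        return coeffs

    OR = lambda x: int(any(x))
    c = multilinear_coefficients(OR, 3)
    print(max(sum(S) for S, v in c.items() if v != 0))     # 3: OR needs full degree n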

I’m sure readers will have other thoughts to share about Minsky, so please do so in the comments section.  Personal reminiscences are especially welcome.

Happy Third Birthday Lily!

January 21st, 2016

Non-Lily-Related Updates (Jan. 22)

Uri Bram posted a cute little article about whether he was justified, as a child, to tell his parents that he wouldn’t clean up his room because doing so would only increase the universe’s entropy and thereby hasten its demise. The article quotes me, Sean Carroll, and others about that important question.

On Wednesday I gave a TCS+ online seminar about “The Largest Possible Quantum Speedups.” If you’re interested, you can watch the YouTube video here.


[Photos: birthday cake, a sculpture, and two of Lily’s artworks.]

(I promised a while ago that I’d upload some examples of Lily’s MOMA-worthy modern artworks.  So, here are two!)

A few quotable quotes:

Daddy, when you were little, you were a girl like me!

I’m feeling a bit juicy [thirsty for juice].

Saba and Safta live in Israel. They’re mommy’s friends! [Actually they’re mommy’s parents.]

Me: You’re getting bigger every day!
Lily: But I’m also getting smaller every day!

Me: Then Goldilocks tasted the third bowl, which was Baby Bear’s, and it was just right!  So she ate it all up.  Then Goldilocks went…
Lily: No, then Goldilocks ate some cherries in the kitchen before she went to the bedroom.  And blueberries.
Me: Fine, so she ate cherries and blueberries.  Then she went to the bedroom, and she saw that there were three beds…
Lily: No, four beds!
Me: Fine, four beds.  So she laid in the first bed, but she said, “this bed is too hard.”
Lily: No, it was too comfortable!
Me: Too comfortable?  Is she some kind of monk?

Me [pointing to a taxidermed black bear in a museum]: What’s that?
Lily: A bear!
Me: Is it Winnie the Pooh?
Lily: No, it’s a different kind of bear.
Me [pointing to a tan bear in the next case]: So what about that one? Is that Winnie?
Lily: Yes! That’s Winnie the Pooh!
[Looking at it more closely] No, it’s a different kind of Winnie.

Lily: Why is it dark outside?
Me: Because it’s night time.
Lily: Why is it night time?
Me: Because the sun went to the other side of the world.
Lily: It went to China!
Me: Yes! It did in fact go to China.
Lily: Why did the sun go to China?
Me: Well, more accurately, it only seemed to go there, because the world that we’re on is spinning.
Lily: Why is the world spinning?
Me: Because of the conservation of angular momentum.
Lily: Why is the … consibation of amomomo?
Me: I suppose because of Noether’s Theorem, and the fact that our laws of physics are symmetric under spatial rotations.
Lily: Why is…
Me: That’s enough for today Lily!

Edging in: the biggest science news of 2015

January 3rd, 2016

For years, I was forced to endure life with my nose up against the glass of the Annual Edge Question.  What are you optimistic about?  Ooh! ooh! Call on me!  I’m optimistic about someday being able to prove my pessimistic beliefs (like P≠NP).  How is the Internet changing the way you think?  Ooh, ooh! I know! Google and MathOverflow are saving me from having to think at all!  So then why are they only asking Steven Pinker, Freeman Dyson, Richard Dawkins, David Deutsch, and some random other people like that?

But all that has changed.  This year, I was invited to participate in Edge for the first time.  So, OK, here’s the question:

What do you consider the most interesting recent [scientific] news?  What makes it important?

My response is here.  I wasn’t in love with the question, because of what I saw as an inherent ambiguity in it: the news that’s most interesting to me, that I have a comparative advantage in talking about, and that people probably want to hear me talk about (e.g., progress in quantum computing), is not necessarily what I’d regard as the most important in any objective sense (e.g., climate change).  So, I decided to write my answer precisely about my internal tension in what I should consider most interesting: should it be the recent progress by John Martinis and others toward building a quantum computer?  Or should it be the melting glaciers, or something else that I’m confident will affect the future of the world?  Or possibly the mainstream attention now being paid to the AI-risk movement?  But if I really want to nerd out, then why not Babai’s graph isomorphism algorithm?  Or if I actually want to be honest about what excited me, then why not the superquadratic separations between classical and quantum query complexities for a total Boolean function, by Ambainis et al. and my student Shalev Ben-David?  On the other hand, how can I justify even caring about such things while the glaciers are melting?

So, yeah, my response tries to meditate on all those things.  My original title was “How nerdy do you want it?,” but John Brockman of Edge had me change it to something blander (“How widely should we draw the circle?”), and made a bunch of other changes from my usual style.  Initially I chafed at having an editor for what basically amounted to a blog post; on the other hand, I’m sure I would’ve gotten in trouble much less often on this blog had I had someone to filter my words for me.

Anyway, of course I wasn’t the only person to write about the climate crisis.  Robert Trivers, Laurence Smith, and Milford Wolpoff all wrote about it as well (Trivers most chillingly and concisely), while Max Tegmark wrote about the mainstreaming of AI risk.  John Naughton even wrote about Babai’s graph isomorphism breakthrough (though he seems unaware that the existing GI algorithms were already extremely fast in practice, and therefore makes misleading claims about the new algorithm’s practical applications).  Unsurprisingly, no one else wrote about breakthroughs in quantum query complexity: you’ll need to go to my essay for that!  A bit more surprisingly, no one besides me wrote about progress in quantum computing at all (if we don’t count the loophole-free Bell test).

Anyway, on reflection, 2015 actually was a pretty awesome year for science, no matter how nerdy you want it or how widely you draw the circle.  Here are other advances that I easily could’ve written about but didn’t:

I’ve now read all (more or less) of this year’s Edge responses.  Even though some of the respondents pushed personal hobbyhorses like I’d feared, I was impressed by how easy it was to discern themes: advances that kept cropping up in one answer after another and that one might therefore guess are actually important (or at least, are currently perceived to be important).

Probably at the top of the list was a new gene-editing technique called CRISPR: Randolph Nesse, Paul Dolan, Eric Topol, Mark Pagel, and Stuart Firestein, among others, all wrote about this, and about its implications for creating designer humans.

Also widely-discussed was the discovery that most psychology studies fail to replicate (I’d long assumed as much, but apparently this was big news in psychology!): Nicholas Humphrey, Stephen Kosslyn, Jonathan Schooler, Ellen Winner, Judith Rich Harris, and Philip Tetlock all wrote about that.

Then there was the Pluto flyby, which Juan Enriquez, Roger Highfield, and Nicholas Christakis all wrote about.  (As Christakis, Master of Silliman College at Yale, was so recently a victim of a social-justice mob, I found it moving how he simply ignored those baying for his head and turned his attention heavenward in his Edge answer.)

Then there was progress in deep learning, including Google’s Deep Dream (those images of dogs in nebulae that filled your Facebook wall) and DeepMind (the program that taught itself how to play dozens of classic video games).  Steve Omohundro, Andy Clark, Jamshed Bharucha, Kevin Kelly, David Dalrymple, and Alexander Wissner-Gross all wrote about different aspects of this story.

And recent progress in SETI, which Yuri Milner (who’s given $100 million for it) and Mario Livio wrote about.

Unsurprisingly, a bunch of high-energy physicists wrote about high-energy physics at the LHC: how the Higgs boson was found (still news?), how nothing other than the Higgs boson was found (the biggest news?), but how there’s now the slightest hint of a new particle at 750 GeV.  See Lee Smolin, Garrett Lisi, Sean Carroll, and Sarah Demers.

Finally, way out on the Pareto frontier of importance and disgustingness was the recently-discovered therapeutic value of transplanting one person’s poop into another person’s intestines, which Joichi Ito, Pamela Rosenkranz, and Alan Alda all wrote about (it also, predictably, featured in a recent South Park episode).

Without further ado, here are 27 other answers that struck me in one way or another:

  • Steven Pinker on how happy happy things are getting better (and we can measure it)
  • Freeman Dyson on the Dragonfly astronomical observatory
  • Jonathan Haidt on how prejudice against people of differing political opinions was discovered to have surpassed racial, gender, and religious prejudice
  • S. Abbas Raza on Piketty’s r>g
  • Rebecca Newberger Goldstein, thoughtful as usual, on the recent study that said it’s too simple to say female participation is lower in STEM fields—rather, female participation is lower in all and only those fields, STEM or non-STEM, whose participants believe (rightly or wrongly) that “genius” is required rather than just conscientious effort
  • Bill Joy on recent advances on reducing CO2 emissions
  • Paul Steinhardt on recent observations saying that, not only were the previous “B-modes from inflation” just galactic dust, but there are no real B-modes to within the current detection limits, and this poses a problem for inflation (I hadn’t heard about this last part)
  • Aubrey de Grey on new antibiotics that are grown in the soil rather than in lab cultures
  • John Tooby on the evolutionary rationale for germline engineering
  • W. Tecumseh Fitch on the coming reality of the “Jurassic Park program” (bringing back extinct species through DNA splicing—though probably not dinosaurs, whose DNA is too degraded)
  • Keith Devlin on the new prospect of using massive datasets (from MOOCs, for example) to actually figure out how students learn
  • Richard Muller on how air pollution in China has become one of the world’s worst problems (imagine every child in Beijing being force-fed two packs of cigarettes per day)
  • Ara Norenzayan on the demographic trends in religious belief
  • James Croak on amazing advances in battery technology (which were news to me)
  • Buddhini Samarasinghe on (among other things) the power of aspirin to possibly prevent cancer
  • Todd Sacktor on a new treatment for Parkinson’s
  • Charles Seife on the imminent availability of data about pretty much everything in our lives
  • Susan Blackmore on “that dress” and what it revealed about the human visual system
  • Brian Keating on experiments that should soon tell us the neutrinos’ masses (again, I hadn’t heard about these)
  • Michael McCullough on something called “reproductive religiosity theory,” which posits that the central purpose of religions is to enforce social norms around mating and reproduction (for what it’s worth, I’d always regarded that as obvious; it’s even expounded in the last chapter of Quantum Computing Since Democritus)
  • Greg Cochran on the origin of Europeans
  • David Buss on the “mating crisis among educated women”
  • Ed Regis on how high-fat diets are better (except, isn’t this the principle behind Atkins, and isn’t this pretty old news by now?)
  • Melanie Swan on blockchain-based cryptography, such as Bitcoin (though it wasn’t entirely clear to me what point Swan was making about it)
  • Paul Davies on LIGO getting ready to detect its first gravitational waves
  • Samuel Arbesman on how weather prediction has gotten steadily better (rendering our culture’s jokes about the perpetually-wrong weatherman outdated, with hardly anyone noticing)
  • Alison Gopnik on how the ubiquity of touchscreen devices like the iPad means that toddlers can now master computers, and this is something genuinely new under the sun (I can testify from personal experience that she’s onto something)

Then there were three answers for which the “progress” being celebrated, seemed to me to be progress racing faster into WrongVille:

  • Frank Tipler on how one can conclude a priori that there must be a Big Crunch to our future (and hence, the arena for Tiplerian theology) in order to prevent the black hole information paradox from arising, all recent cosmological evidence to the contrary be damned.
  • Ross Anderson on an exciting conference whose participants aim to replace quantum mechanics with local realistic theories.  (Anderson, in particular, is totally wrong that you can get Bell inequality violation from “a combination of local action and global correlation,” unless the global correlation goes as far as a ’t Hooft-like superdeterministic conspiracy.)
  • Gordon Kane on how the big news is that the LHC should soon see superparticles.  (This would actually be fine except that Kane omits the crucial context, that he’s been predicting superparticles just around the corner again and again for the past twenty years and they’ve never shown up)

Finally, two responses by old friends that amused me.  The science-fiction writer Rudy Rucker just became aware of the 1998 discovery of dark energy, and considers that to be exciting scientific news (yes, Rudy, so it was!).  And Michael Vassar—the Kevin Bacon or Paul Erdős of the rationalist world, the guy who everyone’s connected to somehow—writes something about a global breakdown of economic rationality, $20 bills on the sidewalk getting ignored, that I had trouble understanding (though the fault is probably mine).

6.S899 Student Project Showcase!

December 22nd, 2015

As 2015 winds down, I thought I’d continue my tradition of using this blog to showcase some awesome student projects from my graduate class.  (For the previous project showcases from Quantum Complexity Theory, see here, here, and here.  Also see here for the showcase from Philosophy and Theoretical Computer Science.)

This fall, I taught 6.S899, a one-time “Seminar on Physics and Computation” that focused on BosonSampling, complexity and quantum gravity, and universality of physical systems.  There were also lots of guest lectures and student presentations.  Unfortunately, we didn’t do any notes or recordings.

Fortunately, though, the students did do projects, which were literature reviews (some venturing into original research), and all nine have agreed to share their project reports here!  So enjoy, thanks so much to the students for making it a great class, and happy holidays.


Update (Dec. 23): Here are two conference announcements that I’ve been asked to make: Innovations in Theoretical Computer Science (ITCS) 2016, January 14-16 in Cambridge MA, and the Fifth Women in Theory Workshop, at the Simons Institute in Berkeley, May 22-25, 2016.