From Boston to Austin

February 28th, 2016

I have some big news—well, not for the world, but for me personally.  Starting this summer, I’ll be leaving MIT, and starting a new phase of my life, as David J. Bruton Jr. Centennial Professor of Computer Science at the University of Texas at Austin.  I’ll also be the founding director of UT Austin’s new quantum computing center, whose motto will be “Hook ’em Hadamards”, and whose logo will depict a fierce longhorn bull, whose horns are bra and ket signs enclosing an inner product between two quantum states.  My brilliant and talented wife, Dana Moshkovitz Aaronson, will also be joining UT Austin, as a tenured Associate Professor of Computer Science.  Our current PhD students will remain enrolled at MIT, while also spending as much time as they like in Austin.

I’ll deeply miss MIT and Boston.  More than anything else, I’ll miss the phenomenal students at MIT, who I’ve had the immense privilege to teach and learn from for nine years.  Go Beavers!  I’m grateful as well to my many friends and colleagues who made my years at MIT so rewarding: a time of personal growth, in which I developed from a skinny, insecure 26-year-old nerd, blogging and trying to prove oracle separations, into a pot-bellied, tenured, 34-year-old married-father nerd, still blogging and trying to prove the same oracle separations (but no longer as diligently).

To nip an otherwise-inevitable rumor: I wasn’t forced to leave MIT over anything here on Shtetl-Optimized.  I feel the need to say this because, within the last year, I’ve spent hundreds of miserable hours scrolling through social media threads wherein stranger after stranger proclaimed me basically the world’s worst scum (tied, perhaps, with the other Scott A.), and even called on MIT to fire me.  Given that experience, it was repeatedly jarring for me to reenter reality and discover how irrelevant this all was, either to MIT or to any of the universities that recruited me and Dana.  Bizarre as it sounds, CS departments mostly cared about what actual research we were doing and could bring to them!  So students and faculty afraid to debate anything controversial online under their real names, however politely, should know that even in 2016, the banner of academic freedom yet waves.

Without further ado, let me list ten things that are awesome about Austin and that helped attract me and Dana there.

  1. One of the strongest CS departments and theory groups in the world.  From 1984 until his death in 2002, UT Austin was home to Edsger Dijkstra, who not only discovered Dijkstra’s algorithm but also penned the immortal words that might as well be tattooed on my stomach: computer science is no more about computers than astronomy is about telescopes.  Today, Austin’s CS department is rapidly expanding, and just within theory, is home to David Zuckerman, Anna Gal, Vijaya Ramachandran, Brent Waters, Eric Price, and Greg Plaxton.  With me and Dana there as well, I can say with all due modesty that we intend to compete against any CS theory program anywhere in the world.
  2. Adam Klivans.  The closest I’ve had to a mentor in the exceedingly narrow field of theoretical computer science humor.
  3. An outstanding recent track record with CS theory PhD students.  Since the turn of the century, UT Austin has produced Sasha Sherstov, Anup Rao, Allison Bishop Lewko, Seth Pettie, Vladimir Trifonov, Raghu Meka, and other stars of the CS theory world.  That record lets me, without the slightest hesitation, tell hotshot undergrads who want to do classical and/or quantum complexity theory to apply to Austin for grad school.
  4. The opportunity to build—or rather, rebuild—a UT presence in quantum computing.  While I’m excited to help build a new group—and I feel like it’s the right time in my career to do that—I can’t say that this is the first time UT Austin will have a significant presence in quantum computing.  Way back in the late 70s and early 80s, UT was home to most of the (proto) “quantum computing research” that existed on earth.  It’s there that John Archibald Wheeler philosophized about “It from Bit,” that Bryce DeWitt popularized the Many-Worlds Interpretation and Hugh Everett gave his only public lecture on the subject, that David Deutsch did a four-year postdoc in which he formed the seeds of the idea of quantum computing, and that Wojciech Zurek, William Wootters, and Benjamin Schumacher (who, between them, founded decoherence theory, proved the No-Cloning Theorem, and coined the term “qubit”) did their PhDs.  I’m honored to tread in their footsteps.
  5. Money.  Texas, as it turns out, has a lot of it.  Now, the conventional wisdom would be that Texas’ wealth is mostly irrelevant to academic scientists, because it’s controlled by reactionary oilmen for whom curiosity-driven research is not exactly the top priority.  That might have been true about the administrations of George W. Bush or Rick Perry.  But Texas’ current governor, Greg Abbott, while still a right-wing Republican, also pushed through an aggressive $4-billion measure called the Governor’s University Research Initiative, one of whose central goals is to recruit leading scientists to Texas.
  6. Weather.  To a first approximation, Austin is lovely and pleasant during the academic year (even as the planet warms, this should remain true for at least a few more decades)—and while I’d sort of vaguely imagined all of Texas as a giant desert, Austin turns out to be lush and green and full of swimming holes.  The summers, of course, are hot enough to fuse hydrogen.  But for academics like me and Dana, it turns out that there’s an elegant solution to that, one unavailable for dealing with New England winters.  That solution is to leave town, to use June, July, and August for miscellaneous academic travel.
  7. Quality of life.  If we’re being honest, I’m not someone likely to spend much time at indie-rock festivals, or whatever quirky cultural stuff it is that’s made Austin the fastest-growing city in the US.  But here’s something I do care about: even though highway traffic in Austin is bad and getting worse, that need not affect my life too much.  Research indicates that, for roughly the price of our current 2-bedroom condo in Cambridge, we could get a lovely 4-bedroom with a yard that’s in walking distance to the UT campus, as well as to stores, restaurants, good schools, and parks.
  8. Schools.  I had a pretty miserable experience growing up.  I don’t know if Lily (or any future children Dana and I have) will be anything like I was, but given that she’s in an “at-risk population” for nerdiness, I’d love to live in a place with nerd education options that don’t stink.  Austin, happily, has two renowned math/science magnet schools—Kealing Middle School and LASA High School—which admit based on test scores.  (By contrast, in most parts of the US, such programs either don’t exist or admit purely by lottery.)  Austin also has the only elementary school, anywhere, whose admissions director told me that sure, they’d let a student skip a grade if it made sense academically.
  9. Steven Weinberg.  I confess: it probably affected me more than it should that arguably the greatest scientist now walking the earth, a legend of physics who could go wherever the hell he wanted, has chosen to spend the past thirty-plus years at UT Austin.  On our last visit there, Dana, my parents, and I had the honor of having dinner with Weinberg.  After we’d exchanged stories about Telluride House at Cornell, where Weinberg and I both lived as undergrads (and which apparently changed little between the 1950s and the 1990s), Weinberg sang the praises of Austin for hours.  (Admittedly, part of why Weinberg enjoys Austin so much is that there it’s easy to be on a first-name basis with the mayor, tech billionaires, and all the other leaders of the city—an advantage that might be Nobel-laureate-specific!)
  10. Adventure.  By background and temperament, I’m probably one of the “least Texan” Americans imaginable: a nerdy east-coast Jewish intellectual who enjoys snow, can’t much tolerate spicy food, is bored by cowboy movies and fears physical confrontation.  Indeed, until I actually visited the place, my only real associations with Texas were tumbleweeds blowing across a parched desert hellscape, oilmen with giant belt buckles guffawing about so-called global warming, crazed football hooligans filling city-sized stadiums, shotguns, rattlesnakes, and George W. Bush.  But then, the contrast between the over-the-top image and the pleasant reality of Austin had the paradoxical effect of making moving to Texas feel like an adventure—an adventure with an acceptable risk profile.  Like, if I’m going to uproot at all, why not to a place that’s strange and different and interesting?

Even given the above, some people will ask about things they’d consider obvious dealbreakers for moving to Texas.  In particular, what about the infamous new law that essentially forces UT Austin to let students carry concealed firearms to class?  Well, I oppose that law.  Indeed, when I haven’t been angering the social-justice left, I’ve been angering the right by (for example) blogging about my strong support for gun control.  To me, it seems like a terrible idea for the Texas state legislature, which provides only 14% of the UT system’s budget, to force on UT a gun policy that its faculty and students overwhelmingly hate.  And I admired Steven Weinberg’s announcement that he intends to defy the law in his classroom, and fight it out in court if necessary.  (Weinberg also cited, as one reason to oppose the law, how much harder it will make it for UT to recruit faculty.)

But at the same time … Dana is Israeli.  For her, it’s perfectly normal to go outside and see 18-year-old girls chatting and laughing with huge-ass machine guns slung over their shoulders.  Having spent a month of each year in Tel Aviv, I’ve found that seeing passersby with guns has become, if not exactly normal to me, then not something I fear 2% as much as I fear crashing my car.  And indeed, if one takes a statistical approach to risk, Austin has a much lower per-capita violent crime rate than Boston does.

And yes, I know, the US and Israel have completely different gun cultures: in Israel, for example, the only people carrying around semiautomatics are trained and monitored conscripts; there’s no concept of a private “right” to such a weapon.  And yes, the principle matters.  But if one is unwilling to move to any place that has any laws one disagrees with, one should probably look into faculty positions on offshore barges or Jupiter.

Austin itself, of course, is only slightly less liberal than Portland: the “blueberry in the tomato soup,” as Rick Perry so memorably put it.  Even so, the maps insist that Austin is in Texas, which means that while there one will probably encounter Texans.  (A friend, on hearing that Dana took a quick liking to Austin when she visited, quipped that it was probably because Austin reminded her of Israel: “hot and surrounded by hostile territory.”)

Now, the actual Texans who I’ve met so far have been frighteningly warm and hospitable.  But the question stands: what will I do if, while living there, I meet (let’s suppose) some sun-calloused cattle ranchers who consider me an arrogant, effete coastal liberal who patronizes them in blog posts like this one?  What if they tell me to scram, head back east, and never mess with Texas again?

Well, I’ve already decided what I’d like to do in this hypothetical situation.  I’d like to invite the ranchers over to my place for some barbecued beers and ice-cold steaks, or whatever it is you eat in Texas, and tell them all about quantum query algorithms, and ask them about cattle feed, and try to find common ground, just like I tried to find common ground with the other end of the political spectrum—with the folks who called me a clueless, patriarchal, entitled white male douchebro who silenced their already-marginalized voices by not agreeing with everything they said.  For I’ve increasingly come to the conviction that, while you might fail to find common ground with someone, you’ve got to try, you’ve got to steelman their argument and learn whatever you can from it.  I once, for example, thought about the Religious Right as purely contemptible, deserving only unthinking snark, and I was completely wrong.  Even when I was right on the underlying issues, I was wrong on the epistemology.  In Texas, hopefully I’ll have a chance to do better.

In summary:

  1. Wherever you’re at in your career, if you’d like to do quantum information and/or theoretical computer science research on the wild frontier—if QMA, QCMA, and BQP/qpoly strike you as little more than wayward calves to be lassoed in—then please consider joining us at UT Austin.  To be concrete: we’ll be looking for distinguished senior faculty to hire under the Governor’s University Research Initiative, we’ll be looking for tenure-track junior faculty in both CS and ECE, we’ll be looking for postdocs, we’ll be looking for grad students, and finally we’ll be looking for undergrads who’d like to join the innovative Turing Scholars honors program.
  2. If you’d just like to come for a week and give a seminar, we’ll have money for that too.
  3. Hook ’em Hadamards!

Totally Unrelated Update (Feb. 29): Michael Mitzenmacher has asked me to announce that nominations are open for the SIGACT Distinguished Service Prize. More information is available here.

The universe has a high (but not infinite) Sleep Number

February 12th, 2016

As everyone knows, this was a momentous week in the history of science.  And I don’t need to tell you why: the STOC and CCC accepted paper lists finally came out.

Haha, kidding!  I meant, we learned this week that gravitational waves were directly detected for the first time, a hundred years after Einstein first predicted them (he then reneged on the prediction, then reinstated it, then reneged again, then reinstated it a second time—see Daniel Kennefick’s article for some of the fascinating story).

By now, we all know some of the basic parameters here: a merger of two black holes, ~1.3 billion light-years away, weighing ~36 and ~29 solar masses respectively, which (when they merged) gave off 3 solar masses’ worth of energy in the form of gravitational waves—in those brief 0.2 seconds, radiating more watts of power than all the stars in the observable universe combined.  By the time the waves reached earth, they were only stretching and compressing space by 1 part in 4×10^21—thus, changing the lengths of the 4-kilometer arms of LIGO by 10^-18 meters (1/1000 the diameter of a proton).  But this was detected, in possibly the highest-precision measurement ever made.

As I read the historic news, there’s one question that kept gnawing at me: how close would you need to have been to the merging black holes before you could, you know, feel the distortion of space?  I made a guess, assuming the strength of gravitational waves fell off with distance as 1/r^2.  Then I checked Wikipedia and learned that the strength falls off only as 1/r, which completely changes the situation, and implies that the answer to my question is: you’d need to be very close.  Even if you were only as far from the black-hole cataclysm as the earth is from the sun, I get that you’d be stretched and squished by a mere ~50 nanometers (this interview with Jennifer Ouellette and Amber Stuver says 165 nanometers, but as a theoretical computer scientist, I try not to sweat factors of 3).  Even if you were 3000 miles from the black holes—New-York/LA distance—I get that the gravitational waves would only stretch and squish you by around a millimeter.  Would you feel that?  Not sure.  At 300 miles, it would be maybe a centimeter—though presumably the linearized approximation is breaking down by that point.  (See also this Physics StackExchange answer, which reaches similar conclusions, though again off from mine by factors of 3 or 4.)  Now, the black holes themselves were orbiting about 200 miles from each other before they merged.  So, the distance at which you could safely feel their gravitational waves, isn’t too far from the distance at which they’d rip you to shreds and swallow you!
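If you want to play with the numbers yourself, here’s a minimal Python sketch of that 1/r scaling.  The constants (peak strain ~10^-21 at ~1.3 billion light-years, and a 1.7-meter person) are rough values of my own choosing, so expect the same factor-of-few slop conceded above:

```python
# Rough back-of-envelope for how gravitational-wave strain scales as 1/r.
# Assumed inputs (approximate): peak strain ~1e-21 as measured at Earth,
# with the source ~1.3 billion light-years away.

LY_M = 9.46e15              # meters per light-year
R_SOURCE = 1.3e9 * LY_M     # distance from the merger to Earth
H_EARTH = 1e-21             # approximate peak strain at Earth

def strain_at(r_meters):
    """Strain amplitude falls off as 1/r (unlike energy flux, which goes as 1/r^2)."""
    return H_EARTH * (R_SOURCE / r_meters)

AU = 1.496e11               # Earth-sun distance, meters
MILE = 1609.34              # meters per mile
PERSON = 1.7                # height of a person, meters

for label, r in [("1 AU", AU), ("3000 miles", 3000 * MILE), ("300 miles", 300 * MILE)]:
    h = strain_at(r)
    print(f"{label:>11}: strain ~ {h:.1e}, stretching a {PERSON} m person by ~{h * PERSON:.1e} m")
```

Running this gives roughly 10^-7 meters (a hundred-odd nanometers) at 1 AU and a few millimeters at 3000 miles, consistent with the estimates above to within the factors of 3 or 4 already admitted.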

In summary, to stretch and squeeze spacetime by just a few hundred nanometers per meter, along the surface of a sphere whose radius equals that of Earth’s orbit around the sun, requires more watts of power than all the stars in the observable universe give off as starlight.  People often say that the message of general relativity is that matter bends spacetime “as if it were a mattress.”  But they should add that the reason it took so long for humans to notice this, is that it’s a really friggin’ firm mattress, one that you need to bounce up and down on unbelievably hard before it quivers, and would probably never want to sleep on.

As if I needed to say it, this post is an invitation for experts to correct whatever I got wrong.  Public humiliation, I’ve found, is a very fast and effective way to learn an unfamiliar field.

“Why does the universe exist?” … finally answered (or dissolved) in this blog post!

February 6th, 2016

In my previous post, I linked to seven Closer to Truth videos of me spouting about free will, Gödel’s Theorem, black holes, etc. etc.  I also mentioned that there was a segment of me talking about why the universe exists that for some reason they didn’t put up.  Commenter mjgeddes wrote, “Would have liked to hear your views on the existence of the universe question,” so I answered in another comment.

But then I thought about it some more, and it seemed inappropriate to me that my considered statement about why the universe exists should only be available as part of a comment thread on my blog.  At the very least, I thought, such a thing ought to be a top-level post.

So, without further ado:

My view is that, if we want to make mental peace with the “Why does the universe exist?” question, the key thing we need to do is forget about the universe for a while, and just focus on the meaning of the word “why.”  I.e., when we ask a why-question, what kind of answer are we looking for, what kind of answer would make us happy?

Notice, in particular, that there are hundreds of other why-questions, not nearly as prestigious as the universe one, yet that seem just as vertiginously unanswerable.  E.g., why is 5 a prime number?  Why does “cat” have 3 letters?

Now, the best account of “why”—and of explanation and causality—that I know about is the interventionist account, as developed for example in Judea Pearl’s work.  In that account, to ask “Why is X true?” is simply to ask: “What could we have changed in order to make X false?”  I.e., in the causal network of reality, what are the levers that turn X on or off?
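As a toy illustration of that account (mine, not anything from Pearl’s actual work), here’s a Python sketch that answers a why-question by brute-force enumeration of interventions on a two-variable causal network, using the standard rain/sprinkler example:

```python
# Interventionist "why" in miniature: find the levers that turn the outcome off.
import itertools

def wet_grass(rain, sprinkler):
    """The structural equation for the effect variable."""
    return rain or sprinkler

# The observed world: it rained, the sprinkler was off, so the grass is wet.
world = {"rain": True, "sprinkler": False}

# "Why is the grass wet?" becomes: which interventions do(X = x) make WetGrass false?
for var, val in itertools.product(world, [True, False]):
    intervened = dict(world, **{var: val})   # do(var = val), holding the rest fixed
    if not wet_grass(**intervened):
        print(f"do({var} = {val}) turns WetGrass off: a lever answering the why-question")
```

Here the only lever found is do(rain = False), which is exactly the sense in which “because it rained” answers the question.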

This question can sometimes make sense even in pure math.  For example: “Why is this theorem true?” “It’s true only because we’re working over the complex numbers.  The analogous statement about real numbers is false.”  A perfectly good interventionist answer.

On the other hand, in the case of “Why is 5 prime?,” all the levers you could pull to make 5 composite involve significantly more advanced machinery than is needed to pose the question in the first place.  E.g., “5 is prime because we’re working over the ring of integers.  Over other rings, like Z[√5], it admits nontrivial factorizations.”  Not really an explanation that would satisfy a four-year-old (or me, for that matter).

And then we come to the question of why anything exists.  For an interventionist, this translates into: what causal lever could have been pulled in order to make nothing exist?  Well, whatever lever it was, presumably the lever itself was something—and so you see the problem right there.

Admittedly, suppose there were a giant red button, somewhere within the universe, that when pushed would cause the entire universe (including the button itself) to blink out of existence. In that case, we could say: the reason why the universe continues to exist is that no one has pushed the button yet. But even then, that still wouldn’t explain why the universe had existed.

Here’s some video of me spouting about Deep Questions

February 4th, 2016

In January 2014, I attended an FQXi conference on Vieques island in Puerto Rico.  While there, Robert Lawrence Kuhn interviewed me for his TV program Closer to Truth, which deals with science and religion and philosophy and you get the idea.  Alas, my interview was at the very end of the conference, and we lost track of the time—so unbeknownst to me, a plane full of theorists was literally sitting on the runway waiting for me to finish philosophizing!  This was the second time Kuhn interviewed me for his show; the first time was on a cruise ship near Norway in 2011.  (Thankless hero that I am, there’s nowhere I won’t travel for the sake of truth.)

Anyway, after a two-year wait, the videos from Puerto Rico are finally available online.  While my vignettes cover what, for most readers of this blog, will be very basic stuff, I’m sort of happy with how they turned out: I still stutter and rock back and forth, but not as much as usual.  For your viewing convenience, here are the new videos:

I had one other vignette, about why the universe exists, but they seem to have cut that one.  Alas, if I knew why the universe existed in January 2014, I can’t remember any more.

One embarrassing goof: I referred to the inventor of Newcomb’s Paradox as “Simon Newcomb.”  Actually it was William Newcomb: a distant relative of Simon Newcomb, the 19th-century astronomer who measured the speed of light.

At their website, you can also see my older 2011 videos, and videos from others who might be known to readers of this blog, like Marvin Minsky, Roger Penrose, Rebecca Newberger Goldstein, David Chalmers, Sean Carroll, Max Tegmark, David Deutsch, Raphael Bousso, Freeman Dyson, Nick Bostrom, Ray Kurzweil, Rodney Brooks, Stephen Wolfram, Greg Chaitin, Garrett Lisi, Seth Lloyd, Lenny Susskind, Lee Smolin, Steven Weinberg, Wojciech Zurek, Fotini Markopoulou, Juan Maldacena, Don Page, and David Albert.  (No, I haven’t yet watched most of these, but now that I linked to them, maybe I will!)

Thanks very much to Robert Lawrence Kuhn and Closer to Truth (and my previous self, I guess?) for providing Shtetl-Optimized content so I don’t have to.


Update: Andrew Critch of CFAR asked me to post the following announcement.

We’re seeking a full time salesperson for the Center for Applied Rationality in Berkeley, California. We’ve streamlined operations to handle large volume in workshop admissions, and now we need that volume to pour in. Your role would be to fill our workshops, events, and alumni community with people. Last year we had 167 total new alumni. This year we want 120 per month. Click here to find out more.

Marvin Minsky

January 26th, 2016

Yesterday brought the sad news that Marvin Minsky passed away at age 88.  I never met Minsky (I wish I had); I just had one email exchange with him back in 2002, about Stephen Wolfram’s book.  But Minsky was my academic great-grandfather (through Manuel Blum and Umesh Vazirani), and he influenced me in many other ways.  For example, in his and Papert’s 1969 book Perceptrons—notorious for “killing neural net research for a decade,” because of its mis- or over-interpreted theorems about the representational limitations of single-layer neural nets—the way Minsky and Papert proved those theorems was by translating questions about computation into questions about the existence or nonexistence of low-degree polynomials with various properties, and then answering the latter questions using MATH.  Their “polynomial method” is now a mainstay of quantum algorithms research (having been brought to the subject by Beals et al.), and in particular, has been a mainstay of my own career.  Hardly Minsky’s best-known contribution to human knowledge, but that even such a relatively minor part of his oeuvre could have legs half a century later is a testament to his impact.
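For the curious, here’s a small Python sketch, my own illustration rather than anything from Perceptrons, of what that translation looks like in its modern form: it computes the degree of the unique multilinear real polynomial agreeing with a Boolean function on {0,1}^n, which is the quantity the polynomial method (à la Beals et al.) uses to lower-bound quantum query complexity.

```python
# Compute deg(f): the degree of the unique multilinear real polynomial that
# agrees with the Boolean function f on {0,1}^n.
from itertools import combinations, product

def multilinear_degree(f, n):
    deg = 0
    for size in range(n + 1):
        for S in combinations(range(n), size):
            # Mobius inversion gives the coefficient of the monomial prod_{i in S} x_i.
            coef = 0
            for bits in product([0, 1], repeat=size):
                x = [0] * n
                for i, b in zip(S, bits):
                    x[i] = b
                coef += (-1) ** (size - sum(bits)) * f(x)
            if coef != 0:
                deg = max(deg, size)
    return deg

n = 4
parity = lambda x: sum(x) % 2
OR = lambda x: int(any(x))
print(multilinear_degree(parity, n))  # prints 4: parity needs full degree n
print(multilinear_degree(OR, n))      # prints 4: exactly representing OR does too
```

(Beals et al. showed that a quantum algorithm making T queries yields a polynomial of degree at most 2T for its acceptance probability, which is how degree lower bounds translate back into lower bounds on quantum algorithms.)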

I’m sure readers will have other thoughts to share about Minsky, so please do so in the comments section.  Personal reminiscences are especially welcome.

Happy Third Birthday Lily!

January 21st, 2016

Non-Lily-Related Updates (Jan. 22)

Uri Bram posted a cute little article about whether he was justified, as a child, in telling his parents that he wouldn’t clean up his room because doing so would only increase the universe’s entropy and thereby hasten its demise. The article quotes me, Sean Carroll, and others about that important question.

On Wednesday I gave a TCS+ online seminar about “The Largest Possible Quantum Speedups.” If you’re interested, you can watch the YouTube video here.


[Photos: Lily’s birthday cake, a sculpture, and two of her artworks]

(I promised a while ago that I’d upload some examples of Lily’s MOMA-worthy modern artworks.  So, here are two!)

A few quotable quotes:

Daddy, when you were little, you were a girl like me!

I’m feeling a bit juicy [thirsty for juice].

Saba and Safta live in Israel. They’re mommy’s friends! [Actually they’re mommy’s parents.]

Me: You’re getting bigger every day!
Lily: But I’m also getting smaller every day!

Me: Then Goldilocks tasted the third bowl, which was Baby Bear’s, and it was just right!  So she ate it all up.  Then Goldilocks went…
Lily: No, then Goldilocks ate some cherries in the kitchen before she went to the bedroom.  And blueberries.
Me: Fine, so she ate cherries and blueberries.  Then she went to the bedroom, and she saw that there were three beds…
Lily: No, four beds!
Me: Fine, four beds.  So she lay in the first bed, but she said, “this bed is too hard.”
Lily: No, it was too comfortable!
Me: Too comfortable?  Is she some kind of monk?

Me [pointing to a taxidermied black bear in a museum]: What’s that?
Lily: A bear!
Me: Is it Winnie the Pooh?
Lily: No, it’s a different kind of bear.
Me [pointing to a tan bear in the next case]: So what about that one? Is that Winnie?
Lily: Yes! That’s Winnie the Pooh!
[Looking at it more closely] No, it’s a different kind of Winnie.

Lily: Why is it dark outside?
Me: Because it’s night time.
Lily: Why is it night time?
Me: Because the sun went to the other side of the world.
Lily: It went to China!
Me: Yes! It did in fact go to China.
Lily: Why did the sun go to China?
Me: Well, more accurately, it only seemed to go there, because the world that we’re on is spinning.
Lily: Why is the world spinning?
Me: Because of the conservation of angular momentum.
Lily: Why is the … consibation of amomomo?
Me: I suppose because of Noether’s Theorem, and the fact that our laws of physics are symmetric under spatial rotations.
Lily: Why is…
Me: That’s enough for today Lily!

Edging in: the biggest science news of 2015

January 3rd, 2016

For years, I was forced to endure life with my nose up against the glass of the Annual Edge Question.  What are you optimistic about?  Ooh! ooh! Call on me!  I’m optimistic about someday being able to prove my pessimistic beliefs (like P≠NP).  How is the Internet changing the way you think?  Ooh, ooh! I know! Google and MathOverflow are saving me from having to think at all!  So then why are they only asking Steven Pinker, Freeman Dyson, Richard Dawkins, David Deutsch, and some random other people like that?

But all that has changed.  This year, I was invited to participate in Edge for the first time.  So, OK, here’s the question:

What do you consider the most interesting recent [scientific] news?  What makes it important?

My response is here.  I wasn’t in love with the question, because of what I saw as an inherent ambiguity in it: the news that’s most interesting to me, that I have a comparative advantage in talking about, and that people probably want to hear me talk about (e.g., progress in quantum computing), is not necessarily what I’d regard as the most important in any objective sense (e.g., climate change).  So, I decided to write my answer precisely about my internal tension about what I should consider most interesting: should it be the recent progress by John Martinis and others toward building a quantum computer?  Or should it be the melting glaciers, or something else that I’m confident will affect the future of the world?  Or possibly the mainstream attention now being paid to the AI-risk movement?  But if I really want to nerd out, then why not Babai’s graph isomorphism algorithm?  Or if I actually want to be honest about what excited me, then why not the superquadratic separations between classical and quantum query complexities for a total Boolean function, by Ambainis et al. and my student Shalev Ben-David?  On the other hand, how can I justify even caring about such things while the glaciers are melting?

So, yeah, my response tries to meditate on all those things.  My original title was “How nerdy do you want it?,” but John Brockman of Edge had me change it to something blander (“How widely should we draw the circle?”), and made a bunch of other changes from my usual style.  Initially I chafed at having an editor for what basically amounted to a blog post; on the other hand, I’m sure I would’ve gotten in trouble much less often on this blog had I had someone to filter my words for me.

Anyway, of course I wasn’t the only person to write about the climate crisis.  Robert Trivers, Laurence Smith, and Milford Wolpoff all wrote about it as well (Trivers most chillingly and concisely), while Max Tegmark wrote about the mainstreaming of AI risk.  John Naughton even wrote about Babai’s graph isomorphism breakthrough (though he seems unaware that the existing GI algorithms were already extremely fast in practice, and therefore makes misleading claims about the new algorithm’s practical applications).  Unsurprisingly, no one else wrote about breakthroughs in quantum query complexity: you’ll need to go to my essay for that!  A bit more surprisingly, no one besides me wrote about progress in quantum computing at all (if we don’t count the loophole-free Bell test).

Anyway, on reflection, 2015 actually was a pretty awesome year for science, no matter how nerdy you want it or how widely you draw the circle.  Here are other advances that I easily could’ve written about but didn’t.

I’ve now read all (more or less) of this year’s Edge responses.  Even though some of the respondents pushed personal hobbyhorses like I’d feared, I was impressed by how easy it was to discern themes: advances that kept cropping up in one answer after another and that one might therefore guess are actually important (or at least, are currently perceived to be important).

Probably at the top of the list was a new gene-editing technique called CRISPR: Randolph Nesse, Paul Dolan, Eric Topol, Mark Pagel, and Stuart Firestein among others all wrote about this, and about its implications for creating designer humans.

Also widely-discussed was the discovery that most psychology studies fail to replicate (I’d long assumed as much, but apparently this was big news in psychology!): Nicholas Humphrey, Stephen Kosslyn, Jonathan Schooler, Ellen Winner, Judith Rich Harris, and Philip Tetlock all wrote about that.

Then there was the Pluto flyby, which Juan Enriquez, Roger Highfield, and Nicholas Christakis all wrote about.  (As Christakis, Master of Silliman College at Yale, was so recently a victim of a social-justice mob, I found it moving how he simply ignored those baying for his head and turned his attention heavenward in his Edge answer.)

Then there was progress in deep learning, including Google’s Deep Dream (those images of dogs in nebulae that filled your Facebook wall) and DeepMind (the program that taught itself how to play dozens of classic video games).  Steve Omohundro, Andy Clark, Jamshed Bharucha, Kevin Kelly, David Dalrymple, and Alexander Wissner-Gross all wrote about different aspects of this story.

And recent progress in SETI, which Yuri Milner (who’s given $100 million for it) and Mario Livio wrote about.

Unsurprisingly, a bunch of high-energy physicists wrote about high-energy physics at the LHC: how the Higgs boson was found (still news?), how nothing other than the Higgs boson was found (the biggest news?), but how there’s now the slightest hint of a new particle at 750 GeV.  See Lee Smolin, Garrett Lisi, Sean Carroll, and Sarah Demers.

Finally, way out on the Pareto frontier of importance and disgustingness was the recently-discovered therapeutic value of transplanting one person’s poop into another person’s intestines, which Joichi Ito, Pamela Rosenkranz, and Alan Alda all wrote about (it also, predictably, featured in a recent South Park episode).

Without further ado, here are 27 other answers that struck me in one way or another:

  • Steven Pinker on how happy, happy things are getting better (and we can measure it)
  • Freeman Dyson on the Dragonfly astronomical observatory
  • Jonathan Haidt on how prejudice against people of differing political opinions was discovered to have surpassed racial, gender, and religious prejudice
  • S. Abbas Raza on Piketty’s r>g
  • Rebecca Newberger Goldstein, thoughtful as usual, on the recent study that said it’s too simple to say female participation is lower in STEM fields—rather, female participation is lower in all and only those fields, STEM or non-STEM, whose participants believe (rightly or wrongly) that “genius” is required rather than just conscientious effort
  • Bill Joy on recent advances on reducing CO2 emissions
  • Paul Steinhardt on recent observations saying that, not only were the previous “B-modes from inflation” just galactic dust, but there are no real B-modes to within the current detection limits, and this poses a problem for inflation (I hadn’t heard about this last part)
  • Aubrey de Grey on new antibiotics that are grown in the soil rather than in lab cultures
  • John Tooby on the evolutionary rationale for germline engineering
  • W. Tecumseh Fitch on the coming reality of the “Jurassic Park program” (bringing back extinct species through DNA splicing—though probably not dinosaurs, whose DNA is too degraded)
  • Keith Devlin on the new prospect of using massive datasets (from MOOCs, for example) to actually figure out how students learn
  • Richard Muller on how air pollution in China has become one of the world’s worst problems (imagine every child in Beijing being force-fed two packs of cigarettes per day)
  • Ara Norenzayan on the demographic trends in religious belief
  • James Croak on amazing advances in battery technology (which were news to me)
  • Buddhini Samarasinghe on (among other things) the power of aspirin to possibly prevent cancer
  • Todd Sacktor on a new treatment for Parkinson’s
  • Charles Seife on the imminent availability of data about pretty much everything in our lives
  • Susan Blackmore on “that dress” and what it revealed about the human visual system
  • Brian Keating on experiments that should soon tell us the neutrinos’ masses (again, I hadn’t heard about these)
  • Michael McCullough on something called “reproductive religiosity theory,” which posits that the central purpose of religions is to enforce social norms around mating and reproduction (for what it’s worth, I’d always regarded that as obvious; it’s even expounded in the last chapter of Quantum Computing Since Democritus)
  • Greg Cochran on the origin of Europeans
  • David Buss on the “mating crisis among educated women”
  • Ed Regis on how high-fat diets are better (except, isn’t this the principle behind Atkins, and isn’t this pretty old news by now?)
  • Melanie Swan on blockchain-based cryptography, such as Bitcoin (though it wasn’t entirely clear to me what point Swan was making about it)
  • Paul Davies on LIGO getting ready to detect its first gravitational waves
  • Samuel Arbesman on how weather prediction has gotten steadily better (rendering our culture’s jokes about the perpetually-wrong weatherman outdated, with hardly anyone noticing)
  • Alison Gopnik on how the ubiquity of touchscreen devices like the iPad means that toddlers can now master computers, and this is something genuinely new under the sun (I can testify from personal experience that she’s onto something)

Then there were three answers for which the “progress” being celebrated, seemed to me to be progress racing faster into WrongVille:

  • Frank Tipler on how one can conclude a priori that there must be a Big Crunch to our future (and hence, the arena for Tiplerian theology) in order to prevent the black hole information paradox from arising, all recent cosmological evidence to the contrary be damned.
  • Ross Anderson on an exciting conference whose participants aim to replace quantum mechanics with local realistic theories.  (Anderson, in particular, is totally wrong that you can get Bell inequality violation from “a combination of local action and global correlation,” unless the global correlation goes as far as a ’t Hooft-like superdeterministic conspiracy.)
  • Gordon Kane on how the big news is that the LHC should soon see superparticles.  (This would actually be fine except that Kane omits the crucial context, that he’s been predicting superparticles just around the corner again and again for the past twenty years and they’ve never shown up)

Finally, two responses by old friends that amused me.  The science-fiction writer Rudy Rucker just became aware of the 1998 discovery of dark energy, and considers that to be exciting scientific news (yes, Rudy, so it was!).  And Michael Vassar—the Kevin Bacon or Paul Erdős of the rationalist world, the guy who everyone’s connected to somehow—writes something about a global breakdown of economic rationality, $20 bills on the sidewalk getting ignored, that I had trouble understanding (though the fault is probably mine).

6.S899 Student Project Showcase!

December 22nd, 2015

As 2015 winds down, I thought I’d continue my tradition of using this blog to showcase some awesome student projects from my graduate class.  (For the previous project showcases from Quantum Complexity Theory, see here, here, and here.  Also see here for the showcase from Philosophy and Theoretical Computer Science.)

This fall, I taught 6.S899, a one-time “Seminar on Physics and Computation” that focused on BosonSampling, complexity and quantum gravity, and universality of physical systems.  There were also lots of guest lectures and student presentations.  Unfortunately, we didn’t do any notes or recordings.

Fortunately, though, the students did do projects (literature reviews, some of which ventured into original research), and all nine have agreed to share their project reports here!  So enjoy, thanks so much to the students for making it a great class, and happy holidays.


Update (Dec. 23): Here are two conference announcements that I’ve been asked to make: Innovations in Theoretical Computer Science (ITCS) 2016, January 14-16 in Cambridge MA, and the Fifth Women in Theory Workshop, at the Simons Institute in Berkeley, May 22-25, 2016.

Ask an unbounded question, get an uncomputable answer

December 11th, 2015

Just when I thought I could relax, as the waters slowly receded from the latest D-Tsunami, my inbox and Facebook feed once again lit up with inquiries—this time, asking me to confirm or deny that “A Paradox at the Heart of Mathematics Makes a Physics Problem Unanswerable.”

Uh-oh!

Luckily for my blood pressure, though, this one turned out to refer to something that more-or-less deserves the hype.  In particular, it’s about a phenomenal 146-page paper by Cubitt, Perez-Garcia, and Wolf, which just appeared this week in Nature (in condensed form, of course).  Incidentally, yeah, his name really is Toby Cubitt, pronounced like “qubit.”  He’s a good guy.

To those in quantum computing, Cubitt et al.’s breakthrough is old news, having already been on the arXiv for almost a year (we’ve also had a talk at MIT about it).  The arXiv has created a funny phenomenon, where you learn something new and cool, assimilate it, move on, and then a year later, everyone is suddenly asking you have you seen this thing, is it for real, etc. etc., just because the thing got some rubber stamp like acceptance to Nature that caused the press to pick it up.  Like, dude, I was into the undecidability of the spectral gap way before it went mainstream.

One more amusing anecdote before we dive into the math.  In his Nature News piece popularizing Cubitt et al.’s result, the writer Davide Castelvecchi quotes Rebecca Goldstein, the brilliant novelist and biographer of Kurt Gödel, as saying: “Turing thought more clearly about the relationship between physics and logic than Gödel did.”  Here’s what happened: Nature News wrote to Rebecca to ask what Gödel’s own thoughts were about the relation between undecidability and physics.  Rebecca passed the request along to me.  So I wrote back to her, arguing that they might just as well ask what Turing thought, since the Cubitt et al. result is “really” about Turing-undecidability (with Gödel-undecidability just an automatic corollary), and at any rate:

I also think that Turing thought more clearly about the relationship between logic and physics than Gödel did (indeed, Gödel himself said that it was only Turing‘s analysis of the notion of computability, in terms of actual physical machines that one could imagine building, that convinced him that computability had been properly defined).

Rebecca passed that back to Nature News, agreeing with it, and then at some point the quote became hers.  Far from being miffed about this, I consider having my forgettable words attributed to a genius like Rebecca to be one of the great honors of my life.  (By pure coincidence, she and I are having lunch next week; hopefully this will butter her up.)

So, OK, let me restate Cubitt et al.’s great theorem in less pop-sciencey terms than Nature News used.  (You could also just read the paper‘s intro, which is exceedingly clear, but what the hell—I’m here to serve.)

Suppose you have a two-dimensional material made of a bunch of stationary particles, each with local Hilbert space dimension d, which are arranged on an L×L square grid (so, there are L^2 particles in all).  And suppose there’s some fixed d^2-dimensional Hamiltonian h, with a local copy h_{i,j}=h acting on each neighboring pair of particles (i,j).  (I.e., the material is translationally invariant, with the same laws of physics acting throughout.)  Let H be the total Hamiltonian: that is, the sum of the h_{i,j}’s over all the neighboring (i,j)’s.

Then a huge fraction of all of physics—quantum field theory, condensed-matter physics, you name it—can be summarized as, you’re trying to figure out the eigenvalues and eigenvectors of H.  The lowest eigenvalue, λ_0, tells you your material’s ground energy, while the higher eigenvalues, λ_1, λ_2, …, tell you the next discrete energy levels that the material can jump up to.  The corresponding eigenvectors tell you which quantum states the material is sitting in when it has these energies: the ground state v_0, and the excited states v_1, v_2, …  Those, in turn, determine basically everything you could want to know about the material: whether it superconducts, etc. etc.

Of course, the eigenvalues and eigenvectors will depend on the lattice size L.  Equally obviously, for any fixed L, you could in principle compute all the eigenvalues and eigenvectors by just diagonalizing some huge-ass matrix.  (That matrix being H.)  But physicists are usually more interested in the limiting behavior as L goes to infinity.  One of their most basic distinctions is: the material is gapped if λ_1−λ_0, the difference between the first excited energy and the ground energy, converges to some positive value or even grows with L as L→∞.  It’s gapless if λ_1−λ_0 converges to 0 as L→∞.  (Actually, Cubitt et al. use more technical definitions of both of these concepts, but we’ll ignore that.)
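To make those definitions concrete, here’s a minimal NumPy sketch, purely illustrative: by the very theorem under discussion, no finite computation like this can decide the L→∞ behavior.  It builds the translationally-invariant H for qubits (d=2) on a small L×L grid from a given 4×4 term h, then reports λ_1−λ_0.  The Heisenberg term used as the example h is my arbitrary choice, not one from the paper.

```python
import numpy as np

I2 = np.eye(2)
X = np.array([[0, 1], [1, 0]])
Y = np.array([[0, -1j], [1j, 0]])
Z = np.array([[1, 0], [0, -1]])
PAULIS = [I2, X, Y, Z]

def embed_pair(A, B, i, j, n):
    """Tensor product with A at site i, B at site j, and identity elsewhere."""
    ops = [np.eye(2)] * n
    ops[i], ops[j] = A, B
    out = np.array([[1.0 + 0j]])
    for op in ops:
        out = np.kron(out, op)
    return out

def total_hamiltonian(h, L):
    """Translationally invariant H: the same 4x4 term h on every edge of an LxL grid."""
    n = L * L
    # Expand h in the Pauli-tensor-Pauli basis: h = sum_{a,b} c_{ab} (P_a tensor P_b).
    coeffs = {}
    for a, Pa in enumerate(PAULIS):
        for b, Pb in enumerate(PAULIS):
            coef = np.trace(np.kron(Pa, Pb).conj().T @ h) / 4
            if abs(coef) > 1e-12:
                coeffs[(a, b)] = coef
    H = np.zeros((2 ** n, 2 ** n), dtype=complex)
    site = lambda r, c: r * L + c
    for r in range(L):
        for c in range(L):
            for r2, c2 in [(r, c + 1), (r + 1, c)]:      # right and down neighbors
                if r2 < L and c2 < L:
                    for (a, b), w in coeffs.items():
                        H += w * embed_pair(PAULIS[a], PAULIS[b], site(r, c), site(r2, c2), n)
    return H

# Example: an antiferromagnetic Heisenberg term, on a 2x2 grid (16-dim Hilbert space).
h = (np.kron(X, X) + np.kron(Y, Y) + np.kron(Z, Z)) / 4
eigs = np.linalg.eigvalsh(total_hamiltonian(h, L=2))
print("ground energy:", eigs[0], " gap (lambda_1 - lambda_0):", eigs[1] - eigs[0])
```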

Cubitt et al.’s theorem now says the following: for some fixed, constant local dimension d, there is no algorithm that takes as input the local Hamiltonian h (say, as a d^2×d^2 matrix of algebraic numbers), and that decides whether the material is gapped or gapless.  Indeed, you can reduce the halting problem to that problem, in such a way that the material will be gapped if your Turing machine halts, or gapless if it runs forever.

As an immediate corollary, there’s some 2D material—characterized by a translationally-invariant local Hamiltonian h on particles of local dimension d—such that whether the material is gapped or gapless is independent of the axioms of ZF set theory, or whatever else your favorite axioms might be.  (Proof: build a Turing machine M that halts if and only if it finds an inconsistency in set theory, then run Cubitt et al.’s reduction from the halting problem.  By Gödel, if set theory is consistent then it can’t prove whether M halts or not.)

Cubitt et al. never bother to work out the local dimension d that suffices for them, but it could be worked out, and it’s probably at least in the tens of thousands.  Thus, their result leaves open the possibility that there’s an algorithm to decide gaplessness for 2D lattices of qubits (i.e., the special case d=2), or other “reasonably low-dimensional” quantum systems.  We simply don’t know right now.  Another tantalizing open question is whether there’s an algorithm to decide gaplessness for one-dimensional spin chains—again, even in the special case d=2.  Right now, the best we have in that direction is a difficult recent result of Bravyi and Gosset, which gives an algorithm to decide gaplessness for one-dimensional, frustration-free chains of qubits.  (Here “frustration-free,” an amusing term that does not well describe this subject as a whole, means that you can minimize the energy H by minimizing the energies of each h_{i,j} individually.  Or, if you think of H as a SAT instance, it’s satisfiable.)

But while the exact value of d where uncomputability kicks in is still up for grabs, it’s extremely important that d is some fixed, universal constant, independent of the Turing machine.  Indeed, as Cubitt et al. point out in their paper, this is the only feature that makes their new result not a trivial corollary of the uncomputability of Wang tiling.  The latter is a famous result from 1966, which says that there’s no algorithm that takes as input a finite set of tiles, and that tells you whether, using unlimited copies of each tile, you could cover the entire plane (or equivalently, arbitrarily large finite regions of the plane).  I.e., this is yet another “natural” math problem that secretly encodes the halting problem.
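To see the decidable half of that story in code, here’s a small Python sketch (with made-up toy tile sets, not anything from the literature): tiling any fixed n×n square is decidable by brute-force search, and a compactness argument shows the whole plane is tileable iff every finite square is.  So you can semi-decide non-tileability by hunting for an n that fails; what the 1966 theorem rules out is any algorithm that always tells you which case you’re in.

```python
# Brute-force Wang tiler for a finite n x n region.
# A tile is (north, east, south, west) edge colors; touching edges must match.

def tiles_square(tiles, n):
    grid = [[None] * n for _ in range(n)]

    def place(k):
        if k == n * n:
            return True
        r, c = divmod(k, n)
        for t in tiles:
            if c > 0 and grid[r][c - 1][1] != t[3]:   # west edge vs. left neighbor's east
                continue
            if r > 0 and grid[r - 1][c][2] != t[0]:   # north edge vs. upper neighbor's south
                continue
            grid[r][c] = t
            if place(k + 1):
                return True
        grid[r][c] = None
        return False

    return place(0)

# Two toy tile sets (hypothetical examples):
uniform  = [(0, 0, 0, 0), (1, 1, 1, 1)]   # trivially tiles every square
mismatch = [(0, 1, 1, 0)]                 # a single tile that can't sit beside itself
for n in range(1, 5):
    print(n, tiles_square(uniform, n), tiles_square(mismatch, n))
```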

The fact that d is fixed also means that, in order to encode larger and larger Turing machines into the local Hamiltonian h (as you must, if you want to embed the halting problem), you need to use more and more bits of precision (!) in the ~d^4 real numbers that define h.  This then raises a question: how do you actually extract a description of a Turing machine from the binary expansions of the real numbers that define your Hamiltonian?  To do this, Cubitt et al. use Kitaev’s phase estimation algorithm—which, interestingly, is the only part of their construction that uses quantum mechanics in any way.  One thing that I’d love to understand better is whether the phase estimation is really essential here, or whether the analogous classical question, with the “Hamiltonian” given by a probability distribution over classical constraints, could also be proved to be undecidable for some fixed value of d—thereby showing that Cubitt et al.’s discovery had nothing to do with quantum mechanics.

(It’s possible that the answer to this is obvious; I didn’t think about it deeply.  Note that if the “classical Hamiltonian” is also deterministic, then the problem must be decidable for every fixed d, since there are only finitely many possible h’s, and we could cache all the answers in a lookup table.)
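Here’s a toy Python illustration, far simpler than the actual phase-estimation machinery, of the bits-of-precision point two paragraphs back: a fixed number of real parameters can smuggle in unboundedly much data, since a whole program can be packed into the binary expansion of a single matrix entry.

```python
# Pack a program's bits into the fractional part of one real number, then recover them.

def encode(bits):
    """Pack a bit string into a real number in [0, 1)."""
    return sum(b / 2 ** (k + 1) for k, b in enumerate(bits))

def decode(x, m):
    """Read back the first m bits of x's binary expansion."""
    out = []
    for _ in range(m):
        x *= 2
        bit = int(x)
        out.append(bit)
        x -= bit
    return out

program = [1, 0, 1, 1, 0, 0, 1]           # stand-in for a Turing machine description
x = encode(program)
print(decode(x, len(program)) == program) # True (floating-point precision permitting)
```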

Anyway, it’s now my professional duty, as the prickly, curmudgeonly blogger I am, to end the post by shooing you away from two tempting misinterpretations of the Cubitt et al. result.

First, the result does not say—or even suggest—that there’s any real, finite physical system whose behavior is Gödel- or Turing-undecidable.  Thus, it gives no support to speculations like Roger Penrose’s, about “hypercomputing” that would exceed the capabilities of Turing machines.  The reason, again, is that as soon as you fix a lattice size L, everything becomes computable.  The Cubitt et al. result applies only to questions about the limiting behavior, as the number of particles goes to infinity.  But we already knew lots of examples of physical systems for which predicting their behavior in some infinite limit is at least as hard as the halting problem: for instance, the Wang tiles discussed earlier, or Post rewrite systems, or even Turing machines themselves.  Local Hamiltonians are a profound, nontrivial addition to that list—one that will be particularly striking to physicists, many of whom calculate the spectral gaps of at least 50 Hamiltonians between dinner and dessert.  But in some sense, there was no a-priori reason why a problem this general, about physical systems of unbounded size, ought to have been computable.

Second, the result does not say that any particular question physicists want an answer to—for example, the million-dollar Yang-Mills mass gap problem—is Gödel-undecidable.  “All it says” is that the possibility that some real-world question of that kind could be undecidable isn’t totally closed off.  The Nature News piece stresses this latter implication a lot—as, admittedly, do Cubitt et al. themselves.  But to put things in perspective: four logicians proved around 1970 that there’s no algorithm to decide whether an arbitrary polynomial equation has an integer solution, thereby giving a negative solution to Hilbert’s Tenth Problem.  Yet with few exceptions, “working number theorists” barely even noticed this development, nor was (say) Andrew Wiles dissuaded from proving Fermat’s Last Theorem by the absence of a general algorithm to do things like what he was trying to do.  (Indeed, the absence of a general algorithm was shown even earlier for equations like FLT, which have variables in the exponent.)  So I doubt the mathematical physicists who calculate spectral gaps for a living will be any more terrified than the number theorists were, to learn that they’ve been laboring their entire lives on the shores of the halting problem.  “Good for us, then!” they could rightly reply.  “Maybe our jobs won’t be so easy to automate.”

Update (Dec. 20): My colleague Seth Lloyd calls my attention to a PRL paper of his from 1993, which also discusses the construction of physical systems that are gapped if a given Turing machine halts and gapless if it runs forever.  So this basic idea has been around for a while.  As I explained in the post, the main contribution of the Cubitt et al. paper is just to get undecidability into “the sort of system physicists could plausibly care about” (or for which they could’ve plausibly hoped for an analytic solution): in this case, 2D translationally-invariant nearest-neighbor Hamiltonians with bounded local dimension.

Google, D-Wave, and the case of the factor-10^8 speedup for WHAT?

December 9th, 2015

Update (Dec. 16):  If you’re still following this, please check out an important comment by Alex Selby, the discoverer of Selby’s algorithm, which I discussed in the post.  Selby queries a few points in the Google paper: among other things, he disagrees with their explanation of why his classical algorithm works so well on D-Wave’s Chimera graph (and with their prediction that it should stop working for larger graphs), and he explains that Karmarkar-Karp is not the best known classical algorithm for the Number Partitioning problem.  He also questions whether simulated annealing is the benchmark against which everything should be compared (on the grounds that “everything else requires fine-tuning”), pointing out that SA itself typically requires lots of tuning to get it to work well.

Update (Dec. 11): MIT News now has a Q&A with me about the new Google paper. I’m really happy with how the Q&A turned out; people who had trouble understanding this blog post might find the Q&A easier. Thanks very much to Larry Hardesty for arranging it.

Meanwhile, I feel good that there seems to have been actual progress in the D-Wave debate! In previous rounds, I had disagreed vehemently with some of my MIT colleagues (like Ed Farhi and Peter Shor) about the best way to respond to D-Wave’s announcements. Today, though, at our weekly group meeting, there was almost no daylight between any of us. Partly, I’m sure, it’s that I’ve learned to express myself better; partly it’s that the “trigger” this time was a serious research paper by a group separate from D-Wave, rather than some trash-talking statement from Geordie Rose. But mostly it’s that, thanks to the Google group’s careful investigations, this time pretty much anyone who knows anything agrees about all the basic facts, as I laid them out in this blog post and in the Q&A. All that remains are some small differences in emotional attitude: e.g., how much of your time do you want to spend on a speculative, “dirty” approach to quantum computing (which is far ahead of everyone else in terms of engineering and systems integration, but which still shows no signs of an asymptotic speedup over the best classical algorithms, which is pretty unsurprising given theoretical expectations), at a time when the “clean” approaches might finally be closing in on the long-sought asymptotic quantum speedup?

Another Update: Daniel Lidar was nice enough to email me an important observation, and to give me permission to share it here.  Namely, the D-Wave 2X has a minimum annealing time of 20 microseconds.  Because of this, the observed running times for small instance sizes are artificially forced upward, making the growth rate in the machine’s running time look milder than it really is.  (Regular readers might remember that exactly the same issue plagued previous D-Wave vs. classical performance comparisons.)  Correcting this would certainly decrease the D-Wave 2X’s predicted speedup over simulated annealing, in extrapolations to larger numbers of qubits than have been tested so far (although Daniel doesn’t know by how much).  Daniel stresses that he’s not criticizing the Google paper, which explicitly mentions the minimum annealing time—just calling attention to something that deserves emphasis.
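For intuition, here’s a tiny NumPy sketch of the effect Daniel is describing.  The exponential “true” scaling below is invented purely for illustration (not fit to any real data); the point is just that a 20-microsecond floor flattens the measured slope at small sizes.

```python
import numpy as np

FLOOR = 20e-6                                 # minimum annealing time, seconds
sizes = np.arange(100, 1001, 100)             # hypothetical problem sizes
true_time = 1e-9 * np.exp(0.02 * sizes)       # made-up "true" scaling ~ exp(0.02 n)
measured = np.maximum(true_time, FLOOR)       # what a machine with the floor reports

# Fit log(time) vs. n on the small-size region, where the floor dominates:
small = sizes <= 500
slope_true = np.polyfit(sizes[small], np.log(true_time[small]), 1)[0]
slope_meas = np.polyfit(sizes[small], np.log(measured[small]), 1)[0]
print(f"true exponent ~{slope_true:.4f} per unit size, measured ~{slope_meas:.4f}")
```

Extrapolating from the artificially flat small-size curve understates how steeply the running time will grow at larger sizes, which is exactly why correcting for the floor would shrink the predicted speedup.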


In retrospect, I should’ve been suspicious, when more than a year went by with no major D-Wave announcements that everyone wanted me to react to immediately. Could it really be that this debate was over—or not “over,” but where it always should’ve been, in the hands of experts who might disagree vehemently but are always careful to qualify speedup claims—thereby freeing up the erstwhile Chief D-Wave Skeptic for more “””rewarding””” projects, like charting a middle path through the Internet’s endless social justice wars?

Nope.

As many of you will have seen by now, on Monday a team at Google put out a major paper reporting new experiments on the D-Wave 2X machine.  (See also Hartmut Neven’s blog post about this.)  The predictable popularized version of the results—see for example here and here—is that the D-Wave 2X has now demonstrated a factor-of-100-million speedup over standard classical chips, thereby conclusively putting to rest the question of whether the device is “truly a quantum computer.”  In the comment sections of one of my previous posts, D-Wave investor Steve Jurvetson even tried to erect a victory stele, by quoting Karl Popper about falsification.

In situations like this, the first thing I do is turn to Matthias Troyer, who’s arguably the planet’s most balanced, knowledgeable, trustworthy interpreter of quantum annealing experiments. Happily, in collaboration with Ilia Zintchenko and Ethan Brown, Matthias was generous enough to write a clear 3-page document putting the new results into context, and to give me permission to share it on this blog. From a purely scientific standpoint, my post could end right here, with a link to their document.

Then again, from a purely scientific standpoint, the post could’ve ended even earlier, with the link to the Google paper itself!  For this is not a case where the paper hides some crucial issue that the skeptics then need to ferret out.  On the contrary, the paper’s authors include some of the most careful people in the business, and the paper explains the caveats as clearly as one could ask.  In some sense, then, all that’s left for me or Matthias to do is to tell you what you’d learn if you read the paper!

So, OK, has the D-Wave 2X demonstrated a factor-10^8 speedup or not?  Here’s the shortest answer that I think is non-misleading:

Yes, there’s a factor-10^8 speedup that looks clearly asymptotic in nature, and there’s also a factor-10^8 speedup over Quantum Monte Carlo. But the asymptotic speedup is only if you compare against simulated annealing, while the speedup over Quantum Monte Carlo is only constant-factor, not asymptotic. And in any case, both speedups disappear if you compare against other classical algorithms, like that of Alex Selby. Also, the constant-factor speedup probably has less to do with quantum mechanics than with the fact that D-Wave built extremely specialized hardware, which was then compared against a classical chip on the problem of simulating the specialized hardware itself (i.e., on Ising spin minimization instances with the topology of D-Wave’s Chimera graph). Thus, while there’s been genuine, interesting progress, it remains uncertain whether D-Wave’s approach will lead to speedups over the best known classical algorithms, let alone to speedups over the best known classical algorithms that are also asymptotic or also of practical importance. Indeed, all of these points also remain uncertain for quantum annealing as a whole.
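
Since everything turns on the difference between an asymptotic and a constant-factor speedup, here’s a toy numerical illustration in Python. The growth rates are invented purely for illustration; only the ~10^8 constant echoes the paper.

    import math

    def t_dwave(n):
        return math.exp(0.05 * n)        # hypothetical D-Wave runtime

    def t_sa(n):
        return math.exp(0.10 * n)        # simulated annealing: steeper exponent

    def t_qmc(n):
        return 1e8 * math.exp(0.05 * n)  # QMC: same exponent, huge constant

    for n in (200, 500, 1000):
        print(f"n={n}:  SA/D-Wave = {t_sa(n)/t_dwave(n):.2e},  "
              f"QMC/D-Wave = {t_qmc(n)/t_dwave(n):.2e}")
    # The SA ratio keeps growing with n (an asymptotic speedup); the QMC
    # ratio stays pinned at 1e8 (a constant-factor speedup).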

To expand a bit, there are really three separate results in the Google paper:

  1. The authors create Chimera instances with tall, thin energy barriers blocking the way to the global minimum, by exploiting the 8-qubit “clusters” that play such a central role in the Chimera graph.  In line with a 2002 theoretical prediction by Farhi, Goldstone, and Gutmann (a prediction we’ve often discussed on this blog), they then find that on these special instances, quantum annealing reaches the global minimum exponentially faster than classical simulated annealing, and that the D-Wave machine realizes this advantage.  As far as I’m concerned, this completely nails down the case for computationally-relevant collective quantum tunneling in the D-Wave machine, at least within the 8-qubit clusters.  On the other hand, the authors point out that there are other classical algorithms, like that of Selby (building on Hamze and de Freitas), which group together the 8-bit clusters into 256-valued mega-variables, and thereby get rid of the energy barrier that kills simulated annealing.  These classical algorithms are found empirically to outperform the D-Wave machine.  The authors also match the D-Wave machine’s asymptotic performance (though not the leading constant) using Quantum Monte Carlo, which (despite its name) is a classical algorithm often used to find quantum-mechanical ground states.
  2. The authors make a case that the ability to tunnel past tall, thin energy barriers—i.e., the central advantage that quantum annealing has been shown to have over classical annealing—might be relevant to at least some real-world optimization problems.  They do this by studying a classic NP-hard problem called Number Partitioning, where you’re given a list of N positive integers, and your goal is to partition the integers into two subsets whose sums differ from each other by as little as possible (for concreteness, see the little simulated-annealing sketch just after this list).  Through numerical studies on classical computers, they find that quantum annealing (in the ideal case) and Quantum Monte Carlo should both outperform simulated annealing, by roughly equal amounts, on random instances of Number Partitioning.  Note that this part of the paper doesn’t involve any experiments on the D-Wave machine itself, so we don’t know whether calibration errors, encoding loss, etc. will kill the theoretical advantage over simulated annealing.  But even if not, this still wouldn’t yield a “true quantum speedup,” since (again) Quantum Monte Carlo is a perfectly-good classical algorithm, whose asymptotics match those of quantum annealing on these instances.
  3. Finally, on the special Chimera instances with the tall, thin energy barriers, the authors find that the D-Wave 2X reaches the global optimum about 10^8 times faster than Quantum Monte Carlo running on a single-core classical computer.  But, extremely interestingly, they also find that this speedup does not grow with problem size; instead it simply saturates at ~10^8.  In other words, this is a constant-factor speedup rather than an asymptotic one.  Now, obviously, solving a problem “only” 100 million times faster (rather than asymptotically faster) can still have practical value!  But it’s crucial to remember that this constant-factor speedup is only observed for the Chimera instances—or in essence, for “the problem of simulating the D-Wave machine itself”!  If you wanted to solve something of practical importance, you’d first need to embed it into the Chimera graph, and it remains unclear whether any of the constant-factor speedup would survive that embedding.  In any case, while the paper isn’t explicit about this, I gather that the constant-factor speedup disappears when one compares against (e.g.) the Selby algorithm, rather than against QMC.
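
To make Number Partitioning concrete, here’s a minimal classical simulated-annealing sketch for it in Python. To be clear, this is my own toy illustration: the single-flip move set and linear cooling schedule are simplistic choices of mine, not the ones used in the paper’s numerical studies.

    import math, random

    def residue(nums, signs):
        """|difference between the two subset sums| for a given +/-1 assignment."""
        return abs(sum(s * x for s, x in zip(signs, nums)))

    def anneal(nums, steps=50000, t0=None):
        """Minimal simulated annealing for Number Partitioning (illustration only)."""
        t0 = t0 or max(nums)                       # crude initial temperature
        signs = [random.choice((-1, 1)) for _ in nums]
        cur = best = residue(nums, signs)
        for step in range(steps):
            temp = t0 * (1 - step / steps) + 1e-9  # simple linear cooling schedule
            i = random.randrange(len(nums))        # propose moving one integer
            signs[i] = -signs[i]
            new = residue(nums, signs)
            if new <= cur or random.random() < math.exp((cur - new) / temp):
                cur = new                          # accept (always, if downhill)
                best = min(best, cur)
            else:
                signs[i] = -signs[i]               # reject: undo the flip
        return best

    random.seed(0)
    nums = [random.randrange(1, 10**6) for _ in range(40)]
    print("best residue found:", anneal(nums))

Random instances like these are the kind the paper studies; their energy landscapes are believed to feature exactly the tall, thin barriers discussed above, which is what makes the problem a natural test case for tunneling.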

So then, what do I say to Steve Jurvetson?  I say—happily, not grudgingly!—that the new Google paper provides the clearest demonstration so far of a D-Wave device’s capabilities.  But then I remind him of all the worries the QC researchers had from the beginning about D-Wave’s whole approach: the absence of error-correction; the restriction to finite-temperature quantum annealing (moreover, using “stoquastic Hamiltonians”), for which we lack clear evidence for a quantum speedup; the rush for more qubits rather than better qubits.  And I say: not only do all these worries remain in force, they’ve been thrown into sharper relief than ever, now that many of the side issues have been dealt with.  The D-Wave 2X is a remarkable piece of engineering.  If it’s still not showing an asymptotic speedup over the best known classical algorithms—as the new Google paper clearly explains that it isn’t—then the reasons are not boring or trivial ones.  Rather, they seem related to fundamental design choices that D-Wave made over a decade ago.

The obvious question now is: can D-Wave improve its design, in order to get a speedup that’s asymptotic, and that holds against all classical algorithms (including QMC and Selby’s algorithm), and that survives the encoding of a “real-world” problem into the Chimera graph?  Well, maybe or maybe not.  The Google paper returns again and again to the subject of planned future improvements to the machine, and how they might clear the path to a “true” quantum speedup. Roughly speaking, if we rule out radical alterations to D-Wave’s approach, there are four main things one would want to try, to see if they helped:

  1. Lower temperatures (and thus, longer qubit lifetimes, and smaller spectral gaps that can be safely gotten across without jumping up to an excited state).
  2. Better calibration of the qubits and couplings (and thus, ability to encode a problem of interest, like the Number Partitioning problem mentioned earlier, to greater precision).
  3. The ability to apply “non-stoquastic” Hamiltonians.  (D-Wave’s existing machines are all limited to stoquastic Hamiltonians, defined as Hamiltonians all of whose off-diagonal entries are real and non-positive.  While stoquastic Hamiltonians are easier from an engineering standpoint, they’re also the easiest kind to simulate classically, using algorithms like QMC—so much so that there’s no consensus on whether it’s even theoretically possible to get a true quantum speedup using stoquastic quantum annealing.  This is a subject of active research; for what the stoquastic condition looks like concretely, see the sketch below.)
  4. Better connectivity among the qubits (thereby reducing the huge loss that comes from taking problems of practical interest, and encoding them in the Chimera graph).

(Note that “more qubits” is not on this list: if a “true quantum speedup” is possible at all with D-Wave’s approach, then the 1000+ qubits that they already have seem like more than enough to notice it.)
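
To see what the stoquastic condition in item 3 means in practice, here’s a little NumPy sketch of mine (not from the paper): it builds a small transverse-field Ising Hamiltonian, the kind of Hamiltonian D-Wave’s machines implement, and checks that every off-diagonal entry is real and non-positive.

    import numpy as np
    from functools import reduce

    I = np.eye(2)
    X = np.array([[0, 1], [1, 0]], dtype=float)
    Z = np.array([[1, 0], [0, -1]], dtype=float)

    def op(single, site, n):
        """Embed a single-qubit operator at `site` in an n-qubit system."""
        return reduce(np.kron, [single if i == site else I for i in range(n)])

    n, gamma = 3, 1.0
    rng = np.random.default_rng(0)
    J = rng.normal(size=(n, n))  # random Ising couplings (illustrative)

    # Transverse-field Ising Hamiltonian: diagonal ZZ part, off-diagonal X part.
    H = sum(-J[i, j] * op(Z, i, n) @ op(Z, j, n)
            for i in range(n) for j in range(i + 1, n))
    H += sum(-gamma * op(X, i, n) for i in range(n))

    # Stoquastic means: all off-diagonal entries are real and non-positive.
    off_diag = H - np.diag(np.diag(H))
    print("stoquastic:", np.all(np.isreal(off_diag)) and np.all(off_diag <= 1e-12))

Flip the sign of the transverse field, or add certain XX couplings, and some off-diagonal entries turn positive; that’s when QMC-style classical simulation generically runs into the “sign problem,” which is roughly why non-stoquastic Hamiltonians are believed to be harder to simulate classically.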

Anyway, these are all, of course, things D-Wave knows about and will be working on in the near future. As well they should! But to repeat: even if D-Wave makes all four of these improvements, we still have no idea whether they’ll see a true, asymptotic, Selby-resistant, encoding-resistant quantum speedup. We just can’t say for sure that they won’t see one.

In the meantime, while it’s sometimes easy to forget during blog-discussions, the field of experimental quantum computing is a proper superset of D-Wave, and things have gotten tremendously more exciting on many fronts within the last year or two.  In particular, the group of John Martinis at Google (Martinis is one of the coauthors of the Google paper) now has superconducting qubits with orders of magnitude better coherence times than D-Wave’s qubits, and has demonstrated rudimentary quantum error-correction on 9 of them.  They’re now talking about scaling up to ~40 super-high-quality qubits with controllable couplings—not in the remote future, but in, like, the next few years.  If and when they achieve that, I’m extremely optimistic that they’ll be able to show a clear quantum advantage for something (e.g., some BosonSampling-like sampling task), if not necessarily something of practical importance.  IBM Yorktown Heights, which I visited last week, is also working (with IARPA funding) on integrating superconducting qubits with many-microsecond coherence times.  Meanwhile, some of the top ion-trap groups, like Chris Monroe’s at the University of Maryland, are talking similarly big about what they expect to be able to do soon. The “academic approach” to QC—which one could summarize as “understand the qubits, control them, keep them alive, and only then try to scale them up”—is finally bearing some juicy fruit.

(At last week’s IBM conference, there was plenty of D-Wave discussion; how could there not be? But the physicists in attendance—I was almost the only computer scientist there—seemed much more interested in approaches that aim for longer-lasting qubits, fault-tolerance, and a clear asymptotic speedup.)

I still have no idea when and if we’ll have a practical, universal, fault-tolerant QC, capable of factoring 10,000-digit numbers and so on.  But it’s now looking like only a matter of years until Gil Kalai, and the other quantum computing skeptics, will be forced to admit they were wrong—which was always the main application I cared about anyway!

So yeah, it’s a heady time for QC, with many things coming together faster than I’d expected (then again, it was always my personal rule to err on the side of caution, and thereby avoid contributing to runaway spirals of hype).  As we stagger ahead into this new world of computing—bravely, coherently, hopefully non-stoquastically, possibly fault-tolerantly—my goal on this blog will remain what it’s been for a decade: not to prognosticate, not to pick winners, but merely to try to understand and explain what has and hasn’t already been shown.


Update (Dec. 10): Some readers might be interested in an economic analysis of the D-Wave speedup by commenter Carl Shulman.

Another Update: Since apparently some people didn’t understand this post, here are some comments from a Y-Combinator thread about the post that might be helpful:

(1) [T]he conclusion of the Google paper is that we have probable evidence that, with enough qubits and a big enough problem, it will be faster on a very specific problem than a non-optimal classical algorithm (we have classical algorithms that are for sure better).

This probably sounds like a somewhat useless result (quantum computer beats B-team classical algorithm), but it is in fact interesting because D-Wave’s computers are designed to perform quantum annealing, and they are comparing it to simulated annealing (the somewhat analogous classical algorithm). However, they only found evidence of a constant speedup (i.e., one that 4000 qubits wouldn’t help with), though a large one, compared to a somewhat better algorithm (Quantum Monte Carlo, which ironically is not a quantum algorithm), and they still can’t beat an even better classical algorithm (Selby’s) at all, even in a way that won’t scale.

Scott’s central thesis is that although it is possible there could be a turning point past 2000 qubits where the D-Wave will beat our best classical alternative, none of the data collected so far suggests that. So it’s possible that a 4000 qubit D-Wave machine will exhibit this trend, but there is no evidence of it (yet) from examining a 2000 qubit machine. Scott’s central gripe with D-Wave’s approach is that they don’t have any even pie-in-the-sky theoretical reason to expect this to happen, and scaling up quantum computers without breaking the entire process is much harder than for classical computers so making them even bigger doesn’t seem like a solution.

(2) D-Wave machines are NOT gate-model quantum computers; D-Wave calls them quantum annealing machines. The complexity class of problems that quantum annealing machines can solve efficiently is not known, nor is it known whether that class is equivalent to what classical machines can do.

The result shows that the D-Wave machine is asymptotically faster than the Simulated Annealing algorithm (yay!), which suggests that it is executing the Quantum Annealing algorithm. However, the paper also explicitly states that this does not mean that the D-Wave machine is exhibiting a ‘quantum speedup’. To show that, they would need the machine to outperform the best known classical algorithm, which, as the paper acknowledges, it does not.

What the paper does seem to be showing is that the machine in question is actually fundamentally quantum in nature; it’s just not clear yet that the kind of quantum computer it is represents an improvement over classical ones.

(3) [I]t isn’t called out in the linked blog post, since by now Scott probably considers it basic background information, but D-Wave only solves one very particular problem, and it’s not entirely clear either that it solves that problem better than a classical algorithm can, or that encoding real problems into that problem won’t end up costing you all of the gains. Really pragmatic applications are still a ways into the future. It’s hard to imagine what they might be when we’re still so early in the process, and still have no good idea what either the practical or theoretical limits are.

(4) The popular perception of quantum computers as “doing things in parallel” is very misleading. A quantum computer lets you perform computation on a superposed state while maintaining that superposition. But that only helps if the structure of the problem lets you somehow “cancel out” the incorrect results leaving you with the single correct one. [There’s hope for the world! –SA]
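
To make comment (4) concrete, here’s about the smallest possible worked example, a toy NumPy sketch of mine: apply a Hadamard gate to |0⟩ and you get an equal superposition over |0⟩ and |1⟩; apply it again, and the two computational paths leading to |1⟩ cancel, returning |0⟩ with certainty. Interference, not “trying everything in parallel,” is what a quantum algorithm has to exploit.

    import numpy as np

    H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)  # the Hadamard gate
    ket0 = np.array([1.0, 0.0])                   # the state |0>

    after_one = H @ ket0        # equal superposition: amplitudes (0.707..., 0.707...)
    after_two = H @ after_one   # the two paths to |1> interfere destructively

    print("after one H:", after_one)   # [0.707  0.707]
    print("after two H:", after_two)   # [1.  0.]  back to |0> with certainty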