Archive for the ‘Announcements’ Category

Two announcements

Saturday, April 7th, 2018

Before my next main course comes out of the oven, I bring you two palate-cleansing appetizers:

  1. My childhood best friend Alex Halderman, whose heroic exploits helping to secure the world’s voting systems have often been featured on this blog, now has a beautifully produced video for the New York Times, entitled “I Hacked An Election.  So Can The Russians.”  Here Alex lays out the case for an audited paper trail—i.e., for what the world’s cybersecurity experts have been unanimously flailing their arms about for two decades—in terms so simple and vivid that even Congresspeople should be able to understand them.  Please consider sharing the video if you support this important cause.
  2. Jakob Nordstrom asked me to advertise the 5th Swedish Summer School in Computer Science, to be held August 5-11, 2018, in the beautiful Stockholm archipelago at Djurönäset.  This year the focus is on quantum computing, and the lecturers are two of my favorite people in the entire field: Ronald de Wolf (giving a broad intro to QC) and Oded Regev (lecturing on post-quantum cryptography).  The school is mainly for PhD students, but is also open to masters students, postdocs, and faculty.  If you wanted to spend one week getting up to speed on quantum, it’s hard for me to imagine that you’d find any opportunity more excellent.  The application deadline is April 20, so apply now if you’re interested!

Three updates

Monday, February 5th, 2018
  1. I was extremely sorry to learn about the loss of Joe Polchinski, a few days ago, to brain cancer.  Joe was a leading string theorist, one of the four co-discoverers of the AMPS firewall paradox, and one of the major figures in the Simons It from Qubit collaboration that I’ve been happy to be part of since its inception.  I regret that I didn’t get to know Joe as well as I should have, but he was kind to me in all of our interactions.  He’ll be missed by all who knew him.
  2. Edge has posted what will apparently be its final Annual Edge Question: “What is the last question?”  They asked people to submit just a single one-sentence question “for which they’ll be remembered,” with no further explanation or elaboration.  You can read mine, which not surprisingly is alphabetically the first.  I tried to devise a single question that gestured toward the P vs. NP problem, and the ultimate physical limits of computation, and the prospects for superintelligent AI, and the enormity of what could be Platonically lying in wait for us within finite but exponentially large search spaces, and the eternal nerd’s conundrum of the ability to get the right answers to clearly-stated questions being so ineffectual in the actual world.  I’m not thrilled with the result, but reading through the other questions makes it clear just how challenging it is to ask something that doesn’t boil down to: “When will the rest of the world recognize the importance of my research topic?”
  3. I’m now reaping the fruits of my decision to take a year-long sabbatical from talking to journalists.  Ariel Bleicher, a writer for Quanta magazine, asked to interview me for an article she was writing about the difficulty of establishing quantum supremacy.  I demurred, mentioning my sabbatical, and pointed her to others she could ask instead.  Well, last week the article came out, and while much of it is quite good, it opens with an extended presentation of a forehead-bangingly wrong claim by Cristian Calude: namely, that the Deutsch-Jozsa problem (i.e. computing the parity of two bits) can be solved with one query even by a classical algorithm, so that (in effect) one of the central examples used in introductory quantum computing courses is a lie.  This claim is based on a 2006 paper wherein, with all the benefits of theft over honest toil, Calude changes the query model so that you can evaluate not just the original oracle function f, but an extension of f to the complex numbers (!).  Apparently Calude justifies this by saying that Deutsch also changed the problem, by allowing it to be solved with a quantum computer, so he gets to change the problem as well.  The difference, of course, is that the quantum query complexity model is justified by its relevance for quantum algorithms, and (ultimately) by quantum mechanics being true of our world.  Calude’s model, by contrast, is (as far as I can tell) pulled out of thin air and justified by nothing.  Anyway, I regard this incident as entirely, 100% my fault, and 0% Ariel’s.  How was she to know that, while there are hundreds of knowledgeable quantum computing experts to interview, almost all of them are nice and polite?  Anyway, this has led me to a revised policy: while I’ll still decline interviews, news organizations should feel free to run completed quantum computing pieces by me for quick fact checks.
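For readers who want to see what’s actually at stake in the standard query model: the Deutsch problem asks for f(0)⊕f(1) given oracle access to f, and Deutsch’s algorithm extracts it with a single quantum query, versus the two classical queries obviously required. Here’s a from-scratch statevector sketch of that algorithm (my own illustration, not code from Calude’s paper or the Quanta article):

```python
import math

H = 1 / math.sqrt(2)

def apply_hadamard(state, qubit):
    """Apply a Hadamard gate to one qubit of a 2-qubit state vector.
    Basis index is 2*x + y for |x>|y>; qubit 0 is x, qubit 1 is y."""
    out = [0.0] * 4
    shift = 1 - qubit
    for i, amp in enumerate(state):
        bit = (i >> shift) & 1
        lo, hi = i & ~(1 << shift), i | (1 << shift)
        out[lo] += H * amp
        out[hi] += H * amp * (-1 if bit else 1)
    return out

def apply_oracle(state, f):
    """One oracle query: |x>|y> -> |x>|y XOR f(x)>."""
    out = [0.0] * 4
    for i, amp in enumerate(state):
        x, y = i >> 1, i & 1
        out[(x << 1) | (y ^ f(x))] += amp
    return out

def deutsch(f):
    """Return f(0) XOR f(1) using exactly one query to f."""
    state = [0.0, 1.0, 0.0, 0.0]       # start in |0>|1>
    state = apply_hadamard(state, 0)
    state = apply_hadamard(state, 1)
    state = apply_oracle(state, f)      # the single query
    state = apply_hadamard(state, 0)
    # Measuring the first qubit now yields f(0) XOR f(1) with certainty.
    prob_one = state[2] ** 2 + state[3] ** 2
    return round(prob_one)

for f in [lambda x: 0, lambda x: 1, lambda x: x, lambda x: 1 - x]:
    assert deutsch(f) == f(0) ^ f(1)
```

The point of contention: the oracle above can only be queried on the bits 0 and 1, which is what makes the one-query quantum result meaningful. Extending f to complex-valued inputs changes the problem itself.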

Classifieds thread

Sunday, December 24th, 2017

In addition to the emails from journalists, I also get a large number of emails seeking interactions with me—a discussion of cryptocurrencies, help in planning a political campaign, whatever—that could probably be had just as well, or better, with some other reader of this blog.  So inspired by Slate Star Codex, my lodestar of blog-greatness, I’ve decided to host Shtetl-Optimized’s first-ever classifieds thread.  This is your place to post any announcement, ad, offer, proposal, etc. that you think would be of particular interest to fellow Shtetl-Optimized readers.  As usual, I reserve the right to remove anything too spammy or otherwise unsuitable (“C@$H 4 G0LD!!!”), but will generally be pretty permissive.

Oh yes: Merry Christmas to those who celebrate it, from a spot roughly equal driving distance (about an hour and 20 minutes) from Nazareth and Bethlehem!


Update: OK, let me start the ball rolling, or rather the photon propagating. Reader Piotr Migdal wrote to tell me about a quantum optics puzzle game that he created. I tried it and it’s excellent, and best of all, clear: unlike virtually every other “quantum game” I’ve tried, it took me only a minute to figure this one out. (Admittedly, it’s less of a quantum game than an “optics game,” in the sense that the effects it teaches about also appear with laser beams and other many-photon coherent states, which you don’t really need QM for, even though QM provides their ultimate explanation. But whatever: it’s fun!) Piotr has lots of other great stuff on his website.

Journalist moratorium

Sunday, December 17th, 2017

For over a decade, one of the main ways I’ve tried to advance the cause of Enlightenment has been talking to journalists writing popular articles on quantum computing (or P vs. NP, or the universe as a computer simulation, or whatever).  Because of my blog, journalists knew how to reach me, and because I’m a ham, I always agreed to be interviewed.  Well, I told myself I was doing it as my way of giving back to the field, so that my smarter colleagues would have more time for research.

Unfortunately, this task has sort of taken over my life.  It used to be once a month, then it became once a week, and by now it’s pretty much every day.  Comment on this claim by IBM, that press release by Rigetti, this embargoed Nature paper by a group in Australia.  And when you do, it would be great if you could address this itemized list of 12 questions, with more questions coming later depending on what the editor needs.

On Friday we were on a family outing, with Dana driving and me in the front passenger seat, typing out another reply to a journalist on my phone.  Because of my engrossment in my Enlightenment duties, I neglected to tell Dana where the exit was, which then made us a half hour late for a scheduled museum tour and nearly ruined the day.

So then and there, I swore an oath to my family: that from now until January 1, 2019, I will be on vacation from talking to journalists.  This is my New Year’s resolution, except that it starts slightly before New Year’s.  Exceptions can be made when and if there’s a serious claim to have achieved quantum computational supremacy, or in other special cases.  By and large, though, I’ll simply be pointing journalists to this post, as a public commitment device to help me keep my oath.

I should add that I really like almost all of the journalists I talk to, I genuinely want to help them, and I appreciate the extreme difficulty that they’re up against: of writing a quantum computing article that avoids the Exponential Parallelism Fallacy and the “n qubits = 2^n bits” fallacy and passes the Minus Sign Test, yet also satisfies an editor for whom even the so-dumbed-down-you-rip-your-hair-out version was already too technical.  And things have gotten both more exciting and more confusing in the last few years, with even the experts disagreeing about what should count as a “real quantum speedup,” or how much we should expect quantum computers to help with optimization or machine learning problems.  And of course, if journalists are trying to sort this out, then they should talk to someone who knows a bit about it, and I lack the strategic false modesty to deny being such a person.  Like, someone who calls me to fact-check a quantum computing piece should be rewarded for having done something right!  Alas, these considerations are how I let talking to journalists take over my life, so I can no longer treat them as dispositive.

For journalists looking for what to do, my suggestion is to talk to literally anyone else in the field.  E.g., look at the speakers from the past 20 years of QIP conferences—pretty much any of them could answer quantum computing questions as well as I can!  I’m tempted to name one or two specific colleagues to whom everyone should direct all their inquiries for the next year, but I can’t think of anyone I hate enough.


Unrelated Update: There’s at least one striking respect in which a human baby is like a dog, cat, or other domesticated animal. Namely, these are entities for which you can look into their eyes, and wonder whether they have any awareness whatsoever of the most basic facts of their situation. E.g., do they “know” which individual person is looking at them? Whether it’s morning or night? Which room they’re currently in? And yet, as soon as it comes to the entity’s food sources, all these doubts vanish. Yes, the baby / dog / cat clearly does understand exactly which person is supposed to feed it, and at what time of day, and often even where the food is stored. Implications for the mind/body problem (mind/stomach problem?) are left as exercises for the reader.


Unrelated Update #2: As many of you have probably seen, the cruel and monstrous tax bill awaits only Twitler’s signature, but at least the PhD student tuition tax was taken out, so American higher education lives another day. So, does this mean academics’ apoplectic fears were overblown? No, because public opposition, based on widely disseminated information about what the new tax would do to higher education, probably played an important role in causing the provision to be removed. Keep up the fight.

ITCS’2018 and more

Wednesday, December 13th, 2017

My good friend Yael Tauman Kalai asked me to share the following announcement (which is the only part of this post that she’s responsible for):

Dear Colleagues,

We are writing to draw your attention to the upcoming ITCS (Innovations in Theoretical Computer Science) conference, which will be held in Cambridge, Massachusetts, USA from January 11-14, 2018, with a welcome reception on January 11, 2018 at the Marriott Hotel in Kendall Square.  Note that the conference will run for 4 full days (Thursday–Sunday).

The deadlines for both early registration and the hotel block are December 21, 2017.

ITCS has a long tradition of holding a “graduating bits” event where graduating students and postdocs give a short presentation about their work. If you fit the bill, consider signing up — this is a great chance to showcase your work and it’s just plain fun. Graduating bits will take place on Friday, January 12 at 6:30pm.

In addition, we will have an evening poster session at the Marriott hotel on Thursday, January 11 from 6:30-8pm (co-located with the conference reception).

For details on all this and information on how to sign up, please check out the ITCS website:  https://projects.csail.mit.edu/itcs/


In unrelated news, apologies that my entire website was down for a day! After noticing that my blog was often taking me like two minutes to load (!), I upgraded to a supposedly faster Bluehost plan. Let me know if you notice any difference in performance.


In more unrelated news, congratulations to the people of Alabama for not only rejecting the medieval molester (barely), but—as it happens—electing a far better Senator than the President that the US as a whole was able to produce.


One last update: my cousin Alix Genter—who was previously in the national news (and my blog) for a bridal store’s refusal to sell her a dress for a same-sex wedding—recently started a freelance academic editing business. Alix writes to me:

I work with scholars (including non-native English speakers) who have difficulty writing on diverse projects, from graduate work to professional publications. Although I have more expertise in historical writing and topics within gender/sexuality studies, I am interested in scholarship throughout the humanities and qualitative social sciences.

If you’re interested, you can visit Alix’s website here. She’s my cousin, so I’m not totally unbiased, but I recommend her highly.


OK, one last last update: my friend Dmitri Maslov, at the National Science Foundation, has asked me to share the following.

NSF has recently posted a new Dear Colleague Letter (DCL) inviting proposal submissions under the RAISE mechanism, https://www.nsf.gov/pubs/2018/nsf18035/nsf18035.jsp.  Interdisciplinarity is key to this new DCL.  The proposals can be for up to $1,000,000 total.  To apply, groups of PIs should contact cognizant Program Directors from at least three of the following NSF divisions/offices: DMR, PHY, CHE, DMS, ECCS, CCF, and OAC, and submit a whitepaper by February 16, 2018.  It is a somewhat unusual call for proposals in this respect.  I would like the Computer Science community to actively participate in this call, because I believe there may be a lot of value in collaborations breaking the boundaries of the individual disciplines.

Quickies

Monday, December 4th, 2017

Updates (Dec. 5): The US Supreme Court has upheld Trump’s latest travel ban. I’m grateful to all the lawyers who have thrown themselves in front of the train of fascism, desperately trying to slow it down—but I could never, ever have been a lawyer myself. Law is fundamentally a make-believe discipline. Sure, there are times when it involves reason and justice, possibly even resembles mathematics—but then there are times when the only legally correct thing to say is, “I guess that, contrary to what I thought, the Establishment Clause of the First Amendment does let you run for president promising to discriminate against a particular religious group, and then find a pretext under which to do it. The people with the power to decide that question have decided it.” I imagine that I’d last about half a day before tearing up my law-school diploma in disgust, which is surely a personality flaw on my part.

In happier news, many of you may have seen that papers by the groups of Chris Monroe and of Misha Lukin, reporting ~50-qubit experiments with trapped ions and optical lattices respectively, have been published back-to-back in Nature. (See here and here for popular summaries.) As far as I can tell, these papers represent an important step along the road to a clear quantum supremacy demonstration. Ideally, one wants a device to solve a well-defined computational problem (possibly a sampling problem), and also highly-optimized classical algorithms for solving the same problem and for simulating the device, which both let one benchmark the device’s performance and verify that the device is solving the problem correctly. But in a curious convergence, the work of the Monroe and Lukin groups suggests that this can probably be achieved with trapped ions and/or optical lattices at around the same time that Google and IBM are closing in on the goal with superconducting circuits.


As everyone knows, the flaming garbage fire of a tax bill has passed the Senate, thanks to the spinelessness of John McCain, Lisa Murkowski, Susan Collins, and Jeff Flake.  The fate of American higher education will now be decided behind closed doors, in the technical process of “reconciling” the House bill (which includes the crippling new tax on PhD students) with the Senate bill (which doesn’t—that one merely guts a hundred other things).  It’s hard to imagine that this particular line item will occasion more than about 30 seconds of discussion.  But, I dunno, maybe calling your Senator or Representative could help.  Me, I left a voicemail message with the office of Texas Senator Ted Cruz, one that I’m confident Cruz and his staff will carefully consider.

Here’s talk show host Seth Meyers (scroll to 5:00-5:20):

“By 2027, half of all US households would pay more in taxes [under the new bill].  Oh my god.  Cutting taxes was the one thing Republicans were supposed to be good at.  What’s even the point of voting for a Republican if they’re going to raise your taxes?  That’s like tuning in to The Kardashians only to see Kourtney giving a TED talk on quantum computing.”


Speaking of which, you can listen to an interview with me about quantum computing, on a podcast called Data Skeptic. We discuss the basics and then the potential for quantum machine learning algorithms.


I got profoundly annoyed by an article called The Impossibility of Intelligence Explosion by François Chollet.  Citing the “No Free Lunch Theorem”—i.e., the (trivial) statement that you can’t outperform brute-force search on random instances of an optimization problem—to claim anything useful about the limits of AI is not a promising sign.  In this case, Chollet then goes on to argue that most intelligence doesn’t reside in individuals but rather in culture; that there are hard limits to intelligence and to its usefulness; that we know of those limits because people with stratospheric intelligence don’t achieve correspondingly extraordinary results in life [von Neumann? Newton? Einstein? –ed.]; and finally, that recursively self-improving intelligence is impossible because we, humans, don’t recursively improve ourselves.  Scattered throughout the essay are some valuable critiques, but nothing comes anywhere close to establishing the impossibility advertised in the title.  Like, there’s a standard in CS for what it takes to show something’s impossible, and Chollet doesn’t even reach the same galaxy as that standard.  The certainty that he exudes strikes me as wholly unwarranted, just as much as (say) the near-certainty of a Ray Kurzweil on the other side.
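Why is the No Free Lunch Theorem trivial? Because it can be verified at toy scale: averaged over all possible objective functions, every fixed deterministic search strategy makes the same expected number of queries. A tiny enumeration (my own illustration, not from Chollet’s essay):

```python
from itertools import product

def queries_to_find_one(f, order):
    """Queries a fixed deterministic search order makes before it sees a 1
    (it queries every point if f is identically zero)."""
    for k, i in enumerate(order, 1):
        if f[i]:
            return k
    return len(order)

fs = list(product([0, 1], repeat=4))     # all 16 objective functions on 4 points
orders = [(0, 1, 2, 3), (3, 1, 0, 2)]    # two different deterministic strategies
avgs = [sum(queries_to_find_one(f, o) for f in fs) / len(fs) for o in orders]

# Averaged over ALL functions, every strategy performs identically: 1.875 queries.
assert avgs[0] == avgs[1] == 30 / 16
```

The theorem says nothing once you restrict to structured, non-random instances, which is exactly why it tells us so little about real-world AI.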

I suppose this is as good a place as any to say that my views on AI risk have evolved.  A decade ago, it was far from obvious that known methods like deep learning and reinforcement learning, merely run with much faster computers and on much bigger datasets, would work as spectacularly well as they’ve turned out to work, on such a wide variety of problems, including beating all humans at Go without needing to be trained on any human game.  But now that we know these things, I think intellectual honesty requires updating on them.  And indeed, when I talk to the AI researchers whose expertise I trust the most, many, though not all, have updated in the direction of “maybe we should start worrying.”  (Related: Eliezer Yudkowsky’s There’s No Fire Alarm for Artificial General Intelligence.)

Who knows how much of the human cognitive fortress might fall to a few more orders of magnitude in processing power?  I don’t—not in the sense of “I basically know but am being coy,” but really in the sense of not knowing.

To be clear, I still think that by far the most urgent challenges facing humanity are things like: resisting Trump and the other forces of authoritarianism, slowing down and responding to climate change and ocean acidification, preventing a nuclear war, preserving what’s left of Enlightenment norms.  But I no longer put AI too far behind that other stuff.  If civilization manages not to destroy itself over the next century—a huge “if”—I now think it’s plausible that we’ll eventually confront questions about intelligences greater than ours: do we want to create them?  Can we even prevent their creation?  If they arise, can we ensure that they’ll show us more regard than we show chimps?  And while I don’t know how much we can say about such questions that’s useful, without way more experience with powerful AI than we have now, I’m glad that a few people are at least trying to say things.

But one more point: given the way civilization seems to be headed, I’m actually mildly in favor of superintelligences coming into being sooner rather than later.  Like, given the choice between a hypothetical paperclip maximizer destroying the galaxy, versus a delusional autocrat burning civilization to the ground while his supporters cheer him on and his opponents fight amongst themselves, I’m just about ready to take my chances with the AI.  Sure, superintelligence is scary, but superstupidity has already been given its chance and been found wanting.


Speaking of superintelligences, I strongly recommend an interview of Ed Witten by Quanta magazine’s Natalie Wolchover: one of the best interviews of Witten I’ve read.  Some of Witten’s pronouncements still tend toward the oracular—i.e., we’re uncovering facets of a magnificent new theoretical structure, but it’s almost impossible to say anything definite about it, because we’re still missing too many pieces—but in this interview, Witten does stick his neck out in some interesting ways.  In particular, he speculates (as Einstein also did, late in life) about whether physics should be reformulated without any continuous quantities.  And he reveals that he’s recently been rereading Wheeler’s old “It from Bit” essay, because: “I’m trying to learn about what people are trying to say with the phrase ‘it from qubit.'”


I’m happy to report that a group based mostly in Rome has carried out the first experimental demonstration of PAC-learning of quantum states, applying my 2006 “Quantum Occam’s Razor Theorem” to reconstruct optical states of up to 6 qubits.  Better yet, they insisted on adding me to their paper!


I was at Cornell all of last week to give the Messenger Lectures: six talks in all (!!), if you include the informal talks that I gave at student houses (including Telluride House, where I lived as a Cornell undergrad from 1998 to 2000).  The subjects were my usual beat (quantum computing, quantum supremacy, learnability of quantum states, firewalls and AdS/CFT, big numbers).  Intimidatingly, the Messenger Lectures are the series in which Richard Feynman presented The Character of Physical Law in 1964, and in which many others (Eddington, Oppenheimer, Pauling, Weinberg, …) set a standard that my crass humor couldn’t live up to in a trillion years.  Nevertheless, thanks so much to Paul Ginsparg for hosting my visit, and for making it both intellectually stimulating and a trip down memory lane, with meetings with many of the professors from way back when who helped to shape my thinking, including Bart Selman, Jon Kleinberg, and Lillian Lee.  Cornell is much as I remember it from half a lifetime ago, except that they must’ve made the slopes twice as steep, since I don’t recall so much huffing and puffing on my way to class each morning.

At one of the dinners, my hosts asked me about the challenges of writing a blog when people on social media might vilify you for what you say.  I remarked that it hasn’t been too bad lately—indeed that these days, to whatever extent I write anything ‘controversial,’ mostly it’s just inveighing against Trump.  “But that is scary!” someone remarked.  “You live in Texas now!  What if someone with a gun got angry at you?”  I replied that the prospect of enraging such a person doesn’t really keep me awake at night, because it seems like the worst they could do would be to shoot me.  By contrast, if I write something that angers leftists, they can do something far scarier: they can make me feel guilty!


I’ll be giving a CS colloquium at Georgia Tech today, then attending workshops in Princeton and NYC the rest of the week, so my commenting might be lighter than usual … but yours need not be.

Superposition your mouse over these five exciting QC links!

Friday, November 3rd, 2017

(1) My TEDx talk from Dresden, entitled “What Quantum Computing Isn’t,” is finally up on YouTube.  For regular Shtetl-Optimized readers, there’s unlikely to be much that’s new here: it’s basically 15 minutes of my usual spiel, packaged for mass consumption.  But while it went over well with the live audience, right now the only comment on the video is—I quote—“uuuuuuuuuuuuuuu,” from user “imbatman8472.”  So if you feel so inclined, go over there, watch it, and try to start a more contentful discussion!  Thanks so much to Andrés Goens, and everyone else in Dresden, for inviting me there and hosting a great visit.

(2) On December 4-6, there’s going to be a new conference in Mountain View, called Q2B (Quantum Computing for Business).  There, if it interests you, you can hear about the embryonic QC industry from some of the major players at Google, IBM, and Microsoft, as well as in academia and government, and from some of the QC startups (like IonQ) that have blossomed over the last few years.  Oh yes, and D-Wave.  The keynote speaker will be John Preskill; Google’s John Martinis and IBM’s Jerry Chow will also be giving talks.  I regret that another commitment will prevent me from attending myself, but I hope to attend next year’s iteration.  (Full disclosure: I’m a scientific adviser to QC Ware, the firm that’s organizing the conference.)

(3) On October 24, the House Science Committee heard three hours of testimony—you can watch it all here—about the need for quantum information research and the danger of the US falling behind China.  In what I believe is my first entry in the Congressional record, I’m quoted (for something totally incidental) at 1:09.  John Preskill was mostly just delighted that the witness, Jim Kurose, referred to me as a “physicist.”

(4) For several years, people have been asking me whether Bitcoin is resistant against quantum attack.  Now there’s finally an expert analysis, by Aggarwal et al., that looks into exactly that question.  Two-sentence summary: the proof-of-work is probably fine, although Grover’s algorithm can of course be used against it, which might eventually necessitate adjusting the difficulty parameter to account for that, and/or migrating from a pure preimage search task to collision-finding, where my result with Yaoyun Shi showed that quantum computers offer “only” an n^{2/3} black-box speedup over classical computers, rather than a square-root speedup.  The scheme for signing the transactions, which is currently based on elliptic curve cryptography, is the real danger point, but again one could address that by migrating to a post-quantum signature scheme.  My main comment about the matter is that, if I’d invested in Bitcoin when I first learned about it, I’d be rich now.
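To put rough numbers on that summary, here are the black-box query counts involved (my own illustrative arithmetic, ignoring constant factors and real-world costs such as Grover’s poor parallelizability):

```python
# Idealized black-box query counts for attacking an n-bit hash function.
n = 256                                    # e.g. SHA-256, used in Bitcoin's proof-of-work
classical_preimage  = 2 ** n               # brute-force preimage search
grover_preimage     = 2 ** (n // 2)        # Grover: square-root speedup
classical_collision = 2 ** (n // 2)        # birthday bound
quantum_collision   = 2 ** (n // 3)        # BHT algorithm; optimal by Aaronson-Shi

# Grover's square-root speedup for preimages:
assert grover_preimage ** 2 == classical_preimage
# For collisions, the quantum exponent is only 2/3 of the classical one
# (2^85 vs. 2^128 here): a far more modest gain than Grover's square root.
assert quantum_collision < classical_collision < classical_preimage
```

This is why migrating the proof-of-work toward collision-finding shrinks the quantum advantage, while the elliptic-curve signatures, which Shor’s algorithm breaks outright in polynomial time, are where the real danger lies.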

(5) In the first significant victory for my plan to spend a whole sabbatical year just writing up unwritten papers, I’ve got a new paper out today: Shadow Tomography of Quantum States.  Comments extremely welcome!

Grad students and postdocs and faculty sought

Saturday, October 28th, 2017

I’m eagerly seeking PhD students and postdocs to join our Quantum Information Center at UT Austin, starting in Fall 2018.  We’re open to any theoretical aspects of quantum information, although if you wanted to work with me personally, then areas close to computer science would be the best fit.  I’m also able to supervise PhD students in physics, but am not directly involved with admissions to the physics department: this is a discussion we would have after you were already admitted to UT.

I, along with my theoretical computer science colleagues at UT Austin, am also open to outstanding students and postdocs in classical complexity theory. My wife, Dana Moshkovitz, tells me that she and David Zuckerman in particular are looking for a postdoc in the areas of pseudorandomness and derandomization (and for PhD students as well).

If you want to apply to the UTCS PhD program, please visit here.  The deadline is December 15.  If you specify that you want to work on quantum computing and information, and/or with me, then I’ll be sure to see your application.  Emailing faculty at this stage doesn’t help; we won’t “estimate your chances” or even look at your qualifications until we can see all the applications together.

If you want to apply for a postdoc with me, here’s what to do:

  • Email me introducing yourself (if I don’t already know you), and include your CV, your thesis (if you already have one), and up to 3 representative papers.  Do this even if you already emailed me before.
  • Arrange for two recommendation letters to be emailed to me.

Let’s set a deadline for postdoc applications of, I dunno, December 15?

In addition to the above, I’m happy to announce that the UT CS department is looking to hire a new faculty member in quantum computing and information—most likely a junior person.  The UT physics department is also looking to hire quantum information faculty members, with a focus on a senior-level experimentalist right now.  If you’re interested in these opportunities, just email me; I can put you in touch with the relevant people.

All in all, this is shaping up to be the most exciting era for quantum computing and information in Austin since a group of UT students, postdocs, and faculty including David Deutsch, John Wheeler, Wojciech Zurek, Bill Wootters, and Ben Schumacher laid much of the intellectual foundation of the field in the late 1970s and early 1980s.  We hope you’ll join us.  Hook ’em Hadamards!


Unrelated Announcements: Avi Wigderson has released a remarkable 368-page book, Mathematics and Computation, for free on the web.  This document surveys pretty much the entire current scope of theoretical computer science, in a way only Avi, our field’s consummate generalist, could do.  It also sets out Avi’s vision for the future and his sociological thoughts about TCS and its interactions with neighboring fields.  I was a reviewer on the manuscript, and I recommend it to anyone looking for a panoramic view of TCS.

In other news, my UT friend and colleague Adam Klivans, and his student Surbhi Goel, have put out a preprint entitled Learning Depth-Three Neural Networks in Polynomial Time.  (Beware: what the machine learning community calls “depth three,” is what the TCS community would call “depth two.”)  This paper learns real-valued neural networks in the so-called p-concept model of Kearns and Schapire, and thereby evades a 2006 impossibility theorem of Klivans and Sherstov, which showed that efficiently learning depth-2 threshold circuits would require breaking cryptographic assumptions.  More broadly, there’s been a surge of work in the past couple years on explaining the success of deep learning methods (methods whose most recent high-profile victory was, of course, AlphaGo Zero).  I’m really hoping to learn more about this direction during my sabbatical this year—though I’ll try and take care not to become another deep learning zombie, chanting “artificial BRAINSSSS…” with outstretched arms.

2^n is exponential, but 2^50 is finite

Sunday, October 22nd, 2017

Unrelated Update (Oct. 23): I still feel bad that there was no time for public questions at my “Theoretically Speaking” talk in Berkeley, and also that the lecture hall was too small to accommodate a large fraction of the people who showed up. So, if you’re someone who came there wanting to ask me something, go ahead and ask in the comments of this post.


During my whirlwind tour of the Bay Area, questions started pouring in about a preprint from a group mostly at IBM Yorktown Heights, entitled Breaking the 49-Qubit Barrier in the Simulation of Quantum Circuits.  In particular, does this paper make a mockery of everything the upcoming quantum supremacy experiments will try to achieve, and all the theorems about them that we’ve proved?

Following my usual practice, let me paste the abstract here, so that we have the authors’ words in front of us, rather than what a friend of a friend said a popular article reported might have been in the paper.

With the current rate of progress in quantum computing technologies, 50-qubit systems will soon become a reality.  To assess, refine and advance the design and control of these devices, one needs a means to test and evaluate their fidelity. This in turn requires the capability of computing ideal quantum state amplitudes for devices of such sizes and larger.  In this study, we present a new approach for this task that significantly extends the boundaries of what can be classically computed.  We demonstrate our method by presenting results obtained from a calculation of the complete set of output amplitudes of a universal random circuit with depth 27 in a 2D lattice of 7×7 qubits.  We further present results obtained by calculating an arbitrarily selected slice of 2^37 amplitudes of a universal random circuit with depth 23 in a 2D lattice of 8×7 qubits.  Such calculations were previously thought to be impossible due to impracticable memory requirements. Using the methods presented in this paper, the above simulations required 4.5 and 3.0 TB of memory, respectively, to store calculations, which is well within the limits of existing classical computers.

This is an excellent paper, which sets a new record for the classical simulation of generic quantum circuits; I congratulate the authors for it.  Now, though, I want you to take a deep breath and repeat after me:

This paper does not undercut the rationale for quantum supremacy experiments.  The truth, ironically, is almost the opposite: the ability to simulate 49-qubit circuits classically is a precondition for Google’s planned quantum supremacy experiment, because it’s the only way we know to check such an experiment’s results!  The goal, with sampling-based quantum supremacy, was always to target the “sweet spot,” which we estimated at around 50 qubits, where classical simulation is still possible, but it’s clearly orders of magnitude more expensive than doing the experiment itself.  If you like, the goal is to get as far as you can up the mountain of exponentiality, conditioned on people still being able to see you from the base.  Why?  Because you can.  Because it’s there.  Because it challenges those who think quantum computing will never scale: explain this, punks!  But there’s no point unless you can verify the result.

Related to that, the paper does not refute any prediction I made, by doing anything I claimed was impossible.  On the contrary (if you must know), the paper confirms something that I predicted would be possible.  People said: “40 qubits is the practical limit of what you can simulate, so there’s no point in Google or anyone else doing a supremacy experiment with 49 qubits, since they can never verify the results.”  I would shrug and say something like: “eh, if you can do 40 qubits, then I’m sure you can do 50.  It’s only a thousand times harder!”

So, how does the paper get up to 50 qubits?  A lot of computing power and a lot of clever tricks, one of which (the irony thickens…) came from a paper that I recently coauthored with Lijie Chen: Complexity-Theoretic Foundations of Quantum Supremacy Experiments.  Lijie and I were interested in the question: what’s the best way to simulate a quantum circuit with n qubits and m gates?  We noticed that there’s a time/space tradeoff here: you could just store the entire amplitude vector in memory and update it gate by gate, which would take exp(n) memory but also “only” about exp(n) time.  Or you could compute the amplitudes you cared about via Feynman sums (as in the proof of BQP⊆PSPACE), which takes only linear memory, but exp(m) time.  If you imagine, let’s say, n=50 and m=1000, then exp(n) might be practical if you’re IBM or Google, but exp(m) is certainly not.
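To make the first option concrete, here’s a toy “store the whole amplitude vector” simulator in Python/NumPy.  It’s purely an illustrative sketch: the circuit (Hadamards on a 3-qubit register) and the tensor-reshaping trick are my own example, not anything from the IBM paper.

```python
import numpy as np

# Toy state-vector simulator: keep all 2^n complex amplitudes in
# memory, and treat each gate as a local update to that array.
H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)  # Hadamard gate

def apply_1q(state, gate, q, n):
    """Apply a single-qubit gate to qubit q of an n-qubit state vector."""
    psi = state.reshape([2] * n)           # view as an n-dimensional tensor
    psi = np.moveaxis(psi, q, 0)           # bring qubit q's axis to the front
    psi = np.tensordot(gate, psi, axes=1)  # contract the gate with that axis
    psi = np.moveaxis(psi, 0, q)           # restore the original axis order
    return psi.reshape(-1)

n = 3
state = np.zeros(2 ** n, dtype=complex)
state[0] = 1.0                             # start in |000>
for q in range(n):                         # Hadamard every qubit
    state = apply_1q(state, H, q, n)
# state is now the uniform superposition: every amplitude is 1/sqrt(8).
```

The point of the sketch is the resource count: the `state` array has 2^n entries, so memory is exp(n), while each gate sweeps that array once, so the total time is about m·exp(n).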

So then we raised the question: could one get the best of both worlds?  That is, could one simulate such a quantum circuit using both linear memory and exp(n) time?  And we showed that this is almost possible: we gave an algorithm that uses linear memory and d^O(n) time, where d is the circuit depth.  Furthermore, the more memory it has available, the faster our algorithm will run—until, in the limit of exponential memory, it just becomes the “store the whole amplitude vector” algorithm mentioned above.  I’m not sure why this algorithm wasn’t discovered earlier, especially since it basically just amounts to Savitch’s Theorem from complexity theory.  In any case, though, the IBM group used this idea among others to take full advantage of the RAM it had available.
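Here’s a minimal sketch of the Savitch-style recursion: to compute one amplitude ⟨y|C|x⟩ of a circuit C, split C into halves C = C2·C1 and sum over every intermediate basis state z, using ⟨y|C|x⟩ = Σ_z ⟨y|C2|z⟩⟨z|C1|x⟩, recursing on each half.  To keep the code short I represent each layer as a full matrix, which of course forfeits the memory savings; the real algorithm applies the identical halving trick to circuits of local gates.

```python
import numpy as np

def amplitude(layers, x, y):
    """Return <y| layers[-1] ... layers[0] |x> by recursive halving.

    Each recursion level multiplies the work by the number of basis
    states, giving d^O(n) time overall, but the memory footprint is
    just the recursion stack: poly(n, log d).
    """
    if len(layers) == 1:
        return layers[0][y, x]           # base case: one layer, one entry
    mid = len(layers) // 2
    dim = layers[0].shape[0]             # 2^n basis states
    return sum(amplitude(layers[mid:], z, y) *   # <y|C2|z>
               amplitude(layers[:mid], x, z)     # <z|C1|x>
               for z in range(dim))

H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
# Sanity check: H applied twice is the identity, so <0|HH|0> = 1.
```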

Let me make one final remark: this little episode perfectly illustrates why theoretical computer scientists like to talk about polynomial vs. exponential rather than specific numbers.  If you keep your eyes on the asymptotic fundamentals, rather than every factor of 10 or 1000, then you’re not constantly shocked by events, like a dog turning its head for every passing squirrel.  Before you could simulate 40 qubits, now you can simulate 50.  Maybe with more cleverness you could get to 60 or even 70.  But … dude.  The problem is still exponential time.

We saw the same “SQUIRREL!  SQUIRREL!” reaction with the people who claimed that the wonderful paper by Clifford and Clifford had undercut the rationale for BosonSampling experiments, by showing how to solve the problem in “merely” ~2^n time rather than ~m^n, where n is the number of photons and m is the number of modes.  Of course, Arkhipov and I had never claimed more than ~2^n hardness for the problem, and Clifford and Clifford’s important result had justified our conservatism on that point, but, y’know … SQUIRREL!
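For context on where the ~2^n comes from: a BosonSampling amplitude is the permanent of an n×n matrix, and the standard classical benchmark for permanents is Ryser’s formula, which takes about 2^n·n^2 time.  A minimal sketch (the test matrices below are made up for illustration):

```python
from itertools import combinations

def permanent(A):
    """Permanent of an n x n matrix via Ryser's formula:
    perm(A) = (-1)^n * sum over nonempty column subsets S of
              (-1)^|S| * prod_i (sum_{j in S} A[i][j]).
    Time ~2^n * n^2 -- exponential in n no matter how you slice it.
    """
    n = len(A)
    total = 0
    for k in range(1, n + 1):
        for cols in combinations(range(n), k):
            prod = 1
            for row in A:                      # product over rows of
                prod *= sum(row[j] for j in cols)  # row-sums restricted to S
            total += (-1) ** k * prod
    return (-1) ** n * total

# Example: permanent of [[1,2],[3,4]] is 1*4 + 2*3 = 10.
```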

More broadly, it seems to me that this dynamic constantly occurs in the applied cryptography world.  OMIGOD a 128-bit hash function has been broken!  Big news!  OMIGOD a new, harder hash function has been designed!  Bigger news!  OMIGOD OMIGOD OMIGOD the new one was broken too!!  All of it fully predictable once you realize that we’re on the shores of an exponentially hard problem, and for some reason, refusing to go far enough out into the sea (i.e., pick large enough security parameters) that none of this back-and-forth would happen.
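A back-of-the-envelope calculation shows what going “far enough out into the sea” buys you.  The 10^9 hash evaluations per second below is an arbitrary illustrative assumption, not a benchmark of any real machine:

```python
# Rough cost of exhaustively searching a b-bit keyspace, assuming a
# (hypothetical) machine that does 10^9 hash evaluations per second.
EVALS_PER_SECOND = 10 ** 9
SECONDS_PER_YEAR = 3600 * 24 * 365

def brute_force_years(bits):
    """Years to try all 2^bits possibilities on the assumed machine."""
    return 2 ** bits / EVALS_PER_SECOND / SECONDS_PER_YEAR

print(f"64-bit search:  {brute_force_years(64):.0f} years")
print(f"128-bit search: {brute_force_years(128):.2e} years")
```

At 64 bits the search takes centuries on one machine, so a factor-of-1000 algorithmic speedup matters enormously; at 128 bits it takes on the order of 10^22 years, and the same speedup changes nothing anyone would notice.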

I apologize, sincerely, if I come off as too testy in this post.  No doubt it’s entirely the fault of a cognitive defect on my end, wherein ten separate people asking me about something get treated by my brain like a single person who still doesn’t get it even after I’ve explained it ten times.

Coming to Nerd Central

Sunday, October 8th, 2017

While I’m generally on sabbatical in Tel Aviv this year, I’ll be in the Bay Area from Saturday Oct. 14 through Wednesday Oct. 18, where I look forward to seeing many friends new and old.  On Wednesday evening, I’ll be giving a public talk in Berkeley, through the Simons Institute’s “Theoretically Speaking” series, entitled Black Holes, Firewalls, and the Limits of Quantum Computers.  I hope to see at least a few of you there!  (I do have readers in the Bay Area, don’t I?)

But there’s more: on Saturday Oct. 14, I’m thinking of having a first-ever Shtetl-Optimized meetup, somewhere near the Berkeley campus.  Which will also be a Slate Star Codex meetup, because Scott Alexander will be there too.  We haven’t figured out many details yet, except that it will definitely involve getting fruit smoothies from one of the places I remember as a grad student.  Possible discussion topics include what the math, CS, and physics research communities could be doing better; how to advance Enlightenment values in an age of recrudescent totalitarianism; and (if we’re feeling really ambitious) the interpretation of quantum mechanics.  If you’re interested, shoot me an email, let me know if there are times that don’t work; then other Scott and I will figure out a plan and make an announcement.

On an unrelated note, some people might enjoy my answer to a MathOverflow question about why one should’ve expected number theory to be so rife with ridiculously easy-to-state yet hard-to-prove conjectures, like Fermat’s Last Theorem and the Goldbach Conjecture.  As I’ve discussed on this blog before, I’ve been deeply impressed with MathOverflow since the beginning, but never more so than today, when a decision to close the question as “off-topic” was rightfully overruled.  If there’s any idea that unites all theoretical computer scientists, I’d say it’s the idea that what makes a given kind of mathematics “easy” or “hard” is, itself, a proper subject for mathematical inquiry.