Archive for the ‘Complexity’ Category

Links, proofs, talks, jokes

Tuesday, July 30th, 2019

For those who haven’t yet seen it, Erica Klarreich has a wonderful article in Quanta on Hao Huang’s proof of the Sensitivity Conjecture. This is how good popular writing about math can be.

Klarreich quotes my line from this blog, “I find it hard to imagine that even God knows how to prove the Sensitivity Conjecture in any simpler way than this.” However, even if God doesn’t know a simpler proof, that of course doesn’t rule out the possibility that Don Knuth does! And indeed, a couple days ago Knuth posted his own variant of Huang’s proof on his homepage—in Knuth’s words, fleshing out the argument that Shalev Ben-David previously posted on this blog—and then left a comment about it here, the first comment by Knuth that I know about on this blog or any other blog. I’m honored—although as for whether the variants that avoid the Cauchy Interlacing Theorem are actually “simpler,” I guess I’ll leave that between Huang, Ben-David, Knuth, and God.

In Communications of the ACM, Samuel Greengard has a good, detailed article on Ewin Tang and her dequantization of the quantum recommendation systems algorithm. One warning (with thanks to commenter Ted): the sentence “The only known provable separation theorem between quantum and classical is sqrt(n) vs. n” is mistaken, though it gestures in the direction of a truth. In the black-box setting, we can rigorously prove all sorts of separations: sqrt(n) vs. n (for Grover search), exponential (for period-finding), and more. In the non-black-box setting, we can’t prove any such separations at all.

Last week I returned to the US from the FQXi meeting in the Tuscan countryside. This year’s theme was “Mind Matters: Intelligence and Agency in the Physical World.” I gave a talk entitled “The Search for Physical Correlates of Consciousness: Lessons from the Failure of Integrated Information Theory” (PowerPoint slides here), which reprised my blog posts critical of IIT from five years ago. There were thought-provoking talks by many others who might be known to readers of this blog, including Sean Carroll, David Chalmers, Max Tegmark, Seth Lloyd, Carlo Rovelli, Karl Friston … you can see the full schedule here. Apparently video of the talks is not available yet but will be soon.

Let me close this post by sharing two important new insights about quantum mechanics that emerged from my conversations at the FQXi meeting:

(1) In Hilbert space, no one can hear you scream. Unless, that is, you scream the exact same way everywhere, or unless you split into separate copies, one for each different way of screaming.

(2) It’s true that, as a matter of logic, the Schrödinger equation does not imply the Born Rule. Having said that, if the Schrödinger equation were leading a rally, and the crowd started a chant of “BORN RULE! BORN RULE! BORN RULE!”—the Schrödinger equation would just smile and wait 13 seconds for the chant to die down before continuing.

John Wright joins UT Austin

Wednesday, July 3rd, 2019

I’m delighted to announce that quantum computing theorist John Wright will be joining the computer science faculty at UT Austin in Fall 2020, after he finishes a one-year postdoc at Caltech.

John made an appearance on this blog a few months ago, when I wrote about the new breakthrough by him and Anand Natarajan: namely, that MIP* (multi-prover interactive proofs with entangled provers) contains NEEXP (nondeterministic double-exponential time). Previously, MIP* had only been known to contain NEXP (nondeterministic single exponential time). So, this is an exponential expansion in the power of entangled provers over what was previously known and believed, and the first proof that entanglement actually increases the power of multi-prover protocols, rather than decreasing it (as it could’ve done a priori). Even more strikingly, there seems to be no natural stopping point: MIP* might soon swallow up arbitrary towers of exponentials or even the halting problem (!). For more, see for example this Quanta article, or this post by Thomas Vidick, or this short story [sic] by Henry Yuen.

John grew up in Texas, so he’s no stranger to BBQ brisket or scorching weather. He did his undergrad in computer science at UT Austin—my colleagues remember him as a star—and then completed his PhD with Ryan O’Donnell at Carnegie Mellon, followed by a postdoc at MIT. Besides the work on MIP*, John is also well-known for his 2015 work with O’Donnell pinning down the sample complexity of quantum state tomography. Their important result, a version of which was independently obtained by Haah et al., says that if you want to learn an unknown d-dimensional quantum mixed state ρ to a reasonable precision, then ~d^2 copies of ρ are both necessary and sufficient. This solved a problem that had personally interested me, and already plays a role in, e.g., my work on shadow tomography and gentle measurements.

Our little quantum information center at UT Austin is growing rapidly. Shyam Shankar, a superconducting qubits guy who previously worked in Michel Devoret’s group at Yale, will also be joining UT’s Electrical and Computer Engineering department this fall. I’ll have two new postdocs—Andrea Rocchetto and Yosi Atia—as well as new PhD students. We’ll continue recruiting this coming year, with potential opportunities for students, postdocs, faculty, and research scientists across the CS, physics, and ECE departments as well as the Texas Advanced Computing Center (TACC). I hope you’ll consider applying to join us.

With no evaluative judgment attached, I can honestly say that this is an unprecedented time for quantum computing as a field. Where once faculty applicants struggled to make a case for quantum computing (physics departments: “but isn’t this really CS?” / CS departments: “isn’t it really physics?” / everyone: “couldn’t this whole QC thing, like, all blow over in a year?”), today departments are vying with each other and with industry players and startups to recruit talented people. In such an environment, we’re fortunate to be doing as well as we are. We hope to continue to expand.

Meanwhile, this was an unprecedented year for CS hiring at UT Austin more generally. John Wright is one of at least four new faculty (probably more) who will be joining us. It’s a good time to be in CS.

A huge welcome to John, and hook ’em Hadamards!

(And for US readers: have a great 4th! Though how could any fireworks match the proof of the Sensitivity Conjecture?)

Sensitivity Conjecture resolved

Tuesday, July 2nd, 2019

The Sensitivity Conjecture, which I blogged about here, says that, for every Boolean function f:{0,1}^n→{0,1}, the sensitivity of f—that is, the maximum, over all 2^n input strings x∈{0,1}^n, of the number of input bits such that flipping them changes the value of f—is at most polynomially smaller than a bunch of other complexity measures of f, including f’s block sensitivity, degree as a real polynomial, and classical and quantum query complexities. (For more, see for example this survey by Buhrman and de Wolf. Or for quick definitions of the relevant concepts, see here.)
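If it helps to see the definition unpacked in code, here's a quick brute-force sketch of my own (not from any of the linked papers): given a Boolean function as a callable on n-bit tuples, sensitivity is just a double loop over inputs and single-bit flips.

```python
# Brute-force sensitivity of a Boolean function given as a callable on n-bit tuples.
from itertools import product

def sensitivity(f, n):
    """Max over inputs x of the number of coordinates i such that flipping x_i changes f(x)."""
    best = 0
    for x in product([0, 1], repeat=n):
        flips = sum(f(x) != f(x[:i] + (1 - x[i],) + x[i+1:]) for i in range(n))
        best = max(best, flips)
    return best

# Example: OR on 3 bits has sensitivity 3 (at the all-zeros input, flipping any bit changes the output).
print(sensitivity(lambda x: int(any(x)), 3))   # -> 3
```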

Ever since it was posed by Nisan and Szegedy in 1989, this conjecture has stood as one of the most frustrating and embarrassing open problems in all of combinatorics and theoretical computer science. It seemed so easy, and so similar to other statements that had 5-line proofs. But a lot of the best people in the field sank months into trying to prove it. For whatever it’s worth, I also sank … well, at least weeks into it.

Now Hao Huang, a mathematician at Emory University, has posted a 6-page preprint on his homepage that finally proves the Sensitivity Conjecture, in the form s(f)≥√deg(f). (I thank Ryan O’Donnell for tipping me off to this.) Within the preprint, the proof itself is about a page and a half.

Whenever there’s an announcement like this, ~99% of the time either the proof is wrong, or at any rate it’s way too complicated for outsiders to evaluate it quickly. This is one of the remaining 1% of cases. I’m rather confident that the proof is right. Why? Because I read and understood it. It took me about half an hour. If you’re comfortable with concepts like induced subgraph and eigenvalue, you can do the same.

From pioneering work by Gotsman and Linial in 1992, it was known that to prove the Sensitivity Conjecture, it suffices to prove the following even simpler combinatorial conjecture:

Let S be any subset of the n-dimensional Boolean hypercube, {0,1}^n, which has size 2^(n-1)+1. Then there must be a point in S with at least ~n^c neighbors in S.

Here c>0 is some constant (say 1/2), and two points in S are “neighbors” if and only if they differ in a single coordinate. Note that if S had size 2^(n-1), then the above statement would be false—as witnessed, for example, by the set of all n-bit strings with an even number of 1’s.

Huang proceeds by proving the Gotsman-Linial Conjecture. And the way he proves Gotsman-Linial is … well, at this point maybe I should just let you read the damn preprint yourself. I can’t say it more simply than he does.

If I had to try anyway, I’d say: Huang constructs a 2^n×2^n matrix, called A_n, that has 0’s where there are no edges between the corresponding vertices of the Boolean hypercube, and either 1’s or -1’s where there are edges—with a simple, weird pattern of 1’s and -1’s that magically makes everything work. He then lets H be an induced subgraph of the Boolean hypercube of size 2^(n-1)+1. He lower-bounds the maximum degree of H by the largest eigenvalue of the corresponding (2^(n-1)+1)×(2^(n-1)+1) submatrix of A_n. Finally, he lower-bounds that largest eigenvalue by … no, I don’t want to spoil it! Read it yourself!
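If you just want to play with the construction numerically, here's a small sketch of my own, using the recursive form of the matrix from the preprint (A_1 = [[0,1],[1,0]], then A_n built from A_{n-1} in 2×2 blocks). It builds A_n, checks that A_n^2 = nI, and verifies for a random vertex subset of size 2^(n-1)+1 that the induced subgraph's maximum degree is at least √n:

```python
import numpy as np

def huang_matrix(n):
    """Huang's signed adjacency matrix of the n-cube:
    A_1 = [[0,1],[1,0]], A_n = [[A_{n-1}, I], [I, -A_{n-1}]].
    It satisfies A_n^2 = n*I, so its eigenvalues are +/- sqrt(n)."""
    A = np.array([[0.0, 1.0], [1.0, 0.0]])
    for _ in range(n - 1):
        I = np.eye(A.shape[0])
        A = np.block([[A, I], [I, -A]])
    return A

n = 6
A = huang_matrix(n)
N = 2 ** n
assert np.allclose(A @ A, n * np.eye(N))       # eigenvalues are +/- sqrt(n)

# Pick ANY subset of 2^(n-1)+1 vertices.  By Cauchy interlacing, the top eigenvalue of
# the induced submatrix is >= sqrt(n); and since the entries are in {-1,0,1}, the max
# degree of the induced subgraph is >= that top eigenvalue.
rng = np.random.default_rng(0)
S = rng.choice(N, size=N // 2 + 1, replace=False)
sub = A[np.ix_(S, S)]
max_degree = int(np.abs(sub).sum(axis=1).max())
top_eig = np.linalg.eigvalsh(sub).max()
print(max_degree, top_eig, np.sqrt(n))          # max_degree >= top_eig >= sqrt(6) ~ 2.449
```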

Paul Erdös famously spoke of a book, maintained by God, in which was written the simplest, most beautiful proof of each theorem. The highest compliment Erdös could give a proof was that it “came straight from the book.” In this case, I find it hard to imagine that even God knows how to prove the Sensitivity Conjecture in any simpler way than this.

Indeed, the question is: how could such an elementary 1.5-page argument have been overlooked for 30 years? I don’t have a compelling answer to that, besides noting that “short” and “elementary” often have little to do with “obvious.” Once you start looking at the spectral properties of this matrix A_n, the pieces snap together in precisely the right way—but how would you know to look at that?

By coincidence, earlier today I finished reading my first PG Wodehouse novel (Right Ho, Jeeves!), on the gushing recommendation of a friend. I don’t know how I’d missed Wodehouse for 38 years. His defining talent is his ability to tie together five or six plot threads in a way that feels perfect and inevitable even though you didn’t see it coming. This produces a form of pleasure that’s nearly indistinguishable from the pleasure one feels in reading a “proof from the book.” So my pleasure centers are pretty overloaded today—but in such depressing times for the world, I’ll take pleasure wherever I can get it.

Huge congratulations to Hao!

Added thought: What this really is, is one of the purest illustrations I’ve seen in my career of the power and glory of the P≠NP phenomenon. We talk all the time about how proofs are easier to verify than to find. In practice, though, it can be far from obvious that that’s true. Consider your typical STOC/FOCS paper: writing it probably took the authors several months, while fully understanding the thing from scratch would probably take … also several months! If there’s a gap, it’s only by a factor of 4 or 5 or something. Whereas in this case, I don’t know how long Huang spent searching for the proof, but the combined search efforts of the community add up to years or decades. The ratio of the difficulty of finding to the difficulty of completely grasping is in the hundreds of thousands or millions.

Another added thought: Because Hao actually proves a stronger statement than the original Sensitivity Conjecture, it has additional implications, a few of which Hao mentions in his preprint. Here’s one he didn’t mention: any randomized algorithm to guess the parity of an n-bit string, which succeeds with probability at least 2/3 on the majority of strings, must make at least ~√n queries to the string, while any such quantum algorithm must make at least ~n^(1/4) queries. For more, see the paper Weak Parity by me, Ambainis, Balodis, and Bavarian (Section 6).

Important Update: Hao Huang himself has graciously visited the comment section to satisfy readers’ curiosity by providing a detailed timeline of his work on the Sensitivity Conjecture. (tl;dr: he was introduced to the problem by Mike Saks in 2012, and had been attacking it on and off since then, until he finally had the key insight this past month while writing a grant proposal. Who knew that grant proposals could ever be useful for anything?!?)

Another Update: In the comments section, my former student Shalev Ben-David points out a simplification of Huang’s argument, which no longer uses Cauchy’s interlacing theorem. I thought there was no way this proof could possibly be made any simpler, and I was wrong!

Quantum Sabinacy

Monday, July 1st, 2019

Sabine Hossenfelder—well-known to readers of Shtetl-Optimized for opposing the building of a higher-energy collider, and various other things—has weighed in on “quantum supremacy” in this blog post and this video. Sabine consulted with me by phone before doing the video and post, and despite what some might see as her negative stance, I agree with what she has to say substantially more than I disagree.

I do, however, have a few quibbles:

1. We don’t know that millions of physical qubits will be needed for useful simulations of quantum chemistry.  It all depends on how much error correction is needed and how good the error-correcting codes and simulation algorithms become. Like, sure, you can generate pessimistic forecasts by plugging numbers into the best known codes and algorithms. But “the best known” is a rapidly moving target—one where there have already been orders-of-magnitude improvements in the last decade.

2. To my mind, there’s a big conceptual difference between a single molecule that you can’t efficiently simulate classically, and a programmable computer that you can’t efficiently simulate classically.  The difference, in short, is that only for the computer, and not for the molecule, would it ever make sense to say it had given you a wrong answer! In other words, a physical system becomes a “computer” when, and only when, you have sufficient understanding of, and control over, its state space and time evolution that you can ask the system to simulate something other than itself, and then judge whether it succeeded or failed at that goal.

3. The entire point of my recent work, on certified randomness generation (see for example here or here), is that sampling random bits with a NISQ-era device could have a practical application. That application is … I hope you’re sitting down for this … sampling random bits! And then, more importantly and nontrivially, proving to a faraway skeptic that the bits really were randomly generated.

4. While I was involved in some of the first proposals for NISQ quantum supremacy experiments (such as BosonSampling), I certainly can’t take sole credit for the idea of quantum supremacy!  The term, incidentally, was coined by John Preskill.

5. The term “NISQ” (Noisy Intermediate Scale Quantum) was also coined by John Preskill.  He had no intention of misleading investors—he just needed a term to discuss the quantum computers that will plausibly be available in the near future.  As readers of this blog know, there certainly has been some misleading of investors (and journalists, and the public…) about the applications of near-term QCs. But I don’t think you can lay it at the feet of the term “NISQ.”

NP-complete Problems and Physics: A 2019 View

Sunday, June 2nd, 2019

If I want to get back to blogging on a regular basis, given the negative amount of time that I now have for such things, I’ll need to get better at dispensing with pun-filled titles, jokey opening statements, etc. etc., and resigning myself to a less witty, more workmanlike blog.

So in that spirit: a few weeks ago I gave a talk at the Fields Institute in Toronto, at a symposium to celebrate Stephen Cook and the 50th anniversary (or actually more like 48th anniversary) of the discovery of NP-completeness. Thanks so much to the organizers for making this symposium happen.

You can watch the video of my talk here (or read the PowerPoint slides here). The talk, on whether NP-complete problems can be efficiently solved in the physical universe, covers much the same ground as my 2005 survey article on the same theme (not to mention dozens of earlier talks), but this is an updated version and I’m happier with it than I was with most past iterations.

As I explain at the beginning of the talk, I wasn’t going to fly to Toronto at all, due to severe teaching and family constraints—but my wife Dana uncharacteristically urged me to go (“don’t worry, I’ll watch the kids!”). Why? Because in her view, it was the risks that Steve Cook took 50 years ago, as an untenured assistant professor at Berkeley, that gave birth to the field of computational complexity that Dana and I both now work in.

Anyway, be sure to check out the other talks as well—they’re by an assortment of random nobodies like Richard Karp, Avi Wigderson, Leslie Valiant, Michael Sipser, Alexander Razborov, Cynthia Dwork, and Jack Edmonds. I found the talk by Edmonds particularly eye-opening: he explains how he thought about (the objects that we now call) P and NP∩coNP when he first defined them in the early 60s, and how it was similar to and different from the way we think about them today.

Another memorable moment came when Edmonds interrupted Sipser’s talk—about the history of P vs. NP—to deliver a booming diatribe about how what really matters is not mathematical proof, but just how quickly you can solve problems in the real world. Edmonds added that, from a practical standpoint, P≠NP is “true today but might become false in the future.” In response, Sipser asked “what does a mathematician like me care about the real world?,” to roars of approval from the audience. I might’ve picked a different tack—about how for every practical person I meet for whom it’s blindingly obvious that “in real life, P≠NP,” I meet another for whom it’s equally obvious that “in real life, P=NP” (for all the usual reasons: because SAT solvers work so well in practice, because physical systems so easily relax to their ground states, etc.). No wonder it took 25+ years of smart people thinking about operations research and combinatorial optimization before the P vs. NP question was even explicitly posed.


Unrelated Announcement: The Texas Advanced Computing Center (TACC), a leading supercomputing facility in North Austin that’s part of the University of Texas, is seeking to hire a Research Scientist focused on quantum computing. Such a person would be a full participant in our Quantum Information Center at UT Austin, with plenty of opportunities for collaboration. Check out their posting!

Not yet retired from research

Friday, April 19th, 2019

Last night, two papers appeared on the quantum physics arXiv that my coauthors and I have been working on for more than a year, and that I’m pretty happy about.

The first paper, with Guy Rothblum, is Gentle Measurement of Quantum States and Differential Privacy (85 pages, to appear in STOC’2019). This is Guy’s first paper that has anything to do with quantum, and also my first paper that has anything to do with privacy. (What do I care about privacy? I just share everything on this blog…) The paper has its origins in a talk I gave at the Weizmann Institute about “shadow tomography” (a task where you have to measure quantum states very carefully to avoid destroying them): Guy was in the audience, and he got all excited that the techniques sounded just like what they use to ensure privacy in data-mining. I figured it was just some wacky coincidence and brushed him off, but he persisted, and it turned out that he was 100% right: our two fields were often studying the same problems from different angles, and we could prove it. Anyway, here’s the abstract:

In differential privacy (DP), we want to query a database about n users, in a way that “leaks at most ε about any individual user,” even conditioned on any outcome of the query. Meanwhile, in gentle measurement, we want to measure n quantum states, in a way that “damages the states by at most α,” even conditioned on any outcome of the measurement. In both cases, we can achieve the goal by techniques like deliberately adding noise to the outcome before returning it. This paper proves a new and general connection between the two subjects. Specifically, we show that on products of n quantum states, any measurement that is α-gentle for small α is also O(α)-DP, and any product measurement that is ε-DP is also O(ε√n)-gentle.

Illustrating the power of this connection, we apply it to the recently studied problem of shadow tomography. Given an unknown d-dimensional quantum state ρ, as well as known two-outcome measurements E_1,…,E_m, shadow tomography asks us to estimate Pr[E_i accepts ρ], for every i∈[m], by measuring few copies of ρ. Using our connection theorem, together with a quantum analog of the so-called private multiplicative weights algorithm of Hardt and Rothblum, we give a protocol to solve this problem using O((log m)^2 (log d)^2) copies of ρ, compared to Aaronson’s previous bound of ~O((log m)^4 (log d)). Our protocol has the advantages of being online (that is, the E_i’s are processed one at a time), gentle, and conceptually simple.

Other applications of our connection include new lower bounds for shadow tomography from lower bounds on DP, and a result on the safe use of estimation algorithms as subroutines inside larger quantum algorithms.
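For readers who haven't seen differential privacy before, here's a tiny classical sketch (mine, not from the paper) of the "add noise to the outcome" idea: the standard Laplace mechanism for a counting query. The count has sensitivity 1, meaning changing one user's record moves it by at most 1, so Laplace noise of scale 1/ε gives ε-DP.

```python
import numpy as np

def laplace_count(database, predicate, epsilon, rng=None):
    """epsilon-DP release of a counting query: changing one user's record moves
    the true count by at most 1, so Laplace noise with scale 1/epsilon hides
    any single user's contribution."""
    rng = rng or np.random.default_rng()
    true_count = sum(1 for row in database if predicate(row))
    return true_count + rng.laplace(scale=1.0 / epsilon)

# Example: how many users are over 40, released with epsilon = 0.1.
db = [{"age": a} for a in [23, 45, 31, 62, 57, 38]]
print(laplace_count(db, lambda row: row["age"] > 40, epsilon=0.1))
```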

The second paper, with Robin Kothari, UT Austin PhD student William Kretschmer, and Justin Thaler, is Quantum Lower Bounds for Approximate Counting via Laurent Polynomials. Here’s the abstract:

Given only a membership oracle for S, it is well-known that approximate counting takes Θ(√(N/|S|)) quantum queries. But what if a quantum algorithm is also given “QSamples”—i.e., copies of the state |S⟩ = Σ_{i∈S} |i⟩—or even the ability to apply reflections about |S⟩? Our first main result is that, even then, the algorithm needs either Θ(√(N/|S|)) queries or else Θ(min{|S|^(1/3), √(N/|S|)}) reflections or samples. We also give matching upper bounds.

We prove the lower bound using a novel generalization of the polynomial method of Beals et al. to Laurent polynomials, which can have negative exponents. We lower-bound Laurent polynomial degree using two methods: a new “explosion argument” that pits the positive- and negative-degree parts of the polynomial against each other, and a new formulation of the dual polynomials method.

Our second main result rules out the possibility of a black-box Quantum Merlin-Arthur (or QMA) protocol for proving that a set is large. More precisely, we show that, even if Arthur can make T quantum queries to the set S⊆[N], and also receives an m-qubit quantum witness from Merlin in support of S being large, we have Tm=Ω(min{|S|,√(N/|S|)}). This resolves the open problem of giving an oracle separation between SBP, the complexity class that captures approximate counting, and QMA.

Note that QMA is “stronger” than the queries+QSamples model in that Merlin’s witness can be anything, rather than just the specific state |S⟩, but also “weaker” in that Merlin’s witness cannot be trusted. Intriguingly, Laurent polynomials also play a crucial role in our QMA lower bound, but in a completely different manner than in the queries+QSamples lower bound. This suggests that the “Laurent polynomial method” might be broadly useful in complexity theory.
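To put the query counts in the abstract in context, here's a toy classical baseline of my own (not from the paper): with uniform membership queries alone and no quantum resources, a constant-factor estimate of |S| takes on the order of N/|S| queries, which is the bar the Θ(√(N/|S|)) quantum bound is measured against.

```python
import random

def classical_approx_count(member, N, num_queries, rng=random.Random(0)):
    """Estimate |S| from num_queries uniform membership queries into [N]."""
    hits = sum(member(rng.randrange(N)) for _ in range(num_queries))
    return hits * N / num_queries

N = 1_000_000
S = set(random.Random(1).sample(range(N), 500))       # hidden set, |S| = 500
member = lambda i: i in S
# ~20 expected hits, giving a rough constant-factor estimate of |S|
print(classical_approx_count(member, N, num_queries=20 * N // len(S)))
```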

I need to get ready for our family’s Seder now, but after that, I’m happy to answer any questions about either of these papers in the comments.

Meantime, the biggest breakthrough in quantum complexity theory of the past month isn’t either of the above: it’s the paper by Anand Natarajan and John Wright showing that MIP*, or multi-prover interactive proof systems with entangled provers, contains NEEXP, or nondeterministic doubly-exponential time (!!). I’ll try to blog about this later, but if you can’t wait, check out this excellent post by Thomas Vidick.

Congratulations!

Friday, April 5th, 2019

Congrats to Geoffrey Hinton, Yann LeCun, and Yoshua Bengio, who won the 2018 Turing Award for their work on deep learning (i.e., what used to be called neural nets). This might be the first Turing Award ever given for something where no one really understands why it works … and it’s years overdue.

Congrats to Avi Wigderson for winning the Knuth Prize. When I was asked to write a supporting nomination letter, my first suggestion was to submit a blank sheet of paper—since for anyone in theoretical computer science, there’s nothing that needs to be said about why Avi should win any awards we have. I hope Avi remains a guiding light of our community for many years to come.

And congrats to Mark Braverman for winning the Alan T. Waterman Award, one that I have some personal fondness for, along with materials scientist Jennifer Dionne. As Sasha Razborov once put it, after he (Sasha), I, and others recoiled from the task of proving the Linial-Nisan Conjecture, that polylog-wise independent distributions are indistinguishable from uniform by AC^0 circuits, a “braver man” stepped in to do the job.

Beware of fake FOCS site!

Wednesday, April 3rd, 2019

As most of you in theoretical computer science will know, the submission deadline for the 2019 FOCS conference is this Friday, April 5. The FOCS’2019 program committee chair, my UT Austin colleague David Zuckerman, has asked me to warn everyone that a fake submission site was set up at aconf.org—apparently as a phishing scam—and is one of the first results to come up when you google “FOCS 2019.” Do not submit there! The true URL is focs2019.cs.jhu.edu; accept no substitutes!

Anyway, I’ve been thrashing for several weeks—just barely escaping spaghettification at the Email Event Horizon—but I hope to be back shortly with your regularly scheduled programming.

Four updates

Tuesday, February 12th, 2019

A few weeks ago, I was at QIP’2019 in Boulder, CO. This week I was at SQuInT’2019 in Albuquerque, NM. There were lots of amazing talks—feel free to ask in the comments section.

There’s an interview with me at the website “GigaOm,” conducted by Byron Reese and entitled Quantum Computing: Capabilities and Limits. I didn’t proofread the transcript and it has some errors in it, but hopefully the meaning comes through. In other interview news, if you were interested in my podcast with Adam Ford in Melbourne but don’t like YouTube, Adam has helpfully prepared transcripts of the two longest segments: The Ghost in the Quantum Turing Machine and The Winding Road to Quantum Supremacy.

The New York Times ran an article entitled The Hard Part of Computer Science? Getting Into Class, about the surge in computer science majors all over the US, and the shortage of professors to teach them. The article’s go-to example of a university where this is happening is UT Austin, and there’s extensive commentary from my department chair, Don Fussell.

The STOC’2019 accepted papers list is finally out. Lots of cool stuff!

The NP genie

Tuesday, December 11th, 2018

Hi from the Q2B conference!

Every nerd has surely considered the scenario where an all-knowing genie—or an enlightened guru, or a superintelligent AI, or God—appears and offers to answer any question of your choice.  (Possibly subject to restrictions on the length or complexity of the question, to prevent glomming together every imaginable question.)  What do you ask?

(Standard joke: “What question should I ask, oh wise master, and what is its answer?”  “The question you should ask me is the one you just asked, and its answer is the one I am giving.”)

The other day, it occurred to me that theoretical computer science offers a systematic way to generate interesting variations on the genie scenario, which have been contemplated less—variations where the genie is no longer omniscient, but “merely” more scient than any entity that humankind has ever seen.  One simple example, which I gather is often discussed in the AI-risk and rationality communities, is an oracle for the halting problem: what computer program can you write, such that knowing whether it halts would provide the most useful information to civilization?  Can you solve global warming with such an oracle?  Cure cancer?

But there are many other examples.  Here’s one: suppose what pops out of your lamp is a genie for NP questions.  Here I don’t mean NP in the technical sense (that would just be a pared-down version of the halting genie discussed above), but in the human sense.  The genie can only answer questions by pointing you to ordinary evidence that, once you know where to find it, makes the answer to the question clear to every competent person who examines the evidence, with no further need to trust the genie.  Or, of course, the genie could fail to provide such evidence, which itself provides the valuable information that there’s no such evidence out there.

More-or-less equivalently (because of binary search), the genie could do what my parents used to do when my brother and I searched the house for Hanukkah presents, and give us “hotter” or “colder” hints as we searched for the evidence ourselves.
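Here's a toy sketch of that point (my own, and using a simple per-character scan rather than a literal binary search): a genie that will only confirm or deny "does the hidden answer start with this prefix?" still lets you extract the whole answer, one character at a time. The hidden string below is of course hypothetical.

```python
import string

ALPHABET = string.ascii_lowercase + " "

def recover(prefix_oracle, max_len=1000):
    """Rebuild a hidden string using only yes/no answers to 'is this a prefix?'."""
    answer = ""
    for _ in range(max_len):
        extension = next((answer + c for c in ALPHABET if prefix_oracle(answer + c)), None)
        if extension is None:           # no character extends the prefix: we're done
            return answer
        answer = extension
    return answer

secret = "dig beneath the third olive tree"        # hypothetical hidden answer
print(recover(lambda p: secret.startswith(p)))     # recovers the secret
```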

To make things concrete, let’s assume that the NP genie will only provide answers of 1000 characters or fewer, in plain English text with no fancy encodings.  Here are the candidates for NP questions that I came up with after about 20 seconds of contemplation:

  • Which pieces of physics beyond the Standard Model and general relativity can be experimentally confirmed with the technology of 2018? What are the experiments we need to do?
  • What’s the current location of the Ark of the Covenant, or its remains, if any still exist?  (Similar: where can we dig to find physical records, if any exist, pertaining to the Exodus from Egypt, or to Jesus of Nazareth?)
  • What’s a sketch of a resolution of P vs. NP, from which experts would stand a good chance of filling in the details?  (Similarly for any other famous unsolved math problem.)
  • Where, if anywhere, can we point radio telescopes to get irrefutable evidence for the existence of extraterrestrial life?
  • What happened to Malaysia Airlines Flight 370, and where are the remains by which it could be verified?  (Similarly for Amelia Earhart.)
  • Where, if anywhere, can we find intact DNA of non-avian dinosaurs?

Which NP questions would you ask the genie?  And what other complexity-theoretic genies would be interesting to consider?  (I thought briefly about a ⊕P genie, but I’m guessing that the yearning to know whether the number of sand grains in the Sahara is even or odd is limited.)


Update: I just read Lenny Susskind’s Y Combinator interview, and found it delightful—pure Lenny, and covering tons of ground that should interest anyone who reads this blog.