You down with SPP?

I’ve been in San Diego all week for the FCRC (Federated Computing Research Conference), which just wrapped up yesterday. I was here for Complexity’2007, but, lawless rebel that I am, I also crashed some of the talks at STOC’2007. Highlights:

  • Many of my friends wanted to skip the plenary talk on “Computer Science: Past, Present, and Future,” by past Computing Research Association Chair Ed Lazowska. But I urged them to go despite the title, since I’d met Lazowska when I interviewed at the University of Washington, and immediately concluded that this is the guy we want in charge of our field. As it turned out, Lazowska gave the most rousing defense of computer science research I’ve ever heard. Here’s what I remember: 2004 was the first year that human beings produced more transistors than grains of rice (~10 quintillion). Academic computer science research more than paid for itself over the last two decades by producing at least 15 billion-dollar industries. Computer scientists should be tackling the biggest issues in the world, including climate change and third-world poverty. (Lazowska mentioned a project he’s involved with to put thousands of sensors under the ocean near the Northwest US, thereby “reducing oceanography to a computer science problem,” as well as a project of his student Tapan Parikh to let illiterate farmers in India and Guatemala upload financial records via cellphones with intermittent access.) Computer scientists should bring self-driving cars from prototype to reality, thereby saving some of the 45,000 people in the US alone who die in auto accidents every year. The future of theoretical computer science lies in transforming the other sciences (math, physics, economics, biology) via computational thinking. Had Watson and Crick been computer scientists, they would’ve realized immediately that the real import of their discovery had nothing to do with the biochemical details, and everything to do with the fact that DNA is a digital code. A piece of computer science (P vs. NP) is what many now consider the preeminent open problem in mathematics. Quantum computing might not work but certainly merits a huge effort. Our introductory CS courses suck. We’ve been doing a terrible job recruiting women.

    Update (6/23): Slides for Ed Lazowska’s talk, as well as another inspiring talk by Christos Papadimitriou, can be found here.
  • I gave a talk on my paper with Greg Kuperberg, on quantum versus classical proofs and advice.
  • I gave another talk on the paper “Quantum t-designs”, by my colleagues Andris Ambainis and Joe Emerson. Why? Because Joe couldn’t make it to San Diego, and Andris lost his passport. As I promised Andris, the vast majority of the talk was not delivered in my imitation of his voice.
  • Sergey Yekhanin gave a talk on his paper “Towards 3-query locally decodable codes of subexponential length,” which not only won the Danny Lewin Best Student Paper Award but also shared the STOC’07 Best Paper Award. Not to toot my own breakthrough-recognition horn, but … you saw it here first.
  • Ryan Williams, the pride of Alabama, won the Complexity Best Student Paper Award for his excellent paper “Time-space tradeoffs for counting NP solutions modulo integers.” This marks the second time Ryan has won this award, as well as the first time the award has been given twice to a former Cornell undergrad and resident of Telluride House in the late 1990s (no … wait). So what did Ryan prove? Alright, suppose you have O(n^1.8) time and n^{o(1)} memory, and you want to count the number of satisfying assignments of a Boolean formula, modulo a prime number p. Then there’s at most one prime p for which you can do this. Ryan has no idea which prime, and conjectures in any case that it doesn’t exist. I’m not making this up.
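    To make “counting satisfying assignments modulo p” concrete, here’s a brute-force sketch in Python. (My illustration only: it runs in exponential time and has nothing to do with the clever time-space regime in Ryan’s paper; the clause encoding, DIMACS-style signed integers, is just a convenient convention.)

```python
from itertools import product

def count_sat_mod_p(clauses, num_vars, p):
    """Brute-force count, mod p, of satisfying assignments of a CNF formula.

    clauses: list of clauses; each clause is a list of nonzero ints,
             where +i means variable i is true and -i means it is false.
    """
    count = 0
    for bits in product([False, True], repeat=num_vars):
        # A clause is satisfied if any of its literals is;
        # the formula is satisfied if every clause is.
        if all(any(bits[abs(lit) - 1] == (lit > 0) for lit in clause)
               for clause in clauses):
            count = (count + 1) % p
    return count

# (x1 OR x2) AND (NOT x1 OR x2) is satisfied exactly when x2 is true,
# giving 2 satisfying assignments, so the count mod 7 is 2.
print(count_sat_mod_p([[1, 2], [-1, 2]], 2, 7))  # → 2
```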
  • As I watched the conference regulars — Lance Fortnow, Bill Gasarch, Harry Buhrman (yes, Harry, you got another mention — happy?), Ken Regan, etc. — banter and drink coffee, I realized that the IEEE Conference on Computational Complexity desperately needs an official theme song. The song should have real complexity-theoretic content, but nevertheless be a little edgier than Find the Longest Path. So without further ado, I present to you a preliminary effort along these lines, due to Troy Lee and myself (aka “Nerdy by Nature”):

    You down with SPP (Yeah you know me)
    You down with SPP (Yeah you know me)
    You down with SPP (Yeah you know me)
    Who’s down with SPP (Every last attendee)

    (Note: BPP and ZPP also would’ve fit the meter, but those are really more appropriate for STOC than Complexity.)

    Update (6/20): We may have a winner: Aaron Sterling’s I Just Do Theory. (Thanks to Bill Gasarch for the pointer.)

26 Responses to “You down with SPP?”

  1. Dave Bacon Says:

    Yeah, you know me.

  2. the reader from Istanbul Says:

    Hello Scott, and anyone else who’s been to STOC;

    Is it true that a multiplication algorithm faster than Schönhage-Strassen has been presented there?

    And, does this make Shor’s algorithm have an even better complexity than before?

  3. todd. Says:

    No one properly called the “pride of Alabama” would be an Auburn fan.

  4. Anonymous Says:

    Is it true that a multiplication algorithm faster than Schönhage-Strassen has been presented there?
    Yes

  5. Jay Says:

    Did you mean to say:

    also, BPP and ZPP
    would’ve fit to a tee,
    but as you can see
    those are really
    more for STOC
    than Complexity

  6. KWRegan Says:

    I’m quite literally down with SPP. Anybody who can build an oracle separating SPP from UAP can be added as author(s) to the journal version (which I’ve been delinquent in sending) of the conference paper [CGRS04] mentioned there—updated link: http://www.cse.buffalo.edu/~regan/papers/pdf/CGRS04.pdf.

    UAP is the analogue of UP for alternating TMs, and is contained in SPP. If no taker by (say) August 15, then that kinda answers several conference referees who decried the lack of such an oracle, which I thought we were close to getting on several occasions but…

    Commenter #2, indeed Martin Fürer broke the barrier with an O(n log n · 2^{log* n})-time algorithm, prominent on his home page. I missed his talk but spoke with him briefly—he thinks the 2^{log* n} should be replaceable by just log* n, but didn’t see a clear route to just O(n log n) time.
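    For anyone rusty on notation: log* n is the iterated logarithm, the number of times you must apply log2 before the value drops to 1 or below, so the 2^{log* n} factor in Fürer’s bound grows absurdly slowly. A minimal sketch (my illustration, not anything from Fürer’s paper):

```python
import math

def log_star(n):
    """Iterated logarithm: how many times log2 must be applied
    to n before the result is <= 1."""
    count = 0
    while n > 1:
        n = math.log2(n)
        count += 1
    return count

# log*(2) = 1, log*(16) = 3, log*(65536) = 4, and log* n stays at 5
# for every n up to 2**65536 — effectively a constant in practice.
print(log_star(65536))  # → 4
```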

  7. Anonymous Says:

    Wow, you saw Terence Tao at FCRC? That makes the whole experience worthwhile! (no, I’m not being sarcastic).

  8. Ryan Williams Says:

    Thanks for the shout-out, Scott.

    Why not just name a complexity class OPP? It seems there ought to be an interesting definition of what it means to solve Other People’s (Computational) Problems. Some interactive thing…

    As for comment #3, I guess no matter where one goes, one can’t get away from classless bammer fans… or at least their sympathizers =)

  9. Douglas Knight Says:

    Had Watson and Crick been computer scientists, they would’ve realized immediately that the real import of their discovery had nothing to do with the biochemical details, and everything to do with the fact that DNA is a digital code.

    My impression was that this was already the conventional wisdom. Certainly, the biochemical details provided support for the theory, but it was so well-accepted that the project really was about getting details, so that they could get started on breaking the code.

  10. Hugh Anchor Says:

    I refer the honorable blogger to the answer I gave in 2005, in the form of Smash The Polynomial Hierarchy.

  11. agm Says:

    The future of theoretical computer science lies in transforming the other sciences (math, physics, economics, biology) via computational thinking.

    I’ve heard that claim before. It makes a rousing and persuasive argument for collaboration, but at the same time it’s a pretty bad argument as a scientific approach. While one must acknowledge the exceptions, computer scientists, by dint of training and experience, are typically not any better prepared to investigate deep, fundamental scientific questions than scientists are typically prepared to come in and show CS people how the field of CS should develop. It’s a crappy research paradigm to tell someone in another field other than your own that you know their craft better than they do.

  12. Scott Says:

    It’s a crappy research paradigm to tell someone in another field other than your own that you know their craft better than they do.

    That’s been the paradigm for much of mathematics for the last few hundred years (“the Queen and Servant of the Sciences”), and it’s worked OK for them… :-)

  13. Jonathan Vos Post Says:

    Re: Douglas Knight’s Comment #9, posted on June 18th, 2007 at 09:32.

    I believe that the analogy “DNA is a digital code” was hideously misleading. Throw away the epicycles, and there is really no such thing as a “gene” in the new paradigm, and DNA (really the DNA/RNA/Protein/metabolism system) is a spectacularly weird Complex System Network of Networks with emergent properties only now being noticed by massive computational integration of multiple laboratory methodologies. ENCODE looked at half a BILLION data points from actual organisms, to get a fuzzy look at 1% of the human genome. It is anything but digital, in vivo.

    In the sense of Mendel, there are such things as mathematical rules about discrete units of inheritance. But the last straightforward link to DNA is broken.

    To recapitulate (with vast oversimplification) the epic historical fiction of the key concept [reference Nature, 25 May 2006, p.400]:

    1860s: Gregor Mendel, Austrian monk, plays with pea plants, fudges data, publishes in most obscure place (slowing down recognition): basic rules of inheritance defined; traits determined by deterministic units passed from one generation to the next, God knows how.

    1909: Wilhelm Johannsen, Danish botanist, coins the word “gene” for the unit associated with an inherited trait, admitting that the physical basis is unknown.

    1930: Thomas Morgan (enjoying the monastic atmosphere of Caltech) analyzes why time flies like an arrow but fruit flies like a banana, and concludes that genes sit on chromosomes, an idea popularized as beads on strings. Turns out as accurate as image of atoms being electron planets orbiting nuclei.

    1941: George Beadle and Edward Tatum launch the model that one gene makes one enzyme. The classical enzymology yields a PhD for Isaac Asimov, and by 1977 (when seen through the not-yet-named fields of Artificial Life and Nanotechnology) a neither granted nor denied PhD for Jonathan Vos Post, though all key equations were later published in refereed journals and international proceedings.

    1944: “What is Life?” nonfiction book adapted from blog (I mean lecture series) by Erwin Schrödinger. Francis Crick later cited “What is Life?” as the best theoretical description, before the actual discovery of DNA, of how genetic storage would work. In the book, Schrödinger introduced the idea of an “aperiodic crystal” that contained genetic information in its configuration of covalent chemical bonds.

    1944: Oswald Avery, Colin MacLeod, and Maclyn McCarty show that genes are made of DNA. This raises more questions than it answers.

    1953: James Watson and Francis Crick find the golden spiral stairway to heaven, publishing the structure of DNA in a sneaky race against Pauling (they ply Pauling, Jr., with sherry to find out what Linus, Sr., is up to) and denying the essential contribution of Rosalind Franklin and Wilkins and others; the central dogma of molecular biology comes from this: information flows from DNA to RNA to protein. That’s all ye need to know.

    1970: Reverse transcriptase was discovered by Howard Temin at the University of Wisconsin-Madison, and independently by David Baltimore, who later is Caltech President. Information can flow from RNA to DNA, against Dogma, and crucial to AIDS.

    1977: Richard Roberts and Philip Sharp discover that genes can be split into segments, leading to the idea that one gene can make several proteins.

    1993: The first microRNA is identified in the worm Caenorhabditis elegans; the worm turns.

    2003: GeneSweep: Human geneticists yell at each other late into the night, hammering a compromise definition of protein-coding genes, in order to decide who won the bet on the number of human genes. The winner is announced, but the consensus is that we have no idea what the real answer is. ["gene = locatable region of genomic sequence, corresponding to a unit of inheritance, which is associated with regulatory regions, transcribed regions and/or other functional sequence regions."]. Yeah, right.

    2006: the paradigm begins to emerge that human genes are one long continuum.

    2007: ENCODE supports the new paradigm much more than the old.

    2008: Scott Aaronson makes the theoretical breakthrough on Quantum Complexity of the Human Genome, which eventually nets him the Nobel Prize.

    see:

    Encyclopedia Of DNA: New Findings Challenge Established Views On Human Genome

    http://www.sciencedaily.com/releases/2007/06/070613131932.htm

    Source: NIH/National Human Genome Research Institute
    Date: June 13, 2007
    Keywords: Human Biology, Evolutionary Biology, Biology, Genes, Genetics, Virology

    Science Daily — An international research consortium just published a set of papers that promise to reshape our understanding of how the human genome functions. The findings challenge the traditional view of our genetic blueprint as a tidy collection of independent genes, pointing instead to a complex network in which genes, along with regulatory elements and other types of DNA sequences that do not code for proteins, interact in overlapping ways not yet fully understood.

    In a group paper published in the June 14 issue of Nature and in 28 companion papers published in the June issue of Genome Research, the ENCyclopedia Of DNA Elements (ENCODE) consortium, which is organized by the National Human Genome Research Institute (NHGRI), part of the National Institutes of Health (NIH), reported results of its exhaustive, four-year effort to build a parts list of all biologically functional elements in 1 percent of the human genome. Carried out by 35 groups from 80 organizations around the world, the research served as a pilot to test the feasibility of a full-scale initiative to produce a comprehensive catalog of all components of the human genome crucial for biological function.

  14. Not even right Says:

    The future of theoretical computer science lies in transforming the other sciences (math, physics, economics, biology) via computational thinking.
    I am not a computer scientist but I studied Physics. Can anyone give an example that illustrates the above?

  15. Michael Bacon Says:

    Well, at least David Deutsch sees computational thinking as critical to physics. Here is a quote from an interview he gave some time ago:

    “What I want to work on is what is fundamental: to understand the important issues of the foundations of physics, what quantum theory means, what it is telling us about the structure of reality, and so on. To do that, it turns out one has to express the laws of physics and explanations of physical processes in terms of computation and information flow. And that is true, regardless of whether you’re thinking of a computer or any physical process. The universality that exists in the world means that when you’re studying the general theory of how information can flow around inside a quantum computer, you are automatically studying the general theory of how information can flow, period. That in turn means you’re discussing physics in general, period, because any physical process can be regarded as information processing. Any kind of experiment you can think of doing—where you prepare some physical system in a certain way, according to a certain system or algorithm, and you let it do something and then measure it according to some algorithm and get a result, either a number or a yes or no—all that is information processing. The structure of the universe or of physical reality is based on information flow and the study of it amounts to the study of information flow. And the best formalism and language and theory for understanding that is the theory of computation—but computation as implemented in the deepest-known physical laws. That means the quantum theory of computation.”

  16. Anonymous Says:

    How about generating provably good meshes for finite element methods.

  17. Thomas Larsson Says:

    “It’s a crappy research paradigm to tell someone in another field other than your own that you know their craft better than they do.”

    That’s been the paradigm for much of mathematics for the last few hundred years (“the Queen and Servant of the Sciences”), and it’s worked OK for them…

    Like in “Dirac didn’t understand delta functions before Schwartz invented distributions” or “Glashow-Salam-Weinberg didn’t understand gauge theories in the 1960s because they were not fluent in fiber bundles”?

    Show me somebody who believes that and I will show you a mathematician.

  18. Anwar Shiekh Says:

    Schönhage-Strassen algorithm

    http://en.wikipedia.org/wiki/Schönhage-Strassen_algorithm

    “In 2007, Martin Fürer announced a new algorithm that should have a better asymptotical complexity”

    Martin Fürer, “Faster integer multiplication”, To appear in the STOC 2007 Proceedings. Preprint available at the author web page.
    http://www.cse.psu.edu/~furer/

  19. Paul Beame Says:

    …and Einstein could have worked out the geometry for general relativity without any help from Riemann…and string theorists have not relied at all on the work of Calabi, Yau and other algebraic geometers…

  20. Sonya Lowry Says:

    “The future of theoretical computer science lies in transforming the other sciences (math, physics, economics, biology) via computational thinking.”

    Comments like this cause me great concern. I think most of the CS community views our contribution as an enhancement to the other sciences. We do have the ability to create new ways of looking at the problems belonging to other scientific communities. But, it is not our charge to take possession of those problems.

    I currently work with astronomers extensively and have found that my most frustrating experiences are almost entirely caused when an astronomer turned programmer, without the benefit of a true CS education, displays such arrogance as to insist that he has sufficient understanding of my domain that he can better resolve the fundamentally CS problems we face.

    For example, one recently insisted he had resolved a problem I had previously refused to waste my limited time on. The problem in question obviously reduced to the halting problem, but he couldn’t see that. He simply did not have the education to begin to take on real CS problems but his arrogance would not allow him to see that.

    My fear is that we, as a community might adopt such an arrogance and so repel the very communities who would most benefit from a collaboration. In the end, we too benefit from the advances they make.

  21. Nils Runeberg Says:

    Hi Sonya,

    I disagree with you. I believe that the most profound intellectual contribution that CS can make to other science is exactly in telling other scientists that computation (just like energy, mass, time, space) is a natural phenomenon that can’t be ignored when trying to come up with a model of the real world.
    See what happens with Crypto: if one ignores the cost of computation then Shannon gives us a complete theory with “tight results.”

    Unfortunately, most of the time, CS is seen as instrumental to their work (designing programs to analyze data), and not even that, as you remarked. This is the price to pay for being the newcomers.

  22. Thorne Says:

    Hello! This is completely off topic, but I hope you will forgive the liberty. I’m just over from your website, where I was led by a google search of the terms “two to the omega uncountable”. The page that came up was your wonderful piece entitled The Pancake at the Bottom. Although I must admit that most maths pretty much turn me off, certain aspects of theoretical mathematics (although often beyond my comprehension unless someone with a love and passion for it finds a way to express the beauty of the concepts to me) are too much fun!
    I digress. The point is this: I am writing a blog post in which I intend to include the above referenced piece. Your copyright shall remain, and I’ll provide a link to your page as well. Although I do this sort of thing occasionally, I loved your piece so much that I wanted to stop by and let you know. Of course if you object, do let me know and I’ll reference your page rather than publish it at my site, but I hope that won’t be necessary.
    Thanks and feel free to stop by and see what I’ve done with you!!
    Thorne

  23. Not even right Says:

    Well, at least David Deutsch sees computational thinking as critical to physics. Here is a quote from an interview he gave some time ago:…

    I thank Michael for the explanation. So, in the future, we may have computation based on Special Relativity, or so-called Biochemical Computation. :)

  24. Not even right Says:

    Clicked on the hyperlink “40th Anniversary Conference” above and found many interesting talks by famous speakers. One was by G. Brassard. The end of his abstract says, “We propose to turn the table round, start from these theorems and possibly others, upgrade them as axioms, and ask how much of quantum mechanics they can derive.” If he meant to derive quantum mechanics from quantum computation, then he can certainly derive as much quantum mechanics from quantum computation as he wants. If he meant to derive quantum mechanics from classical computation, I see no reason why he will be successful.

  25. Michael Bacon Says:

    I would have thought that we already have computation based on Special Relativity. But what would really be cool would be quantum computation with quantum data that can traverse closed timelike curves. See a paper by one of our favorite bloggers at: http://arxiv.org/PS_cache/quant-ph/pdf/0309/0309189v3.pdf.

  26. KWRegan Says:

    Link fix for the last item (one shouldn’t use URLs with …PS.cache…) here.