Do theoretical computer scientists despise practitioners? (Answer: no, that’s crazy)

A roboticist and Shtetl-Optimized fan named Jon Groff recently emailed me the following suggestion for a blog entry:

I think a great idea for an entry would be the way that in fields like particle physics the theoreticians and experimentalists get along quite well but in computer science and robotics in particular there seems to be a great disdain for the people that actually do things from the people that like to think about them. Just thought I’d toss that out there in case you are looking for some subject matter.

After I replied (among other things, raising my virtual eyebrows over his rosy view of the current state of theoretician/experimentalist interaction in particle physics), Jon elaborated on his concerns in a subsequent email:

[T]here seems to be this attitude in CS that getting your hands dirty is unacceptable. You haven’t seen it because you sit a[t] lofty heights and I tend to think you always have. I have been pounding out code since ferrite cores. Yes, Honeywell 1648A, so I have been looking up the posterior of this issue rather than from the forehead as it were. I guess my challenge would be to find a noteworthy computer theoretician somewhere and ask him:
1) What complete, working, currently functioning systems have you designed?
2) How much of the working code did you contribute?
3) Which of these systems is still operational and in what capacity?
Or say, if the person was a famous robotics professor or something you may ask:
1) Have you ever actually ‘built’ a ‘robot’?
2) Could you, if called upon, design and build an easily tasked robot safe for home use using currently available materials and code?

So I wrote a second reply, which Jon encouraged me to turn into a blog post (kindly giving me permission to quote him).  In case it’s of interest to anyone else, my reply is below.


Dear Jon,

For whatever it’s worth, when I was an undergrad, I spent two years working as a coder for Cornell’s RoboCup robot soccer team, handling things like the goalie.  (That was an extremely valuable experience, one reason being that it taught me how badly I sucked at meeting deadlines, documenting my code, and getting my code to work with other people’s code.)   Even before that, I wrote shareware games with my friend Alex Halderman (now a famous computer security expert at U. of Michigan); we made almost $30 selling them.  And I spent several summers working on applied projects at Bell Labs, back when that was still a thing.  And by my count, I’ve written four papers that involved code I personally wrote and experiments I did (one on hypertext, one on stylometric clustering, one on Boolean function query properties, one on improved simulation of stabilizer circuits—for the last of these, the code is actually still used by others).  While this is all from the period 1994-2004 (these days, if I need any coding done, I use the extremely high-level programming language called “undergrad”), I don’t think it’s entirely true to say that I “never got my hands dirty.”
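For readers curious what that last paper involves: the heart of stabilizer-circuit simulation is a tableau of Pauli generators that gets updated gate by gate. Here is a toy sketch of that idea in Python; the class and method names are mine, and measurement (the part the paper actually improves) is omitted for brevity.

```python
# Toy tableau update for stabilizer generators (measurement omitted).
# Each generator is a Pauli string stored as x-bits, z-bits, and a sign bit.

class StabilizerSim:
    def __init__(self, n):
        self.n = n
        # Start in |0...0>: the generators are Z_0, ..., Z_{n-1}.
        self.x = [[0] * n for _ in range(n)]
        self.z = [[int(i == j) for j in range(n)] for i in range(n)]
        self.r = [0] * n  # sign bits (0 = +, 1 = -)

    def h(self, a):
        # Hadamard on qubit a: swaps X and Z, picking up a sign on Y.
        for i in range(self.n):
            self.r[i] ^= self.x[i][a] & self.z[i][a]
            self.x[i][a], self.z[i][a] = self.z[i][a], self.x[i][a]

    def s(self, a):
        # Phase gate on qubit a: X -> Y, Z -> Z.
        for i in range(self.n):
            self.r[i] ^= self.x[i][a] & self.z[i][a]
            self.z[i][a] ^= self.x[i][a]

    def cnot(self, a, b):
        # CNOT with control a, target b.
        for i in range(self.n):
            self.r[i] ^= self.x[i][a] & self.z[i][b] & (self.x[i][b] ^ self.z[i][a] ^ 1)
            self.x[i][b] ^= self.x[i][a]
            self.z[i][a] ^= self.z[i][b]

    def generators(self):
        # Render each generator as a signed Pauli string, e.g. '+XX'.
        out = []
        for i in range(self.n):
            s = '-' if self.r[i] else '+'
            for j in range(self.n):
                s += 'IXZY'[self.x[i][j] + 2 * self.z[i][j]]
            out.append(s)
        return out
```

For example, applying H then CNOT to |00⟩ turns the generators {Z₁, Z₂} into {XX, ZZ}, the stabilizers of a Bell state, using only bit operations on the tableau rather than exponentially large state vectors.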

But even if I hadn’t had any of those experiences, or other theoretical computer scientists hadn’t had analogous ones, your questions still strike me as unfair.  They’re no more fair than cornering a star coder or other practical person with questions like, “Have you ever proved a theorem?  A nontrivial theorem?  Why is BPP contained in P/poly?  What’s the cardinality of the set of Turing-degrees?”  If the coder can’t easily answer these questions, would you say it means that she has “disdain for theorists”?  (I was expecting some discussion of this converse question in your email, and was amused when I didn’t find any.)

Personally, I’d say “of course not”: maybe the coder is great at coding, doesn’t need theory very much on a day-to-day basis and doesn’t have much free time to learn it, but (all else equal) would be happy to know more.  Maybe the coder likes theory as an outsider, even has friends from her student days who are theorists, and who she’d go to if she ever did need their knowledge for her work.  Or maybe not.  Maybe she’s an asshole who looks down on anyone who doesn’t have the exact same skill-set that she does.  But I certainly couldn’t conclude that from her inability to answer basic theory questions.

I’d say just the same about theorists.  If they don’t have as much experience building robots as they should have, don’t know as much about large software projects as they should know, etc., then those are all defects to add to the long list of their other, unrelated defects.  But it would be a mistake to assume that they failed to acquire this knowledge because of disdain for practical people rather than for mundane reasons like busyness or laziness.

Indeed, it’s also possible that they respect practical people all the more, because they tried to do the things the practical people are good at, and discovered for themselves how hard they were.  Maybe they became theorists partly because of that self-discovery—that was certainly true in my case.  Maybe they’d be happy to talk to or learn from a practical roboticist like yourself, but are too shy or too nerdy to initiate the conversation.

Speaking of which: yes, let’s let bloom a thousand collaborations between theorists and practitioners!  Those are the lifeblood of science.  On the other hand, based on personal experience, I’m also sensitive to the effect where, because of pressures from funding agencies, theorists have to try to pretend their work is “practically relevant” when they’re really just trying to discover something cool, while meantime, practitioners have to pretend their work is theoretically novel or deep, when really, they’re just trying to write software that people will want to use.  I’d love to see both groups freed from this distorting influence, so that they can collaborate for real reasons rather than fake ones.

(I’ve also often remarked that, if I hadn’t gravitated to the extreme theoretical end of computer science, I think I might have gone instead to the extreme practical end, rather than to any of the points in between.  That’s because I hate the above-mentioned distorting influence: if I’m going to try to understand the ultimate limits of computation, then I should pursue that wherever it leads, even if it means studying computational models that won’t be practical for a million years.  And conversely, if I’m going to write useful software, I should throw myself 100% into that, even if it means picking an approach that’s well-understood, clunky, and reliable over an approach that’s new, interesting, elegant, and likely to fail.)

Best,
Scott

124 Responses to “Do theoretical computer scientists despise practitioners? (Answer: no, that’s crazy)”

  1. Carlo A. Says:

    A related comment is that, in computer science, everybody is someone else’s theoretician. This may be true also in other sciences, but it is a particularly pronounced phenomenon in CS, where science and engineering have coexisted since its inception.

    I realized I was someone else’s theoretician during my PhD on formal methods. Part of my work was developing, and sometimes implementing, decision procedures for fragments of temporal logic and automata that can be useful to model real-time systems. Many of my colleagues in the engineering school considered me a theoretician, since I routinely dealt with formal logic and I proved some theorems. Conversely, my friends in the science faculty who worked on complexity theory considered me a very applied guy, and were sometimes looking for “applications” of their theoretical results among my formal models of timed systems. :-)

  2. Vadim Says:

    Being a “real-world” programmer, I don’t see how all the programming knowledge in the world would help someone be a better CS theorist. The primitives needed to implement an algorithm pretty much come down to having variables, basic operators to manipulate those variables, and loops. All of the other stuff that a programmer has to know is things like programming paradigms, libraries, frameworks, the environment they’re developing for, interoperability with other code, source control, development methodologies, etc. In other words, things that are only relevant to someone who spends their time actually writing code for a particular purpose.
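The point can be made concrete: a textbook algorithm really does need nothing beyond those primitives. A minimal illustration (the function name is mine):

```python
# Binary search over a sorted list, using only the primitives mentioned
# above: variables, basic operators, and a loop. No libraries, frameworks,
# or environment-specific knowledge required.
def binary_search(a, target):
    lo, hi = 0, len(a) - 1
    while lo <= hi:
        mid = (lo + hi) // 2      # midpoint of the current search range
        if a[mid] == target:
            return mid            # found: return its index
        elif a[mid] < target:
            lo = mid + 1          # discard the lower half
        else:
            hi = mid - 1          # discard the upper half
    return -1                     # not present
```

Everything a professional would add on top of this (tests, packaging, error handling, integration) is exactly the "other stuff" that matters for shipping software but not for understanding the algorithm.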

    I have no idea how this applies to robotics theorists or whatever else, but if a complexity theorist came to me and asked me to teach them how to be a better programmer, I’d have to think they’re preparing for a career change.

  3. Scott Says:

    Carlo #1: That’s an excellent point. I’ve talked to mathematicians who consider me an “applied” person. And conversely, when I visit experimental quantum computing groups, the person who they consider their “in-house theorist” is usually someone who I’d always thought of as “that experimentalist.”

  4. Oleg S. Says:

    Speaking of collaboration bloom, I’ve got 2 questions:

    1. Is anyone aware of quantum software for simulation of biomolecules? Not the solid state stuff, but proteins and small molecules in a water environment. If quantum computers could be used to simulate enzymatic reactions, that would be great, but if BQP would offer something else and new, that would be absolutely awesome.

    2. Does anyone know a simple and clear handbook/course on how to recognize and solve NP-complete problems? That would be a must-read for computational biologists.

  5. JG Says:

    As Rabi said, “Physics is an experimental science.” I think if you asked Turing he might have said that about computing. Vadim #2, if you look at the great advances, the successful theoreticians have the ability to cross over into the experimental, which means they have a grasp of the tools to do so. A purely theoretical CS guy that has little grasp of the tools is one who has not experimented, and hence can make no significant advance. Scott’s favorite PC, the D-Wave, is a good example where there is a nice theoretical model but the application of experimental tools revealed it to be slower than its classically simulated self.

  6. wolfgang Says:

    Scott,

    I am puzzled looking at your C code – I thought you told us at one point that Basic was your language of choice ….

  7. Richard Says:

    Vadim #2

    You say “All of the other stuff that a programmer has to know are things like programming paradigms, libraries, frameworks, the environment they’re developing for, interoperability with other code, source control, development methodologies, etc.”

    However you should bear in mind that there are CS theorists who cover exactly these issues from an abstract mathematical perspective.

    The following link gives an example:

    http://www.cs.man.ac.uk/~banach/retrenchment/

    (Note – this is not my page – but rather that of a friend who happens also to be called Richard).

  8. rrtucci Says:

    So who is this Jon Groff? Has he shown any interest in quantum computing programming? Hmm.

  9. GDS Says:

    Well I don’t know about computer science, but I would take issue with Mr. Groff’s characterization of the state of physics. To say that theoreticians and experimentalists “get along quite well,” is understating it. I would say that most theoreticians would consider experimentalists to be their “best friends,” perhaps even rivaling canines for that title.

  10. Scott Says:

    Vadim #2:

      Being a “real-world” programmer, I don’t see how all the programming knowledge in the world would help someone be a better CS theorist.

    Being a theorist, I don’t think it’s quite as simple as you say. Maybe it’s easier for others, but I doubt I ever would’ve developed an intuition for algorithms, if I hadn’t first spent years programming. And two of my papers (the ones on Boolean function query properties and stabilizer circuits) came about because I was trying to code something, and doing so made me realize that there “ought” to be better algorithms than the ones I was implementing. On the other hand, it’s also possible that once you do put in time coding, you internalize habits of mind that then remain with you even years or decades after you stop coding. (Like learning to ride a bike.)

  11. Scott Says:

    JG #5: Off-topic, but I’d say that D-Wave is not at all a “good example where there is a nice theoretical model but the application of experimental tools revealed it slower than its classically simulated self.” In fact, it’s almost exactly the opposite! From the very beginning, the theorists couldn’t understand why you ought to expect a computational speedup from the approach D-Wave was taking (basically, quantum annealing without error-correction, and where the temperature is bigger than the spectral gap). The gamble D-Wave took was that the theory didn’t matter, and that there would be a speedup in practice even if it wasn’t at all clear that there was any speedup in theory.

  12. JG Says:

    Scott #11: Trying to steer back on topic, I guess I meant that from their perspective the theory was that if entanglement was experimentally shown, they had a quantum computer and hence a speedup. Everyone else’s perspective didn’t matter. I guess that raises the question: was that an example of ‘theoreticians’ with really bad experimental acumen, or ‘experimenters’ with a disdain for theory, as Scott says?

  13. Sid K Says:

    I know that a single example doesn’t really mean much, but: Dave Bacon is someone who was a high-end CS theorist and is now a high-end computer programmer. I’m sure he didn’t switch because he was disdainful of the practical blokes.

  14. vzn Says:

    no question this bias/ dichotomy exists in TCS, can attest to it via 1st hand experience and have collected a lot of items on the subject (via participation on tcs stackexchange & blogging etc, and the meta forums sometimes esp reveal this), some reflection on this in my blog… but its the same in other scientific fields also eg mathematics or even in engineering & probably not any worse in CS (and actually/ maybe/ arguably *far* less worse/ exaggerated in CS). its really like two teams which compete with each other it seems but they are quite complementary. its vaguely similar to the developer/ tester (QA) difference in software engineering. like teenage girls so-called “frenemies” eh? :p
    maybe more detailed comments later….

  15. Vadim Says:

    JG, Richard, & Scott: good points and I should clarify that when I read Scott’s post and what Jon wrote, I took it to be about how theorists should expend the time and effort to learn programming in-depth (at least in-depth enough to design “complete, working, currently functioning systems”), with my point being that learning it in-depth is less about programming and more about tangential issues. I imagine that everyone doing CS research was, at the very least, exposed to a decent amount of code-writing while they were in school, and I’m sure they are better theorists for it. But, if every aeronautical engineer was also an active pilot and airplane mechanic, when would they have time to design planes?

  16. Scott Says:

    JG #12:

      I guess I meant that from their perspective the theory was that if entanglement was experimentally shown they had a quantum computer and hence speedup

    If that were actually their theory (of course the reality is a bit more complicated…), then it would’ve been a wrong theory, as any quantum computing theorist could quickly tell you. We know several examples of models (e.g., matchgates, stabilizer circuits) that generate plenty of entanglement, but no quantum speedup. So again, that wouldn’t have been an example of a failure of theory, but of a failure that would’ve come from ignoring theory.

  17. Bram Cohen Says:

    As someone who works on projects which are both of immediate practical utility and which have interesting theoretical ramifications, I have to say that problems where this happens are very, very rare, and to work on them I have to go scavenging in the wild for something where my combination of skills brings about a big advantage, and then I often wind up working on bizarre things like reliability and congestion control which are lacking in theoretical work partly because they’re just plain weird and ugly.

  18. JG Says:

    Vadim #15 I would be interested to know a bit more about your background and from whence you derived your outlook. Sad to say, it’s not the case. Most CS grads write very little (relatively) code in the process; if they teach, they resort to Scott’s latest HL language, Undergrad (and the fact that that joke exists and is kinda funny is telling too, eh?). In fact, most pilots that have a degree have one in Aeronautical Engineering, perhaps due to a lack of an available ‘Pilotin’ degree, but in that field I doubt you would *ever* find *any* separation between the theorists and experimenters; in fact I don’t think they make the distinction in any way. Interesting survey though: how many AEs have never flown anything themselves? I’d say 0, because at least when I was at Riddle long ago you had to fly to get the degree. VZN #14: what can I say, I like violent agreement, but I believe the discrimination is more severe, especially from the lettered toward the autodidactic. Example: you can get a job in the industry without a degree, except if it’s a teaching position; then you get flat disdained. If an autodidact were to prepare a paper with relatively revolutionary concepts that had been properly cited and prepared, it would never see the light of day in any respectable journal without parasitically attaching to a published individual. I have nothing to back this up except personal experience, though. All this makes me sound like a bitter troll, I’m sure, but believe me, there are many worse things than this I deal with, so it’s not a big deal. This has been going on throughout history, but I worry that it’s causing a rift now that’s retarding computer science and especially robotics. Robotics because, like Aeronautical Engineering, I don’t think you can separate the theory from the experiment in any substantial fashion if you expect to achieve something like a conscious machine or even usable, ubiquitous, general-purpose robotics.

  19. Darrell Burgan Says:

    At last a topic I can participate in fully. :-)

    I think that in the vast field of computer science, the theoretical and the practical meet up every single day at the architectural level. This is an area where people like me have to understand the latest from academia as well as understand how to take the latest theories and put them into practice. I really don’t see there being any conflict between them. Pure theory is as critically important as pure engineering; without both, one simply cannot design complex or large scale systems.

    Examples? Who designs the architecture for modern supercomputers? Who designs the architecture for globally spanning information-intensive systems (think Google or Netflix)? How can these architectures be designed without considering both realms? They cannot.

    As to friction between the two, I have personally seen or heard disdain coming from both directions. I have personally seen brilliant PhD folks who couldn’t code their way out of a wet paper bag, and likewise seen brilliant coders who run screaming from the sight of mathematics. And plenty of scorn heaped upon the opposite side.

    I think a measure of smack talk is all in good fun as long as it is with a smile. We should all remember, though, that neither realm exists without the other.

  20. JG Says:

    Darrell #19 Glad you found this accessible. I have to say though, no. I have spanned an arc of 34 years and dozens upon dozens of locations and tasks and people and from the lowest coder to the loftiest gooroo there are just none who are really proficient that lack a firm theoretical grounding. I suppose there are some idiot savants somewhere that could spin out quicksorts and codecs and multithreaded cluster code without having any idea how or what they really did but I tend to think they are only in Hollywood. No, there is a direct correlation between theoretical knowledge and application in code and any developer who decries such acumen, well, she faces subdued whispers and silent terror from the keepers of the codebase.
    But among management that ceases. Management is encouraged to delegate to maximize their valuable time. Likewise I believe a tenured prof is also so encouraged, perhaps less overtly. Perhaps some believe, like Scott, that they have paid their dues and are tired of shoveling coal. That’s all well and good but programmers are not interchangeable any more than theoreticians. There are orders of magnitude difference in skill level between coders. Could it be that by using the ‘Undergrad’ language you are losing the syntactic sugar that could otherwise breathe the life of consciousness into otherwise dead code?

  21. vzn Says:

    more thoughts after collecting them… a great crosscutting expert/ authority to study in this area is sedgewick, author of several books on algorithms incl undergrad [learned TCS starting with his] but also very advanced book on Analysis of Algorithms using some very heavy-duty math.

    see also his talk putting the science back in computer science

    from this lecture
    “The algorithm designer who does not run experiments risks becoming lost in abstraction.” —Sedgewick

    another area where there is increasing cross pollination is empirical/experimental analysis of algorithms/ CS etc. & am expecting this to increase very substantially over the coming decades.

    as for quantum speedup, there is no existing theory that says it is possible or exists in general, its a core open problem of theory whether BQP =? BPP =? P, so am missing SAs point on that… is this basically blaming dwave for not outdoing cutting edge theory/ conquering/ outpacing theoretical terra incognita with experiment? :(

  22. RubeRad Says:

    Sounds like Jon Groff is a practitioner that despises theoreticians…

    “(these days, if I need any coding done, I use the extremely high-level programming language called “undergrad”)”

    That’s really funny! I’m going to steal that. Except, not being in academia I guess I’ll have to say “intern”.

  23. Vitruvius Says:

    Speaking as one who learned how to type by writing assembler on an 029 card punch for an 1800, one who has been programming for a living since 1971, one who has been programming some variant of Unix since 1974, and one who has an undergraduate electrical engineering degree and a graduate degree in theoretical computing science from that decade (from the first university in Canada with a department of computing or computer science, formed over 50 years ago, said department which remains one of the premier thereto); as one who has been working hand in hand with theoretical computing science professors (some from said department, some emeritus, some who are also principal coders of our software), guild software developers (including some 20 year old recent graduates), and professional thermodynamics engineers, for the last twenty years, both theorizing and coding almost every day; as one who is the principal architect and a principal coder of our globally networked thermodynamics engineering software, now selected for and in use by some DJIA transnationals for all their pressure-safety critical plant operations world-wide; and as one who presumably is now effectively qualified to meet or exceed Groff’s provisos (and who loves Descartian run-on sentences), I’d just like to note that: the way Groff over-generalizes and projects his personal experience into epidemic negative conclusions, he should become a so-called writer for The New York Times or The Guardian.

    Moving on to some of Scott’s opening remarks, which are as almost always brilliant, I’d like to heartily concur with his vision of letting bloom a thousand collaborations between theorists and practitioners. In my first undergraduate year, we had an engineering practicum course taught by the dean, and he always stressed that if you aren’t using the best solution for the problem, you aren’t doing proper engineering, and it doesn’t matter whether that solution is highly theoretical or a heavy handed but highly effective hack. (Cost, on the other hand, always matters, he was adamant about that.) Ergo, regardless of what a theoretician who disdains guild-smiths or a guild-smith who disdains theoreticians is actually doing, they certainly aren’t doing proper engineering. Maybe they are doing something else important, but I can only speak to that which I know.

    I can see though, Scott, why you focused on theory: $30 for your first commercial endeavour? Why, I made over three times that on my first effort! But the real kicker came about a year later. My colleague (since 1958) and I were young libertarians at the time (I said were), and one of our fellow philosophers was a salesman for Burroughs. They had these L-series accounting mini-computers, and they had about 30 customers with payroll running on said machines. The source-code was long since lost, the programs were distributed on paper tape, and the tax-table updates had to be done twice a year. So we said we would do the job. We read the paper tapes into a 360/67, wrote a compiler for the L-series machines in Algol, reverse engineered and re-wrote the tax algorithm and tables in said compiler’s language, and then re-punched the tapes with the new code tacked on at the end, preceded by a “position to absolute location in memory” instruction on the paper tape to overwrite the old tax-table portion of the code and data that would have been just read in. That overwriting may have been a heavy-handed hack, writing a compiler in Algol may have been pretty theoretical (at least for us, in those days), and it may have taken us two weeks the first time, but six months later we entered the new tax tables, ran the compiler 30-odd times from a shell script, and punched the new tapes and printed the corresponding mailing labels in an afternoon. At $300 a pop, times over 30 pops, we left with c. $5,000 each, for an afternoon’s work, at the age of 18. In the end it was the lucre that caused me to turn down a doctoral position at the University of Toronto, after I was accepted, yet my colleague went on to study there with Cook and publish some notable results in the field, and later become chairman of his department. And we continue to collaborate to this day, almost every day.

    So: theory or practice? Both, please. Without both, I wouldn’t be able to celebrate the Brooklyn Funk Essential’s monetary classic, which I’m afraid I shan’t link to here because it’s just too NSFW for a nice Shtetl like this ;-)

  24. Bram Cohen Says:

    On the opposite side, I’m a firm believer that you don’t really know how elegant an algorithm is until you’ve walked through an implementation of it in detail, and when I came up with walksat the immediate motivation was that the implementation (which I was actually doing) was much nicer than gsat, so there’s definitely something to be said for viewing implementation not just as something to be done after devising an algorithm but an important part of understanding and gaining insight into it, even if you know the asymptotics in advance.
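The WalkSAT loop Bram mentions is short enough to sketch in full. This is a simplified pedagogical version, not his original implementation; the parameter defaults and the way the greedy step scores flips are my choices.

```python
import random

# Simplified WalkSAT-style local search for CNF satisfiability.
# A clause is a list of nonzero ints: +v means variable v-1, -v its negation.
def walksat(clauses, n_vars, p=0.5, max_flips=10000, seed=0):
    rng = random.Random(seed)
    assign = [rng.choice([False, True]) for _ in range(n_vars)]

    def sat(clause):
        # A clause is satisfied if any of its literals is true.
        return any((lit > 0) == assign[abs(lit) - 1] for lit in clause)

    for _ in range(max_flips):
        unsat = [c for c in clauses if not sat(c)]
        if not unsat:
            return assign  # satisfying assignment found
        clause = rng.choice(unsat)  # focus on one unsatisfied clause
        if rng.random() < p:
            # Random-walk move: flip a random variable in the clause.
            v = abs(rng.choice(clause)) - 1
        else:
            # Greedy move: flip the variable whose flip satisfies
            # the most clauses overall (flip, score, flip back).
            def score(lit):
                w = abs(lit) - 1
                assign[w] = not assign[w]
                s = sum(sat(c) for c in clauses)
                assign[w] = not assign[w]
                return s
            v = abs(max(clause, key=score)) - 1
        assign[v] = not assign[v]
    return None  # gave up within the flip budget
```

Even at this size, the implementation makes the algorithm's elegance (or lack of it) tangible: the whole heuristic is "pick a broken clause, fix something in it, occasionally at random."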

  25. James Cross Says:

    It is somewhat amusing what you think to be practical experience – coding for the RoboCup and shareware.

    Try a few decades in IT in a big corporation.

    Upper management makes technology decisions and sends them down. A few months later the decisions change.

    Business dithers for months on requirements then wants the software developed in weeks. The “system” is thrown together by developers who barely understand the requirements.

    Testers test without understanding the requirements. Most of the defects are environmental problems, bad data, or tester error. The real defects are missed.

    Deployments in the middle of the night with war rooms running for days trying to clean up the mess.

  26. Richard Says:

    @James #25

    To which I could add the saying that prevailed during my time in industry:

    Stages of a project

    1 Wild Enthusiasm
    2 Disillusionment
    3 Panic
    4 Search for the guilty
    5 Punish the Innocent
    6 Reward the uninvolved.

    You don’t know the pressure of the situation if you haven’t blown $10000 worth of (non-erasable) PROMs in the middle of the night, with the customer due to arrive at 8 the next morning, and found that the microcode built into them didn’t work!

  27. Scott Says:

    vzn #21:

      as for quantum speedup, there is no existing theory that says it is possible or exists in general, its a core open problem of theory whether BQP =? BPP =? P, so am missing SAs point on that… is this basically blaming dwave for not outdoing cutting edge theory/ conquering/ outpacing theoretical terra incognita with experiment?

    The mistake you make is to confuse knowledge with proven theorems. There are many different forms that theoretical knowledge can take: as a small example, if there’s no proof that approach A won’t work, and there is a proof that approach B won’t work, then approach A is preferable, even in the absence of proof that approach A will work.

    If you had a business plan that somehow depended on the Riemann Hypothesis being true, then I doubt even the most punctilious mathematician would bat an eye. It’s obvious that in business you can take justified gambles, and that RH is (to put it mildly) a justified gamble. Likewise, if you had a business plan that depended on factoring being classically hard … hey, wait, there are businesses that depend on precisely that conjecture, and while a few mathematicians and computer scientists raise doubts about them, on the whole there’s extremely strong support. If your business plan was to build universal quantum computers, and there was no doubt you could succeed, and the only “doubt” was about whether BPP=BQP, I likewise don’t think any reasonable expert would be deterred. They’d probably want to join your effort themselves.

    But let’s suppose, by contrast, that your business plan depended on practically-significant quantum speedups being possible, not merely using quantum adiabatic optimization (which itself is a wide-open conjecture), but using a noisy, stoquastic version of adiabatic optimization where the temperature is greater than the gap, and where it seems like everything you’re doing ought to be classically simulable using a quantum Monte Carlo algorithm—regardless of whether or not you’re getting any global entanglement (which, again, is an open question). In that case, one could say, not merely that there’s no rigorous proof that you’re going to see a speedup compared to the best classical algorithms, but that there are very good theoretical reasons to think you’re not going to see one. Do you understand the difference?

  28. Vadim Says:

    JG #18: It may well be that I’m colored by my experience, which is probably different from that of most other Shtetl-Optimized readers. I’m a non-traditional, adult (in my 30’s) CS undergrad student in BU’s evening program (getting close to finishing, woohoo). The program focuses on applied CS and is geared towards students looking for non-academic employment. Other than the requisite programming classes, I’ve had to do a good amount of programming in other classes. It would be impossible to get through everything without some programming proficiency. It’s a far cry from the software development I do professionally, mostly in the sense that all of the organization stuff mentioned by Richard (#26) is thankfully absent from my classwork. Their graduate program (which also has a non-academic employment focus) is similar; you have to complete programming pre-reqs before being admitted. Maybe top CS schools focusing on preparing students for academia don’t teach as much programming, but I really wouldn’t know.

  29. Scott Says:

    James Cross #25:

      It is somewhat amusing what you think to be practical experience – coding for the RoboCup and shareware.

      Try a few decades in IT in a big corporation.

      Upper management makes technology decisions and sends them down. A few months later the decisions change … [more tales of woe follow]

    Dude, I have nothing but sympathy for any software developer in the situation you describe. But at the same time, it’s not obvious that a few decades of it would make me a better theoretical computer scientist, any more than a few decades of being tortured on a rack. ;-) Dilbert-like bureaucracy is something that I do have experience with (yes, we have it in academia too), and I don’t see how it made me a better person in any way.

  30. Scott Says:

    Bram #24:

      I’m a firm believer that you don’t really know how elegant an algorithm is until you’ve walked through an implementation of it in detail

    I violently agree. Indeed, that’s a good way to describe my experience with my stabilizer circuits and Boolean function query properties papers: I found faster algorithms by trying to implement the known algorithms and realizing, in the course of doing that, that the existing algorithms were inelegant, and because of being inelegant, were probably theoretically suboptimal as well.
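
    For the curious, the tableau machinery at the heart of CHP-style stabilizer simulation is compact enough to sketch. The snippet below is a minimal illustration of the Clifford-gate updates only (Hadamard, phase, CNOT); the full algorithm also tracks destabilizer rows and handles measurement, which this sketch omits.

```python
# Minimal stabilizer-tableau sketch: Clifford gates only, no measurement.
# Each row stores a Pauli product: x=1,z=0 -> X; x=0,z=1 -> Z; x=1,z=1 -> Y,
# with an overall sign (-1)^r.

def new_tableau(n):
    """Tableau for |0...0>, whose stabilizer group is generated by Z_1..Z_n."""
    rows = []
    for i in range(n):
        row = {"x": [0] * n, "z": [0] * n, "r": 0}
        row["z"][i] = 1
        rows.append(row)
    return rows

def hadamard(tab, a):
    for row in tab:
        row["r"] ^= row["x"][a] & row["z"][a]
        row["x"][a], row["z"][a] = row["z"][a], row["x"][a]

def phase(tab, a):  # the S gate
    for row in tab:
        row["r"] ^= row["x"][a] & row["z"][a]
        row["z"][a] ^= row["x"][a]

def cnot(tab, a, b):  # control a, target b
    for row in tab:
        row["r"] ^= row["x"][a] & row["z"][b] & (row["x"][b] ^ row["z"][a] ^ 1)
        row["x"][b] ^= row["x"][a]
        row["z"][a] ^= row["z"][b]

def pauli_string(row):
    return "".join("IXZY"[x + 2 * z] for x, z in zip(row["x"], row["z"]))

# Demo: H on qubit 0, then CNOT(0,1), turns |00> into a Bell state,
# whose stabilizer generators are XX and ZZ.
tab = new_tableau(2)
hadamard(tab, 0)
cnot(tab, 0, 1)
print([("-" if row["r"] else "+") + pauli_string(row) for row in tab])
```

    Everything is bit-twiddling on a binary matrix, which is why n-qubit Clifford circuits simulate in polynomial rather than exponential time.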

  31. Scott Says:

    JG #18:

      if they teach they resort to Scott’s latest HL language Undergrad (and the fact that that joke exists and is kinda funny is telling too eh?)

    I should point out that the pressure on me to use the “undergrad” language doesn’t merely come from laziness or lack of time on my part (though those are certainly contributors). It comes, even more, from the large number of smart, enthusiastic undergrads knocking on my door looking for projects, who often have great coding skills but little theory background. If I did all my coding myself, what projects would I give them?

  32. vzn Says:

    am assuming your last question directly at me was rhetorical but will wander out on a limb & defy that anyway.
    your moderate, at-times-extreme hostility to dwave has gone on for many years and seems not to have been tempered much even by their major accomplishments esp in recent times. “qm gadfly”!
    my full admiration and acknowledgement that you are a world-class expert in this area … of erecting straw men & knocking them down….
    clearly Dwave has TWO major goals
    (a) build a QM computer
    (b) achieve “SPEEDUP”
    the devil is in the details. now, playing devil’s advocate (esp wrt this forum history). one may quibble over details but they are farther along on a pragmatic/practical basis than *anyone* in the entire *world*, *by*far*. they are succeeding to “various degrees” on *both* goals. they have passed major milestones. frankly their accomplishments over 10 years are *extraordinary* esp in comparison to other labs. one can see how your blog tends to mix up (a) and (b) esp as time has progressed and the achievement of (a), once nothing but a dream like a mirage in the far distance, becomes unequivocally realized, now instead going by in the rear-view mirror!
    clearly (b) is FAR more ambitious than (a) which itself is *wildly*ambitious* esp compared to the level of engineering prowess/ achievement exhibited elsewhere in comparison. DWave now has a *working*qm*computer*. that is a phenomenal goal achieved, realized.
    this blog & your continual “mosquito-ankle-biting” of their brilliant work exhibits “moving the goalposts” behavior which has long been noticed in the AI field, but is now being carried into QM/physics realm by your eminence.
    seems your blog/ philosophy/ ideology indeed does exhibit some of the broken splitting wrt theory/practice esp in the Dwave case that JG is calling out.

  33. fred Says:

    The more time you spend working in the real world on actual software *products* (i.e. not toys and prototypes, but code that’s used by dozens of banks), the more you realize that CS-level algorithm design sits pretty low on the list of what actually matters in the end.
    Sure, you don’t want to be using an exponential algorithm instead of a polynomial method, but that’s the obvious and pleasant part of the job (and a rare one).
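
    A toy instance of that exponential-vs-polynomial contrast (the function and numbers are arbitrary): the same quantity computed once with an exponentially sized call tree, and once in linear time.

```python
# Naive recursion: the call tree for fib_naive(n) has ~phi^n nodes.
calls = 0

def fib_naive(n):
    global calls
    calls += 1
    if n < 2:
        return n
    return fib_naive(n - 1) + fib_naive(n - 2)

# Linear-time loop: same answer, n iterations.
def fib_linear(n):
    a, b = 0, 1
    for _ in range(n):
        a, b = b, a + b
    return a

print(fib_naive(20), calls)    # 6765, via more than 20,000 recursive calls
print(fib_linear(20))          # 6765, via 20 loop iterations
```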

    The real headaches for a software engineer are:

    * understanding what it is you’re trying to build. Gathering good requirements and understanding your customers’ needs is extremely difficult. If you do get requirements, they’re often ambiguous or not self-consistent.

    * requirements will be constantly changing, so it’s important to come up with a solution that will give you enough elbow room to adapt easily at low cost (without having to re-architect), so there are major performance vs adaptability trade-offs to consider.

    * it always pays to stick to solutions that are as simple as possible. Solutions that work but are too complex will be huge maintenance cost burdens for many years. Losing some performance to increase simplicity is almost always worth it.

    * sometimes the requirements are so complex that just getting the expected results is difficult. The biggest optimization is going from “not working” to “working” :)
    In those situations, your best bet is not to rush to implement, but to focus first on building a robust automated testing/benchmark framework that will exercise old and new requirements, giving you a safety net to catch any regression when you need to refactor your code (to judge the merits of various implementations). At that point the implementation itself doesn’t matter as much as the actual contract implied by the requirements (that’s where the value/intelligence is).

    * once the system is working as expected, that’s just the best-case scenario! A real-world system will often fail in a nearly infinite number of unexpected ways. Managing and designing for this will take at least as many resources as the best-case scenario.
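
    The testing-framework-as-contract idea in the list above can be sketched in a few lines. Everything here is hypothetical: the pricing rule, the function names, and the cases are invented purely to show requirements recorded as a regression table that any re-implementation must pass.

```python
# Hypothetical example: the "contract" is a table of (input, expected) cases
# accumulated as requirements arrive; any implementation must pass all of them.

REQUIREMENT_CASES = [
    # ((quantity, unit_price), expected_total)
    ((10, 2.0), 20.0),       # original requirement: total = qty * price
    ((100, 2.0), 180.0),     # later requirement: 10% off at qty >= 100
    ((0, 5.0), 0.0),         # edge case added after a production bug
]

def price_order_v1(qty, unit_price):
    # First implementation, written before the bulk-discount requirement.
    return qty * unit_price

def price_order_v2(qty, unit_price):
    # Refactored implementation covering the newer requirement.
    total = qty * unit_price
    if qty >= 100:
        total *= 0.9
    return total

def regression_suite(impl):
    """Run an implementation against every recorded case; return failures."""
    failures = []
    for args, expected in REQUIREMENT_CASES:
        got = impl(*args)
        if abs(got - expected) > 1e-9:
            failures.append((args, expected, got))
    return failures

print(len(regression_suite(price_order_v1)))  # 1: fails the bulk-discount case
print(len(regression_suite(price_order_v2)))  # 0: passes the whole contract
```

    Swapping in a new implementation is then just a matter of passing it to regression_suite and checking for an empty failure list.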

  34. Richard Says:

    @vzn #32

    “moving the goalposts” behavior which has long been noticed in the AI field, but is now being carried into QM/physics realm

    In the AI case it isn’t so much a question of moving the goalposts as of finding out where they were all along.

    It is rather like climbing a convex hill – you keep getting an illusory idea of where the top is – but when you get there you can see a bit farther and you realise you still have a way to go.

    However with D-Wave it is quite clear that they are focussed on doing the things that will get them funding from people who don’t understand the problem.

  35. fred Says:

    vzn #32

    In the case of D-Wave, it’s great to see Scott sticking his neck out to voice his opinion clearly – he has nothing to gain from it (no investors to satisfy and nothing to sell), except keeping things real, and he’s demonstrated many times that he’s both open minded and skeptical (as every great scientist is).

    Besides, capitalism and science aren’t incompatible. Hype should always be balanced with a healthy dose of open criticism.

  36. JG Says:

    vzn #32, whoa, that’s a lot there. I agree with much of it, but I have to take issue with the ‘brilliance’ of their work. Supercooling a bunch of Josephson-junction SQUIDs and outcoupling them with high-res A/D converters is not especially brilliant. Scott has taken a lot of heat for his ridicule of them as a theorist attacking those who Just Do It, which I think is why he published this and made me look like a totally ignorant troll. He was addressing the issue by showing a grubby pragmatist trying to call him out, I guess, like everyone that hounds him about D-Wave. I think if I was going to do a ‘Good Will Hunting’ on him, that’s what I’d come up with as the base motivation for him ‘liking’ my idea.
    Rube #22: Negative. Couldn’t have done anything without theory. I stand on the shoulders of giants and pick up their dandruff flakes: http://neocoretechs.com/
    Any practitioner of CS that eschews theory is flat out STUPID. C’mon, you’re creating from electrons and pure thought; there is nothing *but* theory.
    vzn #21, Of course Sedgewick and Knuth, say no more, but they need some replacements now. Scott claims they got all the low-hanging fruit. I say they climbed on wobbly boxes to get it, and now we have the ‘hi-lift’ of the digital sum total of mankind’s knowledge available via a planetary network, so that ripe old fruit way up there, why can’t we pick it?
    Vitruvius #23 Guardian
    Scott #30. If there are any CS profs without a similar story, put them on probation until they do it.
    Scott #31. C’mon Scott, challenge them. Don’t just leave them to shovel your coal now. What you would give them is your initial release, and the project is to refine, optimize, and describe the theoretical and practical reasons behind their choices.
    All this polarization clouds the issue, though. My point is that great minds, geniuses, can take the data and memory pipeline, integrate it from theory to experiment, see the way clear to do it, and then manifest it in reality using an instrumentality. When you have a cabal of academic leaders who testify to their charges that it’s ok to cut out the parts that they deem beneath themselves, then you create an elitist, almost Luddite, hegemony that I believe has retarded computer science, as evidenced by our lack of even the ability to define the parameters of a conscious program, let alone write one. That was my main point. Ok, let’s stop talking about *me* now, I’m getting uncomfortable over here.

  37. fred Says:

    Theorist or practitioner, we should all recognize that we’re blessed to be in such a rich and dynamic field, spanning everything from esoteric cutting-edge mathematical mysteries to the latest exciting technical revolutions.
    It often takes nothing more than a $500 laptop to get things going in a major way.

  38. Scott Says:

    JG #36:

      Cmon Scott, challenge them. Dont just leave them to shovel your coal now.

    I’ve never once given out a coding project that didn’t end up involving significant decisions on the student’s part and discussions about theory and high-level goals (when it inevitably turned out that I hadn’t fully thought through what the program should do or which questions we wanted to know the answers to).

  39. JG Says:

    Fred #37. Actually, Fred, that’s just Windoze; you can do it for <$100 with today’s SBCs using Unix. Electronics is the one field that’s surpassed our ‘Star Trek’ metrics. Funny thing: the theorists said you couldn’t make a usable gate below 7 microns, which is true in pure substrate, but the engineers added some dopants, shrugged, and kept going on down. The problem is the software for our ST metric. I think the problem with D-Wave is the software; based on what I see from screenshots in ads, there’s nothing new there. Looks like some Python derivative in the code I’ve seen. Is it possible that the D-Wave *does* work, but that the old, outdated, procedural/OO methods of programming and interacting with it are falling short somehow, causing that which happens at the collapse of the wave function to be misinterpreted? Could it be that they were taught that that part takes care of itself and that’s just coal shoveling, so just use the Undergrad language, and hence no workee?

  40. Scott Says:

    JG #39: No, I don’t think D-Wave’s problems are with the interface. On the contrary, experimentalists who I trust, and who are familiar with it, tell me that they think D-Wave did an extremely good job with systems integration, software, and other stuff like that. Good enough that, at this point, we can say with some confidence that the issue is not with the power steering, the dashboard, or the leather seats: it’s with the engine. I.e., it’s with the fact that stoquastic quantum annealing, with no error-correction and temperature larger than the gap, does not appear to provide a speedup.

  41. JG Says:

    At the risk of being grubby, let me ask simple, pragmatic CS robotics questions:
    1) Why has academia not produced a usable version of a robot operating system upon which robotics can be built? Partial answer: they have; it’s just that it requires 2GB of RAM, multi-gigahertz multi-core processors, and associated tons of power, as recommended by the ‘answer folks’. Commercial robotics seems to use it more as an add-on to interface with 3rd-party sensors. It’s way too complicated and janky for most hobbyists to even attempt; why is it so?
    2) Why is the market for commercial robotics primarily universities, even in societies not prone to litigious frenzy?
    3) There are some pretty amazing collections of hardware and sensors running around, like the Boston Dynamics/Google offerings, CMU CHIMP, the driverless cars, the MANTIS guns, etc. Storage and speed and sensor bandwidth don’t seem to be the limitations. Why is the robotics software lagging so far behind these amazing engineering feats? It seems like the theorists are missing something; how can we help them un-miss it?

  42. JG Says:

    Scott #40 What I tried to say was that yes, they built all this nice power steering, leather seats, consoles, for an ion drive engine that only works in deep space. The engine ‘works’ but the rest of the car and the entire environment is wrong for it, yet oh so nice and right for the designers that want to drive on the roads they were taught and that accommodate the vehicle. To use this device requires an entire rethinking of the problem itself and tools and processes that bear no resemblance to what they created any more than a solar sail on a Cadillac.

  43. Vitruvius Says:

    Yes, Fred, managing and designing for the fact that real-world systems will often fail in a nearly infinite number of unexpected ways does take the most resources in any project that is to be successful in the long term. Consider Minix 3, for example, as a starting point. But I expect anyone other than those in the most junior positions to take that as the prime directive without further ado, otherwise they are let go.

    Now, it seems to me, James Cross, that Groff and Scott started a discussion here about theoretical v. practical computing science and the practitioners thereof. I fail to see what “IT in a big corporation” has to do with computing science in any fashion whatsoever. When I read your categorizations, and those of Richard, I can only advise you to tell your clients or employers to “bite me”.

    Oh sure, you lose a few bad ones that way, I’ve lost a half-dozen that way myself over the decades, and the first derivative with respect to time along the path to success is a bit smaller, but the end result is both more stable and more satisfying. Just yesterday afternoon, for example, I had a Fortune 100 company tell me they need a result before Tuesday, the code for which has not yet been developed or tested. Since I had already worked more than the requisite 160 hours for August, and since I consider it unacceptably presumptuous of them to assume that I would be willing to work for them over a statutory holiday (without even asking first), I simply decided then and there to do other things, and then told them that I already had other commitments previously scheduled for today through Monday (which was not a lie, as of the moment before I said it) and so I reported to them: request denied.

    You see, that’s the thing about the slow path to stable success: once you get there, they can’t do without you, so they have to accept your conditions, and thus you can impose whatever conditions are necessary to ensure that the job is done properly. And, since by then you can afford to retire anyway, if they do threaten to terminate on the grounds that they demand failure, you can with a smile tell them: go ahead, make my day.

    The only problem with my methodology is that, along the way to success, one has to delay a lot of gratification in order to be able to fire the customers and employers one doesn’t approve of. Apparently some folks lack such self control, acquiring spouses, offspring, fast cars, and large televisions at an appalling rate. Unfortunately, there’s nothing I can do to help people like that dig their graves less quickly; I don’t have access to a witless protection programme.

    I was explaining the above to a client a few years ago, and he commented that yeah, but “corporate America” is crazy (with my apologies to all non-crazy Americans, corporate or otherwise). When I pointed out that, yeah, but he is a vice-president of a very large concern in corporate America, and thus that it is his fault as much as anyone’s, I got a frown. He got a smile in return. Clearly, if forced to choose between not being insufferable and being happy, I’ll choose the latter ;-)

    In closing, since the topic has reared its ugly head again, I do think, like JG, that it’s possible that D-Wave does work, it’s just that I think it’s possible with the same probability that I think that astrology works, which, as Sean Carroll notes with respect to astrology, is: none.

  44. Scott Says:

    JG #42: What is your evidence that the ion drive engine “works in deep space”?

    Also, JG #41: I’m having a hard time drawing as clear a line as you do from the market for consumer robotics not being as large as you would like, to the fault for that lying with … academic theorists?? If the problem were the lack of a lightweight, hobbyist-friendly operating system for robotics, then surely some startup company could develop such an OS, without waiting for any academics to give them permission? Is it possible that the problem is that, with a few exceptions (e.g., the Roomba), the applications of consumer robotics that are feasible with current technology just aren’t as large as some robotics enthusiasts would like?

    (As for self-driving cars, many academics have been pushing extremely hard on that, including my late colleague Seth Teller. I’d guess that it’s less a question of if than of when all the remaining technical as well as regulatory problems are going to get solved.)

  45. JG Says:

    Scott #44, hmm, I smell a trap here, but ok: my evidence is that the prototypes built, when placed in a high-vacuum chamber that approximates our understanding of deep space, exhibit thrust that will propel the vehicle according to classical Newtonian physics. Have we been to deep space to make sure? No. Do we know enough about shallow space to assume? Yes. Just like quantum computing, we have enough assumptions to build theories and experiments.
    Scott, where is the startup to get its personnel? Universities. Very few startups are started up by a team composed entirely of autodidacts. It’s chicken and egg, and I think academia is chicken.
    “the applications of consumer robotics that are feasible with current technology just aren’t as large as some robotics enthusiasts would like?”
    Grrph! That’s what I’m saying: current technology sucks and needs to be better, specifically robotics theory, theories of machine consciousness, software theory and application. Like I said, the ATLAS robot is an amazing, 300 lb, invincible baby. Not just ‘enthusiasts’ but everyone! What about Rosie the Robot on the Jetsons? I pose the Rosie Challenge: build us a robot maid that cleans our poop, takes out our trash, drives to the store, goes into Fukushima and turns a valve. Make it affordable by a middle-class family.
    We are getting older. The Japanese are terrified because by 2150 or something there will be 1 old Japanese lady left and a zillion robots at current birth rates. If we don’t want to create an elderly, disadvantaged underclass in the near future, we need robotics that can take care of grandma all the time and in any way.

  46. Scott Says:

    JG #45: OK, I wasn’t really asking about the ion drive engine; I was asking about D-Wave. What I meant was: what is your evidence that D-Wave’s current devices would yield a speedup compared to the best algorithms running on conventional computers, if only the software interfacing with the D-Wave machine were somehow better? (And in what way would it need to be better?)

  47. fred Says:

    Scott #44

    “the applications of consumer robotics that are feasible with current technology just aren’t as large as some robotics enthusiasts would like?”

    That’s an interesting point.

    The problem with “traditional” robotics is that trying to imitate humans/animals is pointless without good AI (it sure looks cool though).
    And working on AI certainly doesn’t require the hassle of having to deal with complicated/expensive hardware – you can just as well simulate the robot’s physical interactions in a computer “virtual world”.

    Recently the convergence of tiny powerful electrical motors and cheap processing boards has resulted in really impressive advances in dynamic system control.
    E.g. a self-balancing cube
    https://www.youtube.com/watch?v=n_6p-1J551Y

    and flying drone tech has become a much more viable avenue for the enthusiast (cheap and easy)
    https://www.youtube.com/watch?v=w2itwFJCgFQ

  48. anonymous Says:

    When I went to MIT as an undergrad, I was pissed off that MIT had no course, not even during IAP, on fixing a fricking car. This type of crap would never happen in a German university. Why didn’t MIT or Harvard ever offer Click and Clack, the Tappet Brothers, a professorship? The guys are obviously brilliant, and Scott would have enjoyed arguing with them very much. Instead, Click and Clack’s place was probably filled by Max Tegmark.

  49. James Cross Says:

    #29 Scott

    Yeah, I was fishing for some sympathy.

    Thanks.

    What is interesting is how software evolves in a large enterprise. In a strange way, it is not unlike biological evolution.

    Systems get created. They evolve, frequently with major defects in design. Odds and ends get grafted onto the systems. The code (the DNA of the system) degenerates and sometimes reorganizes. Eventually the system goes extinct, to be replaced by other systems. The corporate IT ecosystem steadily changes, generally serving the business, or the corporation goes under.

    You would probably be shocked if you looked under the covers of the software that runs our largest corporations.

  50. JG Says:

    anon #48: Exactly. At Eckerd a CS prof had his students build him a kit car to teach subsystem integration and design. He got a car; they got invaluable experience.
    Fred #47: Grr. Humaniform robots are a time and energy sink. Check out the CMU CHIMP to see you don’t need dynamic stability, but fine, folks love it, and like I said, ATLAS has the legs and body to walk just fine, and does until his crappy software fails.
    Scott #46. Hmm, how do I state it clearly? It’s the wrong tool. Ok, this adiabatic process that tosses out the transverse tunneling field is no good at finding local minima, so forget about finding local minima. The standout feature of that thing is the tuneable couplings between the qubits; find a way to exploit that. Instead of trying to make it converge on a solution, why not find a way to let it converge on a problem that it is actually capable of solving? Obviously there are macro-state quantum events here, some degree of entanglement and superposition and correspondingly increased information density, so build tools to maximize those attributes and forget about hot glass.

  51. Vitruvius Says:

    Every time the D-Wave matter comes up, I can’t help but think of Scott in the role of Banacek in episode two of season two of the eponymous series: If Max Is So Smart Why Don’t He Tell Us Where He Is?

  52. JG Says:

    Scott, OT: I wanted to ask if you are going to give people $2.56 if they find errors in your book, like Knuth used to do? Only thing is, my friend had one, and he said Mrs. Knuth called and asked him to cash it because it was messing up her accounting.

  53. Darrell Burgan Says:

    JG #20:

    Truth be told, the majority of programmers these days know very little of computer science theory. They are taught in very pragmatic terms how to get a computer to do stuff, and tend to focus more on the problem domain and less on the building blocks that make it all possible. And there’s nothing wrong with that, nor is it “shoveling coal”. They focus on applying computers to solve human needs, and really don’t care too much how they got to be the way they are.

    And there is no lack of rigor in that world. For example, modern Java EE architectures have a dizzying amount of complexity, particularly when you consider massive enterprise-scale systems. Understanding such systems requires the ability to visualize truly gigantic graphs of objects with exceptionally dynamic behavior. It is a deep subject all its own.

    The bottom line is that intellectual bandwidth comes in many flavors, and neither the theoretician nor the engineer has a monopoly on it. Myself, I find the overlap between the two to be the most interesting.

  54. Abel Says:

    Part of the reason situations like the one Jon mentions occur is a bit anthropic – somewhere there is a guy that knows a large amount of robotics theory and also spent some decent amount of time building robots, but it seems very believable that this someone is not in the position of being a well-known robotics theory professor – more likely, that position was taken by someone who instead spent their time getting to know a really large amount of robotics theory, and making sure they are able to teach it effectively.

    It might be worthwhile for governments, academic institutions, private companies, etc. to change their incentive systems to reward breadth more, but it’s not clear that this would make things better by any of the metrics they care about, and it might very well mostly generate “fake” caring of the kind Scott mentions, rather than promoting/rewarding genuine curiosity.

    #43 Vitruvius: thanks for spreading your perspective – I really believe that “craziness” (in corporations and elsewhere) boils down largely to not enough people standing up to bullshit in the way you mention.

  55. Scott Says:

    JG #52:

      I wanted to ask if you are going to give people $2.56 if they find errors in your book like Knuth used to do?

    Sorry, no. I do gratefully collect errata on my book—but on the other hand, I’ve already offered enough (much larger) financial rewards for solving various open problems and for disproving the possibility of quantum computing, to feel like I don’t need to offer additional ones for errors in my book.

  56. gasarch Says:

    (I always feel odd being the 56th commenter: are people still reading these?)

    1) The notion of a business plan based on RH being true is just hilarious!

    2) Physics, CS, and Math have such a synergy that it would be silly for any of them to look down on the others.

    3) I have seen concepts in very applied math used in very pure theory, as well as the more standard thing people say about how pure math is eventually useful.

    4) Someone (it might have been Sipser) once said that a complexity theorist should do algorithms as a sanity check on lower bounds.

  57. vzn Says:

    #18

    If a autodidact were to prepare a paper with relatively revolutionary concepts that had been properly cited and prepared it would never see the light of day in any respectable journal without parasitically attaching to a published individual. I have nothing to back this up except personal experience though.

    uh personal experience in submitting “revolutionary concepts” to journals, and not seeing the light of day?
    #27,#56
    SA has repeatedly written on the idea/theme that not all conjectures are equal. ie P=NP is not the same as P!=NP & reiterates this wrt the Riemann hypothesis. let me buy into this rather tenuous and sketchy concept briefly. yes Riemann hypothesis is over ~1½ CENTURY old and no one has disproved it, it implies other results that can be proven unconditionally afaik, its implications have been widely explored, and significant math is now built on top of it as an assumption. now, how about QM computing ideas? these are less than decades, some less than *years* old. there is very little writing/research on adiabatic computing. there are *no* recognized/published conjectures written about what Dwave is exploring.

    Dwave might say, thats not a bug, its a feature! rose has asserted they chose to go into adiabatic computing because, given all the crushing/fiendish engineering complexity of applied qm computing (which theorists gloss over almost as a mere abstraction or technicality), it is the “easiest” to implement. in effect they are making a VERY CALCULATED GAMBLE… *exactly* as SA #27 writes about. it is quite strikingly similar in spirit and with many parallels to the grandfather of computing himself, BABBAGE! yes, they are going out on a limb. even they admit that! they are gambling in multiple ways. they are even gambling that the theory will catch up to the implementation!

    did anyone read the brand new paper on error correction in DWave? that is exactly an example of theory attempting to catch up to the implementation, because most error correction theory is based on the circuit model. so SA asserting that there are various conjectures that Dwave is ignoring, or embracing, that is true! but your idea of weighting different conjectures as scientifically valid or invalid is itself a quite bogus idea when pushed to extremes. scientists can have very educated opinions, but they can also give each other room to breathe and adopt a “live and let live” attitude! think about it! scientific rivalries are as old as science, and the rose-aaronson rivalry might even rank somewhere up there with newton-leibnitz in the long run…. or how about einstein-bohr? wink….

    SA, maybe deign to consider in brief moments of wild reverie (not unlike those in various blog entries re AI etc!) that maybe Dwave is a premiere/ leading example of the interdependence & synergy between experimentalists and theorists you and all serious scientists espouse…! it just doesnt take exactly the form you might wish, eg a UNIVERSITY research lab etc, is based/ fueled on (yes) some marketing hype and capitalistic motives, etc!….

  58. Joe Fitzsimons Says:

    gasarch: Yes, some people make it down that far.

    As an aside, regarding the robot rants, there are at least some theorists who take pleasure in building hobby robots: I have one sitting in the corner of my living room. But then, maybe having a physics background exempts me…

  59. Tyler Says:

    Instead of being insulted about being called Professor Aaronson’s “high-level programming language”, I’ll take it as an honor that we’re his language of choice :)

  60. JG Says:

    vzn #57: not me personally; people I personally, subjectively thought had such ideas.

    Fitz #58: That’s my conjecture. Rabi said physics is an experimental science; I say computer science is an experimental science, especially now with such ubiquity of hardware, knowledge bases, etc.

    Again with the D-Wave, but here’s a thing: as an indicator of said ‘disdain’, look at the immediate polarization on the issue when the marketing claims were made. The people who should have been figuring out the capabilities of this device, and what problems it *could* solve for the benefit of all mankind, instead spent their time defending their established positions and debunking. Granted, D-Wave needed to have a net thrown over it; granted, what is arguably the first commercial quantum computer does not perform the one specialized algorithm it’s designed for any better than its laptop simulation, and you can’t get one at Buy N Large. But to not take the software version of that machine and devise new problems for it in order to advance the science seems like a shortfall born of disdain that can only retard advancement.

    When I was at WebMD, the largest of the .bomb startups, the PI was a guy named Lee Boynton. He had written a lot of Java. I mean, he had written a *lot* of Java; if you look in the code base for the Java tools and runtime, his name is in there all over. When Micro$oft was gutting the company to get .NET a showcase, he looked at the new C# spec and was talking about all the improvements they made over Java and how cool it was. This was a guy whose full-time work for the last several years was literally being thrown out, and here he is complimenting the competition. I thought: man, *that’s* a scientist, *that’s* an engineer, *that’s* how we all should be.

  61. Dror Says:

    Personally, I think that many times these are just two different types of personalities: the theoretician and the experimentalist.
    Being a theoretician, I am interested in proving theorems about mathematical models, some of which happen to be related to actual machines. I am very happy to “lift” a practical problem to an abstract model and then try to solve it. However, it is the abstract solution that I enjoy, rather than seeing it implemented afterwards.
    I don’t think that I, or anyone of my sort, should be ashamed of not being interested in the practical phase. These are just two different types of professions, often requiring different types of personality. Each is interested in his own doing. When we are combined together, we sometimes manage to make remarkable contributions to our field, which is very nice.

  62. Sandro Says:

    Speaking as a “practitioner”, I think practitioners have more disdain for academics and their work than vice versa. In my experience, academics develop a “disdain” only for practitioners that dismiss or ignore academic work, even work relevant to their practice, particularly after it’s brought to their attention. Perfectly understandable IMO.

    Some areas of CS exhibit this effect more markedly than others. For instance, simple stats like the ratio of projects employing safe, statically typed languages vs. dynamically typed or unsafe languages demonstrate it quite strongly.

    It’s less pronounced in other areas of CS, like data structure and algorithm analysis. The bidirectional disdain value is probably directly proportional to the number of flamewars a subject triggers. So type systems and syntax are probably at the top of the heap, algorithms are probably near the bottom.

  63. Richard Says:

    @JG #60

    Again with the DWave but here’s a thing: as an indicator of said ‘disdain’ look at the immediate polarization on the issue when marketing claims were made. The people that should have been figuring out the capabilities of this device and what problems it *could* solve for the benefit of all mankind, instead spent their time defending their established positions and debunking.

    The problem here is that D-Wave didn’t present their work that way. They presented their work (for the benefit of non-experts with money) as “the first quantum computer”, without the disclaimers that

    1. This isn’t the type of quantum computer that can break RSA encryption.

    2. We don’t really think that this version will actually provide any practical performance benefits over classical machines.

    3. It may be that this type of machine will never achieve any practical performance benefits over classical machines, however it may be interesting to see what it can do.

    When you make commercial claims like that, it is almost calculated to create a negative reaction amongst those who are struggling with the real problems of quantum computing. It is worth noting that the commercial quantum cryptography offerings have not generated the same kind of reaction amongst theorists – even though there are some serious theoretical objections to their products also.

  64. JG Says:

    Lots of theories here, but let’s do an experiment.
    It requires your honesty, so perhaps it’s flawed.
    You click over to CNN or MSNBC, or, if you want real news, Vice or BBC, as is your custom come these foggy mornings. As you sip your frothy chai you notice a headline:
    Man Creates Thinking Computer Program in His Basement
    BS, right? That’s what I’d think; human nature, because we, as ‘experts’, know it’s a hard problem.
    But you read on, and maybe this guy has some minor cred, and maybe he stumbled onto something.
    Do you read on, do you find out more? Or do you move on to ‘Gaza’ or something and forget about it?
    Probably you move on, or try to read whatever additional material there is out of curiosity, but I doubt anyone takes him seriously until there is a demonstrable thing that blows everyone’s mind. His elevator pitch.
    So suppose the guy did all that. Would he have the respect of academics if he was outside their circle? Would he get offers to publish in the robotics/AI journals? Obviously he has the right to be challenged and reviewed, but would academia grant him a platform for peer review? I’m not implying any opinion of my own here; I just ask you to contemplate how you would react as a theoretician, if you are one. I realize the media taints everything, and for you to hear of it at all it has already undergone the media degeneration, so take that into account, but ask yourself whether the guy could garner notice in the journals instead.
    Do academicians even consider robotics a theoretical science anymore?
    Since the promise of AI seems to have fizzled with the TI Explorer (see, old am I), and AI broke down into these sub-disciplines of robotics/machine vision/machine cognition etc., I think academia has shifted its perspective from robotics as science to robotics as engineering, and so the theoreticians now leave all that to the engineers.
    Here goes: I think a guy or girl or group outside of academia that built a robot that by all indications was ‘conscious’ and an ‘AI’ would end up like D-Wave, pretty much all around, in feedback from academia, the public (stunned and amazed) and corporate (obtaining a market differentiator).
    Because, like D-Wave, where there was a very specific test of whether that type of computer functioned in the way they advertised (a test it failed), say the ‘conscious AI’ robot failed the Turing test but was conscious the way a dog is. Obviously Rover would fail the Turing test, but a dog as an entity is ‘conscious’, albeit, breed dependently, not intelligent. Suppose this entity evolved limited communication with humans but, like a dog, just did things that we consider to need doing and generally benevolent, and so never passes the Turing test? Theoreticians would call it a bug, experimenters a feature. Maybe that’s the fundamental difference.

  65. vzn Says:

    JG #60 it is easy to “publish” at least *somewhere* these days [basically/nearly a never-before-seen "golden age" for that wrt eg cyberspace etc] and what is *hard* is to come up with “revolutionary ideas”…
    fyi some case study of cs theory vs applied in this tcs.se question core algorithms deployed. another case study wrt recent math/cs see also the erdos discrepancy problem advance. there was even some not-well-publicized cs applied action with the Zhang twin primes breakthru.

  66. Scott Says:

    JG #64: Your “thought experiments” have (in essence) already been done, and the results are in—so there’s no need whatsoever to speculate about these matters.

    Let’s take quantum computing as an example. Probably like many other scientifically-minded people, I did read about it in a newspaper (as a teenager, around 1996 or so), and I did think it was garbage. It just didn’t sound right that you could solve an NP-complete problem by “trying every possible solution in parallel.” So what did I do next? I went on the web, read more about it, and quickly discovered, firstly, that what the newspaper described wasn’t really how it worked at all, and the way Shor’s factoring algorithm actually worked (involving the Fourier transform, and interference between positive and negative amplitudes) was incredibly richer and more interesting. Secondly, I learned that the entire thing was “just” an application of textbook 1920s quantum mechanics—arguably the best-confirmed theory in the history of science—and the reason it sounded like bunk was simply that no one had ever properly explained textbook quantum mechanics to me. I ended up interested enough to have spent most of my career on this.
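    The interference at the heart of Shor’s algorithm can be sketched classically: after “measuring” the function register, the remaining state is a periodic comb, and its Fourier transform concentrates probability at multiples of Q/r, where r is the period. The following minimal NumPy illustration is exponential-time and purely for intuition (the function name and parameters are illustrative, not from any library):

```python
import numpy as np

def period_find_peaks(a, N, n_qubits=9):
    """Classical (exponential-time) illustration of the Fourier core of
    Shor's period finding.  Conditioning on one measured value of the
    function register leaves a periodic comb; its Fourier transform
    concentrates probability at multiples of Q/r, where r is the
    multiplicative order of a mod N."""
    Q = 2 ** n_qubits
    f = np.array([pow(a, x, N) for x in range(Q)])   # f(x) = a^x mod N
    comb = (f == f[0]).astype(complex)               # condition on measured value f(0)
    state = comb / np.linalg.norm(comb)              # normalized superposition
    amps = np.fft.fft(state) / np.sqrt(Q)            # unitary Fourier transform
    return np.abs(amps) ** 2                         # measurement probabilities

# Example: a=7, N=15 has order r=4, so with Q=512 the probability
# piles up at multiples of Q/r = 128.
probs = period_find_peaks(7, 15)
print(np.flatnonzero(probs > 0.1).tolist())  # [0, 128, 256, 384]
```

The positive and negative amplitudes cancel everywhere except at those multiples; the quantum algorithm achieves the same interference pattern with polynomially many gates instead of an exponential-size vector.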

    And lest you think I was open to new ideas at age 15 but haven’t been since: within just the past year and a half, my world has been seriously rocked, and my research agenda influenced, by Harlow and Hayden’s amazing connection between computational complexity and the black-hole firewall problem, and by subsequent investigations by Lenny Susskind and others. I could start listing other examples of things that sounded wrong to me at first but ended up changing my worldview, but then this comment would go on for pages. :-)

    Now let’s contrast the above cases with the case of D-Wave. In 2007, readers of this blog started bombarding me with questions about a company in Vancouver that, until then, I knew about only vaguely. Just like with the previous examples, as soon as I started reading about it, my mind was filled with questions and things that didn’t make sense to me: “what makes them so confident they can get a speedup for NP-complete problems, if it’s not the slightest bit clear that even the zero-temperature adiabatic algorithm provides such a speedup, never mind what they’re doing? how do they hope to see a scalable quantum advantage without any error-correction? before even talking about commercial applications, why don’t they demonstrate simpler prerequisite things, like the ability to entangle 2 qubits? they claim to be ‘solving Sudoku puzzles using a quantum computer,’ but in what sense is the process that solves them quantum-mechanical at all, and how do they know that?” The crucial difference from the quantum computing and Harlow-Hayden examples is that, when I delved further into the subject, my questions weren’t thereby cleared up. People called me an “ankle-biter”; they said I must be consumed with jealousy that a startup company had succeeded where all the world’s academic QC experimentalists had not—but the one thing they didn’t do was answer my questions straightforwardly (“oh, you see, the point you need to understand is this”). Seven years, tens of millions of dollars, and hundreds of magazine articles later, they still haven’t answered these questions.

    Incidentally, obviously everyone would be thrilled were they convinced D-Wave had done the quantum-computing equivalent of “not passing the Turing Test, but at least building a dog.” (Meaning: building a special-purpose QC that doesn’t yet do anything useful, but at least is intractable to simulate classically?) But the question at issue is whether they’re closer to building a dog or to building Eugene Goostman.

    Finally, regarding outsiders with correct, revolutionary ideas being able to get prominent mathematicians’ and scientists’ attention: that “thought experiment,” too, has repeatedly been tried in the real world, and the results are in. The answer is that, while there is a barrier of skepticism, indifference, and sometimes even hostility to be overcome, again and again people with no credentials other than being right have been able to win, to a greater extent in science than in any other human endeavor. I could dwell on historical examples like Ramanujan, but for a more recent example, how about Yitang Zhang‘s breakthrough discovery last year of infinitely many pairs of primes with bounded gaps? Before that, he was a total unknown (even working at a Subway to make ends meet after immigrating from China to the US); now he’s a famous mathematician, and all the experts agree that his proof is correct.

    If you’d like a smaller but more personal example: when I was 15, I submitted my first paper (on hypertext organization) to SIGIR, the main conference for information retrieval. To say I was an “unknown” at the time is an understatement. Not only had I not even officially started undergrad, but the same week the SIGIR acceptance notifications were sent out, I learned that I’d been rejected from almost every college I’d applied to—the reasons being that I was too young, I didn’t have the right extracurriculars and social activities, and I’d clashed with teachers in high school. But as for that SIGIR submission? No one cared who I was. They just looked at the paper, and in this case, they ended up accepting it. It made a big impression on me.

  67. Douglas Knight Says:

    Scott, your comment was 800 words. That is going on for pages. Plus all the other comments you wrote on this post. Did anything good come out of this post? I think listing things that sounded wrong at first but changed your worldview would be a much better use of your time.

  68. Peter Shor Says:

    JG@41:

    Let me try to answer a question with a question:

    1) Why has academia not produced a usable version of a robot operating system upon which robots are built?

    Why are Boeing 787s being built by industry and not academia?

    It’s presumably a hard job, and there’s not enough funding in academia to do it.

  69. fred Says:

    Scott #66
    After reading your post I looked up Zhang’s achievement.
    I found an interesting comment (I often read here that progressing on many open problems would probably require new tools):

    http://www.simonsfoundation.org/quanta/20130519-unheralded-mathematician-bridges-the-prime-gap/

    “As details of his work have emerged, it has become clear that Zhang achieved his result not via a radically new approach to the problem, but by applying existing methods with great perseverance.”

  70. Rahul Says:

    I get the feeling that there are two sub-groups in a typical comp-sci department today: some who do more applied work, like Graphics, Human Computer Interfaces, AI, Compilers, VLSI etc., and others doing the more math-like, fundamental work a la Scott, e.g. complexity theory.

    In my estimation the “Theory” guys are a much smaller segment. Not by impact but by sheer numbers. I think it’s this border between the “thinkers” and “doers” that is the cause of some confusion.

    What are good arguments against the “theory” guys being actually relocated to a math department? A lot of the complexity papers etc. actually read to me (an outsider) like math department papers.

    And there seems very little relevance of a practical, physical, device to a lot of the stuff the theory-guys do. And a huge overlap of skills and techniques with what a mathematician might do.

    Is it only legacy reasons that complexity theory remains in the Math and EE departments?

  71. Rahul Says:

    #70

    ……CS and EE Departments I meant. Not Math and EE. Typo.

  72. Scott Says:

    Douglas #67: I don’t know whether “anything good” came out of this post or my subsequent comments. But I can tell you what happened: I was taking Lily to the playground when I read JG’s trollish comment #64 on my phone (“[this thought experiment] requires your honesty, so perhaps it’s flawed”). The comment really got under my skin, and as I played with Lily, I kept thinking of more and more things that I wanted to say in response. So then I just had to unleash those things as soon as I got home to my computer.

  73. Scott Says:

    Rahul #70: There are more than 2 subgroups. The traditional subdivisions of CS are “systems, AI, and theory,” and then various other things like graphics, human-computer interaction, programming languages, databases, and computer security (though programming languages, databases and security are often included in systems, and AI includes many things that not everyone would consider to be AI, like statistical inference and vision and local search algorithms and engineering aspects of robotics). Note, in particular, that there are many areas of CS other than theory—especially in AI—that are also pretty far removed from immediate application. (At MIT, ironically, it’s the theory group that’s led to more startup revenue than all the other parts of CS combined, mostly because of RSA and Akamai.)

    So, I don’t think it’s really true to say that theory is an “outlier” within CS departments—it would be better to say that CS is an entire field of outliers. :-) (The HCI people can also feel like outliers within CS, as can the robotics engineers and the applied security people, i.e. “ethical hackers.”)

    There are places where theoretical computer science is included (or partly included) in math departments: here at MIT, for example, Mike Sipser, Peter Shor, Michel Goemans, Jon Kelner, and Ankur Moitra are all in the applied math department (though they’re also in the CSAIL lab, where they can interact with the rest of CS). It works just fine here, since applied math has a strong tradition of support for CS. But in other universities, it turns out from experience that putting CS theory in math departments can be at least as awkward a fit as putting it into CS departments. For while there are many mathematicians who are huge fans of TCS, there are also some who consider it “not real math” and want to get rid of it. Indeed, as I understand it, the presence of the latter group helped catalyze the formation of CS departments in the first place: back in the 60s and 70s, CS was often part of math departments (or else shared between math and EE), but eventually the computer scientists were asked to stop sleeping on the sofa and move to their own place, so by and large they did.

  74. Douglas Knight Says:

    Yes, Scott, trolls and anger are a cost of blogging, so the right unit of analysis is the post. This post is a response to the very same troll. He got under your skin, you vented, and you provided him with a venue to get under your skin again and again.

  75. Rahul Says:

    Scott #73:

    Indeed there are more than two subgroups like you describe. I meant a broad division based on theory versus more applied topics.

    But then again, there’s the stuff you mention within AI that’s far from immediate application. But I consider that the exception rather than the rule. There’s stuff within other applied disciplines too that gets to be far away from immediate application, but in general that gets treated as a bug rather than a feature of the choices one makes. I could be wrong.

    Unlike theory, where elegance, understanding nature, and elucidating fundamental principles consciously take precedence over any application motives or usefulness concerns (IMO).

    I think, to an outsider or a lay-observer, the very mention of “computer” evokes a field that’s applied or utility-oriented. That’s perhaps partly responsible for the false expectation that a department called “computer science” should have something to do with physical computers and ideas to make them better or faster.

    And in that sense the pure theory, non-applied parts might be a bit of a surprise or disappointment to some people. OTOH, that alone is no good reason to change the status quo.

  76. JG Says:

    Scott #66: “Your ‘thought experiments’ have (in essence) already been done, and the results are in—so there’s no need whatsoever to speculate about these matters.”
    Forgive me for being pedantic. I am not any kind of scientist, so perhaps I have no business questioning established paradigms. A math proof is incontrovertible; perhaps I should have stated that the hypothetical researcher had more of a system, or a theory, that was subject to more degrees of interpretation.
    “CS was often part of math departments (or else shared between math and EE), but eventually the computer scientists were asked to stop sleeping on the sofa and move to their own place, so by and large they did.”
    Interesting that CS departments were born of the disdain of the mathematicians.
    “when I read JG’s trollish comment #64 on my phone (‘[this thought experiment] requires your honesty, so perhaps it’s flawed’). The comment really got under my skin”
    What I meant was that it’s flawed in regard to its scientific rigor, since it’s based on subjective experience; not adding that latter sentence in the original was a flaw on my part, so sorry. I’m attacking your field, not you.
    That being said, I think your field, because of ego and self-interest, has not done the ‘scientifically manly’ thing and set aside its disdain to put forth theories of what that machine ‘might’ do. Us scientific-wannabe relative mental midgets out here are looking for giants such as yourself to show us some light in these areas.
    Peter Shor #68: I don’t think that is a fair comparison, sir. Aircraft require a massive industrial base and material support system, with attendant cost; robotics and code can pretty much be created with electrons and pure thought. Unlike building a jumbo jet, one person working alone can achieve Herculean heights with little energy and material expenditure. Interesting that you mention planes, because I think many of the current multirotor aircraft offerings did originate in academia, and that’s going to be the driver for robotics, as evidenced by one of your guys going to work on Google’s tail-sitter drone. If that is the funding driver for robotics, I worry that the pure theorists will be driven back to the math couches.

  77. Scott Says:

      I think, to an outsider or a lay-observer the very mention of “computer” evokes a field that’s applied or utility oriented … And in that sense the pure theory, non-applied parts might be a bit of a surprise or disappointment to some people.

    To an outsider or lay observer, the very mention of “math” might evoke an accountant rechecking rows of numbers, using one of those old-fashioned calculators that prints on a roll of paper. The word “philosophy” might evoke a bearded dude on a hillside with a robe and staff, sharing his bumper-sticker platitudes about life. The word “chemistry” might evoke a maniacally-laughing, poofy-haired mad scientist mixing boiling beakers in a dark dungeon with the occasional comic explosion. The word “archaeology” might evoke Indiana Jones, etc. Should all those other fields also change in order better to match the popular perceptions of them? (And yes, I know you didn’t suggest that; I just found it amusing. ;-) )

  78. Vadim Says:

    Wait, so the philosophy one is wrong? :)

  79. Darrell Burgan Says:

    JG #76: you said:

    … one person working alone can achieve Herculean heights with little energy …

    Well, it may not take much electricity but it takes an awful lot of hard work. Not sure what you mean by “energy” in this context …

  80. Vitruvius Says:

    Hang on, I thought chemistry evoked Kid Charlemagne !-) But seriously, Scott, I think your #66 was brilliantly gracious, especially under the circumstances. I would have paid the price of admission just for that. When you wrote that you “didn’t have the right extracurriculars and social activities, and [had] clashed with teachers in high school”, I thought of your Buhl Lecture, and it felt very good. Pardon my vicarious pleasure ;-)

  81. rrtucci Says:

    Scott, Google wants you to come back to them. They are willing to forget your fling with Microsoft:
    http://googleresearch.blogspot.com/2014/09/ucsb-partners-with-google-on-hardware.html

  82. Raoul Ohio Says:

    1. D-Wave in the news: Apparently Google is building a (something) to build on the stuff it learned from the D-Wave machine it picked up last year:

    http://www.eweek.com/servers/google-developing-quantum-computing-chip.html

    2. I think I might have posted a similar challenge in the past, but here is an update. Suppose we all get together in 100 years and by then we will know which of the following is true:

    The D-Wave machine:

    A. Totally works as advertised, perhaps after a few bugs are ironed out.
    B. Kind of works.
    C. Slightly works.
    D. Does not work at all, but the D-Wave people believed it would.
    E. The whole thing was a scam, no one with any brains thought it would work.

    The challenge is to give your guess of the percent likelihood of these outcomes. The Raoul Ohio guess is:

    1%, 2%, 10%, 85%, 2%.

    Anyone else want to put out your guesses?

    3. Jon Groff’s challenge is insane, and I doubt if he could pass it.

    However, he does have somewhat of a valid point: Most TCS people work in a kind of “pure math” ivory tower, and have no idea of how Software Engineering issues impact their theories. They just think someday the SE stuff will be solved and their algorithm will work as advertised.

    This is sort of like D-Wave on getting a quantum computer to work, or the “fusion reactor” that will make electricity free. This last has had a time frame of “in a couple years now” for the last 50 years.

  83. Peter Shor Says:

    JG #76:

    If you think working industrial software can be written by one person working alone in academia, you either have an extremely high opinion of academics or an extremely low opinion of programmers in industry. Working industrial software should be done right, and a large company would have no hesitation at all in assigning a team of 20 programmers to this kind of project and letting them deliver the software in a year.

    If some company thinks that academics can do better at this kind of stuff than they can, they should just give the academics money and let them do it.

  84. Rahul Says:

    Scott #77:

    Well, any sort of compartmentalization of researchers into Departments is going to end up being somewhat arbitrary.

    If that means no particular compartmentalization scheme may ever be legitimately critiqued, I guess I can live with that.

    OTOH, if you think there are certain fair grounds to question how these departmental borders get drawn I’m all ears.

  85. Joe Fitzsimons Says:

    JG: I am failing to follow the logic behind your D-Wave comments. There has been plenty of effort (too much, in my view) put into understanding the D-Wave devices. Shin, Smith, Smolin and Vazirani have a model which describes the experimental outcomes well and which (due to its classical nature) is efficiently simulable. Why put time into trying to come up with applications for a device which is well described by a classical model? Maybe there is some quantumness hiding somewhere in there, but I would rather bet on a less noisy architecture. In my view, this is just making a sensible allocation of my (finite) time. I would much rather concentrate on technologies which seem more plausible to me.

  86. Scott Says:

    Rahul #84:

      If that means no particular compartmentalization scheme may ever be legitimately critiqued, I guess I can live with that.

    No, that’s silly. Of course you can critique particular compartmentalization schemes; I do it myself all the time. All I claim is that putting CS theorists into CS departments is reasonable, even if CS theorists could also be classed as mathematicians. (Note that physics departments, econ departments, EE departments, and several others also often contain people who could be classed as mathematicians.)

  87. fred Says:

    Labeling people is pointless unless you’re looking at it from an administrative point of view (budget allocations, etc.).

    In the end it’s all about the individual and his/her personal drives and interests and, is he/she a talker or a doer?

  88. fred Says:

    I’ve always been fascinated with the scientific aspects of WW2, like the Manhattan project or Bletchley Park (the Enigma code breaking effort).
    Achieving practical results was critical, obviously, so amazing theoretical advances were made by brilliant minds who also had to be very “hands-on” (von Neumann, Turing, the young Feynman, etc.).

  89. John Sidles Says:

    Juris Hartmanis’ essay “Observations about the development of theoretical computer science” (Annals of the History of Computing, 1981) presents a well-reasoned survey of “theory” versus “practice” in computer science, and is commended to all Shtetl Optimized readers.

    For extensive quotations from Hartmanis’ essay — including practical implications for 21st century simulation science — see comment #137 (to appear) of Scott’s question from August 13 “Is the P vs. NP problem ill-posed? (Answer: no.)”.

    Alternative Framing  “Could the P vs. NP problem be better-posed? (Arguably: yes.)”

    Conclusion  Hartmanis’ essay points toward open questions, alternatives, and potentialities in computer science that are good news for young researchers.

  90. JG Says:

    Sidles #89: 1981? Honestly, does that still apply, considering the subject matter? Fitz #85: Today is a new day and we have a new machine; yesterday we had nothing else available but theirs. That’s the grubby pragmatist in me.
    Peter Shor #83: The reason I think it should come from academia is similar to what Torvalds did with Linux as a grad student. Is what he did any less remarkable than a working ROS? One man, one mission; from a shiver of dust a planet-sweeping storm ramps up. There is an ROS: it came from corporate, went to academia, went back to corporate, and they dropped the ball. So many interesting robots come from academia, so why not the other piece? There was a DARPA grant to academia to build the Internet; why not a decent ROS?
    Ohio #82:
    It’s done. Documented here:
    http://hackaday.io/project/1784-ROSCOE—A-Robot-to-Deliver-Us-from-Tyranny
    Since no one has a definition for consciousness, I’ll even say it’s conscious. Hah!
    rrtucci #81: Go for it, Scott, make that machine hum. Don’t get left behind on this one; we’re all counting on you.

  91. JG Says:

    Holy carp! It’s a new day indeed! We have error correction and a clear scaling path using Josephson-junction SQUIDs, from the look of this abstract. Gee, looks like it was based on experimental evidence from the DWave machines:

    A quantum computer can solve hard problems, such as prime factoring, database searching and quantum simulation, at the cost of needing to protect fragile quantum states from error. Quantum error correction provides this protection by distributing a logical state among many physical quantum bits (qubits) by means of quantum entanglement. Superconductivity is a useful phenomenon in this regard, because it allows the construction of large quantum circuits and is compatible with microfabrication. For superconducting qubits, the surface code approach to quantum computing is a natural choice for error correction, because it uses only nearest-neighbour coupling and rapidly cycled entangling gates. The gate fidelity requirements are modest: the per-step fidelity threshold is only about 99 per cent. Here we demonstrate a universal set of logic gates in a superconducting multi-qubit processor, achieving an average single-qubit gate fidelity of 99.92 per cent and a two-qubit gate fidelity of up to 99.4 per cent. This places Josephson quantum computing at the fault-tolerance threshold for surface code error correction. Our quantum processor is a first step towards the surface code, using five qubits arranged in a linear array with nearest-neighbour coupling. As a further demonstration, we construct a five-qubit Greenberger–Horne–Zeilinger state using the complete circuit and full set of gates. The results demonstrate that Josephson quantum computing is a high-fidelity technology, with a clear path to scaling up to large-scale, fault-tolerant quantum circuits.
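    To see why sitting at that roughly 99 per cent threshold matters, here is the standard back-of-envelope scaling for surface-code error suppression (the constants A and p_th below are assumed illustrative values, not numbers from the paper):

```python
# Rule-of-thumb surface-code scaling: once the physical error rate p is
# below the threshold p_th, the logical error rate per round falls
# roughly as A * (p / p_th) ** ((d + 1) / 2) for code distance d.
# A and p_th here are illustrative constants, not figures from the paper.
def logical_error_rate(p, p_th=0.01, d=5, A=0.1):
    """Rough logical error rate for a distance-d surface code."""
    return A * (p / p_th) ** ((d + 1) / 2)

# At a physical error rate of ~0.6% (cf. the 99.4% two-qubit fidelity),
# raising the code distance suppresses the logical error rate:
for d in (3, 5, 7):
    print(d, logical_error_rate(0.006, d=d))
```

The point of the sketch: below threshold, adding qubits (larger d) makes the logical qubit exponentially more reliable; above threshold, adding qubits makes things worse, which is why crossing the ~99 per cent fidelity line is the headline result.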

  92. Scott Says:

    JG #90: On the contrary, no one’s “counting on me” to make practical QC a reality—and that’s a very good thing, since that’s never been my specialty; it’s a wonderful goal that I’ve only contributed to in incidental ways. On the other hand, I’m fortunate to live in a civilization with lots of people who do have that as their specialty, and I’m happy to cheer them from the sidelines, or maybe the very edge of the playing field.

    In particular, I think Google’s Quantum AI Lab made an excellent choice to get involved with UCSB and John Martinis, and to end their exclusive relationship with D-Wave. I have no idea whether or not this new collaboration will lead to practical QCs, but if nothing else, I have high hopes that it will lead to some quality science.

  93. Scott Says:

    JG #91: While you neglected to include the context, I recognize what you pasted here as the abstract of Superconducting quantum circuits at the surface code threshold for fault tolerance, a Nature paper from the Martinis group at UCSB. And no, you’re completely wrong that that paper is “based on experimental evidence from the DWave machines.” It’s based on experimental evidence from the Martinis group.

  94. JG Says:

    Scott #93 From the other above link, as to your assertion that I am “completely wrong”, please note the use of the words ‘learnings’ and ‘from the D-Wave quantum annealing architecture’: “John and his group have made great strides in building superconducting quantum electronic components of very high fidelity. He recently was awarded the London Prize recognizing him for his pioneering advances in quantum control and quantum information processing. With an integrated hardware group the Quantum AI team will now be able to implement and test new designs for quantum optimization and inference processors based on recent theoretical insights as well as our learnings from the D-Wave quantum annealing architecture. We will continue to collaborate with D-Wave scientists and to experiment with the ‘Vesuvius’ machine at NASA Ames which will be upgraded to a 1000 qubit ‘Washington’ processor.”

  95. JG Says:

    One chap asked if any good would come of all this. I think it has. I think the responses here, as well as the timely announcement of the new collaboration, have shown us that problems exist, but the group that overcomes them will be successful. I think the key component is the integration of the hardware group. I believe that quantum computing, like physics itself, is now officially an experimental science.

  96. rrtucci Says:

    “And no, you’re completely wrong that that paper is ‘based on experimental evidence from the DWave machines.’ It’s based on experimental evidence from the Martinis group.”

    JG may be excused here. He was only parroting Neven’s BS in the Sept 2 announcement of “Neven’s” AI group, which is now really Martinis’s group.

  97. fred Says:

    The concept of cloning is ambiguous by definition, because once any physical thing (like a brain) is “cloned”, the clone and the original are automatically distinct, since they have to occupy different locations in space! This (and the fact that every particle in the physical world influences every other particle) is enough to ensure that their futures will be different (so they’re effectively unique).
    The only way around this would be a closed universe filled with nothing but clones – like a crystal that would somehow wrap around in every direction, so that the observable universe from any point of view is independent of location, and their state at every point in the future would be the same, always.

    It’s actually quite common to deal with this sort of ambiguity about “identity” in everyday object-oriented programs.
    Figuring out what we mean by “two objects are the same” can be quite tricky. E.g., I’m different from the “me” from 10 minutes ago, but at some level I’m the same object as well, because some of the characteristics attached to me are unique and pretty constant, like my social security number, my DNA, my long-term memory… but lots of attributes are also ever-changing, like my location in space, or the atoms in my body being replaced through continuous aging, etc.
    So two notions have to be defined: a set of attributes that defines uniqueness of identity (“Is this Scott or Bob?” / “Is this the same bank account as that one?”), but also a set of attributes we need to look at when answering “How different is Scott from 10 minutes ago?” / “Has my bank account changed since last month?”.
    So any OO program requires distinct functions to test for those two separate notions of “equality”. If you mess that up, dealing with things like hashing and sets of objects can become quite confusing.

    Somehow it reminds me of an idea Feynman had, that maybe all the electrons in the universe are really the “same” particle.
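    The two notions of equality described above can be sketched in a few lines of Python (a toy `Person` class of my own invention, not from any real codebase):

    ```python
    class Person:
        """Toy model of the identity-vs-state distinction."""

        def __init__(self, ssn, name, location):
            self.ssn = ssn            # stable attribute: defines *who* this is
            self.name = name
            self.location = location  # mutable attribute: part of the current *state*

        # "Is this Scott or Bob?" -- equality of identity, via stable attributes
        def __eq__(self, other):
            return isinstance(other, Person) and self.ssn == other.ssn

        # Hashing must agree with __eq__, or sets and dicts misbehave
        def __hash__(self):
            return hash(self.ssn)

        # "How different is Scott from 10 minutes ago?" -- equality of state
        def same_state(self, other):
            return self == other and self.location == other.location


    scott_now = Person("123-45-6789", "Scott", "office")
    scott_later = Person("123-45-6789", "Scott", "home")

    assert scott_now == scott_later               # same identity...
    assert not scott_now.same_state(scott_later)  # ...but the state has changed
    assert len({scott_now, scott_later}) == 1     # a set collapses them into one identity
    ```

    Here `__eq__`/`__hash__` capture identity, while `same_state` captures the mutable-attribute comparison; conflating the two is exactly what makes hashed collections confusing.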

  98. Scott Says:

    rrtucci #96: I know John Martinis, and he’s often expressed views about D-Wave only barely distinguishable from mine (if more diplomatic :-) ). Now, it’s possible that his new collaboration with Google’s AI Lab will cause him to deal with D-Wave more than he has in the past. But whether or not it does, it’s certainly not true that the Nature paper the UCSB group already published owes anything to D-Wave.

  99. JG Says:

    rrtuchi #96 A little of that ‘disdain’ coming through? Maybe a little professional jealousy at others gaining the prize?

  100. JG Says:

    Scott #98 Dang I’m tempted to spend the $32 just to prove you wrong but instead all I have to say is Bang! I hit 100!

  101. Scott Says:

    JG #100: You can get their paper for free from the arXiv. I just looked, and I see no mention of D-Wave anywhere in it. Will you now admit you were wrong about this?

  102. rrtucci Says:

    Ouch, JG, you really know how to hurt a guy. Tucci not Tuchi, as in wop name

  103. Joe Fitzsimons Says:

    JG: It sounds from your comments like you considered D-Wave to be the only possible route, until Google announced their agreement with Martinis yesterday. Most of us have been aware of his work for quite some time, as well as of significant progress in ion traps, for example, where operations have also been demonstrated with fidelities sufficient for fault tolerance. These provide a far more believable path (in my opinion) to a significant computational advantage than the D-Wave approach, which is exactly why many of us choose not to work on understanding the D-Wave devices. Please understand that this is not news to us (the Nature paper you cite is from April, for example, and builds on work going back a number of years). Even the commercial backing for high-fidelity superconducting qubits is not new, as IBM has been following a similar approach.

  104. Ron Fagin Says:

    When I became an IBM Fellow, I made it my mission to try to convince theoreticians and practitioners to work together. I have been giving my “stump speech” on “Applying theory to practice (and practice to theory)” at a number of places, including at IBM Research Labs around the world. My message to theoreticians is, “Working with practitioners is not a price you have to pay, but instead a positive good that will lead to better and more influential theory.” My message to practitioners is, “You will make better products if you interact with theoreticians.”

    I will be giving that stump speech at MIT on Sept. 23, and at Harvard on Oct. 9. Here is the URL of my talk abstract:

    http://toc.csail.mit.edu/node/615

  105. Rahul Says:

    @Ron

    You obviously know more about this than I do, but IMO the best way to get them to work together is to force a theoretician (at some career stage) to actually work on a practical project, and vice versa. Not just collaborate, but actually physically work on it, for a reasonably long duration.

    My experience is from the chemical industry, and some of the best guys are those who spent a couple of years of their careers on the “wrong” side before switching.

  106. Darrell Burgan Says:

    Ron #104: spot on. That area where theory and practice overlap is where real world innovation comes from.

  107. Ron Fagin Says:

    @Rahul

    Instead of forcing theoreticians to work with practitioners, we’ve found that gentle persuasion works well. Many theoreticians are eager to work with practitioners, but they just don’t know where the opportunities are. Plus, persuasion is certainly much more palatable than forcing!

  108. vzn Says:

    theory & application are like the yin & yang of science. it advances through their adroit fusion. on the other hand scientists are human, and humans can have blind spots.

    what a great coincidence that PShor is posting on this thread, to me he is one of the inspiring historical leaders in balancing/ working in and advocating uniting theoretical & applied CS aspects & worthy of study/ emulation by anyone interested in that balance.

    also re theory vs applied, there are many jokes on this subject & old dichotomy/ juxtaposition. eg

    An engineer thinks that his equations are an approximation to reality. A physicist thinks reality is an approximation to his equations. A mathematician doesn’t care. —anonymous

  109. quax Says:

    Anybody willing to venture a guess at what kind of chip Martinis will design for Google? Based on his past work I naturally assumed Gate QC, but the Technology Review quotes him like this:

    “We would like to rethink the design and make the qubits in a different way,” says Martinis of his effort to improve on D-Wave’s hardware. “We think there’s an opportunity in the way we build our qubits to improve the machine.”

    This may be just horribly out of context, but it makes it sound as if this is going to be quantum annealing with error correction.

  110. J Says:

    Scott, I do not think you would ever have worked as a software engineer in industry. Imagine putting yourself in the shoes of an H1B or middle-aged American worker, slaving 12 hours a day, 6 days a week, with no overtime pay and no career advancement, for the equivalent of $22.22/hour. I just don’t see you doing that. I see you as someone who saw an opportunity where you could use your gifts, and moved there.

  111. rrtucci Says:

    quax, my guess is that Martinis is NOT going to help continue developing the D-wave machine. He is just going to continue developing his machine, as he would have before, except faster due to the increase in funds.

    I think that if Martinis had agreed to work on D-wave’s machine, Vern Brownell would have made a statement to the press trumpeting this news. That’s sort of his job. Instead, D-wave has been very mum about the Google/Martinis news.

    Besides, I think that the D-wave engineers are quite competent and don’t need any help from Martinis in developing their machine. They know their machine much better than Martinis ever will.

  112. krishna Says:

    I am an average programmer with no math or CS theory background. I had a short discussion with a friend, who said that computer systems research (networks, operating systems, databases) is not real research: the results are nothing but engineering prototypes, an engineering approach. His opinion is that one doesn’t have to be bright or talented to do CS systems research (common sense is enough), but that, on the contrary, for CS theory one has to be talented and bright, because it requires a lot of advanced math knowledge. He said this is one of the reasons why CS theory people don’t want to approach systems research, though they do not disdain it or look at it as a mediocre area. I have to accept his opinion as valid and true. Even if CS theory scientists ignore systems research, there is nothing to be offended about, because it is difficult to do theory research. Sometimes I find I can understand a few systems publications but not theory publications. That doesn’t mean research has to be hard and complicated. We see very few thriving CS theory departments at universities, whereas I see many people doing CS systems research at universities, and most of that research is repetitive. Personally I will not be offended if CS theory people ignore systems research or consider it a low activity. Great post, and hats off for being an exceptional researcher.

  113. quax Says:

    rrtucci #111, I don’t mean to suggest that he’d be working on the D-Wave chip; rather, I am wondering whether Google may have asked for an improved annealing chip, with error correction, of their own making.

    Clearly for Martinis, based on his prior talks, that would be just a stepping stone to true gate QC; I am just not exactly clear at this point on what Google will pursue. The press releases are just too vaguely worded.

  114. physpostdoc Says:

    I work in the area of superconducting qubits and have interacted with John Martinis and his group members on multiple occasions. John has been a strong advocate of digital quantum logic. Though the D-Wave machine is a technological tour de force, most of the academic superconducting community, including the UCSB team, desires more transparency in D-Wave’s results. The kind of platform D-Wave has developed is useful, but my guess is that the workhorse qubits will be quite different from what D-Wave uses. Also, in all probability the architecture may be revised to accommodate a completely different way of reading out quantum registers, unlike D-Wave’s, to make it more QND.

  115. Peter Shor Says:

    While on the whole, I think theorists and practitioners in computer science respect each other, there really is not enough communication between them.

    If theorists understood practitioners, they would put pseudocode in any paper containing an algorithm they think might be practical.

  116. quax Says:

    physpostdoc #114, thanks for this comment, sheds some light.

    Yet, what does the acronym QND stand for?

  117. physpostdoc Says:

    QND stands for “quantum non-demolition”, which essentially means that the measurement does not increase the uncertainty in the state of the qubit. It is mostly another term for a strong measurement of a system observable that does not disturb the observed quantity, so that all measurements done at subsequent times commute with each other. It is considered an essential feature of quantum error correction schemes.
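    As a toy illustration of that repeatability property, here is a sketch in Python with NumPy (the projectors and the `measure_z` helper are my own illustrative construction, not taken from any of the papers discussed): a projective measurement of sigma_z collapses the qubit, and repeating the same measurement then yields the same outcome with certainty.

    ```python
    import numpy as np

    P0 = np.array([[1, 0], [0, 0]], dtype=complex)  # projector onto |0>
    P1 = np.array([[0, 0], [0, 1]], dtype=complex)  # projector onto |1>

    def measure_z(state, rng):
        """Projectively measure sigma_z; return (outcome, collapsed state)."""
        p0 = np.vdot(state, P0 @ state).real  # Born-rule probability of outcome 0
        if rng.random() < p0:
            return 0, P0 @ state / np.sqrt(p0)
        return 1, P1 @ state / np.sqrt(1 - p0)

    rng = np.random.default_rng(0)
    psi = np.array([3, 4], dtype=complex) / 5.0  # arbitrary normalized superposition

    first, psi = measure_z(psi, rng)   # collapses the superposition
    second, psi = measure_z(psi, rng)  # re-measures the collapsed state
    assert first == second             # repeatable: the outcomes always agree
    ```

    The second measurement commutes with the first in exactly the sense described above; in a real error correction cycle the same idea is applied to stabilizer observables rather than to single-qubit sigma_z.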

  118. JG Says:

    Scott #101: disingenuous. I admit there is no mention of D-Wave in the paper; yes, I was wrong about that fact. You, and I, and everyone that matters knows that the design is based on D-Wave. It sure ain’t NMR, right? It’s annealing, Scott. Their architecture diagrams, such as they are, are stunningly similar to D-Wave’s. They use the ‘surface code’ entanglement of a qubit and its 4 nearest neighbors to achieve error correction, and they use a different substrate, so the ‘engine’ is different but the seats and such… I doubt they used their own dewar design, or cryo pump. Maybe the A/D converters are higher res, but the basic architecture is the same. My guess is the software (which you complimented) has not been rewritten. I realize there’s only so many ways to chill out and such, and only they know for sure. So if you need me to say “yes, I was wrong” because I can’t “prove I was right”, then OK, but you are being disingenuous by claiming D-Wave had absolutely nothing to do with the construction of this current setup.

  119. Scott Says:

    JG #118: No, I claim you’re wrong on the substance, not merely on some technicality. People were talking about superconducting quantum computing for years before D-Wave was around. Indeed, I’ve learned from Seth Lloyd that in the 1990s, he and others were thinking of founding a company to do almost exactly what D-Wave is now doing, but they decided against it once they realized that, with any of the technologies then on the horizon, the spectral gap was going to get much smaller than the temperature, and for that reason you had virtually no chance of seeing any speedup.

    Martinis has, of course, followed D-Wave with interest (I once toured D-Wave’s headquarters with him), but I see no evidence that the Martinis group’s design owes anything to D-Wave, any more than D-Wave owes its design to the Martinis group. If you think the Martinis group secretly based their design on D-Wave’s but failed to cite them in their paper, then you’re accusing them of extremely serious academic misconduct. Are you?

  120. JG Says:

    Scott #119: Negatory, good buddy. I accuseth not the good quantum thaumaturge of any malfeasance. As I stated, there are only so many ways to do this job. As I also stated previously, I don’t think D-Wave did anything remarkable with their architecture. Martinis implemented Shor on an NMR prototype machine, right? This proposed machine is not of such a construction; nay, nary an approximation to his devices of yore. There is a comment from Martinis in one of the articles saying he thought the reason D-Wave showed no speedup was partially due to impurities in their substrate, which is probably why he is choosing a new one. He probably didn’t cite them for the same reason Pyrex isn’t cited in bio research papers. Besides, D-Wave publishes so little I doubt anyone cites them much.
    Now I have one for you, sir. If D-Wave had absolutely nothing to do with the new machine, then why is the director of the lab running the project saying it did? Are you accusing Neven of extremely serious advertising misconduct? Are you? Huh? Well? C’mon…

  121. Joe Fitzsimons Says:

    JG: You seem to be crediting D-Wave with lots of innovations that aren’t due to them (superconducting qubits, which predate the founding of D-Wave; the adiabatic approach; etc.). Martinis’ approach has been based on gates anyway, not on adiabatic evolution or annealing. Regarding dewars and cryo infrastructure, unless I am mistaken, pretty much everybody, including D-Wave, is using third-party (e.g. Oxford Instruments) dilution refrigerators.

  122. Scott Says:

    JG #120:

      If DWave had absolutely nothing to do with the new machine, then why is the director of the lab running the project saying it did?

    Which “new machine” are you talking about? And which statement by Hartmut Neven? Please provide links.

    Once again, it sounds like Martinis might indeed work with D-Wave in the future, but that won’t retroactively cause him to have worked with them in the past.

  123. JG Says:

    Scott #122: Refer to comments #81 and #82 above. The links are as follows:
    http://googleresearch.blogspot.com/2014/09/ucsb-partners-with-google-on-hardware.html
    http://www.eweek.com/servers/google-developing-quantum-computing-chip.html
    Both refer to, and credit, some of the work on the D-Wave machine as being the ‘inspiration’ for the current work.
    The “new machine” is one based on the ‘surface code’ architecture (4 nearest-neighbor qubits) with the new substrate (Al/Sa, I think). The one whose architecture is roughly illustrated in the paper.

  124. Scott Says:

    JG #123: OK, that’s about future machines, not about what the Martinis group already published. This discussion has become tiresome and is now closed.
