Microsoft: From QDOS to QMA in less than 35 years

This past week I was in Redmond for the Microsoft Faculty Summit, which this year included a special session on quantum computing.  (Bill Gates was also there, I assume as our warmup act.)  I should explain that Microsoft Research now has not one but two quantum computing research groups: there’s Station Q in Santa Barbara, directed by Michael Freedman, which pursues topological quantum computing, but there’s also QuArC in Redmond, directed by Krysta Svore, which studies things like quantum circuit synthesis.

Anyway, I’ve got two videos for your viewing pleasure:

  • An interview about quantum computing with me, Krysta Svore, and Matthias Troyer, moderated by Chris Cashman, and filmed in a studio where they put makeup on your face.  Just covers the basics.
  • A session about quantum computing, with three speakers: me about “what quantum mechanics is good for” (quantum algorithms, money, crypto, and certified random numbers), then Charlie Marcus about physical implementations of quantum computing, and finally Matthias Troyer about his group’s experiments on the D-Wave machines.  (You can also download my slides here.)

This visit really drove home for me that MSR is the closest thing that exists today to the old Bell Labs: a corporate lab that does a huge amount of openly-published, high-quality fundamental research in math and CS, possibly more than all the big Silicon-Valley-based companies combined.  This research might or might not be good for Microsoft’s bottom line (Microsoft, of course, says that it is, and I’d like to believe them), but it’s definitely good for the world.  With the news of Microsoft’s reorganization in the background, I found myself hoping that MS will remain viable for a long time to come, if only because its decline would leave a pretty gaping hole in computer science research.

Unfortunately, last week I also bought a new laptop, and had the experience of PowerPoint 2013 first refusing to install (it mistakenly thought it was already installed), then crashing twice and losing my data, and just generally making everything (even saving a file) harder than it used to be for no apparent reason.  Yes, that’s correct: the preparations for my talk at the Microsoft Faculty Summit were repeatedly placed in jeopardy by the “new and improved” Microsoft Office.  So not just for its own sake, but for the sake of computer science as a whole, I implore Microsoft to build a better Office.  It shouldn’t be hard: it would suffice to re-release the 2003 or 2007 versions as “Office 2014”!  If Mr. Gates took a 2-minute break from curing malaria to call his former subordinates and tell them to do that, I’d really consider him a great humanitarian.

96 Responses to “Microsoft: From QDOS to QMA in less than 35 years”

  1. Slipper.Mystery Says:

    > Unfortunately, last week I also bought a new laptop

    But this makes no sense … Why would you buy a Windows machine in the first place? Some forlorn sense of obligation to keep your friends at MSR propped up?

    Tell us there weren’t at least ten Mac OS X machines for every Windows machine at Microsoft’s faculty summit, just as at every other academic meeting?

  2. Scott Says:

    Slipper.Mystery: I didn’t count Macs vs. PCs at the meeting, though of course I saw both. For me personally, the issue is simply that I can’t stand the Mac OS interface. This has consistently been the case for me for 20+ years, and I won’t even try to justify it. Maybe the failing is mine; maybe early exposure to Windows permanently crippled me and made me uncomfortable with anything else (just like it wouldn’t do any good to tell me how Esperanto is so much better-designed than English: I don’t speak Esperanto, and never will). For whatever it’s worth, I think the iPhone interface is great, so it’s not some sort of animus against Apple the company.

  3. foobar Says:

    I’ve noticed what @Slipper.Mystery says, though I absolutely don’t understand why academics prefer MacBooks (much more than the average person does).

    Assume that most people consider Apple products desirable because of their good design, especially their looks, and mostly because everybody else considers them desirable. Maybe the observation just signifies that academics can afford an Apple product (significantly more expensive than laptops/PCs from other brands), or that somebody else is sponsoring their general-purpose computer, so they prefer to buy an Apple product. So they’re just mirroring non-academics, except with more purchasing power.

    @Scott: You’ve probably heard this suggestion before, but you could give Linux a spin. There are plenty of options to choose from, but since you prefer the Windows interface over OS X, I’ll take a guess that you might prefer the KDE desktop environment. Linux distributions have become quite user-friendly over the years, to the point where they pretty much “just work”, and getting the right Windows drivers seems like a pain in comparison.

  4. Gian Mario Says:

    I’ve been in computational physics for a while and I can tell you why people tend to prefer macs in that environment.
    The reason is that many of us are quite comfortable with Linux/Unix and have been using it regularly on our desktops / clusters.
    The stability of the system is absolutely necessary when you run simulations or analyze data for days or weeks, so many of us can easily spend a few days per month or year just playing around with our Linux distribution and configuring/optimizing our PCs.

    Then the MacBook Pros came out, laptops became common (and sometimes were the only computer you had), and Macs seemed to offer the best of both worlds: Unix stability and development tools plus user-friendliness, which, as Scott mentioned, is mostly useful when traveling a lot, with varying departments’ network security systems, conference network protocols, airport wireless hotspots, printers, projectors, and God knows what…

    Now Linux is catching up, and I even know colleagues who put Ubuntu on their MacBook Pro because they liked it better than Mac OS X, and I myself installed Ubuntu on my parents’ computer and my wife’s parents’ computer and… wait for it… they like it better than Windows!
    Unbelievable but true; normally, if I so much as suggested installing Linux, they would look at me as a dangerous outcast trying to destroy their comfortable, simple, blessed lifestyles.

  5. Don Crutchfield Says:

    You should have bought a Surface Pro. The machine is garbage but if Gates had seen you using it he would have given you a cushy job at MSR for sure. By the way, why didn’t you boycott MSR given the extent to which the company helps Obama Inc. violate the constitution?

  6. cnol Says:

    Just watched the two talks, very interesting. Is there a way to get the slides from Charlie Marcus’s talk? I did a basic search but didn’t find them… It’s very intriguing to hear both your talks on quantum computation from such different perspectives: one information-theoretic and one experimental. It’s almost like you both are talking about two completely different fields.

  7. wolfgang Says:

    Scott,

    your post comes one day after Microsoft’s stock dropped more than 10% and Slate explained what went wrong at Microsoft using ‘The Wire’ as a pretty convincing analogy.

  8. Slipper.Mystery Says:

    Scott #2: This has consistently been the case for me for 20+ years, and I won’t even try to justify it.

    Note that 20+ years ago, the Mac OS interface was a different beast entirely (the “Mac Classic”, descended from the original Mac operating system of 1984).

    Since about 12 years ago, the Mac OS interface has instead been layered on top of a Unix operating system (specifically the Mach kernel, descended from NeXTStep).

    Since much of your beloved Windows interface was copied from one or another form of the Mac OS interface, you might be pleasantly surprised to take your first look at the Mac OS in 20+ years (the latest also incorporates many features from your iPhone interface).

    It has the added benefit that no one ever accused Steve Jobs of being a humanitarian.

  9. John Sidles Says:

    —————–
    Scott remarks “MSR is the closest thing that exists today to the old Bell Labs … I found myself hoping that MS will remain viable for a long time to come,”
    —————–

    Bell Labs’ multi-decade sustainment of research derived very largely from one source: the technology of Harold Black’s patented Wave Translation System (USPTO #2,102,671, 1937). Harold Black gives a lively account of the genesis of his invention in an IEEE Spectrum article Inventing the Negative Feedback Amplifier (1977).

    Black’s patent is sufficiently deep that graduate-level courses were taught from it and vast enterprises were founded upon it. The literature of quantum computing has yet to achieve this dual distinction! It is reasonable to wonder whether any such quantum achievements are in the offing, and if so, what form(s) these quantum innovations might take.

  10. asdf Says:

    Scott, what do you mean when you say a purported quantum RNG can be tested using the Bell inequalities? Do you have to be able to observe the internals of the generator and switch it in and out of the system, or what? Of course there can’t be any black-box randomness test unless you reject Church’s thesis.

    Re laptops: I got a new one recently too, and Fedora 19 installed from a USB memory stick without a hitch, everything works. I don’t see any reason to mess with Macs or Windows any more.

  11. Ian Finn Says:

    I hate to be “that guy,” but I really wanted to see the second video posted; however, the link doesn’t work for me. Anyone else having this issue? Is there another place to see the video?

  12. Mike Says:

    Ian@11,

    Doesn’t work for me either; unsure why. I’ll try from a different computer on Monday.

  13. Rahul Says:

    I hate to think of MSR as in the same league as Bell. Wasn’t Bell so much broader? Whereas MSR seems fairly narrow to me. CS & Math take up the big chunk of MSR. Correct me if I am wrong.

    Another difference: besides fundamental research, Bell Labs actually contributed a lot to AT&T’s technology. A lot of good applied research was churned out. Tons of revolutionary technology and innovations came out of Bell and were later commercialized successfully by AT&T.

    OTOH, MSR seems particularly awful at creating useful work that Microsoft can then turn into better products. As Scott’s Office story reveals, when was the last time Microsoft brought out something game-changing in technology?

    Yes, Bill Gates has indeed thrown a lot of money at MSR and hired good people and, no doubt, good comes out of that. But what MSR has been abysmal at (in the ~20 years it has existed) is contributing genuine improvements to Microsoft’s products. Office is about as good, or bad, as it was 10 years ago, and Windows too for most purposes.

    Really, comparing MSR to Bell makes me cringe, if only because of the disparity in scale.

  14. Rahul Says:

    Scott says:

    “For me personally, the issue is simply that I can’t stand the Mac OS interface.”

    What about Linux? Or one of its GUI variants?

  15. emanuel Says:

    I don’t get it. I thought you always used LaTeX for presentations? Why did you change your format to PowerPoint? Or are all your presentation slides done in PowerPoint?

  16. William Hird Says:

    Scott,
    I noticed in your talk that Bob of “Alice and Bob fame” was replaced with a guy named Bill. Who is Bill and why did poor Bob get canned? 🙂

  17. Scott Says:

    asdf #10:

      what do you mean when you say a purported quantum RNG can be tested using the Bell inequalities? Do you have to be able to observe the internals of the generator and switch it in and out of the system, or what?

    I meant pretty much what I said! To be clear, there are three caveats:

    1. The procedure (for concreteness, let’s say the one by Vazirani and Vidick, which built on earlier work by Colbeck and Pironio et al.) is really for exponential randomness expansion. I.e., it’s for turning a log(n)-bit random seed, which you’re assumed to already have, into an n-bit nearly-random string. The seed is needed to help Alice and Bob decide on random measurement settings for their Bell experiments.

    2. To run the randomness-expansion protocol, you need two quantum devices, together with a source of entangled particles that repeatedly sends one particle to each. So, the protocol won’t work to certify the current, commercially-available quantum RNGs (like id Quantique’s). On the other hand, everything the protocol does require is well within current technology, and I’m told that NIST (and maybe others) are working on demonstrating it in the near future.

    3. If Nature resorted to faster-than-light signalling between the two measurement devices, then it could cause the output string to be non-random. To put it another way, the conclusion that the output string is random doesn’t assume anything about the internals of the devices (not even that they obey quantum mechanics!). What it does assume is that (a) the devices were unable to send signals to each other, and (b) they nevertheless managed to violate the Bell inequality.

    Note that there’s no contradiction with Church’s Thesis here, intuitively because your randomness generation procedure isn’t quite a black box: instead, it’s two black boxes that are out of causal contact with each other! The surprise is that that’s already enough to get guaranteed random bits.

    If, despite the above caveats, this result still goes against your intuition—well, it also went against mine, in my first minute or two of reading about it! 🙂 And that’s all the more embarrassing, since I’d made closely-related observations in my review of Stephen Wolfram’s book. So all I can suggest is that, at some point, you read the papers, see for yourself how it works, and upgrade your intuition!
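
    To make the statistic concrete, here is a minimal numerical sketch (in Python; my own illustration, not anything taken from the Vazirani-Vidick paper) of the CHSH value that a verifier estimates from the settings and outcomes. The two “boxes” are simulated classically using the ideal quantum correlations (exactly the kind of internal access a real device-independent protocol never assumes), so the point is only to show what gets computed, not to certify anything:

      import numpy as np

      rng = np.random.default_rng(2013)   # stands in for the short random seed
      N = 200_000                         # number of Bell trials

      # Optimal CHSH measurement angles for the entangled state |Phi+>
      # (an assumption of this toy model, not a quote from any paper).
      alice_angles = (0.0, np.pi / 4)
      bob_angles = (np.pi / 8, -np.pi / 8)

      counts = np.zeros((2, 2))   # trials per setting pair (x, y)
      sums = np.zeros((2, 2))     # running sums of the product a*b

      for _ in range(N):
          x, y = rng.integers(2), rng.integers(2)   # settings drawn from the seed
          delta = alice_angles[x] - bob_angles[y]
          a = 1 if rng.random() < 0.5 else -1                  # Alice: uniform marginal
          b = a if rng.random() < np.cos(delta) ** 2 else -a   # P(a=b) = cos^2(delta)
          counts[x, y] += 1
          sums[x, y] += a * b

      E = sums / counts
      S = E[0, 0] + E[0, 1] + E[1, 0] - E[1, 1]
      print(f"estimated CHSH value S = {S:.3f}")   # about 2.83; classical bound is 2

    Any S significantly above 2 is the certificate: no pair of non-communicating, pre-programmed boxes could produce those statistics, so their outputs must contain fresh randomness. The expansion step (stretching the log(n)-bit seed into n certified bits) is what the analysis in the papers above is about.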

  18. cnol Says:

    I wasn’t able to get the 2nd link to work either when using Chrome, but I reopened the link through IE, and it downloaded a Windows Media Player file that worked.

  19. Scott Says:

    William Hird #16:

      Who is Bill and why did poor Bob get canned?

    I’ve started the practice, when I give talks, of replacing Alice and Bob by characters related to the place where I’m speaking.

    If you want to know why, try sitting through 10,000 quantum information talks all centered around Alice and Bob (often even portrayed with the same cliparts), until the very invocation of their names takes on a homiletic character, like the invocation of God in prayer. I’ve found that the trivial act of breaking this convention is sometimes enough, by itself, to rouse a few dozing audience members.

  20. Scott Says:

    emanuel #15: No, I’ve always used PowerPoint. It has some features that I really like (e.g., the animations), but probably more important is just how familiar it is and how much reusable material I already have there.

  21. Scott Says:

    Rahul #13: I spent four summers working at Bell Labs in the late 90s. I completely agree with you about its greater breadth and practical impact—it’s pretty hard to compete with the laser, transistor, communications satellite, cosmic background radiation, etc.! What I said was that, make of it what you will, MSR seems like the closest thing to the old Bell Labs that exists today—the one main competitor that springs to mind being IBM. (And maybe TTI?) I wish there were more companies that invested in open, long-term research the way AT&T used to, but maybe it requires large, unthreatened monopolies or quasi-monopolies, or some other factor that no longer exists today.

  22. Rahul Says:

    Scott #21:

    GE does a lot of good long-horizon research too.

    In general, I think what has happened is that the model evolved from one big lab doing all the long-term research to a lot of large companies each investing in long-term research in its own area.

    Look closer at the research programs of companies like Alstom, Siemens, ABB, Wartsila, Bombardier, BASF, Dow and, even more importantly, many of the (oft-maligned) oil majors.

    Great research is being done at many of these companies and, contrary to popular perception, a lot of it is not “merely” applied. Longer-horizon stuff gets done too. Unfortunately, I agree they might be a bit less active in openly publishing it.

  23. Rahul Says:

    Scott says:

    “What I said was that, make of it what you will, MSR seems like the closest thing to the old Bell Labs that exists today—the one main competitor that springs to mind being IBM.”

    What’s your opinion on the research done at Google? Somehow, I always found it cooler than the stuff that comes out of MSR. And it seems to me Google has actually succeeded in converting at least some of that research into workable ideas. I cannot think of anything that came out of MSR that rivals Google’s autonomous car, say.

    Perhaps it is my internal prejudice or bias.

  24. Scott Says:

    Don Crutchfield #5:

      By the way, why didn’t you boycott MSR given the extent to which the company helps Obama Inc. violate the constitution?

    I should say at the outset that I don’t understand Obama’s thinking on privacy issues—and not merely from some idealistic standpoint, but from the standpoint of the US’s long-term interests or even Obama’s own legacy. Were I in his shoes, I’d probably pardon Bradley Manning (or at least commute his sentence), and declare Edward Snowden free to return to the US with no charges brought. Yes, I can see the value in deterring other people from leaking classified information. But that seems to me to be outweighed, in these cases, by the further erosion of America’s reputation, as autocrats around the world can giddily declare, “see them hunting down these idealistic whistleblowers? they talk a good game, but they’re no better than we are!” And the things Manning and Snowden brought to light really do need to be debated in public, regardless of where anyone stands personally in that debate, and even if one considers these issues more complicated than the Chomskyan left does.

    Having said all that: if Microsoft’s cooperation with the government in these matters were sufficient grounds to boycott them, then I guess one should also boycott Google, Facebook, and Apple? Great, so who wants to be the first? (Not you, Stallman. 😉 ) Seriously, my standards for when to boycott something are pretty high. I do boycott overpriced journals—but in that case, not only is there almost no personal cost to me, but the “boycott” gets me out of onerous reviewing work that I’d otherwise have to do! So it’s hardly even a “boycott” at all: more like just a sensible decision not to do volunteer labor for Elsevier, which happens to align with my principles as an extra cherry on top.

  25. Scott Says:

    Rahul #22, #23: OK, I was only addressing corporate research labs that I know something about personally—which tends to mean, those few with a footprint in math or theoretical computer science, and that heavily interact with academics in those fields.

    I’m genuinely curious: what are some examples of important basic research to have come out of Google? (Besides, of course, PageRank, which predates Google?) They certainly have no shortage of brilliant people there (many hired away from academia), and I’m ready to believe that they’re doing important basic research in machine learning or other areas that I don’t follow as well as I should. But there are three factors that make it hard for me to be sure:

    1. When I ask my friends at Google what they’re up to, they usually say they’re not allowed to tell me! That’s completely different than my friends at MSR (or IBM Almaden or Yorktown Heights), who happily tell me everything…

    2. Google seems to have made an explicit decision, early on, not to create a big Bell-Labs-style basic research organization, but rather to put its researchy people to work on things directly relevant to the company. Now, I don’t criticize them for that decision—indeed, I like to point to them as one of the preeminent examples of a company successfully turning academic CS into practice. But of course, it does limit Google’s ability to impact the CS research community. (I hope that in the future, Google Research does evolve in a Bell Labs- or MSR- or Almaden-like direction, and I wouldn’t be surprised if that happened.)

    3. Suppose I try to judge by the one area I know best, quantum computing. Then what do I see? MSR: Michael Freedman, Matt Hastings, … IBM: Charlie Bennett, John Smolin… Google: the D-Wave deal. 🙂

  26. John Sidles Says:

    Scott evaluates: “Suppose I try to judge by the one area I know best, quantum computing. Then what do I see? MSR: Michael Freedman, Matt Hastings, … IBM: Charlie Bennett, John Smolin… Google: the D-Wave deal. :-)”
    ————————
    That is a fascinating line of inquiry!

    Change in valuation since QIST 2003:
    MS: +30%
    IBM: +130%
    Google: +740%

    Summary  Market valuations provide no substantial evidence that the present generation of quantum researchers is contributing appreciably to corporate enterprise prosperity.

  27. Scott Says:

    John Sidles #26: During the same time interval, Bear Stearns, Blockbuster Video, Borders Books, and Hostess Cakes all collapsed, and none of them had any quantum computing research. From this I deduce that they all should’ve gotten into quantum computing as an insurance policy.

    On the other hand, suppose that, for the sake of argument, I accepted the bizarre theory that a company’s stock price has essentially nothing to do with the quality of whatever quantum computing research it happens to sponsor with 0.001% of its budget. In that case, the conclusion I would draw would be that stock price can’t have been that important in the first place, if it does such a bad job of predicting QC research quality!

  28. asdf Says:

    Scott, thanks for the answer about randomness, which I’m trying to make sense of. One immediate corollary of what you’re saying seems to be that Church’s thesis implies that superluminal signalling is scientifically unfalsifiable. 😉

  29. Scott Says:

    asdf #28: Yes, there’s a clear sense in which the existence of superluminal signals (just like the existence of the tooth fairy) is scientifically unfalsifiable, and we didn’t even need Church’s Thesis to tell us that! On the other hand, what we learn from Albert E. is that, if you do have superluminal signalling, then you unavoidably also get backwards-in-time signalling—which might raise the stakes a bit. 🙂

  30. John Sidles Says:

    It’s not so easy, in the writings of pioneering quantum computing researchers, to find explicit manifestos addressing the question “Why should we study quantum dynamics?” (and of course, we need not presuppose that this question has only one reasonable answer, or even that the set of reasonable answers has a natural ordering).

    With these caveats in-mind, the concluding chapter of David Deutsch’s recent The Beginning of Infinity: Explanations That Transform the World (2012) speaks plainly:

    “Illness and old age are going to be cured soon — certainly within the next few lifetimes. … We know that both of the deepest prevailing theories in physics [quantum mechanics and general relativity] must be false.  … There is only one way of thinking that is capable of making progress, or of surviving in the long run, and that is the way of seeking good explanations through creativity and criticism. What lies ahead of us is in any case infinity. All we can choose is whether it is an infinity of ignorance, or of knowledge, wrong or right, death or life.”

    David Deutsch’s (quantum-grounded) scientific optimism is well-matched to Google’s (information-grounded) corporate optimism: “Google’s mission is to organize the world’s information and make it universally accessible and useful.” It’s less evident (to me) that the enterprise strategies of MS and/or IBM — as implemented in the 21st century — are similarly well-matched to Deutsch’s vision of an infinity-spanning Enlightenment that is solidly grounded in scientific understanding.

    Summary  The “Deutschian” vision of quantum science is well-matched to the “Googly” vision of 21st century informatic enterprises. If David Deutsch’s scientific vision is reasonably accurate, then Google’s enterprise strategy is reasonably optimal.

  31. John Sidles Says:

    As a follow-on to #30, perhaps one weakness of Deutsch’s broad conclusions regarding research in general (and quantum research in particular) is that Deutsch’s conclusions are so broad, and so noble, and so non-specific (in regard to near-term enterprises) that it is difficult to disagree substantially with any of them.

    And this is a flaw, because according to Deutsch himself, there is no creativity without criticism!

    Here, therefore, is a concrete criticism of the following key Deutsch claim:

    “(p. 276) There is a way — I think it is the only way — to meet simultaneously the requirements that our fictional laws of physics be universal and deterministic and forbid faster-than-light and inter-universe communication: many [quantum] universes. … (p. 305) We have no option but to accept the [quantum] theory’s explanations, because it is the only known explanation of many phenomena and has survived all known experimental tests.”

    From an algebraic point of view, it’s tough to argue against these claims. But criticism is much easier if we adopt a geometric point of view … and therefore (according to Deutsch) creativity is easier too … and perhaps this accounts for the burgeoning popularity of the 21st century’s geometry-centric physics/computer science textbooks (Mikio Nakahara’s Geometry, Topology and Physics (2003) and Joseph Landsberg’s Tensors: Geometry and Applications (2012), for example).

    In Overview  Almost all physicists embrace the notion that dynamical flows preserve symplectic structure, such that the Zeroth through Third Laws of Thermodynamics are respected (in mathematical terms, the Lie derivative of the symplectic form is zero). That is because the thermodynamical laws are exceedingly well-tested, and the informatic consequences of violating any of these laws would be confusingly dire.

    The metric analog of this same invariance postulate (namely, the postulate that the Lie derivative of the Nature’s quantum metric is zero) is what 20th century’s algebra-oriented quantum physicists call “unitarity evolution”. But if we think geometrically, the experimental evidence that Nature’s flow is a dynamical metric isomorphism is far weaker than the evidence that this flow is a dynamical symplectic isomorphism. This is for the common-sense reason, that in the laboratory we measure only an exponentially tiny subset of the basis vectors of a Hilbert space of exponentially vast dimensionality, for an exponentially tiny subset of Hamiltonian flows (a set selected by Nature, not by us!).
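
    Written out (a notational gloss on the two invariance postulates above; the Hamiltonian vector field X_H, the symplectic form ω, and the quantum metric g are my labels, not quotations from any source):

      \mathcal{L}_{X_H}\,\omega = 0 \qquad \text{(symplectic structure preserved: the thermodynamic laws)}
      \mathcal{L}_{X_H}\,g = 0 \qquad \text{(metric structure preserved: unitary evolution)}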

    Thus the existing experimental evidence that Nature’s dynamical metric isomorphism is both complete (that is, dimension-spanning) and exact is not nearly as strong as the existing thermodynamical evidence that Nature’s dynamical symplectic isomorphism is exact.

    In a Nutshell  (1) Macroscopic decreases in entropy are never observed; thus we reasonably embrace (quantum) symplectic isomorphism as a dynamical law that is exact. (2) Macroscopic Schrödinger cats are never observed; thus we reasonably embrace (quantum) metric isomorphism as a dynamical law that is only approximate (in that it accurately describes only a restricted set of low-dimension Hamiltonian dynamical flows).

    Conclusion  Our best experimental evidence, as understood in light of our best explanatory mathematics, leads us to regard David Deutsch’s claim “We have no option but to accept the [quantum] theory’s explanations” as algebraically strong yet geometrically weak, and symplectically strong yet metrically weak … and overall plausibly false.

  32. Raoul Ohio Says:

    John,

    Re: “Illness and old age are going to be cured soon — certainly within the next few lifetimes.” I bet you $5 that Deutsch is wrong.

  33. Mike Says:

    “I bet you $5 that Deutsch is wrong.”

    I’ll take that bet. It’s nice knowing that only I will be able to collect!! 😉

  34. Jordan Says:

    Thanks for posting this and thanks for starting my day with a big laugh. @11:02, when the interviewer said he pictured a quantum computer being “in some remote forest in a giant building; there’s a man with gadgets and levers”. Hahahaha. Did this image soar into your head at that moment? 😉
    http://www.extremetech.com/wp-content/uploads/2012/06/Quantum.jpg

  35. Rahul Says:

    I’ll take that bet. It’s nice knowing that only I will be able to collect!! 😉

    You’ll both be dead. “next few lifetimes” 🙂

  36. Mike Says:

    Rahul@35,

    Yeah, I know, I’m counting on it to be “cured” within my lifetime, which is part of the set of the next few lifetimes. 🙂

  37. Rahul Says:

    Scott #25:

    Perhaps the reason I value Google over MSR is my personal bias which values the useful but not fundamental over the fundamental but not useful. 🙂

    But about the dominant position of MSR in the specific, narrow case of “basic research in CS” I agree with you.

    OTOH if I were to search for an heir to the Bell legacy, I would never think of MSR. Closest in my mind would probably be GE Global Research, mainly on the basis of their size and wide range of research interests.

  38. Scott Says:

    John Sidles #30: I wish I had even 1% of your hermeneutic powers! For example, I would never have guessed that the subtext of that (somewhat bombastic) David Deutsch passage about the future of life and intelligence in the universe was actually, “hooray for Google! And boo, IBM and Microsoft!” 🙂

  39. John Sidles Says:

    My wife and I are fond of a charming Star Trek episode “In the Cards” that includes this dialog:

    ——————-
    Jake (human youth)  “We work to better ourselves and the rest of humanity.”

    Nog (ferengi youth)  (skeptically) “What does that mean, exactly?”

    Jake (hesitates) “It means … it means we don’t need money.”

    Nog (triumphantly) “Well, if you don’t need money, then you certainly don’t need mine!”
    ——————-

    What’s notably missing from David Deutsch’s unbounded long-term vision are concrete answers to awkward short-term Nog-style STEM-related questions like “Progress? What does that mean, exactly?” and “Research? How does that work, exactly?”

    The generation-long (and continuing!) immiseration of the North American/European STEM community lends increasing urgency (as it seems to me) to the challenge of finding better answers to pragmatic Nog-type questions.

    It seems (to me) that the Google folks thoroughly appreciate that their core business is to help every person on earth, and also every enterprise on earth, to find their own good answers.

    Good on `yah, Google! 🙂

    Will MS and IBM go down Google’s path? They pretty much have to (as it seems to me).

  40. Michael Bacon Says:

    Scott@30

    Without arguing about the completeness and context of the “quote”, still not right to blame Deutsch for the way John uses these statements — it’s not easy to withstand that kind of scrutiny. 🙂

  41. Michael Bacon Says:

    Feel free to change that to Scott@38 and delete this comment if you like 🙁

  42. Gian Mario Says:

    Just a curiosity I discovered today by chance: you can already plan for your next phone/laptop/computer, but you need to be fast 😀
    http://www.indiegogo.com/projects/ubuntu-edge?c=home.

    I think researchers should start using crowd-funding. I can already see myself explaining to my grandma why I need $500,000 to develop a code to do the first 4D dynamical simulation of quantum effects in gravitational collapse.

  43. Sanjeev Arora Says:

    Scott, I agree with a previous poster that Google is the dominant technological innovator today. They throw algorithms, hardware and plain old smarts at problems in a way that is just inspiring. For example, what they have done with their maps and translate apps is simply fantastic (as I discovered during a 2-month stay in Japan). I am also rooting for their self-driving car to succeed.

    Apple and Microsoft both look stodgy in comparison. As you say yourself, PowerPoint 2003 was almost as good as the newest one.

    Maybe the days of labs like Bell Labs are over; the world has changed too much.

  44. Scott Says:

    Sanjeev #43: Well, I wasn’t talking about companies’ overall coolness, but only about their footprint in basic CS and math research (which is why, for example, I didn’t mention Apple at all).

    But if you want to discuss overall coolness: yes, it’s obvious that Google starts with a huge coolness lead over almost everyone else. I rely on them dozens of times every day, probably even more than I rely on Microsoft or Apple. But in my book, Google loses coolness points over the following:

    • Summarily taking away services people had grown to rely on (e.g., Google Reader) in an attempt to corral them into using Google+. (The video chat in Gmail also became noticeably harder to use after it was tied to Google+.)
    • My Gmail account going down for days, with all my years’ worth of stuff there, and realizing that there’s no user support whatsoever (I had to contact friends who work there to find out what was going on). Later, I read this long Atlantic article by James Fallows, which describes precisely what I’d experienced: Google’s services are free and wonderful until the day they stop working, and you realize your whole life is now locked away in an uncontact-able fortress (unless you’re lucky enough to have friends “on the inside,” as I and Fallows were).
    • The D-Wave deal.

    Meanwhile, Microsoft and IBM gain some coolness points for being friendly places for D-Wave skepticism (Matthias Troyer consults for MSR, and Smolin and Smith work at IBM). But yes, if we’re considering the companies as a whole, I guess Google still ends with the overall coolness advantage. 🙂

  45. Raoul Ohio Says:

    Mike and Rahul,

    How about some Zeno-based cleverness regarding “in a few lifetimes” as “old age is cured”?

  46. Anonymous Says:

    Google is the most innovative technology firm in the world, which is its mission. That is not the same as excelling at basic computer science research, which MSR and especially academia are best at.

  47. Sanjeev Arora Says:

    I never use Gmail or other free email for work-related stuff. If you are talking about which company is best at business, then there’s no question it is Apple.

    If Gmail were run by Apple it would provide a great user experience (not a Gmail strength) and we would all pay $100 per year for it. (Maybe Apple already has such a service that I don’t know about.)

    One interpretation of the D-Wave deal (based on no inside knowledge) is that Google is making a big push in machine learning and is willing to spend big sums to speed up Gibbs sampling. It has also gone on a buying spree for the big-name researchers…

  48. John Sidles Says:

    Sanjeev Arora says: “One interpretation of the D-Wave deal (based on no inside knowledge) is that Google is making a big push in machine learning and is willing to spend big sums to speed up Gibbs sampling.”
    ——————–
    Similarly, one reasonable interpretation — out of diversely many — of the Berkeley/Simons Institute’s 2014 workshop series (that begins with Algebraic Geometry Boot Camp) is that these workshops are substantially about this very same topic.

    This is because the natural physics question “What dynamical trajectories are the D-Wave machines unravelling physically?” is dual to the natural CS question “What algorithmic trajectories are the D-Wave machines unravelling computationally?” … and both aspects of these fundamental questions will receive illumination, no doubt, at these sure-to-be-thrilling Berkeley/Simons workshops.

    Microsoft’s Henry Cohn is one of the organizers; it will be mighty interesting to see what other corporations participate, and what various STEM themes emerge as central to these workshops.

    Good on `yah, Berkeley/Simons/Microsoft! 🙂

  49. John Sidles Says:

    RE #48: Here’s a (hopefully) working link to the Simons/Berkeley Institute 2014 workshop/program series Algorithms and Complexity in Algebraic Geometry

  50. Rahul Says:

    Scott #44:

    I agree with your criticism. Lately, some of the Gmail etc. changes have been annoying. I wish they allowed an opt-out every time they made a UI tweak that makes no sense for most of us.

    Google products like Google News or Gmail seemed mature enough to me circa 2009 or so. Everything they changed after that, in the UI at least, was annoying, and any incremental benefits were negligible. In fact, I’d much prefer their legacy interfaces.

    My personal suspicion is that Google’s smart tech / UI design core team is losing ground to a more mediocre, managerial, non-visionary workforce that got added as Google grew.

  51. Rahul Says:

    Scott says:

    you realize your whole life is now locked away in an uncontact-able fortress

    Google these days allows pretty comprehensive ways of “data liberation”: essentially, downloading a dump of all your data with Google in XML/plaintext or some similar open format.

    Have you given that a shot?

    In any case, Gmail has allowed IMAP clients for a long time, so downloading your full mailbox shouldn’t be a big problem.

  52. luca turin Says:

    Ah, but Bell Labs did _hardware_, and that makes all the difference.

  53. Rahul Says:

    Sanjeev says:

    I never use gmail or other free emails for work related stuff.

    Quite a few workplaces have paid Gmail mailboxes these days, I think.

  54. Alexander Vlasov Says:

    Certainly the right way to give such a presentation is Fedora 19 (codename Schrödinger’s Cat) with LibreOffice Impress. BTW, the equations in Aaronson_Scott_GoodFor.pptx were not readable in my Impress, so I had to convert it to ppt.

  55. EconomicsWins Says:

    Bell Labs and other corporate labs won’t come back in communications and computing, simply because the conditions that existed in the late 19th century do not exist anymore. Shannon theory is correct and Moore’s law is not dying. If either one of them were false then those labs could return, but to what ultimate purpose I would not know (sell a 10^100 bps modem to everyone for $1?).

    However, other corporate mathematical labs will come up to explain things like what is going on in the video here and to find uses. The phase of corporate labs from late 19th to late 20th century is done. New avenues are the new hope. I think corporate labs doing fundamental research will come up again. http://www.youtube.com/watch?v=yKW4F0Nu-UY&list=PLsi3K3pf6burCcGnW7sZW5BTuZyNCKEqh

  56. John Sidles Says:

    EconomicsWins says (#55): “The phase of corporate labs from late 19th to late 20th century is done. New avenues are the new hope.”
    ——————-
    EconomicsWins, your comment (#55) perhaps would be even stronger if it reviewed the abundant existing literature that has been saying the same thing for many decades.

    In regard to Moore’s Law, exponential increases in capacity, sustained over many tens of doublings and tens of years, are commonly seen in 20th-century STEM history. Bell Labs was itself mainly funded by the Moore-doubling of long-distance channel capacity (per the references in comment #9) … and this sustained Moore-doubling is not done yet. Imaging technologies too show sustained Moore-doubling … and this sustained Moore-doubling is not done yet. Classical bit-gates show sustained Moore-doubling, and each of the classical algorithms thus embodied independently shows sustained Moore-doubling … and this sustained Moore-doubling too is (obviously) not done yet.

    Multiple further Moore-doubling STEM-components can be named — gene sequencing! the neural connectome! cognitive reconsolidation! internet-searching! — and the length and diversity of the Moore-doubling STEM-list is (as it seems to me) the 21st century’s best assurance that the STEM community’s generation-long immiseration is foreseeably coming to an end.

    Remark  Notably absent from the Moore-doubling STEM-list are foundational quantum computing elements like error-corrected n-qubit gates (stuck for a decade at n ≈ 1) and on-demand n-photon sources (also stuck for a decade at n ≈ 1).

    Conclusion  In retrospect, it is regrettable that from a stable of exciting quantum STEM-horses, the QIST 2003 panel bet a “superfecta” of STEM-horses that (in recent decades) have not demonstrated a capacity for sustained Moore-doubling. STEM history shows plainly that such “stuck” technologies never provide foundations for global-scale enterprises. Fortunately, there remain plenty of faster horses in the quantum stables, and these vigorously Moore-doubling technologies provide good hope of mitigating the world’s long-endured plague of STEM-career immiseration.

  57. EconomicsWins Says:

    @John Sidles (#56)

    “… this sustained Moore-doubling is not done yet.”

    That is exactly my point. Say we still had to deal with machines capable of n × ENIACs instead of ones capable of 2^n × ENIACs (where n is a function of the time elapsed since the first ENIAC); then there would have been more such labs.

    Unless they find an application comparable to, say, Star Trek’s starship Enterprise and beaming up and down, my opinion is that any progress from quantum mechanics will be incremental and will not put us in a situation parallel to the conditions of the late 19th century.

    Your error-correction point also runs into the unfortunate Shannon wall (for mathematics producers that could benefit society in a drastic and not just incremental progress). I just gave you an example: say you have a genie which can give you a 10^10^10 bps system for $1 million; try selling it to the outside world.

    The status quo of no more such labs won’t change unless something drastic comes along in non-pure traditional technology (outside communications and computing). I do not think an economy that invested tens of trillions of dollars over 100 years building communications and computing infrastructure for a bunch of teenagers to exchange pictures on Facebook is going to help things, and any math which fundamentally goes toward helping those teenagers is not going to make up much for the loss of the former great labs.

  58. EconomicsWins Says:

    “(for mathematics producers that could benefit society in a not drastic but just incremental way)”

  59. EconomicsWins Says:

    I guess the first statement was right. “(for mathematics producers that could benefit society in a drastic and not just incremental progress).”

  60. John Sidles Says:

    EconomicsWins says (#57): “Any progress from quantum mechanics will be incremental and will not put us in a situation parallel to the conditions of late 19th century.”
    ———————
    Perhaps a middle point-of-view is that non-incremental STEM progress in the 21st century  — meaning objectively, STEM progress that is concretely associated to Moore-doubling global-scale enterprises — necessarily will build upon solid physical foundations in quantum dynamics united with solid computational foundations in complexity theory.

    And yet this union will require substantial reconsolidation of the 20th century’s conceptions of the physical and computational foundations of the quantum STEM community — in multiple senses of “reconsolidation” that themselves are Moore-redoubling with burgeoning STEM vigor nowadays.

    The 21st century’s immiseration-mending quantum reconsolidation will be (indeed, already is) fun! 🙂

  61. EconomicsWins Says:

    One more thing – My manager used to say:

    Bell Labs, Motorola and IBM were engineering companies, and they needed infrastructure in an engineering and basic-research sense.

    All Google and MS need is a cable connected to your home or a CD manufacturing machine.

    Of course the example is exaggerated.

    However, you get the idea.

    There is no more real technological innovation. Gone are the days when the FFT, or even sending 1 bps or adding -1+1 = 0, were miracles.

    Even if the power of QC is revealed tomorrow and the promise is heralded as achievable in 10 years, it won’t lead to the conditions of the late 19th century.

    What we don’t have is a mathematical theory of life, even to the extent of what we had after an apple fell on Newton’s head heralding a new miraculous force, or even before that when the ancients worshipped the celestial forces (they at least had decent calendars). This is what fundamental theory should focus on. Maybe a thousand years from now, some Newton will be bitten by a bug and get something new out of it.

  62. John Sidles Says:

    EconomicsWins claims: “There is no more real technological innovation.”
    —————————
    As David Deutsch’s The Beginning of Infinity: Explanations That Transform the World (2012) entertainingly documents, that claim has been heard before.

    Interestingly, Deutsch’s chapter “A Dream of Socrates” sets forth a considerable body of evidence that conservative political cognition is particularly prone to this fallacy.

    Example  It was America’s “Spartan” president Eisenhower who acted to restrain the Space Program (as being too expensive), whereas America’s “Athenian” president Kennedy proclaimed

    “We choose to go to the moon. We choose to go to the moon in this decade and do the other things, not because they are easy, but because they are hard, because that goal will serve to organize and measure the best of our energies and skills, because that challenge is one that we are willing to accept, one we are unwilling to postpone, and one which we intend to win, and the others, too.”

    Kennedy’s reasons were irrational by any purely economic standard, needless to say!

    As for the mathematical foundations of Kennedy’s program, a very readable account is Denis Serre’s recent Von Neumann’s comments about existence and uniqueness for the initial-boundary value problem in gas dynamics (Bull AMS, 2010).

    Conclusion  Present-day advances in our understanding of nonlinear quantum dynamics are (as it seems to me) comparably interesting — and of comparable economic and strategic consequence too — to the advances of von Neumann’s generation in understanding nonlinear gas dynamics. It even happens that key elements of the mathematical frameworks are similar, in that entropic considerations are central to both! 🙂

  63. Raoul Ohio Says:

    At the time, I was a huge fan of going to the moon. I still think it was absolutely worthwhile, and it is probably worthwhile to go back every 20 years or so.

    Sending humans to Mars is another story. My quick guess is that it is 1000 times harder and more expensive. Many of the problems will NOT get much easier with advances in computers, etc. This is why it is not likely ever to happen.

    The frequent proposals for Mars travel are entertaining. One came out of England this week: Human health under no gravity? No problem, just have two ships spinning on a tether, like in the old SF books. Fuel for the return trip? Not a problem! Just do what Martha Stewart would do: Simply dig up 10000 tons of dust, extract 10 tons of water from it, and split it into hydrogen and oxygen for fuel! Simple, and the astronauts can use the exercise after sitting for a couple years. One thing they forgot: where are they going to plug in the processing equipment? Maybe they can take a nuke plant along in the baggage section. You can probably get an old one pretty cheap if you clean up the site after you launch it.

  64. Rahul Says:

    Every time Microsoft gets some praise, when it really deserves heaps of ridicule, it riles me up a little bit more.

  65. Scott Says:

    Rahul #64: Then here’s a thought to rile you up some more.

    Suppose you had asked yourself, as a teenager, “how should I live my life so as to maximize my impact on reducing the most widespread, obvious forms of human suffering today, like childhood deaths from malaria?” And then set out, as an earnest utilitarian, to implement your answer? What would the result look like?

    It seems clear that your life would look nothing at all like Mother Teresa’s, or that of any other traditional “saint.” But it might look a helluva lot like Bill Gates’s. That is, the best strategy might well be to spend the first half of your career making billions of dollars almost any way you could—stealing other people’s ideas, making deals only to backstab your partners later, locking customers in to buggy, inferior products, whatever—and then to spend the second half giving your billions away, thinking very hard about how to maximize the impact of each grant.

    Now, I’m not claiming that Gates actually had any such conscious plan. He was obviously tremendously lucky; not even he could’ve had any idea in 1975 of MS’s eventual scale. And at least from what I’ve read about him, for most of Microsoft’s rise he was motivated less by altruism than by, e.g., the pleasure of crushing competitors. And we all know all of that, and I suspect that that’s why Gates’s resemblance to my hypothetical utilitarian altruist grates on our intuition so much. Or at least, I confess that it grates on mine—so much so that I probably wouldn’t have brought this up, had I not felt the urge to rile you up just now. 🙂

  66. Raoul Ohio Says:

    I am putting on my contrarian hat to rile up Rahul and point out some mildly good aspects of Microsoft.

    1. Anyone who bad-raps MS does not work at a university plagued with Blackboard. MS Excel is very easy and powerful; I use it for a dozen things a day, sometimes including research. Check out the Bb spreadsheet to see how it can be done wrong. Yes, I know there are free-range spreadsheets out there that, in the hands of a Linux guru, work half as well as Excel. So what?

    2. MS Word is also easy and powerful. It is even good for rough drafts of research papers. When you get it organized, use Texmaker to render the final version into LaTeX. A frequent criticism is that most users only use the basic 10%, plus 1% of the advanced features, so Word is “bloatware”. But, we all use a different 1%, so what is the problem with 0.0001% of our hard drive holding a bunch of stuff we never use?

    3. Microsoft products get slowly better. They have huge “usability labs” to try to figure out how to make products easier. We all gripe about the changes in each new version. But if you go back a couple of versions, it feels like trying to program in Fortran, because in retrospect most of the changes were a good idea. Google’s changes have not been as well received.

    4. Bill Gates was absolutely somewhat of a jerk, but arguably no worse than other software titans like Steve Jobs and Larry Ellison. I am cutting Ellison some slack because his $$ has resulted in major improvements in Java and NetBeans. Who would have guessed that Oracle buying Sun would turn out great?

  67. Rahul Says:

    Scott #65:

    Bill Gates as the modern Robin Hood model. 🙂

    I think Microsoft just grates on me, even without the background of Gates’ altruism.

    OTOH, given the reality that we are stuck with a Windows that is going to produce blue screens, and a Word thesis that will keep crashing the day before the submission deadline, I’d rather have this world with a massively altruistic Bill Gates than another with a stingy, Scroogey Gates.

  68. Rahul Says:

    Raoul Ohio #66:

    Funnily enough, your comment did not rile me up. 🙂 I kind of agree, especially with your points 1 and 2.

    I’m no visceral Microsoft-products hater: I use Excel and Word a lot too. Perhaps because I use them so much, their bugs rile me even more.

    I don’t agree with your 3rd point though. Excel and Word and even Windows are success stories whose foundations were laid in Microsoft’s early decades. I’d be perfectly happy with a version of most Microsoft products produced 10 years ago (a fact that’s been a major challenge for their own sales and marketing teams).

    It’s sad that the later generations of Microsofters have done very little to improve on these core products (and oftentimes have done a lot to destroy their strengths, Vista being the prime example).

    There’s a bug in Excel when fitting exponential regressions that’s been documented in an online Microsoft advisory since 2000 or so and still happily persists because no one cares to fix it (and note they have ample opportunity looking at the number of updates waiting to be pushed every time I reboot my PC). Tons of other such sad stories.

    Which is why I’d have a little more respect for Microsoft Research if they did even a tiny bit to improve upon and iron out the bugs in these massively popular core products, rather than spending their time on exotic flights of fancy or on proving esoteric bounds on computation that have very little practical utility.

    Even a 0.5% reduction in the number of times Windows crashes probably means literally millions of work-hours saved every day. I can’t think of an IT project that would give you a higher effort leverage.

  69. Ben Shipman Says:

    Don Crutchfield and Scott recognize the problem of unconstitutional corporate-government surveillance, while John Sidles recognizes the exponential nature of the technological advances that the guilty corporations are driving. What about the connection between the two? As the technology at the disposal of these people approaches infinity, so will their ability to spy on, manipulate, and otherwise harm us.

  70. Raoul Ohio Says:

    Ben,

    Recall that 10 or 20 years ago, someone (Larry Ellison?) said something like “Privacy is dead. Get used to it.” I thought, “What a jerk.” Unfortunately, it looks like he was right. And it is not just governments and corporations. For a couple million dollars, anyone could, say, track all the traffic in a major city, or who knows what and why. Any idea how much Amazon, Google, Facebook, etc., know about you?

    So, what can you do? To paraphrase the gun lobby, “When tracking is illegal, only outlaws will have information”.

    Perhaps the best bet is laws protecting people from certain uses of information.

  71. Tim May Says:

    First-time post here, I think. (I’m reading “QCSD” and had a couple of great conversations this past summer with a cryptographer and a string theory/Sci-Fi novelist about the book. We all focused on the idea that “L2 is the natural and obvious space for qubits and QM to live in,” which I’ve thought, ever since Scott’s “Islands” paper, is at the core of how the universe works.)

    A few misc. points, including about the Mac-Windows debate:

    * I worked for Intel from 1974-86 (device physics, including the alpha particle/cosmic ray problem, with which I am most associated)

    * built my first PC in around 1978, from Processor Technology…I used to go to the Homebrew Computer Club meetings at SLAC…used to hand out Intel cosmetic rejects to those interested in getting free 8080s.

    * bought a home PC in 1983, the PC-XT. My work computer was my lab’s VAX. (Bought Intel’s second VAX, delivered in 1980.) Tried Windows 1.0: terrible. Tried Windows 2.0: terrible. Also bought Microsoft Word 1.0, with an included Microsoft Mouse, in around December 1983. Took my home machine, with MS Word, into my office to write a paper for a 1984 conference. (The alternative at work was to submit to a typing pool, blah blah.)

    * used a Symbolics 3600 Lisp Machine. Was sold on the graphical UI, windows, dropdown menus, etc. (Had earlier seen the Xerox Alto/Star that inspired Jobs, via a tour given by Larry Tesler at PARC.)

    * was allowed to get a Mac for our graphics work, mainly via the new Laserwriter, in 1985.

    * retired from Intel in 1986. Bought a Mac Plus for my home use.

    * note that Microsoft Word, which started on the PC, was later ported to the Macintosh in around the late ’80s. It was then back-ported to the new Windows 3.0/3.1 versions. Also appearing first on the Mac were: PowerPoint (another company, acquired by MS), Excel (introduced by MS on the Mac first), and various products like PageMaker, Illustrator, and Photoshop. These were “bread and butter” apps on the Mac in the crucial desktop-publishing age. Most were later ported to the Windows environment.

    (I only mention this for the ironic point that what most people think of as core Microsoft products–Word, Excel, PowerPoint, and graphics/publishing apps–mostly had their first success on Macs.)

    * For nearly 30 years I’ve happily used Macs. They had more “out of the box” usability than what my friends still at Intel were getting out of Windows 3.1, then Windows 95, then later iterations.

    * They often cited Michael Dell’s famous quote that Apple ought to just liquidate the company and return the proceeds to the shareholders.

    * This was around 1997-98. That year I bought $16,000 worth of AAPL to add to my existing shares (some bought in ’84). Just those shares alone came to be worth a few million bucks.

    (From my Apple shares alone, not counting my Intel, etc., shares, I could almost buy my own D-Wave machine. Should I? Rhetorical question.)

    So, yeah, I see nearly all of my (current) friends happily using Macs. I see most people at conferences using Macs.

    Works for me.

    In any case, I greatly admire Scott’s work. Especially in debunking the folk science about how a working quantum computer could somehow solve all computational problems in vanishingly small time (all those other universes doing the work, doncha know?).

    Exciting times.

    –Tim May, Corralitos, California

  73. John Sidles Says:

    John Sidles (in comment #69) is asserted to “recognize the exponential nature of the technological advances that the guilty corporations are driving.”
    —————

    Summer Job 2014  Meet strangers at any public venue and buy them drinks … and we’ll pay the tab! The larger the party, the better! Don’t forget to take photos at the party! Then swab the party-goers’ glasses/spoons/forks and send the DNA to us at EfficientHealthCareCo for sequencing.

    We at EfficientHealthCareCo identify the genetically/socially/economically healthiest quartile (by matching photos to swabs) and we offer these picked customers insurance at rates that other companies can’t match.

    You’ll be doing your lucky party-friends a big favor *and* making *terrific* money *and* having fun!
    —————

    Conclusion  The “Big Brother” dystopia of George Orwell’s 1984 is upon us … its agents are corporate … its chief tool is 21st century population surveillance. The Republican Party’s persistent blindness to this Orwellian technological reality has dealt a crippling blow to the viability of principled American conservatism.

  74. Raoul Ohio Says:

    Tim,

    Being a stockholder, you can be excused for not being offended by the exorbitant prices.

  75. Alexander Vlasov Says:

    Raoul Ohio #66, I don’t know of any programs for painless conversion of MS Word files with equations into LaTeX. I just checked Texmaker and again did not find such an option at all.

  76. Raoul Ohio Says:

    Alexander,

    I looked into this issue a couple of years ago and found some. I did not try to use them because I suspected they would be more trouble than they are worth.
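
    For what it’s worth, one tool that often comes up for this is pandoc, which claims to convert .docx files, including Word’s built-in equations, into LaTeX. I have not tried it myself, so treat the following as an untested sketch (the filenames are placeholders):

    # Untested sketch: ask pandoc (if it is installed) to convert a Word draft,
    # including its native equations, into a .tex file.
    # "draft.docx" and "draft.tex" are placeholder filenames.
    import subprocess
    subprocess.run(["pandoc", "draft.docx", "-o", "draft.tex"], check=True)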

    The following approach works for me:

    1. Write your rough drafts in MS Word. Either (a) enter equations using Equation Editor, (b) write abbreviated equations in text, or (c) just write placeholders, something like “(Main Inclusion result)”.

    2. I usually have to do a lot of reorganizing of text to get a decent document; largely Copy/Delete and Paste. Also a lot of rewriting, using different words, etc. This is a task MS Word is particularly good at; I find it a lot easier in MSW than in any LaTeX editor.

    3. When you have things fairly well organized, write the final version in TexMaker. Enter the equations in TM. I find it easier to write them in LaTeX than clicking on the TM icons. Pure text sections can be copied and pasted directly from MSW to TM.

    4. Do final polishing up from the .pdf created. This task seems easier if, in the raw .tex, you start each sentence unit on a new line.

    In the past, I have written tons of stuff with Equation Editor, MathType, etc. This was a big mistake; I wish it were all in LaTeX.

  77. asdf Says:

    Scott #64, that malaria strategy only works if the “become a billionaire” part of the plan actually succeeds when you try to implement it, and millions of people try and fail. You prevent maybe 1e8 cases of malaria if you do become a billionaire, and 0 cases otherwise. If your chance of becoming a billionaire is 1e-6, your expectation E(malaria cases prevented) is 100 cases. Whereas if you enter an ordinary profession and then take off for 6 months to hand out malaria pills or mosquito netting in Africa, you can prevent 200 cases with probability close to 1. That seems to beat the Bill Gates plan in malaria utility.
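
    Spelled out as a few lines of code (just the rough numbers above, not real data):

    # Expected malaria cases prevented under the two strategies sketched above.
    p_billionaire = 1e-6                 # assumed chance of actually becoming a billionaire
    ev_gates_plan = p_billionaire * 1e8  # about 100 cases in expectation
    ev_direct     = 200                  # about 200 cases, with probability close to 1
    print(ev_gates_plan, ev_direct)      # 100.0 200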

  78. asdf Says:

    Actually here’s one for you. You are in a dark room with two other rational people. Each person is given a hat that is either red or blue, but since it’s dark nobody can see any of the hats. The colors are assigned by independent fair coin flips, so the chance of all 3 hats being the same color is 1 in 4. Each person also has a button they can press that sends a signal outside of the room. The people in the room cannot communicate with each other.

    I’m in another room and I also don’t know the hat colors. I write a betting slip that says I (asdf) will pay $4 to the holder of the slip in the event that all 3 hats are the same color. I offer (over an intercom) to sell the betting slip for $3: press your button if you are willing to buy it. (If more than one person presses their button, I pick a buyer at random). This appears to be a fair bet.

    1. Do you press the button for this bet?

    Now, the lights are turned on, so each person can see the other two people’s hats but not their own hat. You see that the other two people both have blue hats, so (conditioned on this info) you can determine that the chance that all 3 hats are the same color is now 50%. From where I am, I still don’t see any of the hats. I write another betting slip saying I’ll pay $4 if all three hats are NOT the same color. I offer over the intercom to sell this slip for $2: press the button if you are willing to buy it. This also seems fair.

    2. Do you press the button for this second bet?

    Both bets seem fair, but if you take both of them, you end up unconditionally paying $5 and collecting $4, and I get a $1 guaranteed profit. What has happened?

    A long-winded analysis of the problem is here:
    http://www.bovens.org/ADutchFV.pdf

    I think it’s supposed to mean something, but I’m not sure what 😉

  79. Sniffnoy Says:

    I’m confused; that first one doesn’t seem fair at all. You’re paying $3 for an expected $1.

  80. asdf Says:

    Paying $3 for an expected $1 is fair because you have 3/4 chance of winning.

  81. asdf Says:

    Wait did I write it backwards? I better check, but I’m in the middle of something else right now.

    There’s an abstract of the paper here: http://www.math.uni-bonn.de/people/fotfs/VI/abstracts.html#bovens

    That describes the problem, but in kind of a confusing way. I tried to de-confuse it and may have messed it up.

  82. asdf Says:

    I did get it backwards. I wanted to bet that the hats were the same color, and I wrote it the opposite (pay if the hats are the same color). Both bets should be reversed, i.e. I pay $4 in the first bet if the hats are NOT all the same color, and vice versa in the second bet. Unfortunately there’s not a way to edit comments here AFAIK.
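
    Here’s a quick Monte Carlo check of the corrected bets (my own sketch, not from the paper; the colors, prices, and variable names are just mine). It confirms that each bet is fair on its own terms, yet anyone who takes both is guaranteed to lose exactly $1:

    import random

    random.seed(0)
    TRIALS = 100_000
    bet1, bet2, both = [], [], []

    for _ in range(TRIALS):
        hats = [random.choice("RB") for _ in range(3)]
        all_same = hats[0] == hats[1] == hats[2]

        # Bet 1 (lights off): pay $3, collect $4 if the hats are NOT all the same color.
        p1 = (4 if not all_same else 0) - 3
        bet1.append(p1)

        # Bet 2 (lights on): person 0 sees hats 1 and 2 and buys only if they match.
        # Pay $2, collect $4 if all three hats ARE the same color.
        if hats[1] == hats[2]:
            p2 = (4 if all_same else 0) - 2
            bet2.append(p2)
            both.append(p1 + p2)

    avg = lambda xs: sum(xs) / len(xs)
    print("bet 1 alone, average profit:       %+.3f" % avg(bet1))  # roughly 0 (fair)
    print("bet 2 when bought, average profit: %+.3f" % avg(bet2))  # roughly 0 (fair)
    print("both bets taken, profit range:     %+d to %+d" % (min(both), max(both)))  # always -1

    What the simulation makes concrete is that each price is fair relative to a different information state, and yet the combination is a guaranteed loss; make of that what you will.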

  83. Alexander Vlasov Says:

    Raoul Ohio #76, thank you. In fact, I had to write a simple LaTeX editor myself to solve similar problems. And I don’t like typing “\longleftarrow”, etc., instead of making a couple of mouse clicks (so I implemented that in my toy editor). Now I’m going to try TexStudio/TexMaker instead. My 5 cents toward the proof that the MS world is not too friendly for scientists.

  84. Raoul Ohio Says:

    Alexander V,

    I find that learning and guessing things like \longleftarrow are easier than clicking an icon. Also, you can make your own “escape words” (is there a standard name for these?), including replacing a name you don’t like with one you do.

    Here is an example. I do a lot of work with sets of scalars, vectors, and matrices. I follow a common linear algebra convention: lowercase Greek for scalars, lowercase Latin for vectors, and uppercase Latin for matrices, with bold indicating a set. Because I have many results that hold for sets of scalars, sets of vectors, and sets of matrices, I have taken to using uppercase Greek to denote an object that may be a scalar, a vector, or a matrix.

    For Greek letters whose uppercase form has the same shape as a Latin letter (e.g., Alpha and A, …), the uppercase Greek font (perhaps called “math roman”) is usually slightly different from the uppercase Latin font. But (!) TeX only provides uppercase Greek commands for the letters that differ from the uppercase Latin letters.

    This situation goes against the programmers’ rule of thumb that I call “Don’t take dumb shortcuts; you will be sorry someday.” I wonder if this decision goes back to Knuth? I am so used to Knuth having figured everything out correctly that the exception seems worth a comment.

    Anyway, to fix this issue, I just include the following (I hope this gets rendered properly!)

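    % Upright (roman) versions of the uppercase Greek letters that TeX leaves out
    % because their shapes coincide with the Latin capitals: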
    \newcommand{\Alpha}{\mathrm{A}}
    \newcommand{\Beta}{\mathrm{B}}
    \newcommand{\Epsilon}{\mathrm{E}}
    \newcommand{\Zeta}{\mathrm{Z}}
    \newcommand{\Eta}{\mathrm{H}}
    \newcommand{\Iota}{\mathrm{I}}
    \newcommand{\Kappa}{\mathrm{K}}
    \newcommand{\Mu}{\mathrm{M}}
    \newcommand{\Nu}{\mathrm{N}}
    \newcommand{\Omicron}{\mathrm{O}}
    \newcommand{\Rho}{\mathrm{R}}
    \newcommand{\Tau}{\mathrm{T}}
    \newcommand{\Chi}{\mathrm{X}}

    For anyone just getting started, there is a nice “Wikibooks” LaTeX tutorial that you can also get in print for about $8. Then you should get one of the books by Gratzer, by Kopka and Daly, or by Mittelbach and Goossens. I am a book addict, so I have all three. You know they are good, because used copies are not cheap on Amazon!

  85. Alexander Vlasov Says:

    Raoul Ohio #84, there are many discussions about that in many places; I’m afraid we may get banned here. And I also used to write “\longrightarrow” or something like that.

  86. Ashley Lopez Says:

    Scott,

    “What quantum mechanics is good for” reminded me of a ‘prediction’ you had made before: “The effort to build quantum computers, and to understand their capabilities and limitations, will lead to a major conceptual advance in our understanding of QM (one that hasn’t happened yet)”.

    Of course, if we find that we CAN’T build quantum computers for some fundamental reason this would be true, but why do you think it would still be true OTHERWISE? (Or is this some deep intuition that you would find difficult to put into words? 🙂 )

  87. Scott Says:

    Ashley #86: Eh, I was asked to make a “prediction” in that talk, but what I offered was really more of an “aspiration.” (I’m usually not in the prediction business — especially not about the future! 🙂 ) I hope there will be such a conceptual advance. But I have no good argument that it’s likely, other than the fact that asking what you can and can’t do with quantum mechanics really did lead to great conceptual advances in the past (e.g., the Bell inequality, quantum teleportation).

  88. John Sidles Says:

    Scott notes (#87): “Asking what you can and can’t do with quantum mechanics really did lead to great conceptual advances in the past.”
    ———————
    It is fun to extend those two lists:

    I. Certainly Can Do With QM
    • Hanbury Brown–Twiss correlations
    • Bell inequality
    • observe single-particle jumps
    • quantum cryptography
    • state teleportation

    II. Probably Can’t Do With QM
    • violate the Second Law
    • solve NP problems in PTIME

    III. Status and/or Domain Unknown
    • describe gravitational dynamics (?)
    • scalable on-demand photon source (?)
    • scalable computational error correction (?)
    • simulate quantum dynamics with PTIME resources (?)

    It ain’t easy to imagine what rigorous understanding of the Class II objectives might look like, or to imagine what plausible understanding of the Class III objectives might look like.

    In regard to the Class III objectives, the recent work by Glen Evenbly and Guifre Vidal — e.g., their recent A theory of minimal updates in holography (arXiv:1307.0831, 2013)  — seems (to me) to point in directions that are well-worth exploration.

  89. Alexander Vlasov Says:

    I have a personal example that arguably proves QC efforts may be useful in any case. A few years ago I participated in a project in which we needed a fast method to estimate the absorption of energy in bodies of different shapes. It is a classical problem, with no relation to quantum computing or quantum mechanics. There was a known method for a convex body, and I suddenly found that it could be applied to a non-convex body … if one accepts the possibility of a formal extension with negative probabilities. I strongly doubt I could have found that without my earlier active work on quantum tomography problems in QC.

  90. Ben Shipman Says:

    Raoul (#70),

    I do not complain that my government has nuclear weapons.

    I would complain if they had started nuking their own population.

    The United States is already using state-of-the-art information weaponry against its population.

    In 50 years, the state-of-the-art may encompass mind-reading and mind-control.

    Modern ‘democratic’ governments have gotten deeply into the habit and practice of manipulating people. What makes you think that, with unlimited manipulation power, they will stop at threshold X?

    What can I do? Use my own facility with information: reveal the danger, ask for help. There are a lot of smart people here. Any one of the following I will take as help:

    Say why the danger is not what I think it is.

    Say why the matter is not worth worrying about here.

    Say that it is, and offer your own perspective/suggestions.

    Thanks in advance.

  91. John Sidles Says:

    Ben Shipman requests (#90): “Say why the danger [of informatic surveillance] is not what I think it is.”
    —————

    Thesis  The most effective agents of George Orwell’s “Big Brother” dystopia are corporate; their chief tool is market-driven population surveillance using 21st century informatic technologies.

    —————
    Example  The “Summer Job 2014” scenario spelled out in comment #73 above.
    —————

    Conclusion I  The social safeguards and moral principles historically associated to the “checks and balances” of Jeffersonian republican government are at risk of being substantially or entirely suborned by global-scale NGOs that reside entirely “in the cloud”.

    Conclusion II  Medical technologies in particular are associated to dystopian Orwellian potentialities, as medical sensing capabilities advance at a sustained Moore-doubling pace toward fundamental quantum limits that allow for many more doublings.

    Conclusion III  The Orwellian implications of sustained advances in sensing and surveillance capabilities — advances that are grounded physically and informatically in quantum science — pose fundamental challenges to principled conservatism and liberalism alike.

    It is regrettable that few political parties (and few STEM professionals? and in particular, few QC/CT researchers?) are grappling effectively with these issues, which already are acting to reconfigure human society, and reconsolidate our understanding of human nature, far more quickly and more radically than any quantum computation technology ever envisioned.

  92. Raoul Ohio Says:

    Ben,

    You say “The United States is already using state-of-the-art information weaponry against its population.”. How do you decide if it is for or against its population? Do you fly on airplanes? Are you happy or sad that they are not bombed?

    Consider spending some time in Mogadishu so you can enjoy freedom from security.

  93. Ben Shipman Says:

    The Modern Surveillance State vs. Mogadishu is a ridiculous false dichotomy.

    As for airplanes, I am FINITELY happy that they are not bombed, meaning that I do not want human freedom destroyed to save one airplane.

    Even at twenty times the current terror risk, I would still fly. I would prefer not being able to fly at all to a 1% chance that my government imposes a permanent global despotism.

  94. Ben Shipman Says:

    But I should answer Raoul’s question specifically: “How do you decide if it is for or against [the U.S.] population?”

    For one, the spying was done without oversight by Congress, let alone public knowledge. This is a subversion not only of democracy, but of the rule of law in the United States. Such a thing is incompatible with liberty, and hence is a transgression against every Citizen and every State in the Union.

    For another, if the US government were interested in the security of its citizens, it would secure the national borders, rather than pretending this to be impossible, irrelevant, or predicated on amnesty for illegals.

    The simple explanation is that corporations want big data and cheap labor, while politicians want corporate lobbying money, corruption-tolerating constituents, and excuses to keep tabs on opponents. The drive is not to make Americans safe, but to make them safe for the system – helpless, powerless children who never grow up.

    So many ordinary people already feel like this, like their lives, no matter how long they last, don’t matter. I request that you ask yourselves the question:

    “Is the place where we are heading only acceptable to me because I am a member of the cognitive elite, the new ruling class, the narrowing circle that makes change rather than simply being subject to it?”

    If your answer is yes, then I ask you, for everyone’s sake, to reject what you have accepted, and find a different way.

  95. Anonymous Says:

    Are the remainder of the MSR Faculty Summit videos (the other talks) online anywhere?

  96. Michael Says:

    “If Mr. Gates took a 2-minute break from curing malaria to call his former subordinates and tell them to do that, I’d really consider him a great humanitarian.”
    I hope this is a joke. Bill really is a great humanitarian.