Science journalism: good and hilarious

On Wednesday, Larry Hardesty of the MIT News Office published a nice article about my work with Alex Arkhipov on the computational complexity of linear optics.  Although the title—“The quantum singularity”—made me wince a little, I was impressed by the effort Larry put into getting the facts right, and especially into laying out the problems that still need to be solved.

Less successful was a story in PC Magazine based on MIT’s press release, which contained the following sentence (let me know if you can decipher what the author meant—I couldn’t):

Aaronson says that he and Arkhipov have not successfully proven that designing a device capable of testing the theory is impossible—which is an important first step, whether to eventually building a quantum computer, or even just laying the initial framework for using the microscopic secrets of the universe to let humans better understand the world that surrounds them.

However, in the competition for Popular Science Article Sentence of the Year, the sentence above will have to contend with a now-classic sentence from the New York Times article about Watson:

More than anything, the contest was a vindication for the academic field of computer science, which began with great promise in the 1960s with the vision of creating a thinking machine and which became the laughingstock of Silicon Valley in the 1980s, when a series of heavily financed start-up companies went bankrupt.

To the NYT’s credit, they quickly posted a correction:

An article last Thursday about the I.B.M. computer Watson misidentified the academic field vindicated by Watson’s besting of two human opponents on “Jeopardy!” It is artificial intelligence — not computer science, a broader field that includes artificial intelligence.

19 Responses to “Science journalism: good and hilarious”

  1. Think! Says:

    What “heavily financed start-up companies” working on AI went bankrupt in the 80s? And did they indeed become laughingstocks?


    In other news from the NYT, aluminum tubes and uranium from Niger have conclusively been linked to Saddam’s plan to build a nuclear bomb. Also, Iran is in noncompliance with the NPT.

  2. chazisop Says:

    Basically, I believe the PC Magazine article is saying that you have not proven the impossibility of such a device (I trust the quotation to be accurate?), and therefore it is possible to create such a device, which, if built, will advance humankind, etc.

    Summarizing, the logical fallacy of the author is equating the lack of an impossibility proof with a proof of existence.

  3. Sam Alexander Says:

    Here’s my theory on the PC Magazine article. The “which” after the long dash is meant to refer to “designing a device”, rather than to “proven that…” So in other words, I would paraphrase it:

    “They have not successfully proven that designing a device (which designing is an important first step) is impossible”

    This officially upgrades the writing quality from 0.25/10 to 0.5/10.

    BTW, yesterday I stumbled onto Larry Hardesty’s article while messing around on StumbleUpon. So take pride knowing your work is quite popular, alongside the likes of cute kitten pictures, webcomics, and so on.

  4. Tim Converse Says:

    @Think! – although the article puts it a little strongly, some example 1980s AI companies are Symbolics (maker of Lisp machines), Intellicorp and Teknowledge (expert systems), and Thinking Machines (highly parallel computers, with intended AI applications). Thinking Machines didn’t go bust until ’92, and a couple of the other companies exist in some legacy form now.

    “Laughingstock” is a little too strong as well, but let’s say that most of these companies didn’t live up to the expectation of their funders, and it became harder to secure funding for AI startups after that.

  5. Richard Warner Says:


    The timing is fairly convenient. That same day, I remember, this Slashdot story, “Gosper’s Algorithm Meets Wall Street Formulas,” was also conceived by an MIT brain …

    You guys at MIT are fairly amazing folks. Congrats. Continue your great work!

  6. Giorgio Camerani Says:

    The problem with the PC Magazine article is that its author considers “not having proved that X is impossible” and “having proved that X is not impossible” as two logically equivalent sentences.

  7. Pete Says:

    Two questions demonstrating my ignorance:

    1) What exactly does it mean for two photons to arrive somewhere at “exactly” the same time? Is there some equivalent of a Planck length for time that serves as this tolerance?

    2) Aren’t there already natural phenomena that we can’t simulate on a classical computer, like the weather? Isn’t seeing how much precipitation occurs where and at what time just as much of a “calculation” being done by nature as this apparatus “calculating” the distribution in question?

  8. Vadim Pokotilov Says:


    Do you have any sense of when an experiment like this might actually be done?

  9. Scott Says:


    1) When we say “at the same time,” we really mean: within a short enough time interval of one another that their Gaussian wavepackets mostly overlap. That’s already been done for 2 photons (the now-standard Hong-Ou-Mandel dip), but seems hard to scale to a large number of photons whose wavepackets all need to overlap.

    2) Yes, weather is hard to simulate, but for completely different reasons than the ones that interest us here! (Namely, weather seems hard to simulate because of the sheer number of particles involved, the chaotic behavior of those particles, and our lack of precise knowledge of initial conditions.)

    You could think of the goal of quantum computing as being to show that Nature can remain hard to simulate on a classical computer, even when
    (1) there’s a relatively-small number of particles involved (say, a few hundred),
    (2) there’s no sensitive dependence on initial conditions, and
    (3) we know the initial state and evolution equations of the particles almost exactly.

  10. Scott Says:

    Vadim: I believe some people are already trying to do the experiment with 3 or 4 photons; and in any case, that should be well within current technology. Of course, we’re eagerly looking forward to the results!

    Scaling up to (say) 10 or 20 photons is a different matter—I’m not sure when that will become possible. As Barry Sanders says in the article, it seems to depend mostly on when it’ll become possible to create a reliable stream of single photons spaced at extremely regular intervals.

  11. John Sidles Says:

    To provide some additional technical background to Scott’s answer: the six-photon output states described in the Radmark, Zukowski, and Bourennane preprint Experimental high fidelity six-photon entangled state for telecloning protocols (arXiv:0906.1530) represent pretty much the state of the art … doing better will require considerable advances.

    To appreciate why, we reason physically. Photons are created by (quantum) currents in the photon emitter and absorbed by (quantum) currents in the photon detector. Moreover, QED tells us that these emitter-detector currents are deterministically coupled … if one is specified, the other is completely determined.

    20th-century technologies are severely limited in their ability to control emitter and/or detector currents. In consequence, a typical experimental strategy has been to specify (what amounts to) a high-power photon source current, then post-select the resulting detector currents. In essence, we wait until (at random intervals) we observe detector currents that have the desired n-photon properties.

    Coherent-source methods have two practical limitations: (1) they are hugely wasteful of optical power, and (2) the correlated photons emerge at random intervals. It’s not obvious that these methods can be extended much beyond the present six-photon experiments.

    A more elegant engineering approach is to carefully control the (quantum) source current fluctuations, or the (quantum) detector current fluctuations (or some clever combination of the two). Here the practical challenge is that it’s not obviously easier to control the quantum currents in (say) a high-coherence deterministically clocked 10-photon source, than it is to build a high-coherence deterministically clocked 10-gate quantum circuit.

    Students should not imagine that the engineering calculations associated to these experiments are easy … Feynman himself famously asserted—wrongly—of the Hanbury Brown and Twiss experiment (an early correlated-photon experiment whose high-power photon source was the star Sirius) that “It can’t work!”.

    Moreover, the history of large-scale QED calculations abundantly illustrates that the probability of significant error in published theoretical calculations is nonzero … particularly when the calculations are done in advance of experiment … I’ve had personal experience of the immense work required to track down even mundane errors in field-theoretic calculations.

    The bottom line IMHO is that we should not regard n-photon linear optics experiments as being necessarily easier than building n-gate quantum circuits … and neither should we overlook the striking originality and power of these experiments, which are (IMHO) worthy of the highest admiration.

  12. csrster Says:

    “Aaronson says that he and Arkhipov have not successfully proven that designing a device capable of testing the theory is impossible”

    Sounds like quite an achievement. And I say that as someone who has not successfully proven both the Riemann Conjecture and P(!)=NP.

  13. weather man Says:

    Pete (and Scott): a nice way to explain the unpredictability of the weather is that, even if the world were classical rather than quantum, weather would be a PRG with reasonable entropy: it is hard to predict to within some accuracy at time t+10^7 (measured in milliseconds), even if you know its state at time t.

  14. John Sidles Says:

    One wonderful aspect (of many, IMHO) of Scott and Alex’s new class of experiments is the motivation these experiments provide for students to go beyond Feynman’s celebrated Lectures on Physics in understanding the physics of photon counting.

    The quantum physics of photon detection is a subtle topic that even Feynman got wrong on occasion. The story of Feynman’s mistake is vividly recounted in the Physics Today obituary for Robert Hanbury Brown (volume 55(7), 2002), which tells of Richard Feynman standing up during a talk by Hanbury Brown, proclaiming (wrongly) “It can’t work!”, and walking out of the lecture.

    The quantum physics associated to this Feynman story is summarized in a series of six short letters, totaling 12 pages in all, that appeared in Nature during 1955-6. These letters describe what is today called the “Hanbury Brown and Twiss Effect”—the first-ever observation of higher-order photon counting correlations.

    The thrilling story of the Hanbury Brown and Twiss Effect, as recounted on the pages of Nature, in effect has six chapters.

    Chapter 1: Hanbury Brown and Twiss announce (in effect) “In the laboratory, we observe nontrivial correlations in photons generated by glowing gases.” (Correlation between photons in two coherent beams of light, Nature 177(4497), 1956).

    Chapter 2: Brannen and Ferguson announce (in effect) “The claims of Hanbury Brown and Twiss, if true, would require major revision of some fundamental concepts of quantum mechanics; moreover, when we did a more careful experiment, we saw nothing.” (The question of correlation between photons in coherent light rays, Nature 178(4531), 1956).

    Chapter 3: Hanbury Brown and Twiss announce (in effect) “We observe nontrivial correlations even in photons from the star Sirius, and our theory allows us to determine its diameter.” (Test of new type of stellar interferometer on Sirius, Nature 178(4541), 1956).

    Chapter 4: Hanbury Brown and Twiss reply “The experiment of Brannen and Ferguson was grossly lacking in sensitivity; had they analyzed their experiment properly, they would have expected to see no effect.” (The question of correlation between photons in coherent light rays, Nature 178(4548), 1956).

    Chapter 5: In an accompanying letter, Ed Purcell announces (in effect) “Hanbury Brown and Twiss are right; moreover, their theoretical predictions and their experimental data are in accord with quantum mechanics as properly understood.” (Nature 178(4548), 1956).

    Chapter 6: Hanbury Brown and Twiss announce (in effect) “When the experimental methods of Brannen and Ferguson are implemented with higher sensitivity, and analyzed with due respect for quantum theory as explained by Purcell, the results wholly confirm our earlier findings.” (Correlation between photons, in coherent beams of light, detected by a coincidence counting technique, Nature 180(4581), 1956).

    When we read the 12-page story of Hanbury Brown and Twiss side-by-side with the discussion of photon counting in The Feynman Lectures on Physics, we are struck by three aspects of the Hanbury Brown and Twiss experiments that are not emphasized in the Feynman Lectures.

    First, the Hanbury Brown and Twiss articles exhibit a charming physicality that is largely absent from the Feynman Lectures. For example, Hanbury Brown and Twiss describe the use of an “integrating motor” to measure the total current associated to photon detection during an experimental run. Modern physics students will wonder “What the heck is an integrating motor?”, yet in the physics literature of the 1950s this concept was viewed as being so intuitively obvious as to require no explanation: the total number of revolutions of an electric motor (as counted by purely mechanical means!) obviously can be made proportional to the integral of the current flowing through it … that’s how electric meters work, right? As Ed Purcell’s letter to Nature rightly observes, the observation of subtle quantum correlations with purely mechanical counters “adds lustre to the notable achievement of Hanbury Brown and Twiss.”

    Second, the experimental protocol of Hanbury Brown and Twiss includes elements that are highly sophisticated from the viewpoint of modern quantum information theory. In particular, while aligning their apparatus, they reverse the flow of photons by placing their eyes at the position of the source, and while physically looking at two photodetectors through a half-silvered mirror, they adjust the mirrors such that the images of the photodetectors are coherently superimposed. We nowadays appreciate that from the viewpoint of QED, this time-reversed coherence is necessary to ensure that quantum fluctuations in the photon detector currents are deterministically associated to quantum fluctuations in the photon source currents.

    Third, it follows that in the observations of Sirius recorded by Hanbury Brown and Twiss, their experimental record of correlated photocurrents here on earth is deterministically associated to currents that span the surface of the remote star Sirius — eight light-years away! This counterintuitive implication was why many theoretical physicists (including Feynman) at first considered the results of Hanbury Brown and Twiss to be (literally) incredible.

    Nowadays we appreciate that this seeming paradox is naturally reconciled via the quantum informatic mechanism that Nielsen and Chuang call the “Principles of Deferred and Implicit Measurement” — principles that are formally associated to work by Kraus and Lindblad in the 1970s; principles that were not readily appreciated by Feynman and his colleagues in the 1950s.

    Moreover, the experiments of Hanbury Brown and Twiss were vastly wasteful of photonic resources. The star Sirius emits about 10^{46} photons/second, of which Hanbury Brown and Twiss detected about 10^{9} two-photon entangled states/second … the relative production efficiency thus was a dismal 10^{-37}. Even today, more than 50 years later, the production of six-photon entangled states still is dismally inefficient: in recent experiments 10^{18} photons/second of pump power yield about one six-photon state per thousand seconds, for a relative production efficiency of order 10^{-21}.
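    These efficiency ratios can be checked directly. A minimal back-of-the-envelope sketch, using only the round-number rates quoted above (not precise measured values):

    ```python
    # Back-of-the-envelope check of the entangled-state production
    # efficiencies quoted above (round-number rates, not measurements).

    # Hanbury Brown and Twiss, observing Sirius:
    sirius_photon_rate = 1e46   # photons/second emitted by Sirius
    hbt_pair_rate = 1e9         # two-photon correlated detections/second
    hbt_efficiency = hbt_pair_rate / sirius_photon_rate
    print(f"HBT relative efficiency: {hbt_efficiency:.0e}")  # ~1e-37

    # Modern six-photon experiments:
    pump_photon_rate = 1e18     # pump photons/second
    six_photon_rate = 1e-3      # one six-photon state per thousand seconds
    six_photon_efficiency = six_photon_rate / pump_photon_rate
    print(f"Six-photon efficiency: {six_photon_efficiency:.0e}")  # ~1e-21
    ```

    So even after five decades, the relative efficiency has improved by only about sixteen orders of magnitude, which is the sense in which the production of entangled photon states remains dismally inefficient.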

    We see that one of the fundamental challenges (among many!) that Scott and Alex’s experiment poses for 21st century physicists, is to devise methods for generating entangled photon states that are exponentially more efficient than existing methods. To achieve this, modern physicists will have to do exactly what Hanbury Brown and Twiss did … “look” at the photon detectors from the time-reversed viewpoint of the photon source … and then (by careful design) arrange for the photon source currents to have near-unity correlation with the photon detector currents.

    This is an immense practical challenge in cavity quantum electrodynamics, and one from which we are certain to learn a great deal in trying to solve it. At present we are as far from having scalable quantum-coherent n-photon sources as we are from having scalable quantum-coherent n-gate quantum computers.

    These considerations are why, from an engineering point of view, it is prudent to regard n-photon linear optics experiments not as being obviously easier than building n-gate quantum circuits, but as comparably challenging technically. And this is why it will not surprise me if the Aaronson/Arkhipov distribution-sampling algorithms prove, in the long run, to be as seminal mathematically and theoretically, and as challenging experimentally, as Peter Shor’s number-factoring algorithm.

  15. Raoul Ohio Says:


    Great summary.

  16. Dr. Philip Carey Says:

    Scott, is Shtetl-optimized going to organize any relief efforts for the people in Japan? Maybe draft a manifesto on the evils of Nuclear Power? At the very least, could you address the issue in a post? Thanks.

  17. Mike Says:


    The Japan Society has created a disaster relief fund to aid victims in Japan. 100% of your generous tax-deductible contributions will go to organization(s) that directly help victims recover from the devastating effects of the earthquake and tsunami that struck Japan on March 11, 2011.


  18. Scott Says:

    Philip and Mike: Alas, I don’t really have anything to say about it that isn’t being said elsewhere. It’s a terrible tragedy, and our thoughts are with those affected by it.

  19. tgm Says:

    I understand what the author meant: Scott and Alex did not prove it’s impossible to design such a device. This is an important first step. The even more important second step, is proving that it is possible. The first step that was already taken, is crucial — it allows for the possibility that the second step is possible, maybe!
    What’s not to get?
    And Scott — congrats on that important first step!! I also achieved it independently, but I think this is one of those steps where the identity of the authors does indeed matter in evaluating the importance.
