Raise a martini glass for Google and Martinis!

September 6th, 2014

We’ve already been discussing this in the comments section of my previous post, but a few people emailed me to ask when I’d devote a separate blog post to the news.

OK, so for those who haven’t yet heard: this week Google’s Quantum AI Lab announced that it’s teaming up with John Martinis, of the University of California, Santa Barbara, to accelerate the Martinis group’s already-amazing efforts in superconducting quantum computing.  (See here for the MIT Tech’s article, here for Wired’s, and here for the WSJ’s.)  Besides building some of the best (if not the best) superconducting qubits in the world, Martinis, along with Matthias Troyer, was also one of the coauthors of two important papers that found no evidence for any speedup in the D-Wave machines.  (However, in addition to working with the Martinis group, Google says it will also continue its partnership with D-Wave, in an apparent effort to keep reality more soap-operatically interesting than any hypothetical scenario one could make up on a blog.)

I have the great honor of knowing John Martinis, even once sharing the stage with him at a “Physics Cafe” in Aspen.  Like everyone else in our field, I profoundly admire the accomplishments of his group: they’ve achieved coherence times in the tens of microseconds, demonstrated some of the building blocks of quantum error-correction, and gotten tantalizingly close to the fault-tolerance threshold for the surface code.  (When, in D-Wave threads, people have challenged me: “OK Scott, so then which experimental quantum computing groups should be supported more?,” my answer has always been some variant of: “groups like John Martinis’s.”)

So I’m excited about this partnership, and I wish it the very best.

But I know people will ask: apart from the support and well-wishes, do I have any predictions?  Alright, here’s one.  I predict that, regardless of what happens, commenters here will somehow make it out that I was wrong.  So for example, if the Martinis group, supported by Google, ultimately succeeds in building a useful, scalable quantum computer—by emphasizing error-correction, long coherence times (measured in the conventional way), “gate-model” quantum algorithms, universality, and all the other things that D-Wave founder Geordie Rose has pooh-poohed from the beginning—commenters will claim that still most of the credit belongs to D-Wave, for whetting Google’s appetite, and for getting it involved in superconducting QC in the first place.  (The unstated implication being that, even if there were little or no evidence that D-Wave’s approach would ever lead to a genuine speedup, we skeptics still would’ve been wrong to state that truth in public.)  Conversely, if this venture doesn’t live up to the initial hopes, commenters will claim that that just proves Google’s mistake: rather than “selling out to appease the ivory-tower skeptics,” they should’ve doubled down on D-Wave.  Even if something completely different happens—let’s say, Google, UCSB, and D-Wave jointly abandon their quantum computing ambitions, and instead partner with ISIS to establish the world’s first “Qualiphate,” ruling with a niobium fist over California and parts of Oregon—I would’ve been wrong for having failed to foresee that.  (Even if I did sort of foresee it in the last sentence…)

Yet, while I’ll never live to see the blog-commentariat acknowledge the fundamental reasonableness of my views, I might live to see scalable quantum computers become a reality, and that would surely be some consolation.  For that reason, even if for no others, I once again wish the Martinis group and Google’s Quantum AI Lab the best in their new partnership.


Unrelated Announcement: Check out a lovely (very basic) introductory video on quantum computing and information, narrated by John Preskill and Spiros Michalakis, and illustrated by Jorge Cham of PhD Comics.

Do theoretical computer scientists despise practitioners? (Answer: no, that’s crazy)

August 28th, 2014

A roboticist and Shtetl-Optimized fan named Jon Groff recently emailed me the following suggestion for a blog entry:

I think a great idea for an entry would be the way that in fields like particle physics the theoreticians and experimentalists get along quite well but in computer science and robotics in particular there seems to be a great disdain for the people that actually do things from the people that like to think about them. Just thought I’d toss that out there in case you are looking for some subject matter.

After I replied (among other things, raising my virtual eyebrows over his rosy view of the current state of theoretician/experimentalist interaction in particle physics), Jon elaborated on his concerns in a subsequent email:

[T]here seems to be this attitude in CS that getting your hands dirty is unacceptable. You haven’t seen it because you sit at lofty heights and I tend to think you always have. I have been pounding out code since ferrite cores. Yes, Honeywell 1648A, so I have been looking up the posterior of this issue rather than from the forehead as it were. I guess my challenge would be to find a noteworthy computer theoretician somewhere and ask him:
1) What complete, working, currently functioning systems have you designed?
2) How much of the working code did you contribute?
3) Which of these systems is still operational and in what capacity?
Or say, if the person was a famous robotics professor or something you may ask:
1) Have you ever actually ‘built’ a ‘robot’?
2) Could you, if called upon, design and build an easily tasked robot safe for home use using currently available materials and code?

So I wrote a second reply, which Jon encouraged me to turn into a blog post (kindly giving me permission to quote him).  In case it’s of interest to anyone else, my reply is below.


Dear Jon,

For whatever it’s worth, when I was an undergrad, I spent two years working as a coder for Cornell’s RoboCup robot soccer team, handling things like the goalie.  (That was an extremely valuable experience, one reason being that it taught me how badly I sucked at meeting deadlines, documenting my code, and getting my code to work with other people’s code.)  Even before that, I wrote shareware games with my friend Alex Halderman (now a famous computer security expert at U. of Michigan); we made almost $30 selling them.  And I spent several summers working on applied projects at Bell Labs, back when that was still a thing.  And by my count, I’ve written four papers that involved code I personally wrote and experiments I did (one on hypertext, one on stylometric clustering, one on Boolean function query properties, one on improved simulation of stabilizer circuits—for the last of these, the code is actually still used by others).  While this is all from the period 1994-2004 (these days, if I need any coding done, I use the extremely high-level programming language called “undergrad”), I don’t think it’s entirely true to say that I “never got my hands dirty.”

But even if I hadn’t had any of those experiences, or other theoretical computer scientists hadn’t had analogous ones, your questions still strike me as unfair.  They’re no more fair than cornering a star coder or other practical person with questions like, “Have you ever proved a theorem?  A nontrivial theorem?  Why is BPP contained in P/poly?  What’s the cardinality of the set of Turing-degrees?”  If the coder can’t easily answer these questions, would you say it means that she has “disdain for theorists”?  (I was expecting some discussion of this converse question in your email, and was amused when I didn’t find any.)

Personally, I’d say “of course not”: maybe the coder is great at coding, doesn’t need theory very much on a day-to-day basis and doesn’t have much free time to learn it, but (all else equal) would be happy to know more.  Maybe the coder likes theory as an outsider, even has friends from her student days who are theorists, and who she’d go to if she ever did need their knowledge for her work.  Or maybe not.  Maybe she’s an asshole who looks down on anyone who doesn’t have the exact same skill-set that she does.  But I certainly couldn’t conclude that from her inability to answer basic theory questions.

I’d say just the same about theorists.  If they don’t have as much experience building robots as they should have, don’t know as much about large software projects as they should know, etc., then those are all defects to add to the long list of their other, unrelated defects.  But it would be a mistake to assume that they failed to acquire this knowledge because of disdain for practical people, rather than for mundane reasons like busyness or laziness.

Indeed, it’s also possible that they respect practical people all the more, because they tried to do the things the practical people are good at, and discovered for themselves how hard they were.  Maybe they became theorists partly because of that self-discovery—that was certainly true in my case.  Maybe they’d be happy to talk to or learn from a practical roboticist like yourself, but are too shy or too nerdy to initiate the conversation.

Speaking of which: yes, let’s let bloom a thousand collaborations between theorists and practitioners!  Those are the lifeblood of science.  On the other hand, based on personal experience, I’m also sensitive to the effect where, because of pressures from funding agencies, theorists have to try to pretend their work is “practically relevant” when they’re really just trying to discover something cool, while meantime, practitioners have to pretend their work is theoretically novel or deep, when really, they’re just trying to write software that people will want to use.  I’d love to see both groups freed from this distorting influence, so that they can collaborate for real reasons rather than fake ones.

(I’ve also often remarked that, if I hadn’t gravitated to the extreme theoretical end of computer science, I think I might have gone instead to the extreme practical end, rather than to any of the points in between.  That’s because I hate the above-mentioned distorting influence: if I’m going to try to understand the ultimate limits of computation, then I should pursue that wherever it leads, even if it means studying computational models that won’t be practical for a million years.  And conversely, if I’m going to write useful software, I should throw myself 100% into that, even if it means picking an approach that’s well-understood, clunky, and reliable over an approach that’s new, interesting, elegant, and likely to fail.)

Best,
Scott

“Could a Quantum Computer Have Subjective Experience?”

August 25th, 2014

Author’s Note: Below is the prepared version of a talk that I gave two weeks ago at the workshop Quantum Foundations of a Classical Universe, which was held at IBM’s TJ Watson Research Center in Yorktown Heights, NY.  My talk is for entertainment purposes only; it should not be taken seriously by anyone.  If you reply in a way that makes clear you did take it seriously (“I’m shocked and outraged that someone who dares to call himself a scientist would … [blah blah]”), I will log your IP address, hunt you down at night, and force you to put forward an account of consciousness and decoherence that deals with all the paradoxes discussed below—and then reply at length to all criticisms of your account.

If you’d like to see titles, abstracts, and slides for all the talks from the workshop—including by Charles Bennett, Sean Carroll, James Hartle, Adrian Kent, Stefan Leichenauer, Ken Olum, Don Page, Jason Pollack, Jess Riedel, Mark Srednicki, Wojciech Zurek, and Michael Zwolak—click here.  You’re also welcome to discuss these other nice talks in the comments section, though I might or might not be able to answer questions about them.  Videos of all the talks are now available, as Jess Riedel has announced.

(Note that, as is probably true for other talks as well, the video of my talk differs substantially from the prepared version—it mostly just consists of interruptions and my responses to them!  On the other hand, I did try to work some of the more salient points from the discussion into the text below.)

Thanks so much to Charles Bennett and Jess Riedel for organizing the workshop, and to all the participants for great discussions.


I didn’t prepare slides for this talk—given the topic, what slides would I use exactly?  “Spoiler alert”: I don’t have any rigorous results about the possibility of sentient quantum computers, to state and prove on slides.  I thought of giving a technical talk on quantum computing theory, but then I realized that I don’t really have technical results that bear directly on the subject of the workshop, which is how the classical world we experience emerges from the quantum laws of physics.  So, given the choice between a technical talk that doesn’t really address the questions we’re supposed to be discussing, or a handwavy philosophical talk that at least tries to address them, I opted for the latter, so help me God.

Let me start with a story that John Preskill told me years ago.  In the far future, humans have solved not only the problem of building scalable quantum computers, but also the problem of human-level AI.  They’ve built a Turing-Test-passing quantum computer.  The first thing they do, to make sure this is actually a quantum computer, is ask it to use Shor’s algorithm to factor a 10,000-digit number.  So the quantum computer factors the number.  Then they ask it, “while you were factoring that number, what did it feel like?  did you feel yourself branching into lots of parallel copies, which then recohered?  or did you remain a single consciousness—a ‘unitary’ consciousness, as it were?  can you tell us from introspection which interpretation of quantum mechanics is the true one?”  The quantum computer ponders this for a while and then finally says, “you know, I might’ve known before, but now I just … can’t remember.”

I like to tell this story when people ask me whether the interpretation of quantum mechanics has any empirical consequences.

Look, I understand the impulse to say “let’s discuss the measure problem, or the measurement problem, or derivations of the Born rule, or Boltzmann brains, or observer-counting, or whatever, but let’s take consciousness off the table.”  (Compare: “let’s debate this state law in Nebraska that says that, before getting an abortion, a woman has to be shown pictures of cute babies.  But let’s take the question of whether or not fetuses have human consciousness—i.e., the actual thing that’s driving our disagreement about that and every other subsidiary question—off the table, since that one is too hard.”)  The problem, of course, is that even after you’ve taken the elephant off the table (to mix metaphors), it keeps climbing back onto the table, often in disguises.  So, for better or worse, my impulse tends to be the opposite: to confront the elephant directly.

Having said that, I still need to defend the claim that (a) the questions we’re discussing, centered around quantum mechanics, Many Worlds, and decoherence, and (b) the question of which physical systems should be considered “conscious,” have anything to do with each other.  Many people would say that the connection doesn’t go any deeper than: “quantum mechanics is mysterious, consciousness is also mysterious, ergo maybe they’re related somehow.”  But I’m not sure that’s entirely true.  One thing that crystallized my thinking about this was a remark made in a lecture by Peter Byrne, who wrote a biography of Hugh Everett.  Byrne was discussing the question, why did it take so many decades for Everett’s Many-Worlds Interpretation to become popular?  Of course, there are people who deny quantum mechanics itself, or who have basic misunderstandings about it, but let’s leave those people aside.  Why did people like Bohr and Heisenberg dismiss Everett?  More broadly: why wasn’t it just obvious to physicists from the beginning that “branching worlds” is a picture that the math militates toward, probably the simplest, easiest story one can tell around the Schrödinger equation?  Even if early quantum physicists rejected the Many-Worlds picture, why didn’t they at least discuss and debate it?

Here was Byrne’s answer: he said, before you can really be on board with Everett, you first need to be on board with Daniel Dennett (the philosopher).  He meant: you first need to accept that a “mind” is just some particular computational process.  At the bottom of everything is the physical state of the universe, evolving via the equations of physics, and if you want to know where consciousness is, you need to go into that state, and look for where computations are taking place that are sufficiently complicated, or globally-integrated, or self-referential, or … something, and that’s where the consciousness resides.  And crucially, if following the equations tells you that after a decoherence event, one computation splits up into two computations, in different branches of the wavefunction, that thereafter don’t interact—congratulations!  You’ve now got two consciousnesses.

And if everything above strikes you as so obvious as not to be worth stating … well, that’s a sign of how much things changed in the latter half of the 20th century.  Before then, many thinkers would’ve been more likely to say, with Descartes: no, my starting point is not the physical world.  I don’t even know a priori that there is a physical world.  My starting point is my own consciousness, which is the one thing besides math that I can be certain about.  And the point of a scientific theory is to explain features of my experience—ultimately, if you like, to predict the probability that I’m going to see X or Y if I do A or B.  (If I don’t have prescientific knowledge of myself, as a single, unified entity that persists in time, makes choices, and later observes their consequences, then I can’t even get started doing science.)  I’m happy to postulate a world external to myself, filled with unseen entities like electrons behaving in arbitrarily unfamiliar ways, if it will help me understand my experience—but postulating other versions of me is, at best, irrelevant metaphysics.  This is a viewpoint that could lead you to Copenhagenism, or to its newer variants like quantum Bayesianism.

I’m guessing that many people in this room side with Dennett, and (not coincidentally, I’d say) also with Everett.  I certainly have sympathies in that direction too.  In fact, I spent seven or eight years of my life as a Dennett/Everett hardcore believer.  But, while I don’t want to talk anyone out of the Dennett/Everett view, I’d like to take you on a tour of what I see as some of the extremely interesting questions that that view leaves unanswered.  I’m not talking about “deep questions of meaning,” but about something much more straightforward: what exactly does a computational process have to do to qualify as “conscious”?

Of course, there are already tremendous difficulties here, even if we ignore quantum mechanics entirely.  Ken Olum went over much of this ground in his talk yesterday (see here for a relevant paper by Davenport and Olum).  You’ve all heard the ones about, would you agree to be painlessly euthanized, provided that a complete description of your brain would be sent to Mars as an email attachment, and a “perfect copy” of you would be reconstituted there?  Would you demand that the copy on Mars be up and running before the original was euthanized?  But what do we mean by “before”—in whose frame of reference?

Some people say: sure, none of this is a problem!  If I’d been brought up since childhood taking family vacations where we all emailed ourselves to Mars and had our original bodies euthanized, I wouldn’t think anything of it.  But the philosophers of mind are barely getting started.

There’s this old chestnut, what if each person on earth simulated one neuron of your brain, by passing pieces of paper around.  It would take them several years just to simulate a single second of your thought processes.  Would that bring your subjectivity into being?  Would you accept it as a replacement for your current body?  If so, then what if your brain were simulated, not neuron-by-neuron, but by a gigantic lookup table?  That is, what if there were a huge database, much larger than the observable universe (but let’s not worry about that), that hardwired what your brain’s response would be to every sequence of stimuli that your sense-organs could possibly receive.  Would that bring about your consciousness?  Let’s keep pushing: if it would, would it make a difference if anyone actually consulted the lookup table?  Why can’t it bring about your consciousness just by sitting there doing nothing?

To these standard thought experiments, we can add more.  Let’s suppose that, purely for error-correction purposes, the computer that’s simulating your brain runs the code three times, and takes the majority vote of the outcomes.  Would that bring three “copies” of your consciousness into being?  Does it make a difference if the three copies are widely separated in space or time—say, on different planets, or in different centuries?  Is it possible that the massive redundancy taking place in your brain right now is bringing multiple copies of you into being?

Maybe my favorite thought experiment along these lines was invented by my former student Andy Drucker.  In the past five years, there’s been a revolution in theoretical cryptography, around something called Fully Homomorphic Encryption (FHE), which was first discovered by Craig Gentry.  What FHE lets you do is to perform arbitrary computations on encrypted data, without ever decrypting the data at any point.  So, to someone with the decryption key, you could be proving theorems, simulating planetary motions, etc.  But to someone without the key, it looks for all the world like you’re just shuffling random strings and producing other random strings as output.
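
(For the programmers in the audience: below is a toy Python sketch, not of Gentry’s FHE, which is vastly more involved, but of Paillier’s much older additively homomorphic scheme, with laughably small parameters of my own choosing.  It conveys the basic flavor: multiplying two ciphertexts adds the plaintexts hidden inside them, while to anyone without the secret key, everything looks like random junk.)

```python
import math, random

def keygen(p=104729, q=104723):
    # Toy parameters: the 10000th and 9999th primes. Real Paillier
    # would use primes of ~1024 bits each.
    n = p * q
    lam = (p - 1) * (q - 1) // math.gcd(p - 1, q - 1)  # lcm(p-1, q-1)
    mu = pow(lam, -1, n)             # modular inverse (Python 3.8+)
    return (n,), (lam, mu, n)        # public key, secret key

def encrypt(pk, m):
    (n,) = pk
    nsq = n * n
    r = random.randrange(1, n)
    while math.gcd(r, n) != 1:
        r = random.randrange(1, n)
    # With generator g = n+1, the ciphertext is g^m * r^n mod n^2.
    return pow(n + 1, m, nsq) * pow(r, n, nsq) % nsq

def decrypt(sk, c):
    lam, mu, n = sk
    u = pow(c, lam, n * n)
    return (u - 1) // n * mu % n

pk, sk = keygen()
c1, c2 = encrypt(pk, 20), encrypt(pk, 22)
c_sum = c1 * c2 % (pk[0] ** 2)       # multiplying ciphertexts adds plaintexts
assert decrypt(sk, c_sum) == 42      # without sk, c_sum is just gobbledygook
```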

You can probably see where this is going.  What if we homomorphically encrypted a simulation of your brain?  And what if we hid the only copy of the decryption key, let’s say in another galaxy?  Would this computation—which looks to anyone in our galaxy like a reshuffling of gobbledygook—be silently producing your consciousness?

When we consider the possibility of a conscious quantum computer, in some sense we inherit all the previous puzzles about conscious classical computers, but then also add a few new ones.  So, let’s say I run a quantum subroutine that simulates your brain, by applying some unitary transformation U.  But then, of course, I want to “uncompute” to get rid of garbage (and thereby enable interference between different branches), so I apply U⁻¹.  Question: when I apply U⁻¹, does your simulated brain experience the same thoughts and feelings a second time?  Is the second experience “the same as” the first, or does it differ somehow, by virtue of being reversed in time?  Or, since U⁻¹U is just a convoluted implementation of the identity function, are there no experiences at all here?
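
(In case it helps to see the bare linear algebra: here’s a minimal numpy sketch, with a random 3-qubit unitary standing in as my own toy stand-in for the brain-simulating subroutine, showing that applying U and then U⁻¹ returns the initial state exactly.)

```python
import numpy as np

rng = np.random.default_rng(0)

# A random 3-qubit unitary U (via QR decomposition of a random
# complex matrix), standing in for the brain-simulating subroutine.
A = rng.normal(size=(8, 8)) + 1j * rng.normal(size=(8, 8))
U, _ = np.linalg.qr(A)

psi = np.zeros(8, dtype=complex)
psi[0] = 1.0                        # start in |000>

thought = U @ psi                   # "think"
unthought = U.conj().T @ thought    # "unthink": U^{-1} = U† for a unitary

assert np.allclose(unthought, psi)  # U^{-1} U acts as the identity
```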

Here’s a better one: many of you have heard of the Vaidman bomb.  This is a famous thought experiment in quantum mechanics where there’s a package, and we’d like to “query” it to find out whether it contains a bomb—but if we query it and there is a bomb, it will explode, killing everyone in the room.  What’s the solution?  Well, suppose we could go into a superposition of querying the bomb and not querying it, with only ε amplitude on querying the bomb, and √(1-ε²) amplitude on not querying it.  And suppose we repeat this over and over—each time, moving ε amplitude onto the “query the bomb” state if there’s no bomb there, but moving ε² probability onto the “query the bomb” state if there is a bomb (since the explosion decoheres the superposition).  Then after 1/ε repetitions, we’ll have order 1 probability of being in the “query the bomb” state if there’s no bomb.  By contrast, if there is a bomb, then the total probability we’ve ever entered that state is (1/ε)×ε² = ε.  So, either way, we learn whether there’s a bomb, and the probability that we set the bomb off can be made arbitrarily small.  (Incidentally, this is extremely closely related to how Grover’s algorithm works.)
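
(For the skeptical: here’s a little Python simulation of the quantum Zeno version of the protocol.  The modeling choices, such as treating the bomb as a projective measurement after each small rotation, are mine; but it confirms that the explosion probability scales like ε, while the no-bomb runs end up in the “query” state with order-1 probability.)

```python
import numpy as np

def vaidman(eps, bomb):
    """Toy simulation of the repeated-weak-query ("quantum Zeno") protocol."""
    steps = int(round(1 / eps))
    c, s = np.cos(eps), np.sin(eps)   # each step rotates amplitude ~eps
    amp = np.array([1.0, 0.0])        # amplitudes on (don't query, query)
    p_explode = 0.0
    for _ in range(steps):
        amp = np.array([c * amp[0] - s * amp[1],
                        s * amp[0] + c * amp[1]])
        if bomb:
            # The bomb "measures" the query branch: explosion with
            # probability ~eps^2 per step; otherwise collapse back.
            p_explode += (1 - p_explode) * amp[1] ** 2
            amp = np.array([1.0, 0.0])
    return p_explode, amp[1] ** 2     # P(explosion), P(in query state)

for eps in (0.1, 0.01, 0.001):
    p_boom, _ = vaidman(eps, bomb=True)
    _, p_query = vaidman(eps, bomb=False)
    print(f"eps={eps}: P(explode)~{p_boom:.4f}, P(query|no bomb)~{p_query:.3f}")
# P(explode) scales like eps; P(query|no bomb) stays ~sin^2(1) ~ 0.7.
```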

OK, now how about the Vaidman brain?  We’ve got a quantum subroutine simulating your brain, and we want to ask it a yes-or-no question.  We do so by querying that subroutine with ε amplitude 1/ε times, in such a way that if your answer is “yes,” then we’ve only ever activated the subroutine with total probability ε.  Yet you still manage to communicate your “yes” answer to the outside world.  So, should we say that you were conscious only in the ε fraction of the wavefunction where the simulation happened, or that the entire system was conscious?  (The answer could matter a lot for anthropic purposes.)

You might say, sure, maybe these questions are puzzling, but what’s the alternative?  Either we have to say that consciousness is a byproduct of any computation of the right complexity, or integration, or recursiveness (or something) happening anywhere in the wavefunction of the universe, or else we’re back to saying that beings like us are conscious, and all these other things aren’t, because God gave the souls to us, so na-na-na.  Or I suppose we could say, like the philosopher John Searle, that we’re conscious, and the lookup table and homomorphically-encrypted brain and Vaidman brain and all these other apparitions aren’t, because we alone have “biological causal powers.”  And what do those causal powers consist of?  Hey, you’re not supposed to ask that!  Just accept that we have them.  Or we could say, like Roger Penrose, that we’re conscious and the other things aren’t because we alone have microtubules that are sensitive to uncomputable effects from quantum gravity.  But neither of those two options ever struck me as much of an improvement.

Yet I submit to you that, between these extremes, there’s another position we can stake out—one that I certainly don’t know to be correct, but that would solve so many different puzzles if it were correct that, for that reason alone, it seems to me to merit more attention than it usually receives.  (In an effort to give the view that attention, a couple years ago I wrote an 85-page essay called The Ghost in the Quantum Turing Machine, which one or two people told me they actually read all the way through.)  If, after a lifetime of worrying (on weekends) about stuff like whether a giant lookup table would be conscious, I now seem to be arguing for this particular view, it’s less out of conviction in its truth than out of a sense of intellectual obligation: to whatever extent people care about these slippery questions at all, to whatever extent they think various alternative views deserve a hearing, I believe this one does as well.

The intermediate position that I’d like to explore says the following.  Yes, consciousness is a property of any suitably-organized chunk of matter.  But, in addition to performing complex computations, or passing the Turing Test, or other information-theoretic conditions that I don’t know (and don’t claim to know), there’s at least one crucial further thing that a chunk of matter has to do before we should consider it conscious.  Namely, it has to participate fully in the Arrow of Time.  More specifically, it has to produce irreversible decoherence as an intrinsic part of its operation.  It has to be continually taking microscopic fluctuations, and irreversibly amplifying them into stable, copyable, macroscopic classical records.

Before I go further, let me be extremely clear about what this view is not saying.  Firstly, it’s not saying that the brain is a quantum computer, in any interesting sense—let alone a quantum-gravitational computer, like Roger Penrose wants!  Indeed, I see no evidence, from neuroscience or any other field, that the cognitive information processing done by the brain is anything but classical.  The view I’m discussing doesn’t challenge conventional neuroscience on that account.

Secondly, this view doesn’t say that consciousness is in any sense necessary for decoherence, or for the emergence of a classical world.  I’ve never understood how one could hold such a belief, while still being a scientific realist.  After all, there are trillions of decoherence events happening every second in stars and asteroids and uninhabited planets.  Do those events not “count as real” until a human registers them?  (Or at least a frog, or an AI?)  The view I’m discussing only asserts the converse: that decoherence is necessary for consciousness.  (By analogy, presumably everyone agrees that some amount of computation is necessary for an interesting consciousness, but that doesn’t mean consciousness is necessary for computation.)

Thirdly, the view I’m discussing doesn’t say that “quantum magic” is the explanation for consciousness.  It’s silent on the explanation for consciousness (to whatever extent that question makes sense); it seeks only to draw a defensible line between the systems we want to regard as conscious and the systems we don’t—to address what I recently called the Pretty-Hard Problem.  And the (partial) answer it suggests doesn’t seem any more “magical” to me than any other proposed answer to the same question.  For example, if one said that consciousness arises from any computation that’s sufficiently “integrated” (or something), I could reply: what’s the “magical force” that imbues those particular computations with consciousness, and not other computations I can specify?  Or if one said (like Searle) that consciousness arises from the biology of the brain, I could reply: so what’s the “magic” of carbon-based biology, that could never be replicated in silicon?  Or even if one threw up one’s hands and said everything was conscious, I could reply: what’s the magical power that imbues my stapler with a mind?  Each of these views, along with the view that stresses the importance of decoherence and the arrow of time, is worth considering.  In my opinion, each should be judged according to how well it holds up under the most grueling battery of paradigm-cases, thought experiments, and reductios ad absurdum we can devise.

So, why might one conjecture that decoherence, and participation in the arrow of time, were necessary conditions for consciousness?  I suppose I could offer some argument about our subjective experience of the passage of time being a crucial component of our consciousness, and the passage of time being bound up with the Second Law.  Truthfully, though, I don’t have any a-priori argument that I find convincing.  All I can do is show you how many apparent paradoxes get resolved if you make this one speculative leap.

For starters, if you think about exactly how our chunk of matter is going to amplify microscopic fluctuations, it could depend on details like the precise spin orientations of various subatomic particles in the chunk.  But that has an interesting consequence: if you’re an outside observer who doesn’t know the chunk’s quantum state, it might be difficult or impossible for you to predict what the chunk is going to do next—even just to give decent statistical predictions, like you can for a hydrogen atom.  And of course, you can’t in general perform a measurement that will tell you the chunk’s quantum state, without violating the No-Cloning Theorem.  For the same reason, there’s in general no physical procedure that you can apply to the chunk to duplicate it exactly: that is, to produce a second chunk that you can be confident will behave identically (or almost identically) to the first, even just in a statistical sense.  (Again, this isn’t assuming any long-range quantum coherence in the chunk: only microscopic coherence that then gets amplified.)
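
(Since the No-Cloning Theorem is doing real work in this argument, let me record the standard two-line proof: a single unitary that cloned every state would contradict linearity.)

```latex
% Suppose some unitary U cloned every state: U(|psi>|0>) = |psi>|psi>.
% It must clone |0> and |1>; then, by linearity, on a superposition
\[
U\!\left(\frac{|0\rangle+|1\rangle}{\sqrt{2}}\otimes|0\rangle\right)
  = \frac{|0\rangle|0\rangle+|1\rangle|1\rangle}{\sqrt{2}}
  \;\neq\;
  \frac{|0\rangle+|1\rangle}{\sqrt{2}}\otimes\frac{|0\rangle+|1\rangle}{\sqrt{2}},
\]
% whereas cloning would demand the right-hand state -- contradiction.
```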

It might be objected that there are all sorts of physical systems that “amplify microscopic fluctuations,” but that aren’t anything like what I described, at least not in any interesting sense: for example, a Geiger counter, or a photodetector, or any sort of quantum-mechanical random-number generator.  You can make, if not an exact copy of a Geiger counter, surely one that’s close enough for practical purposes.  And, even though the two counters will record different sequences of clicks when pointed at identical sources, the statistical distribution of clicks will be the same (and precisely calculable), and surely that’s all that matters.  So, what separates these examples from the sorts of examples I want to discuss?

What separates them is the undisputed existence of what I’ll call a clean digital abstraction layer.  By that, I mean a macroscopic approximation to a physical system that an external observer can produce, in principle, without destroying the system; that can be used to predict what the system will do to excellent accuracy (given knowledge of the environment); and that “sees” quantum-mechanical uncertainty—to whatever extent it does—as just a well-characterized source of random noise.  If a system has such an abstraction layer, then we can regard any quantum noise as simply part of the “environment” that the system observes, rather than part of the system itself.  I’ll take it as clear that such clean abstraction layers exist for a Geiger counter, a photodetector, or a computer with a quantum random number generator.  By contrast, for (say) an animal brain, I regard it as currently an open question whether such an abstraction layer exists or not.  If, someday, it becomes routine for nanobots to swarm through people’s brains and make exact copies of them—after which the “original” brains can be superbly predicted in all circumstances, except for some niggling differences that are traceable back to different quantum-mechanical dice rolls—at that point, perhaps educated opinion will have shifted to the point where we all agree the brain does have a clean digital abstraction layer.  But from where we stand today, it seems entirely possible to agree that the brain is a physical system obeying the laws of physics, while doubting that the nanobots would work as advertised.  It seems possible that—as speculated by Bohr, Compton, Eddington, and even Alan Turing—if you want to get it right you’ll need more than just the neural wiring graph, the synaptic strengths, and the approximate neurotransmitter levels.  Maybe you also need (e.g.) the internal states of the neurons, the configurations of sodium-ion channels, or other data that you simply can’t get without irreparably damaging the original brain—not only as a contingent matter of technology but as a fundamental matter of physics.

(As a side note, I should stress that obviously, even without invasive nanobots, our brains are constantly changing, but we normally don’t say as a result that we become completely different people at each instant!  To my way of thinking, though, this transtemporal identity is fundamentally different from a hypothetical identity between different “copies” of you, in the sense we’re talking about.  For one thing, all your transtemporal doppelgängers are connected by a single, linear chain of causation.  For another, outside movies like Bill and Ted’s Excellent Adventure, you can’t meet your transtemporal doppelgängers and have a conversation with them, nor can scientists do experiments on some of them, then apply what they learned to others that remained unaffected by their experiments.)

So, on this view, a conscious chunk of matter would be one that not only acts irreversibly, but that might well be unclonable for fundamental physical reasons.  If so, that would neatly resolve many of the puzzles that I discussed before.  So for example, there’s now a straightforward reason why you shouldn’t consent to being killed, while your copy gets recreated on Mars from an email attachment.  Namely, that copy will have a microstate with no direct causal link to your “original” microstate—so while it might behave similarly to you in many ways, you shouldn’t expect that your consciousness will “transfer” to it.  If you wanted to get your exact microstate to Mars, you could do that in principle using quantum teleportation—but as we all know, quantum teleportation inherently destroys the original copy, so there’s no longer any philosophical problem!  (Or, of course, you could just get on a spaceship bound for Mars: from a philosophical standpoint, it amounts to the same thing.)

Similarly, in the case where the simulation of your brain was run three times for error-correcting purposes: that could bring about three consciousnesses if, and only if, the three simulations were tied to different sets of decoherence events.  The giant lookup table and the Earth-sized brain simulation wouldn’t bring about any consciousness, unless they were implemented in such a way that they no longer had a clean digital abstraction layer.  What about the homomorphically-encrypted brain simulation?  That might no longer work, simply because we can’t assume that the microscopic fluctuations that get amplified are homomorphically encrypted.  Those are “in the clear,” which inevitably leaks information.  As for the quantum computer that simulates your thought processes and then perfectly reverses the simulation, or that queries you like a Vaidman bomb—in order to implement such things, we’d of course need to use quantum fault-tolerance, so that the simulation of you stayed in an encoded subspace and didn’t decohere.  But under our assumption, that would mean the simulation wasn’t conscious.

Now, it might seem to some of you like I’m suggesting something deeply immoral.  After all, the view I’m considering implies that, even if a system passed the Turing Test, and behaved identically to a human, even if it eloquently pleaded for its life, if it wasn’t irreversibly decohering microscopic events then it wouldn’t be conscious, so it would be fine to kill it, torture it, whatever you want.

But wait a minute: if a system isn’t doing anything irreversible, then what exactly does it mean to “kill” it?  If it’s a classical computation, then at least in principle, you could always just restore from backup.  You could even rewind and not only erase the memories of, but “uncompute” (“untorture”?) whatever tortures you had performed.  If it’s a quantum computation, you could always invert the unitary transformation U that corresponded to killing the thing (then reapply U and invert it again for good measure, if you wanted).  Only for irreversible systems are there moral acts with irreversible consequences.

This is related to something that’s bothered me for years in quantum foundations.  When people discuss Schrödinger’s cat, they always—always—insert some joke about, “obviously, this experiment wouldn’t pass the Ethical Review Board.  Nowadays, we try to avoid animal cruelty in our quantum gedankenexperiments.”  But actually, I claim that there’s no animal cruelty at all in the Schrödinger’s cat experiment.  And here’s why: in order to prove that the cat was ever in a coherent superposition of |Alive〉 and |Dead〉, you need to be able to measure it in a basis like {|Alive〉+|Dead〉,|Alive〉-|Dead〉}.  But if you can do that, you must have such precise control over all the cat’s degrees of freedom that you can also rotate unitarily between the |Alive〉 and |Dead〉 states.  (To see this, let U be the unitary that you applied to the |Alive〉 branch, and V the unitary that you applied to the |Dead〉 branch, to bring them into coherence with each other; then consider applying U⁻¹V.)  But if you can do that, then in what sense should we say that the cat in the |Dead〉 state was ever “dead” at all?  Normally, when we speak of “killing,” we mean doing something irreversible—not rotating to some point in a Hilbert space that we could just as easily rotate away from.
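
(Since that parenthetical argument goes by quickly, here it is in symbols: if U and V bring the two branches into coherence at some common state |φ〉, then composing them rotates “dead” back to “alive.”)

```latex
% U acts on the |Alive> branch and V on the |Dead> branch, bringing
% them into coherence at a common state |phi>:
\[
U|\mathrm{Alive}\rangle = |\varphi\rangle = V|\mathrm{Dead}\rangle
\quad\Longrightarrow\quad
U^{-1}V\,|\mathrm{Dead}\rangle = U^{-1}|\varphi\rangle = |\mathrm{Alive}\rangle.
\]
```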

(There followed discussion among some audience members about the question of whether, if you destroyed all records of some terrible atrocity, like the Holocaust, everywhere in the physical world, you would thereby cause the atrocity “never to have happened.”  Many people seemed surprised by my willingness to accept that implication of what I was saying.  By way of explaining, I tried to stress just how far our everyday, intuitive notion of “destroying all records of something” falls short of what would actually be involved here: when we think of “destroying records,” we think about burning books, destroying the artifacts in museums, silencing witnesses, etc.  But even if all those things were done and many others, still the exact configurations of the air, the soil, and photons heading away from the earth at the speed of light would retain their silent testimony to the Holocaust’s reality.  “Erasing all records” in the physics sense would be something almost unimaginably more extreme: it would mean inverting the entire physical evolution in the vicinity of the earth, stopping time’s arrow and running history itself backwards.  Such ‘unhappening’ of what’s happened is something that we lack any experience of, at least outside of certain quantum interference experiments—though in the case of the Holocaust, one could be forgiven for wishing it were possible.)

OK, so much for philosophy of mind and morality; what about the interpretation of quantum mechanics?  If we think about consciousness in the way I’ve suggested, then who’s right: the Copenhagenists or the Many-Worlders?  You could make a case for either.  The Many-Worlders would be right that we could always, if we chose, think of decoherence events as “splitting” our universe into multiple branches, each with different versions of ourselves, that thereafter don’t interact.  On the other hand, the Copenhagenists would be right that, even in principle, we could never do any experiment where this “splitting” of our minds would have any empirical consequence.  On this view, if you can control a system well enough that you can actually observe interference between the different branches, then it follows that you shouldn’t regard the system as conscious, because it’s not doing anything irreversible.

In my essay, the implication that concerned me the most was the one for “free will.”  If being conscious entails amplifying microscopic events in an irreversible and unclonable way, then someone looking at a conscious system from the outside might not, in general, be able to predict what it’s going to do next, not even probabilistically.  In other words, its decisions might be subject to at least some “Knightian uncertainty”: uncertainty that we can’t even quantify in a mutually-agreed way using probabilities, in the same sense that we can quantify our uncertainty about (say) the time of a radioactive decay.  And personally, this is actually the sort of “freedom” that interests me the most.  I don’t really care if my choices are predictable by God, or by a hypothetical Laplace demon: that is, if they would be predictable (at least probabilistically), given complete knowledge of the microstate of the universe.  By definition, there’s essentially no way for my choices not to be predictable in that weak and unempirical sense!  On the other hand, I’d prefer that my choices not be completely predictable by other people.  If someone could put some sheets of paper into a sealed envelope, then I spoke extemporaneously for an hour, and then the person opened the envelope to reveal an exact transcript of everything I said, that’s the sort of thing that really would cause me to doubt in what sense “I” existed as a locus of thought.  But you’d have to actually do the experiment (or convince me that it could be done): it doesn’t count just to talk about it, or to extrapolate from fMRI experiments that predict which of two buttons a subject is going to press with 60% accuracy a few seconds in advance.

But since we’ve got some cosmologists in the house, let me now turn to discussing the implications of this view for Boltzmann brains.

(For those tuning in from home: a Boltzmann brain is a hypothetical chance fluctuation in the late universe, which would include a conscious observer with all the perceptions that a human being—say, you—is having right now, right down to false memories and false beliefs of having arisen via Darwinian evolution.  On statistical grounds, the overwhelming majority of Boltzmann brains last just long enough to have a single thought—like, say, the one you’re having right now—before they encounter the vacuum and freeze to death.  If you measured some part of the vacuum state toward which our universe seems to be heading, asking “is there a Boltzmann brain here?,” quantum mechanics predicts that the probability would be ridiculously astronomically small, but nonzero.  But, so the argument goes, if the vacuum lasts for infinite time, then as long as the probability is nonzero, it doesn’t matter how tiny it is: you’ll still get infinitely many Boltzmann brains indistinguishable from any given observer; and for that reason, any observer should consider herself infinitely likelier to be a Boltzmann brain than to be the “real,” original version.  For the record, even among the strange people at the IBM workshop, no one actually worried about being a Boltzmann brain.  The question, rather, is whether, if a cosmological model predicts Boltzmann brains, then that’s reason enough to reject the model, or whether we can live with such a prediction, since we have independent grounds for knowing that we can’t be Boltzmann brains.)

At this point, you can probably guess where this is going.  If decoherence, entropy production, and full participation in the arrow of time are necessary conditions for consciousness, then it would follow, in particular, that a Boltzmann brain is not conscious.  So we certainly wouldn’t be Boltzmann brains, even under a cosmological model that predicts infinitely more of them than of us.  We can wipe our hands; the problem is solved!

I find it extremely interesting that, in their recent work, Kim Boddy, Sean Carroll, and Jason Pollack reached a similar conclusion, but from a completely different starting point.  They said: look, under reasonable assumptions, the late universe is just going to stay forever in an energy eigenstate—just sitting there doing nothing.  It’s true that, if someone came along and measured the energy eigenstate, asking “is there a Boltzmann brain here?,” then with a tiny but nonzero probability the answer would be yes.  But since no one is there measuring, what licenses us to interpret the nonzero overlap in amplitude with the Boltzmann brain state, as a nonzero probability of there being a Boltzmann brain?  I think they, too, are implicitly suggesting: if there’s no decoherence, no arrow of time, then we’re not authorized to say that anything is happening that “counts” for anthropic purposes.

Let me now mention an obvious objection.  (In fact, when I gave the talk, this objection was raised much earlier.)  You might say, “look, if you really think irreversible decoherence is a necessary condition for consciousness, then you might find yourself forced to say that there’s no consciousness, because there might not be any such thing as irreversible decoherence!  Imagine that our entire solar system were enclosed in an anti de Sitter (AdS) boundary, like in Greg Egan’s science-fiction novel Quarantine.  Inside the box, there would just be unitary evolution in some Hilbert space: maybe even a finite-dimensional Hilbert space.  In which case, all these ‘irreversible amplifications’ that you lay so much stress on wouldn’t be irreversible at all: eventually all the Everett branches would recohere; in fact they’d decohere and recohere infinitely many times.  So by your lights, how could anything be conscious inside the box?”

My response to this involves one last speculation.  I speculate that the fact that we don’t appear to live in AdS space—that we appear to live in (something evolving toward) a de Sitter space, with a positive cosmological constant—might be deep and important and relevant.  I speculate that, in our universe, “irreversible decoherence” means: the records of what you did are now heading toward our de Sitter horizon at the speed of light, and for that reason alone—even if for no others—you can’t put Humpty Dumpty back together again.  (Here I should point out, as several workshop attendees did to me, that Bousso and Susskind explored something similar in their paper The Multiverse Interpretation of Quantum Mechanics.)

Does this mean that, if cosmologists discover tomorrow that the cosmological constant is negative, or will become negative, then it will turn out that none of us were ever conscious?  No, that’s stupid.  What it would suggest is that the attempt I’m now making on the Pretty-Hard Problem had smacked into a wall (an AdS wall?), so that I, and anyone else who stressed in-principle irreversibility, should go back to the drawing board.  (By analogy, if some prescription for getting rid of Boltzmann brains fails, that doesn’t mean we are Boltzmann brains; it just means we need a new prescription.  Tempting as it is to skewer our opponents’ positions with these sorts of strawman inferences, I hope we can give each other the courtesy of presuming a bare minimum of sense.)

Another question: am I saying that, in order to be absolutely certain of whether some entity satisfied the postulated precondition for consciousness, one might, in general, need to look billions of years into the future, to see whether the “decoherence” produced by the entity was really irreversible?  Yes (pause to gulp bullet).  I am saying that.  On the other hand, I don’t think it’s nearly as bad as it sounds.  After all, the category of “consciousness” might be morally relevant, or relevant for anthropic reasoning, but presumably we all agree that it’s unlikely to play any causal role in the fundamental laws of physics.  So it’s not as if we’ve introduced any teleology into the laws of physics by this move.

Let me end by pointing out what I’ll call the “Tegmarkian slippery slope.”  It feels scientific and rational—from the perspective of many of us, even banal—to say that, if we’re conscious, then any sufficiently-accurate computer simulation of us would also be.  But I tried to convince you that this view depends, for its aura of obviousness, on our agreeing not to probe too closely exactly what would count as a “sufficiently-accurate” simulation.  E.g., does it count if the simulation is done in heavily-encrypted form, or encoded as a giant lookup table?  Does it matter if anyone actually runs the simulation, or consults the lookup table?  Now, all the way at the bottom of the slope is Max Tegmark, who asks: to produce consciousness, what does it matter if the simulation is physically instantiated at all?  Why isn’t it enough for the simulation to “exist” mathematically?  Or, better yet: if you’re worried about your infinitely-many Boltzmann brain copies, then why not worry equally about the infinitely many descriptions of your life history that are presumably encoded in the decimal expansion of π?  Why not hold workshops about how to avoid the prediction that we’re infinitely likelier to be “living in π” than to be our “real” selves?

From this extreme, even most scientific rationalists recoil.  They say, no, even if we don’t yet know exactly what’s meant by “physical instantiation,” we agree that you only get consciousness if the computer program is physically instantiated somehow.  But now I have the opening I want.  I can say: once we agree that physical existence is a prerequisite for consciousness, why not participation in the Arrow of Time?  After all, our ordinary ways of talking about sentient beings—outside of quantum mechanics, cosmology, and maybe theology—don’t even distinguish between the concepts “exists” and “exists and participates in the Arrow of Time.”  And to say we have no experience of reversible, clonable, coherently-executable, atemporal consciousnesses is a massive understatement.

Of course, we should avoid the sort of arbitrary prejudice that Turing warned against in Computing Machinery and Intelligence.  Just because we lack experience with extraterrestrial consciousnesses, doesn’t mean it would be OK to murder an intelligent extraterrestrial if we met one tomorrow.  In just the same way, just because we lack experience with clonable, atemporal consciousnesses, doesn’t mean it would be OK to … wait!  As we said before, clonability, and aloofness from time’s arrow, call severely into question what it even means to “murder” something.  So maybe this case isn’t as straightforward as the extraterrestrials after all.

At this point, I’ve probably laid out enough craziness, so let me stop and open things up for discussion.

Subhash Khot’s prizewinning research

August 16th, 2014

I already congratulated Subhash Khot in my last post for winning the Nevanlinna Award, but this really deserves a separate post.  Khot won theoretical computer science’s highest award largely for introducing and exploring the Unique Games Conjecture (UGC), which says (in one sentence) that a large number of the approximation problems that no one has been able to prove NP-hard, really are NP-hard.  In particular, if the UGC is true, then for MAX-CUT and dozens of other important optimization problems, no polynomial-time algorithm can always get you closer to the optimal solution than some semidefinite-programming-based algorithm gets you, unless P=NP.  The UGC might or might not be true—unlike with (say) P≠NP itself, there’s no firm consensus around it—but even if it’s false, the effort to prove or disprove it has by now had a huge impact on theoretical computer science research, leading to connections with geometry, tiling, analysis of Boolean functions, quantum entanglement, and more.
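
(For the curious: the semidefinite-programming-based algorithm for MAX-CUT alluded to above is Goemans-Williamson.  Here’s a compact Python sketch of it, assuming the cvxpy package for the SDP step; the rounding is the famous random-hyperplane trick.)

```python
import cvxpy as cp
import numpy as np

def goemans_williamson_cut(n, edges, seed=0):
    """Sketch of Goemans-Williamson for MAX-CUT (assumes cvxpy).

    Solve the SDP relaxation: maximize sum over edges of (1 - X_ij)/2,
    subject to X PSD with unit diagonal; then round with a random
    hyperplane through the origin."""
    X = cp.Variable((n, n), PSD=True)
    objective = cp.Maximize(sum((1 - X[i, j]) / 2 for i, j in edges))
    cp.Problem(objective, [cp.diag(X) == 1]).solve()

    # Factor X = V V^T; row i of V is the unit vector for vertex i.
    w, Q = np.linalg.eigh(X.value)
    V = Q * np.sqrt(np.clip(w, 0, None))

    r = np.random.default_rng(seed).normal(size=n)
    side = np.sign(V @ r)             # which side of the hyperplane
    return sum(side[i] != side[j] for i, j in edges)

# Try it on a 5-cycle (optimum cut: 4 of the 5 edges).
print(goemans_williamson_cut(5, [(0, 1), (1, 2), (2, 3), (3, 4), (4, 0)]))
```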

There are a few features that make the UGC interesting, compared to most other questions considered in complexity theory.  Firstly, the problem that the UGC asserts is NP-hard—basically, given a list of linear equations in 2 variables each, to satisfy as many of the equations as you can—is a problem with “imperfect completeness.”  This means that, if you just wanted to know whether all the linear equations were simultaneously satisfiable, the question would be trivial to answer, using Gaussian elimination.  So the problem only becomes interesting once you’re told that the equations are not simultaneously satisfiable, but you’d like to know (say) whether it’s possible to satisfy 99% of the equations or only 1%.  A second feature is that, because of the 2010 work of Arora, Barak, and Steurer, we know that there is an algorithm that solves the unique games problem in “subexponential time”: specifically, in time exp(n^poly(δ)), where δ is the completeness error (that is, the fraction of linear equations that are unsatisfiable, in the case that most of them are satisfiable).  This doesn’t mean that the unique games problem can’t be NP-hard: it just means that, if there is an NP-hardness proof, then the reduction will need to blow up the instance sizes by an n^poly(1/δ) factor.
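
(If a toy example helps: below is a Python sketch of the unique games problem itself, with equations of the form x_i − x_j ≡ c (mod k); it has nothing to do with Subhash’s actual reductions.  Deciding whether all the equations hold at once is easy propagation, which is just Gaussian elimination specialized to two-variable equations, while the “satisfy as many as you can” version is brute-forced here only because the instance is tiny.)

```python
import itertools

def perfectly_satisfiable(n, k, eqs):
    """Decide whether all equations x_i - x_j = c (mod k) hold at once.
    Easy: fixing one variable per connected component forces the rest."""
    x = [None] * n
    for start in range(n):
        if x[start] is not None:
            continue
        x[start] = 0
        frontier = [start]
        while frontier:
            u = frontier.pop()
            for i, j, c in eqs:
                for a, b, d in ((i, j, c), (j, i, (-c) % k)):
                    if a == u and x[b] is None:
                        x[b] = (x[a] - d) % k   # x_a - x_b = d forces x_b
                        frontier.append(b)
    return all((x[i] - x[j]) % k == c for i, j, c in eqs)

def max_satisfiable_fraction(n, k, eqs):
    """Brute force over all k^n assignments -- purely illustrative."""
    best = max(sum((x[i] - x[j]) % k == c for i, j, c in eqs)
               for x in itertools.product(range(k), repeat=n))
    return best / len(eqs)

eqs = [(0, 1, 1), (1, 2, 1), (2, 0, 1)]     # constraints around a triangle
print(perfectly_satisfiable(3, 2, eqs))     # False: they sum to 1 (mod 2)
print(max_satisfiable_fraction(3, 2, eqs))  # 2/3
```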

To be clear, neither of the above features is unique (har, har) to unique games: we’ve long known NP-complete problems, like MAX-2SAT, that have the imperfect completeness feature, and we also know NP-hardness reductions that blow up the instance size by an n^poly(1/δ) factor for inherent reasons (for example, for the Set Cover problem).  But perhaps nothing points as clearly as UGC at the directions that researchers in hardness of approximation and probabilistically checkable proofs (PCP) would like to be able to go.  A proof of the Unique Games Conjecture would basically be a PCP theorem on steroids.  (Or, since we already have “PCP theorems on steroids,” maybe a PCP theorem on PCP?)

It’s important to understand that, between the UGC being true and the unique games problem being solvable in polynomial time, there’s a wide range of intermediate possibilities, many of which are being actively investigated.  For example, the unique games problem could be “NP-hard,” but via a reduction that itself takes subexponential time (i.e., it could be hard assuming the Exponential-Time Hypothesis).  It could be solvable much faster than Arora-Barak-Steurer but still not in P.  Or, even if the problem weren’t solvable any faster than is currently known, it could be “hard without being NP-hard,” having a similar status to factoring or graph isomorphism.  Much current research into the UGC is focused on a particular algorithm called the Sum-of-Squares algorithm (i.e., the Lasserre hierarchy).  Some researchers suspect that, if any algorithm will solve the unique games problem in polynomial time (or close to that), it will be Sum-of-Squares; conversely, if one could show that Sum-of-Squares failed, one would’ve taken a major step toward proving the UGC.

For more, I recommend this Quanta magazine article, or Luca Trevisan’s survey, or Subhash’s own survey.  Or those pressed for time can simply check out this video interview with Subhash.  If you’d like to try my wife Dana’s puzzle games inspired by PCP, which Subhash uses 2 minutes into the video to explain what he works on, see here.  Online, interactive versions of these puzzle games are currently under development.  Also, if you have questions about the UGC or Subhash’s work, go ahead and ask: I’ll answer if I can, and otherwise rely on in-house expertise.

Congratulations again to Subhash!

Is the P vs. NP problem ill-posed? (Answer: no.)

August 13th, 2014

A couple days ago, a reader wrote to me to ask whether it’s possible that the solution to the P vs. NP problem is simply undefined—and that one should enlarge the space of possible answers using non-classical logics (the reader mentioned something called Catuṣkoṭi logic).  Since other people have emailed me with similar questions in the past, I thought my response might be of more general interest, and decided to post it here.


Thanks for your mail!  I’m afraid I don’t agree with you that there’s a problem in the formulation of P vs. NP.  Let me come at it this way:

Do you also think there might be a problem in the formulation of Goldbach’s Conjecture?  Or the Twin Prime Conjecture?  (I.e., that maybe the definition of “prime number” needs to be modified using Catuṣkoṭi logic?)  Or any other currently-unsolved problem in any other part of math?

If you don’t, then my question would be: why single out P vs. NP?

After all, P vs. NP can be expressed as a Π₂-sentence: that is, as a certain relationship among positive integers, which either holds or doesn’t hold.  (In this case, the integers would encode Turing machines, polynomial upper bounds on their running time, and an NP-complete problem like 3SAT — all of which are expressible using the basic primitives of arithmetic.)  In terms of its logical form, then, it’s really no different than the Twin Prime Conjecture and so forth.

So then, do you think that statements of arithmetic, like there being no prime number between 24 and 28, might also be like the Parallel Postulate?  That there might be some other, equally-valid “non-Euclidean arithmetic” where there is a prime between 24 and 28?  What exactly would one mean by that?  I understand exactly what one means by non-Euclidean geometries, but to my mind, geometry is less “fundamental” (at least in a logical sense) than positive integers are.  And of course, even if one believes that non-Euclidean geometries are just as “fundamental” as Euclidean geometry — an argument that seems harder to make for, say, the positive integers versus the Gaussian integers or finite fields or p-adics  — that still doesn’t change the fact that questions about Euclidean geometry have definite right answers.

Let me acknowledge two important caveats to what I said:

First, it’s certainly possible that P vs. NP might be independent of standard formal systems like ZF set theory (i.e., neither provable nor disprovable in them).  That’s a possibility that everyone acknowledges, even if (like me) they consider it rather unlikely.  But note that, even if P vs. NP were independent of our standard formal systems, that still wouldn’t mean that the question was ill-posed!  There would still either be a Turing machine that decided 3SAT in polynomial time, or else there wouldn’t be.  It would “only” mean that the usual axioms of set theory wouldn’t suffice to tell us which.

The second caveat is that P vs. NP, like any other mathematical question, can be generalized and extended in all sorts of interesting ways.  So for example, one can define analogues of P vs. NP over the reals and complex numbers (which are also currently open, but which might be easier than the Boolean version).  Or, even if P≠NP, one can still ask if randomized algorithms, or nonuniform algorithms, or quantum algorithms, might be able to solve NP-complete problems in polynomial time.  Or one can ask whether NP-complete problems are at least efficiently solvable “on average,” if not in the worst case.  Every one of these questions has been actively researched, and you could make a case that some of them are just as interesting as the original P vs. NP question, if not more interesting — if history had turned out a little different, any one of these might have been what we’d taken as our “flagship” question, rather than P vs. NP.  But again, this still doesn’t change the fact that the original P vs. NP question has some definite answer (like, for example, P≠NP…), even if we can’t prove which answer it is, even if we won’t be able to prove it for 500 years.

And please keep in mind that, if P vs. NP were solved after being open for hundreds of years, it would be far from the first such mathematical problem!  Fermat’s Last Theorem stayed open for 350 years, and the impossibility of squaring the circle and trisecting the angle were open for more than 2000 years.  Any time before these problems were solved, one could’ve said that maybe people had failed because the question itself was ill-posed, but one would’ve been mistaken.  People simply hadn’t invented the right ideas yet.

Best regards,
Scott


Unrelated Announcements: As most of you have probably seen, Subhash Khot won the Nevanlinna Prize, while Maryam Mirzakhani, Artur Avila, Manjul Bhargava and Martin Hairer won the Fields Medal. Mirzakhani is the first female Fields Medalist. Congratulations to all!

Also, I join the rest of the world in saying that Robin Williams was a great actor—there was no one better at playing “the Robin Williams role” in any given movie—and his loss is a loss for humanity.

US State Department: Let in cryptographers and other scientists

July 26th, 2014

Predictably, my last post attracted plenty of outrage (some of it too vile to let through), along with the odd commenter who actually agreed with what I consider my fairly middle-of-the-road, liberal Zionist stance.  But since the outrage came from both sides of the issue, and the two sides were outraged about the opposite things, I guess I should feel OK about it.

Still, it’s hard not to smart from the burns of vituperation, so today I’d like to blog about a very different political issue: one where hopefully almost all Shtetl-Optimized readers will actually agree with me (!).

I’ve learned from colleagues that, over the past year, foreign-born scientists have been having enormously more trouble getting visas to enter the US than they used to.  The problem, I’m told, is particularly severe for cryptographers: embassy clerks are now instructed to ask specifically whether computer scientists seeking to enter the US work in cryptography.  If an applicant answers “yes,” it triggers a special process where the applicant hears nothing back for months, and very likely misses the workshop in the US that he or she had planned to attend.  The root of the problem, it seems, is something called the Technology Alert List (TAL), which has been around for a while—the State Department beefed it up in response to the 9/11 attacks—but which, for some unknown reason, is only now being rigorously enforced.  (Being marked as working in one of the sensitive fields on this list is apparently called “getting TAL’d.”)

The issue reached a comical extreme last October, when Adi Shamir, the “S” in RSA, Turing Award winner, and foreign member of the US National Academy of Sciences, was prevented from entering the US to speak at a “History of Cryptology” conference sponsored by the National Security Agency.  According to Shamir’s open letter detailing the incident, not even his friends at the NSA, or the president of the NAS, were able to grease the bureaucracy at the State Department for him.

It should be obvious to everyone that a crackdown on academic cryptographers serves no national security purpose whatsoever, and if anything harms American security and economic competitiveness, by diverting scientific talent to other countries.  (As Shamir delicately puts it, “the number of terrorists among the members of the US National Academy of Science is rather small.”)  So:

  1. Any readers who have more facts about what’s going on, or personal experiences, are strongly encouraged to share them in the comments section.
  2. Any readers who might have any levers of influence to pull on this issue—a Congressperson to write to, a phone call to make, an Executive Order to issue (I’m talking to you, Barack), etc.—are strongly encouraged to pull them.

3-sentence summary of what’s happening in Israel and Gaza

July 24th, 2014

Hamas is trying to kill as many civilians as it can.

Israel is trying to kill as few civilians as it can.

Neither is succeeding very well.


Update (July 28): Please check out a superb essay by Sam Harris on the Israeli/Palestinian conflict.  While, as Harris says, the essay contains “something to offend everyone”—even me—it also brilliantly articulates many of the points I’ve been trying to make in this comment thread.

See also a good HuffPost article by Ali A. Rizvi, a “Pakistani-Canadian writer, physician, and musician.”

“How Might Quantum Information Transform Our Future?”

July 22nd, 2014

So, the Templeton Foundation invited me to write a 1500-word essay on the above question.  It’s like a blog post, except they pay me to do it!  My essay is now live, here.  I hope you enjoy my attempt at techno-futurist prose.  You can comment on the essay either here or over at Templeton’s site.  Thanks very much to Ansley Roan for commissioning the piece.

Seth Teller (1964-2014)

July 11th, 2014

[Photo: Seth Teller]

Seth Teller was a colleague of mine in CSAIL and the EECS department, and was one of my favorite people in all of MIT.  He was a brilliant roboticist, who (among many other things) spearheaded MIT’s participation in the DARPA Grand Challenge for self-driving cars, and who just recently returned from a fact-finding trip to Fukushima, Japan, to see how robots could help in investigating the damaged reactor cores there.  I saw Seth twice a week at lab and department lunches, and he often struck up conversations with me about quantum computing, cosmology, and other things.  His curiosity was immense, wide-ranging, and almost childlike (in the best way).  One small indication of his character is that, in the DARPA challenge, Seth opted not to preload MIT’s car with detailed data about the course, because he thought doing so made the challenge scientifically less interesting—even though DARPA’s rules allowed such preloading, the other teams did it, and it almost certainly would have improved MIT’s standing in the competition.

Seth was a phenomenal speaker, whose passion and clarity always won me over even though my research interests were far from his.  I made it a point to show up for lab lunch whenever I knew he’d be speaking.  Seth was also, from what I’ve heard, a superb mentor and teacher, who won an award earlier this year for his undergraduate advising.

Seth died ten days ago, on July 1st.  (See here for MIT News’s detailed obituary, and here for an article in Cambridge Day.)  While no cause of death was given at the time, according to an update yesterday in the MIT Tech, the death has been ruled a suicide.  Seth is survived by his wife, Rachel, and by two daughters.

With his cheerful, can-do disposition, Seth is one of the last people on earth I’d imagine doing this: whatever he was going through, he did an unbelievable job of hiding it.  I’m certain he wouldn’t abandon his family unless he was suffering unimaginable pain.  If there’s a tiny atom of good to come out of this, I hope that at least one other person contemplating suicide will reflect on how much Seth had to live for, and that doing so will inspire that person to get the medical help they need.

Incidentally, outside of his research and teaching, Seth was also an activist for protecting the physical environment and open spaces of East Cambridge.  At the “Wild and Crazy Ideas Session” of one CSAIL retreat, Seth floated a truly wild idea: to replace Memorial Drive, or at least the part of it that separates the MIT campus from the Charles River, by an underground tunnel, so that the land above the tunnel could be turned into a beautiful riverfront park.  In his characteristic fashion, Seth had already done a pretty detailed engineering analysis, estimating the cost at “merely” a few hundred million dollars: a lot, but a worthy investment in MIT’s future.  In any case, I can’t imagine a better way to memorialize Seth than to name some green space in East Cambridge after him, and I hope that happens.

Seth will be sorely missed.  My thoughts go out to his family at this difficult time.

The Power of the Digi-Comp II: My First Conscious Paperlet

July 4th, 2014

Foreword: Right now, I have a painfully-large stack of unwritten research papers.  Many of these are “paperlets”: cool things I noticed that I want to tell people about, but that would require a lot more development before they became competitive for any major theoretical computer science conference.  And what with the baby, I simply don’t have time anymore for the kind of obsessive, single-minded, all-nighter-filled effort needed to bulk my paperlets up.  So starting today, I’m going to try turning some of my paperlets into blog posts.  I don’t mean advertisements or sneak previews for papers, but replacements for papers: blog posts that constitute the entirety of what I have to say for now about some research topic.  “Peer reviewing” (whether signed or anonymous) can take place in the comments section, and “citation” can be done by URL.  The hope is that, much like with 17th-century scientists who communicated results by letter, this will make it easier to get my paperlets done: after all, I’m not writing Official Papers, just blogging for colleagues and friends.

Of course, I’ve often basically done this before—as have many other academic bloggers—but now I’m going to go about it more consciously.  I’ve thought for years that the Internet would eventually change the norms of scientific publication much more radically than it so far has: that yes, instant-feedback tools like blogs and StackExchange and MathOverflow might have another decade or two at the periphery of progress, but their eventual destiny is at the center.  And now that I have tenure, it hit me that I can do more than prognosticate about such things.  I’ll start small: I won’t go direct-to-blog for big papers, papers that cry out for LaTeX formatting, or joint papers.  I certainly won’t do it for papers with students who need official publications for their professional advancement.  But for things like today’s post—on the power of a wooden mechanical computer now installed in the lobby of the building where I work—I hope you agree that the Science-by-Blog Plan fits well.

Oh, by the way, happy July 4th to American readers!  I hope you find that a paperlet about the logspace-interreducibility of a few not-very-well-known computational models captures everything that the holiday is about.


The Power of the Digi-Comp II

by Scott Aaronson

Abstract

I study the Digi-Comp II, a wooden mechanical computer whose only moving parts are balls, switches, and toggles.  I show that the problem of simulating (a natural abstraction of) the Digi-Comp, with a polynomial number of balls, is complete for CC (Comparator Circuit), a complexity class defined by Subramanian in 1990 that sits between NL and P.  This explains why the Digi-Comp is capable of addition, multiplication, division, and other arithmetical tasks, and also implies new tasks of which the Digi-Comp is capable (and that indeed are complete for it), including the Stable Marriage Problem, finding a lexicographically-first perfect matching, and the simulation of other Digi-Comps.  However, it also suggests that the Digi-Comp is not a universal computer (not even in the circuit sense), making it a very interesting way to fall short of Turing-universality.  I observe that even with an exponential number of balls, simulating the Digi-Comp remains in P, but I leave open the problem of pinning down its complexity more precisely.

Introduction

To celebrate his 60th birthday, my colleague Charles Leiserson (who some of you might know as the “L” in the CLRS algorithms textbook) had a striking contraption installed in the lobby of the MIT Stata Center.  That contraption, pictured below, is a custom-built, supersized version of a wooden mechanical computer from the 1960s called the Digi-Comp II, now manufactured and sold by a company called Evil Mad Scientist.

Click here for a short video showing the Digi-Comp’s operation (and here for the user’s manual).  Basically, the way it works is this: a bunch of balls (little steel balls in the original version, pool balls in the supersized version) start at the top and roll to the bottom, one by one.  On their way down, the balls may encounter black toggles, which route each incoming ball either left or right.  Whenever this happens, the weight of the ball flips the toggle to the opposite setting: so for example, if a ball goes left, then the next ball to encounter the same toggle will go right, and the ball after that will go left, and so on.  The toggles thus maintain a “state” for the computer, with each toggle storing one bit.

Besides the toggles, there are also “switches,” which the user can set at the beginning to route every incoming ball either left or right, and whose settings aren’t changed by the balls.  And then there are various wooden tunnels and ledges, whose function is simply to direct the balls in a desired way as they roll down.  A ball could reach different locations, or even the same location in different ways, depending on the settings of the toggles and switches above that location.  On the other hand, once we fix the toggles and switches, a ball’s motion is completely determined: there’s no random or chaotic element.

“Programming” is done by configuring the toggles and switches in some desired way, then loading a desired number of balls at the top and letting them go.  “Reading the output” can be done by looking at the final configuration of some subset of the toggles.

Whenever a ball reaches the bottom, it hits a lever that causes the next ball to be released from the top.  This ensures that the balls go through the device one at a time, rather than all at once.  As we’ll see, however, this is mainly for aesthetic reasons, and maybe also for the mechanical reason that the toggles wouldn’t work properly if two or more balls hit them at once.  The actual logic of the machine doesn’t care about the timing of the balls; the sheer number of balls that go through is all that matters.

The Digi-Comp II, as sold, contains a few other features: most notably, toggles that can be controlled by other toggles (or switches).  But I’ll defer discussion of that feature to later.  As we’ll see, we already get a quite interesting model of computation without it.

One final note: of course the machine that’s sold has a fixed size and a fixed geometry.  But for theoretical purposes, it’s much more interesting to consider an arbitrary network of toggles and switches (not necessarily even planar!), with arbitrary size, and with an arbitrary number of balls fed into it.  (I’ll give a more formal definition in the next section.)

The Power of the Digi-Comp

So, what exactly can the Digi-Comp do?  As a first exercise, you should convince yourself that, by simply putting a bunch of toggles in a line and initializing them all to “L” (that is, Left), it’s easy to set up a binary counter.  In other words, starting from the configuration, say, LLL (in which three toggles all point left), as successive balls pass through we can enter the configurations RLL, LRL, RRL, etc.  If we interpret L as 0 and R as 1, and treat the first bit as the least significant, then we’re simply counting from 0 to 7 in binary.  With 20 toggles, we could instead count to 1,048,575.

[Figure: a line of toggles acting as a binary counter]
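
Here’s a quick Python sketch of the counter in action (my own toy model: I’ve abstracted the ball’s behavior as “flip each toggle you hit, and descend to the next toggle only if the toggle was pointing R”—in other words, a ripple-carry increment):

    def drop_ball(toggles):
        # toggles: list of 'L'/'R' settings, least-significant toggle first
        for i, t in enumerate(toggles):
            toggles[i] = 'R' if t == 'L' else 'L'   # the ball's weight flips the toggle
            if t == 'L':
                break          # flipped L -> R: the ball exits without going deeper

    toggles = ['L', 'L', 'L']
    for _ in range(5):
        drop_ball(toggles)
    print(''.join(toggles))    # RLR: 5 in binary, least significant bit first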

But counting is not the most interesting thing we can do.  As Charles eagerly demonstrated to me, we can also set up the Digi-Comp to perform binary addition, binary multiplication, sorting, and even long division.  (Excruciatingly slowly, of course: the Digi-Comp might need even more work to multiply 3×5 than existing quantum computers need to factor the result!)

To me, these demonstrations served only as proof that, while Charles might call himself a theoretical computer scientist, he’s really a practical person at heart.  Why?  Because a theorist would know that the real question is not what the Digi-Comp can do, but rather what it can’t do!  In particular, do we have a universal computer on our hands here, or not?

If the answer is yes, then it’s amazing that such a simple contraption of balls and toggles could already take us over the threshold of universality.  Universality would immediately explain why the Digi-Comp is capable of multiplication, division, sorting, and so on.  If, on the other hand, we don’t have universality, that too is extremely interesting—for we’d then face the challenge of explaining how the Digi-Comp can do so many things without being universal.

It might be said that the Digi-Comp is certainly not a universal computer, since if nothing else, it’s incapable of infinite loops.  Indeed, the number of steps that a given Digi-Comp can execute is bounded by the number of balls, while the number of bits it can store is bounded by the number of toggles: clearly we don’t have a Turing machine.  This is true, but doesn’t really tell us what we want to know.  For, as discussed in my last post, we can consider not only Turing-machine universality, but also the weaker (but still interesting) notion of circuit-universality.  The latter means the ability to simulate, with reasonable efficiency, any Boolean circuit of AND, OR, and NOT gates—and hence, in particular, to compute any Boolean function on any fixed number of input bits (given enough resources), or to simulate any polynomial-time Turing machine (given polynomial resources).

The formal way to ask whether something is circuit-universal, is to ask whether the problem of simulating the thing is P-complete.  Here P-complete (not to be confused with NP-complete!) basically means the following:

There exists a polynomial p such that any S-step Turing machine computation—or equivalently, any Boolean circuit with at most S gates—can be embedded into our system if we allow the use of poly(S) computing elements (in our case, balls, toggles, and switches).

Of course, I need to tell you what I mean by the weasel phrase “can be embedded into.”  After all, it wouldn’t be too impressive if the Digi-Comp could “solve” linear programming, primality testing, or other highly-nontrivial problems, but only via “embeddings” in which we had to do essentially all the work, just to decide how to configure the toggles and switches!  The standard way to handle this issue is to demand that the embedding be “computationally simple”: that is, we should be able to carry out the embedding in L (logarithmic space), or some other complexity class believed to be much smaller than the class (P, in this case) for which we’re trying to prove completeness.  That way, we’ll be able to say that our device really was “doing something essential”—i.e., something that our embedding procedure couldn’t efficiently do for itself—unless the larger complexity class collapses with the smaller one (i.e., unless L=P).

So then, our question is whether simulating the Digi-Comp II is a P-complete problem under L-reductions, or alternatively, whether the problem is in some complexity class believed to be smaller than P.  The one last thing we need is a formal definition of “the problem of simulating the Digi-Comp II.”  Thus, let DIGICOMP be the following problem:

We’re given as inputs:

  • A directed acyclic graph G, with n vertices.  There is a designated vertex with indegree 0 and outdegree 1 called the “source,” and a designated vertex with indegree 1 and outdegree 0 called the “sink.”  Every internal vertex v (that is, every vertex with both incoming and outgoing edges) has exactly two outgoing edges, labeled “L” (left) and “R” (right), as well as one bit of internal state s(v)∈{L,R}.
  • For each vertex v, an “initial” value for its internal state s(v).
  • A positive integer T (encoded in unary notation), representing the number of balls dropped successively from the source vertex.

Computation proceeds as follows: each time a ball appears at the source vertex, it traverses the path induced by the L and R states of the vertices that it encounters, until it reaches a terminal vertex, which might or might not be the sink.  As the ball traverses the path, it flips s(v) for each vertex v that it encounters: L goes to R and R goes to L.  Then the next ball is dropped in.

The problem is to decide whether any balls reach the sink.

Here the internal vertices represent toggles, and the source represents the chute at the top from which the balls drop.  Switches aren’t included, since (by definition) the reduction can simply fix their values to “L” or “R” and thereby simplify the graph.
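
For concreteness, here’s a direct ball-by-ball simulation of DIGICOMP in Python (a minimal sketch of the definition above; the succ/toggle encoding is my own choice, with the source’s single outgoing edge represented as a repeated successor):

    def digicomp(succ, toggle, source, sink, T):
        # succ[v] = (left, right) for each non-terminal vertex;
        # succ[source] = (u, u), since the source has only one outgoing edge.
        hits = 0
        for _ in range(T):              # T is given in unary, so this is polynomial
            v = source
            while v in succ:
                left, right = succ[v]
                if v not in toggle:     # the source has no state to flip
                    v = left
                elif toggle[v] == 'L':
                    toggle[v] = 'R'     # flip the toggle as the ball passes
                    v = left
                else:
                    toggle[v] = 'L'
                    v = right
            hits += (v == sink)
        return hits > 0                 # does any ball reach the sink?

    # One toggle that routes alternate balls toward the sink:
    succ = {'src': ('t', 't'), 't': ('sink', 'dead')}
    print(digicomp(succ, {'t': 'L'}, 'src', 'sink', 1))   # True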

Of course we could consider other problems: for example, the problem of deciding whether an odd number of balls reach the sink, or of counting how many balls reach the sink, or of computing the final value of every state-variable s(v).  However, it’s not hard to show that all of these problems are interreducible with the DIGICOMP problem as defined above.

The Class CC

My main result, in this paperlet, is to pin down the complexity of the DIGICOMP problem in terms of a complexity class called CC (Comparator Circuit): a class that’s obscure enough not to be in the Complexity Zoo (!), but that’s been studied in several papers.  CC was defined by Subramanian in his 1990 Stanford PhD thesis; around the same time Mayr and Subramanian showed the inclusion NL ⊆ CC (the inclusion CC ⊆ P is immediate).  Recently Cook, Filmus, and Lê revived interest in CC with their paper The Complexity of the Comparator Circuit Value Problem, which is probably the best current source of information about this class.

OK, so what is CC?  Informally, it’s the class of problems that you can solve using a comparator circuit, which is a circuit that maps n bits of input to n bits of output, and whose only allowed operation is to sort any desired pair of bits.  That is, a comparator circuit can repeatedly apply the transformation (x,y)→(x∧y,x∨y), in which 00, 01, and 11 all get mapped to themselves, while 10 gets mapped to 01.  Note that there’s no facility in the circuit for copying bits (i.e., for fanout), so sorting could irreversibly destroy information about the input.  In the comparator circuit value problem (or CCV), we’re given as input a description of a comparator circuit C, along with an input x∈{0,1}^n and an index i∈[n]; then the problem is to determine the final value of the ith bit when C is applied to x.  Then CC is simply the class of all languages that are L-reducible to CCV.

[Figure: a comparator gate sorting a pair of bits]
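
Evaluating a comparator circuit is almost embarrassingly simple in code.  Here’s a minimal Python sketch (my own encoding, with each gate given as a pair of wire indices):

    def eval_comparator_circuit(gates, bits):
        bits = list(bits)
        for x, y in gates:
            # the only allowed operation: sort wires x and y, so that
            # (x, y) -> (x AND y, x OR y); i.e., 10 becomes 01
            bits[x], bits[y] = bits[x] & bits[y], bits[x] | bits[y]
        return bits

    # A CCV instance: a circuit, an input string, and an index whose final bit we want
    print(eval_comparator_circuit([(0, 1), (1, 2)], [1, 0, 1])[1])   # 1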

As Cook et al. discuss, there are various other characterizations of CC: for example, rather than using a complete problem, we can define CC directly as the class of languages computed by uniform families of comparator circuits.  More strikingly, Mayr and Subramanian showed that CC has natural complete problems, which include (decision versions of) the famous Stable Marriage Problem, as well as finding the lexicographically first perfect matching in a bipartite graph.  So perhaps the most appealing definition of CC is that it’s “the class of problems that can be easily mapped to the Stable Marriage Problem.”

It’s a wide-open problem whether CC=NL or CC=P: as usual, one can give oracle separations, but as far as anyone knows, either equality could hold without any dramatic implications for “standard” complexity classes.  (Of course, the conjunction of these equalities would have a dramatic implication.)  What got Cook et al. interested was that CC isn’t even known to contain (or be contained in) the class NC of parallelizable problems.  In particular, linear-algebra problems in NC, like determinant, matrix inversion, and iterated matrix multiplication—not to mention other problems in P, like linear programming and greatest common divisor—might all be examples of problems that are efficiently solvable by Boolean circuits, but not by comparator circuits.

One final note about CC.  Cook et al. showed the existence of a universal comparator circuit: that is, a single comparator circuit C able to simulate any other comparator circuit C’ of some fixed size, given a description of C’ as part of its input.

DIGICOMP is CC-Complete

I can now proceed to my result: that, rather surprisingly, the Digi-Comp II can solve exactly the problems in CC, giving us another characterization of that class.

I’ll prove this using yet another model of computation, which I call the pebbles model.  In the pebbles model, you start out with a pile of x pebbles; the positive integer x is the “input” to your computation.  Then you’re allowed to apply a straight-line program that consists entirely of the following two operations:

  1. Given any pile of y pebbles, you can split it into two piles consisting of ⌈y/2⌉ and ⌊y/2⌋ pebbles respectively.
  2. Given any two piles, consisting of y and z pebbles respectively, you can combine them into a single pile consisting of y+z pebbles.

Your program “accepts” if and only if some designated output pile contains at least one pebble (or, in a variant that can be shown to be equivalent, if it contains an odd number of pebbles).

[Figure: splitting and combining piles of pebbles]
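
In Python, the whole model amounts to two helper functions, with a “program” being straight-line code over them (a minimal sketch):

    def split(y):
        return (y + 1) // 2, y // 2   # operation 1: ceiling and floor halves

    def combine(y, z):
        return y + z                  # operation 2: merge two piles

    # A tiny straight-line pebble program on the input x = 5:
    x = 5
    a, b = split(x)       # piles of 3 and 2 (x itself is consumed--no copying!)
    c = combine(a, b)     # back to a single pile of 5
    d, e = split(c)
    print(d >= 1)         # "accept" iff the designated output pile is nonempty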

As suggested by the imagery, you don’t get to make “backup copies” of the piles before splitting or combining them: if, for example, you merge y with z to create y+z, then y isn’t also available to be split into ⌈y/2⌉ and ⌊y/2⌋.

Note that the ceiling and floor functions are the only “nonlinear” elements of the pebbles model: if not for them, we’d simply be applying a sequence of linear transformations.

I can now divide my CC-completeness proof into two parts: first, that DIGICOMP (i.e., the problem of simulating the Digi-Comp II) is equivalent to the pebbles model, and second, that the pebbles model is equivalent to comparator circuits.

Let’s first show the equivalence between DIGICOMP and pebbles.  The reduction is simply this: in a given Digi-Comp, each edge will be associated to a pile, with the number of pebbles in the pile equal to the total number of balls that ever traverse that edge.  Thus, we have T balls dropped into the edge incident to the source vertex, corresponding to an initial pile with T pebbles.  Multiple edges pointing to the same vertex (i.e., fan-in) can be modeled by combining the associated piles into a single pile.  Meanwhile, a toggle has the effect of splitting a pile: if y balls enter the toggle in total, then ⌈y/2⌉ balls will ultimately exit in whichever direction the toggle was pointing initially (whether left or right), and ⌊y/2⌋ balls will ultimately exit in the other direction.  It’s clear that this equivalence works in both directions: not only does it let us simulate any given Digi-Comp by a pebble program, it also lets us simulate any pebble program by a suitably-designed Digi-Comp.

OK, next let’s show the equivalence between pebbles and comparator circuits.  As a first step, given any comparator circuit, I claim that we can simulate it by a pebble program.  The way to do it is simply to use a pile of 0 pebbles to represent each “0” bit, and a pile of 1 pebble to represent each “1” bit.  Then, any time we want to sort two bits, we simply merge their corresponding piles, then split the result back into two piles.  The result?  00 gets mapped to 00, 11 gets mapped to 11, and 01 and 10 both get mapped to one pebble in the ⌈y/2⌉ pile and zero pebbles in the ⌊y/2⌋ pile.  At the end, a given pile will have a pebble in it if and only if the corresponding output bit in the comparator circuit is 1.
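
Or, turning that paragraph directly into code (again just my illustration):

    def sort_two_bits(x, y):
        # bits as piles of 0 or 1 pebbles: merge them, then split back apart
        merged = x + y
        hi, lo = (merged + 1) // 2, merged // 2   # ceiling half = x OR y, floor half = x AND y
        return lo, hi                             # the comparator map (x, y) -> (x AND y, x OR y)

    for pair in [(0, 0), (0, 1), (1, 0), (1, 1)]:
        print(pair, '->', sort_two_bits(*pair))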

One might worry that the input to a comparator circuit is a sequence of bits, whereas I said before that the input to a pebble program is just a single pile.  However, it’s not hard to see that we can deal with this, without leaving the world of logspace reductions, by breaking up an initial pile of n pebbles into n piles each of zero pebbles or one pebble, corresponding to any desired n-bit string (along with some extra pebbles, which we subsequently ignore).  Alternatively, we could generalize the pebbles model so that the input can consist of multiple piles.  One can show, by a similar “breaking-up” trick, that this wouldn’t affect the pebbles model’s equivalence to the DIGICOMP problem.

Finally, given a pebble program, I need to show how to simulate it by a comparator circuit.  The reduction works as follows: let T be the number of pebbles we’re dealing with (or even just an upper bound on that number).  Then each pile will be represented by its own group of T wires in the comparator circuit.  The Hamming weight of those T wires—i.e., the number of them that contain a ‘1’ bit—will equal the number of pebbles in the corresponding pile.

To merge two piles, we first merge the corresponding groups of T wires.  We then use comparator gates to sort the bits in those 2T wires, until all the ‘1’ bits have been moved into the first T wires.  Finally, we ignore the remaining T wires for the remainder of the computation.

To split a pile, we first use comparator gates to sort the bits in the T wires, until all the ‘1’ bits have been moved to the left.  We then route all the odd-numbered wires into “Pile A” (the one that’s supposed to get ⌈y/2⌉ pebbles), and route all the even-numbered wires into “Pile B” (the one that’s supposed to get ⌊y/2⌋ pebbles).  Finally, we introduce T additional wires with 0’s in them, so that piles A and B have T wires each.

At the end, by examining the leftmost wire in the group of wires corresponding to the output pile, we can decide whether that pile ends up with any pebbles in it.
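
Here’s an illustrative Python sketch of the wire encoding.  One important caveat: I use Python’s built-in sort as a stand-in for the fixed network of comparator gates that the actual reduction would lay down (any logspace-constructible sorting network would do there):

    T = 8   # an upper bound on the total number of pebbles

    def pile(y):
        # a pile of y pebbles = T wires of Hamming weight y, ones on the left
        return [1] * y + [0] * (T - y)

    def merge_wires(a, b):
        s = sorted(a + b, reverse=True)   # stand-in for a comparator sorting network
        return s[:T]                      # all the 1s land in the first T wires

    def split_wires(a):
        s = sorted(a, reverse=True)
        pile_A = s[0::2] + [0] * (T // 2)   # odd-numbered wires: ceil(y/2) ones
        pile_B = s[1::2] + [0] * (T // 2)   # even-numbered wires: floor(y/2) ones
        return pile_A, pile_B

    a, b = split_wires(merge_wires(pile(3), pile(2)))
    print(sum(a), sum(b))   # 3 2: five pebbles merged, then split into 3 and 2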

Since it’s clear that all of the above transformations can be carried out in logspace (or even smaller complexity classes), this completes the proof that DIGICOMP is CC-complete under L-reductions.  As corollaries, the Stable Marriage and lexicographically-first perfect matching problems are L-reducible to DIGICOMP—or informally, are solvable by easily-described, polynomial-size Digi-Comp machines (and indeed, characterize the power of such machines).  Combining my result with the universality result of Cook et al., a second corollary is that there exists a “universal Digi-Comp”: that is, a single Digi-Comp D that can simulate any other Digi-Comp D’ of some polynomially-smaller size, so long as we initialize some subset of the toggles in D to encode a description of D’.

How Does the Digi-Comp Avoid Universality?

Let’s now step back and ask: given that the Digi-Comp is able to do so many things—division, Stable Marriage, bipartite matching—how does it fail to be a universal computer, at least a circuit-universal one?  Is the Digi-Comp a counterexample to the oft-repeated claims of people like Stephen Wolfram, about the ubiquity of universal computation and the difficulty of avoiding it in any sufficiently complex system?  What would need to be added to the Digi-Comp to make it circuit-universal?  Of course, we can ask the same questions about pebble programs and comparator circuits, now that we know that they’re all computationally equivalent.

The reason for the failure of universality is perhaps easiest to see in the case of comparator circuits.  As Steve Cook pointed out in a talk, comparator circuits are “1-Lipschitz”: that is, if you have a comparator circuit acting on n input bits, and you change one of the input bits, your change can affect at most one output bit.  Why?  Well, trace through the circuit and use induction.  So in particular, there’s no amplification of small effects in comparator circuits, no chaos, no sensitive dependence on initial conditions, no whatever you want to call it.  Now, while chaos doesn’t suffice for computational universality, at least naïvely it’s a necessary condition, since there exist computations that are chaotic.  Of course, this simpleminded argument can’t be all there is to it, since otherwise we would’ve proved CC≠P.  What the argument does show is that, if CC=P, then the encoding of a Boolean circuit into a comparator circuit (or maybe into a collection of such circuits) would need to be subtle and non-obvious: it would need to take computations with the potential for chaos, and reduce them to computations without that potential.

Once we understand this 1-Lipschitz business, we can also see it at work in the pebbles model.  Given a pebble program, suppose someone surreptitiously removed a single pebble from one of the initial piles.  For want of that pebble, could the whole kingdom be lost?  Not really.  Indeed, you can convince yourself that the output will be exactly the same as before, except that one output pile will have one fewer pebble than it would have otherwise.  The reason is again an induction: if you change x by 1, that affects at most one of ⌈x/2⌉ and ⌊x/2⌋ (and likewise, merging two piles affects at most one pile).

We now see the importance of the point I made earlier, about there being no facility in the piles model for “copying” a pile.  If we could copy piles, then the 1-Lipschitz property would fail.  And indeed, it’s not hard to show that in that case, we could implement AND, OR, and NOT gates with arbitrary fanout, and would therefore have a circuit-universal computer.  Likewise, if we could copy bits, then comparator circuits—which, recall, map (x,y) to (x∧y,x∨y)—would implement AND, OR, and NOT with arbitrary fanout, and would be circuit-universal.  (If you’re wondering how to implement NOT: one way to do it is to use what’s known in quantum computing as the “dual-rail representation,” where each bit b is encoded by two bits, one for b and the other for ¬b.  Then a NOT can be accomplished simply by swapping those bits.  And it’s not hard to check that comparator gates in a comparator circuit, and combining and splitting two piles in a pebble program, can achieve the desired updates to both the b rails and the ¬b rails when an AND or OR gate is applied.  However, we could also just omit NOT gates entirely, and use the fact that computing the output of even a monotone Boolean circuit is a P-complete problem under L-reductions.)
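
Here’s a toy Python sketch of the dual-rail trick (my own illustration—note that each rail is consumed exactly once per gate, so no fan-out has been smuggled in):

    def comparator(x, y):
        return x & y, x | y      # the only primitive we're allowed

    def encode(b):
        return (b, 1 - b)        # dual rail: carry both b and NOT b

    def NOT(rails):
        b, nb = rails
        return (nb, b)           # NOT is just a wire swap--no gate at all

    def AND(r1, r2):
        (a, na), (b, nb) = r1, r2
        out, _ = comparator(a, b)       # true rail: a AND b
        _, nout = comparator(na, nb)    # false rail: NOT(a AND b) = na OR nb
        return (out, nout)

    print(AND(encode(1), encode(1)), NOT(AND(encode(1), encode(0))))   # (1, 0) (1, 0)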

In summary, then, the inability to amplify small effects seems like an excellent candidate for the central reason why the power of comparator circuits and pebble programs hits a ceiling at CC, and doesn’t go all the way up to P.  It’s interesting, in this connection, that while transistors (and before them, vacuum tubes) can be used to construct logic gates, the original purpose of both of them was simply to amplify information: to transform a small signal into a large one.  Thus, we might say, comparator circuits and pebble programs fail to be computationally universal because they lack transistors or other amplifiers.

I’d like to apply exactly the same analysis to the Digi-Comp itself: that is, I’d like to say that the reason the Digi-Comp fails to be universal (unless CC=P) is that it, too, lacks the ability to amplify small effects (wherein, for example, the drop of a single ball would unleash a cascade of other balls).  In correspondence, however, David Deutsch pointed out a problem: namely, if we just watch a Digi-Comp in action, then it certainly looks like it has an amplification capability!  Consider, for example, the binary counter discussed earlier.  Suppose a column of ten toggles is in the configuration RRRRRRRRRR, representing the integer 1023.  Then the next ball to fall down will hit all ten toggles in sequence, resetting them to LLLLLLLLLL (and thus, resetting the counter to 0).  Why isn’t this precisely the amplification of a small effect that we were looking for?

Well, maybe it’s amplification, but it’s not of a kind that does what we want computationally.  One way to see the difficulty is that we can’t take all those “L” settings we’ve produced as output, and feed them as inputs to further gates in an arbitrary way.  We could do it if the toggles were arranged in parallel, but they’re arranged serially, so that flipping any one toggle inevitably has the potential also to flip the toggles below it.  Deutsch describes this as a “failure of composition”: in some sense, we do have a fan-out or copying operation, but the design of the Digi-Comp prevents us from composing the fan-out operation with other operations in arbitrary ways, and in particular, in the ways that would be needed to simulate any Boolean circuit.

So, what features could we add to the Digi-Comp to make it universal?  Here’s the simplest possibility I was able to come up with: suppose that, scattered throughout the device, there were balls precariously perched on ledges, in such a way that whenever one was hit by another ball, it would get dislodged, and both balls would continue downward.  We could, of course, chain several of these together, so that the two balls would in turn dislodge four balls, the four would dislodge eight, and so on.  I invite you to check that this would provide the desired fan-out gate, which, when combined with AND, OR, and NOT gates that we know how to implement (e.g., in the dual-rail representation described previously), would allow us to simulate arbitrary Boolean circuits.  In effect, the precariously perched balls would function as “transistors” (of course, painfully slow transistors, and ones that have to be laboriously reloaded with a ball after every use).

As a second possibility, Charles Leiserson points out to me that the Digi-Comp, as sold, has a few switches and toggles that can be controlled by other toggles.  Depending on exactly how one modeled this feature, it’s possible that it, too, could let us implement arbitrary fan-out gates, and thereby boost the Digi-Comp up to circuit-universality.

Open Problems

My personal favorite open problem is this:

What is the complexity of simulating a Digi-Comp II if the total number of balls dropped in is exponential, rather than polynomial?  (In other words, if the positive integer T, representing the number of balls, is encoded in binary rather than in unary?)

From the equivalence between the Digi-Comp and pebble programs, we can already derive a conclusion about the above problem that’s not intuitively obvious: namely, that it’s in P.  Or to say it another way: it’s possible to predict the exact state of a Digi-Comp with n toggles, after T balls have passed through it, using poly(n, log T) computation steps.  The reason is simply that, if there are T balls, then the total number of balls that pass through any given edge (the only variable we need to track) can be specified using log₂ T bits.  This, incidentally, gives us a second sense in which the Digi-Comp is not a universal computer: namely, even if we let the machine “run for exponential time” (that is, drop exponentially many balls into it), unlike a conventional digital computer it can’t possibly solve all problems in PSPACE, unless P=PSPACE.
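
Here’s that observation turned into a Python sketch: we propagate ball counts edge-by-edge through the DAG in topological order, rather than dropping balls one at a time (same succ/toggle encoding as my sketch earlier in the post; Python’s big integers make an exponentially large T painless):

    from collections import defaultdict

    def digicomp_counts(succ, toggle, source, sink, T):
        indeg = defaultdict(int)
        for v in succ:
            for w in set(succ[v]):
                indeg[w] += 1
        inflow = defaultdict(int)
        inflow[source] = T
        frontier = [source]
        while frontier:                    # Kahn-style topological order
            v = frontier.pop()
            y = inflow[v]
            if v in succ:
                left, right = succ[v]
                if left == right:          # the source's single outgoing edge
                    inflow[left] += y
                elif toggle[v] == 'L':     # ceil(y/2) balls exit the initial side
                    inflow[left] += (y + 1) // 2
                    inflow[right] += y // 2
                else:
                    inflow[right] += (y + 1) // 2
                    inflow[left] += y // 2
                for w in set(succ[v]):
                    indeg[w] -= 1
                    if indeg[w] == 0:
                        frontier.append(w)
        return inflow[sink] > 0

    succ = {'src': ('t', 't'), 't': ('sink', 'dead')}
    print(digicomp_counts(succ, {'t': 'L'}, 'src', 'sink', 10**100))   # True, instantly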

However, this situation also presents us with a puzzle: if we let DIGICOMP_EXP be the problem of simulating a Digi-Comp with an exponential number of balls, then it’s clear that DIGICOMP_EXP is hard for CC and contained in P, but we lack any information about its difficulty more precise than that.  At present, I regard both extremes—that DIGICOMP_EXP is in CC (and hence, no harder than ordinary DIGICOMP), and that it’s P-complete—as within the realm of possibility (along with the possibility that DIGICOMP_EXP is intermediate between the two).

By analogy, one can also consider comparator circuits where the entities that get compared are integers from 1 to T rather than bits—and one can then consider the power of such circuits, when T is allowed to grow exponentially.  In email correspondence, however, Steve Cook sent me a proof that such circuits have the same power as standard, Boolean comparator circuits.  It’s not clear whether this tells us anything about the power of a Digi-Comp with exponentially many balls.

A second open problem is to formalize the feature of Digi-Comp that Charles mentioned—namely, toggles and switches controlled by other toggles—and see whether, under some reasonable formalization, that feature bumps us up to P-completeness (i.e., to circuit-universality).  Personally, though, I confess I’d be even more interested if there were some feature we could add to the machine that gave us a larger class than CC, but that still wasn’t all of P.

A third problem is to pin down the power of Digi-Comps (or pebble programs, or comparator circuits) that are required to be planar.  While my experience with woodcarving is limited, I imagine that planar or near-planar graphs are a lot easier to carve than arbitrary graphs (even if the latter present no problems of principle).

A fourth problem has to do with the class CC in general, rather than the Digi-Comp in particular, but I can’t resist mentioning it.  Let CC_EXP be the complexity class that’s just like CC, but where the comparator circuit (or pebble program, or Digi-Comp) is exponentially large and specified only implicitly (that is, by a Boolean circuit that, given as input a binary encoding of an integer i, tells you the ith bit of the comparator circuit’s description).  Then it’s easy to see that PSPACE ⊆ CC_EXP ⊆ EXP.  Do we have CC_EXP = PSPACE or CC_EXP = EXP?  If not, then CC_EXP would be the first example I’ve ever seen of a natural complexity class intermediate between PSPACE and EXP.

Acknowledgments

I thank Charles Leiserson for bringing the Digi-Comp II to MIT, and thereby inspiring this “research.”  I also thank Steve Cook, both for giving a talk that first brought the complexity class CC to my attention, and for helpful correspondence.  Finally I thank David Deutsch for the point about composition.