Archive for the ‘Procrastination’ Category

The stupidest story I ever wrote (it was a long flight)

Friday, May 18th, 2018

All the legal maneuvers, the decades of recriminations, came down in the end to two ambiguous syllables.  No one knew why old man Memeson had named his two kids “Laurel” and “Yanny,” or why his late wife had gone along with it.  Not Laura, not Lauren, but Laurel—like, the leaves that the complacent rest on?  Poor girl.  And yet she lucked out compared to her younger brother. “Yanny”? Rhymes with fanny, seriously?  If you got picked on in school half as much as Yanny did, you too might grow up angry enough to spend half your life locked in an inheritance fight.

But people mostly tolerated the old man’s eccentricities, because he clearly knew something. All through the 1930s, Memeson Audio was building the highest-end radios and record players that money could buy.  And long after he’d outdone the competition, Memeson continued to outdo himself. At the 1939 New York World’s Fair, he proudly unveiled a prototype of his finest record player yet, the one he’d been tinkering with in his personal workshop for a decade: the Unmistakable.  Interviewed about it later, people who attended the demo swore that you couldn’t mishear a single syllable that came out of the thing if you were 99% deaf. No one had ever heard a machine like it—or would, perhaps, until the advent of digital audio.  On Internet forums, audiophiles still debate how exactly Memeson managed to do it with the technology of the time.  Alas, just like the other Memeson debate—about which more shortly—this one might continue indefinitely, since only one Unmistakable was ever built, and that World’s Fair was the last time anyone heard it.

The day after the triumphant demonstration, a crowd cheered as Memeson boarded a train in Grand Central Station to return to his factory near Chicago, there to supervise the mass production of Unmistakables. Meanwhile Laurel and Yanny, now both in their thirties and helping to run the family firm, stood on the platform and beamed. It hadn’t been easy to grow up with such a singleminded father, one who seemed to love his radios a million times more than them, but at a moment like this, it almost felt worth it.  When Laurel and Yanny returned to the Fair to continue overseeing the Memeson Audio exhibition, they’d be the highest-ranking representatives of the company, and would bask in their old man’s reflected glory.

In biographies, Memeson is described as a pathological recluse, who’d hole himself up in his workshop for days at a time, with strict orders not to be disturbed by anyone.  But on this one occasion—as it turned out, the last time he’d ever be seen in public—Memeson was as hammy as could be.  As the train pulled out of Grand Central, he leaned out of an open window in his private car and grinned for the cameras, waving with one arm and holding up the Unmistakable with the other.

Every schoolchild knows what happened next: the train derailed an hour later.  Along with twenty other passengers, Memeson was killed, while all that remained of his Unmistakable was a mess of wires and splintered wood.

Famously, there was one last exchange. As the train began moving, a journalist waved his hat at Memeson and called out “safe travels, sir!”

Memeson smiled and tipped his hat.

Then, noticing Laurel and Yanny on the platform, the journalist yelled to Memeson, in jest (or so he thought): “if something happens, which of these two is next in line to run the business?”

The old man had never been known for his sense of humor, and seemed from his facial expression (or so witnesses would later say) to treat the question with utmost seriousness. As the train receded into the distance, he shouted—well, everyone agrees that it was two syllables. But which? With no written will to consult—one of Memeson’s many idiosyncrasies was his defiance of legal advice—it all came down to what people heard, or believed, or believed they heard.

On the one hand, it would of course have been extremely unusual back then for a woman to lead a major technology firm. And Memeson had never shown the slightest interest in social causes: not women’s suffrage, not the New Deal, nothing. In court, Yanny’s lawyers would press these points, arguing that the old man couldn’t possibly have intended to pass on his empire to a daughter.

On the other hand, Laurel was his first-born child.  And some people said that, if Memeson had ever had a human connection with anyone, it was with her.  There were even employees who swore that, once in a while, Laurel was seen entering and leaving her dad’s workshop—a privilege the old man never extended to Yanny or anyone else. Years later, Laurel would go so far as to claim that, during these visits, she’d contributed crucial ideas to the design of the Unmistakable. Most commentators dismiss this claim as bluster: why would she wait to drop such a bombshell until she and Yanny had severed their last ties, until both siblings’ only passion in life was to destroy the other, to make the world unable to hear the other’s name?

At any rate, neither Laurel nor anyone else was ever able to build another Unmistakable, or to give a comprehensible account of how it worked.  But Laurel certainly has die-hard defenders to this day—and while I’ve tried to be evenhanded in this account, I confess to being one of them.

In the end, who people believed about this affair seemed to come down to where they stood—literally. Among the passengers in the train cars adjoining Memeson’s, the ones who heard him are generally adamant that they heard “Laurel”; while most who stood on the platform are equally insistent about “Yanny.”  Today, some Memeson scholars theorize that this discrepancy is due to a Doppler effect.  People on the platform would’ve heard a lower pitch than people comoving with Memeson, and modern reconstructions raise the possibility, however farfetched, that this alone could “morph” one name to the other.  If we accept this, then it suggests that Memeson himself would have intended “Laurel”—but pitch changing a word?  Really?
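
For the quantitatively inclined, here’s a toy back-of-the-envelope calculation of how much Doppler shift a receding train could actually produce. (This is my own sketch; the train speed is an assumption, since the story doesn’t give one.)

    import math

    v_sound = 343.0   # speed of sound in air (m/s, at roughly 20 C)
    v_train = 30.0    # assumed speed of the receding train (m/s, ~67 mph)

    # Doppler shift for a source receding from a stationary listener:
    #   f_platform = f_source * v_sound / (v_sound + v_train)
    ratio = v_sound / (v_sound + v_train)
    semitones = 12 * math.log2(ratio)
    print(f"pitch ratio {ratio:.3f}  ({semitones:+.2f} semitones)")
    # ~0.92, about 1.4 semitones flat: a real shift, though asking it to
    # morph one name into another is asking a lot.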

Today, Laurel and Yanny are both gone, like their father and his company, but their dispute is carried on by their children and grandchildren, with several claims still winding their way through the courts.

Are there any recordings from the platform?  There is one, which was lost for generations before it unexpectedly turned up again. Alas, any hopes that this recording would definitively resolve the matter were … well, just listen to the thing.  Maybe the audio quality isn’t good enough.  Maybe an Unmistakable recording, had it existed, would’ve revealed the observer-independent truth, given us a unique map from the sensory world to the world of meaning.

30 of my favorite books

Wednesday, March 28th, 2018

A reader named Shozab writes:

Scott, if you had to make a list of your favourite books, which ones would you include?
And yes, you can put in quantum computing since Democritus!

Since I’ve gotten the same request before, I guess this is as good a time as any.  My ground rules:

  • I’ll include works only because I actually read them and they had a big impact on me at some point in my life—not because I feel abstractly like they’re important or others should read them, or because I want to be seen as the kind of person who recommends them.
  • But not works that impacted me before the age of about 10, since my memory of childhood reading habits is too hazy.
  • To keep things manageable, I’ll include at most one work per author.  My choices will often be idiosyncratic—i.e., not that author’s “best” work.  However, it’s usually fair to assume that if I include something by X, then I’ve also read and enjoyed other works by X, and that I might be including this work partly just as an entry point into X’s oeuvre.
  • In any case where the same author has both “deeper” and more “accessible” works, both of which I loved, I’ll choose the more accessible.  But rest assured that I also read the deeper work. 🙂
  • This shouldn’t need to be said, but since I know it does: listing a work by author X does not imply my agreement with everything X has ever said about every topic.
  • The Bible, the Homeric epics, Plato, and Shakespeare are excluded by fiat.  They’re all pretty important (or so one hears…), and you should probably read them all, but I don’t want the responsibility of picking and choosing from among them.
  • No books about the Holocaust, or other unremittingly depressing works like 1984.  Those are a special category to themselves: I’m glad that I read them, but would never read them twice.
  • The works are in order of publication date, with a single exception (see if you can spot it!).

Without further ado:

Quantum Computing Since Democritus by Scott Aaronson

Dialogue Concerning the Two Chief World Systems by Galileo Galilei

Dialogues Concerning Natural Religion by David Hume

Narrative of the Life of Frederick Douglass by himself

The Adventures of Huckleberry Finn by Mark Twain

The Subjection of Women by John Stuart Mill

The Autobiography of Charles Darwin by himself

Altneuland by Theodor Herzl

The Practice and Theory of Bolshevism by Bertrand Russell

What Is Life?: With Mind and Matter and Autobiographical Sketches by Erwin Schrödinger

Fads and Fallacies in the Name of Science by Martin Gardner

How Children Fail by John Holt

Set Theory and the Continuum Hypothesis by Paul Cohen

The Gods Themselves by Isaac Asimov (specifically, the middle third)

A History of Pi by Petr Beckmann

The Selfish Gene by Richard Dawkins

The Mind-Body Problem by Rebecca Goldstein

Alan Turing: The Enigma by Andrew Hodges

Surely You’re Joking, Mr. Feynman! by Richard Feynman

The Book of Numbers by John Conway and Richard Guy

The Demon-Haunted World by Carl Sagan

Gems of Theoretical Computer Science by Uwe Schöning and Randall Pruim

Fashionable Nonsense by Alan Sokal and Jean Bricmont

Our Dumb Century by The Onion

Quantum Computation and Quantum Information by Michael Nielsen and Isaac Chuang

The Blank Slate by Steven Pinker

Field Notes from a Catastrophe by Elizabeth Kolbert

Infidel by Ayaan Hirsi Ali

Logicomix by Apostolos Doxiadis and Christos Papadimitriou

The Beginning of Infinity by David Deutsch

You’re welcome to argue with me in the comments, e.g., by presenting evidence that I didn’t actually like these books. 🙂  More seriously: list your own favorites, discuss your reactions to these books, be a “human recommendation engine” by listing books that “those who liked the above would also enjoy,” whatever.

Addendum: Here’s a bonus list of twenty more books, as I remember more and as commenters remind me of others that I liked quite as much as the thirty above.

The Man Who Knew Infinity by Robert Kanigel

A Mathematician’s Apology by G. H. Hardy

A Confederacy of Dunces by John Kennedy Toole

The First Three Minutes by Steven Weinberg

Breaking the Code by Hugh Whitemore

Arcadia by Tom Stoppard

Adventures of a Mathematician by Stanislaw Ulam

The Man Who Loved Only Numbers by Paul Hoffman

Mathematical Writing by Donald Knuth, Tracy Larabee, and Paul Roberts

A Beautiful Mind by Sylvia Nasar

An Introduction to Computational Learning Theory by Michael Kearns and Umesh Vazirani

The Road to Reality by Roger Penrose

The Nili Spies by Anita Engle (about the real-life heroic exploits of the Aaronsohn family)

Artificial Intelligence: A Modern Approach by Stuart Russell and Peter Norvig

The Princeton Companion to Mathematics edited by Timothy Gowers

The Making of the Atomic Bomb by Richard Rhodes

Fear No Evil by Natan Sharansky

The Mind’s I by Douglas Hofstadter and Daniel Dennett

Disturbing the Universe by Freeman Dyson

Unsong by Scott Alexander

Interpretive cards (MWI, Bohm, Copenhagen: collect ’em all)

Saturday, February 3rd, 2018

I’ve been way too distracted by actual research lately from my primary career as a nerd blogger—that’s what happens when you’re on sabbatical.  But now I’m sick, and in no condition to be thinking about research.  And this morning, in a thread that had turned to my views on “QBism,” an interpretation of quantum mechanics, regular commenter Atreat asked me the following pointed question:

Scott, what is your preferred interpretation of QM? I don’t think I’ve ever seen you put your cards on the table and lay out clearly what interpretation(s) you think are closest to the truth. I don’t think your ghost paper qualifies as an answer, BTW. I’ve heard you say you have deep skepticism about objective collapse theories and yet these would seemingly be right up your philosophical alley so to speak. If you had to bet on which interpretation was closest to the truth, which one would you go with?

Many people have asked me some variant of the same thing.  As it happens, I’d been toying since the summer with a huge post about my views on each major interpretation, but I never quite got it into a form I wanted.  By contrast, it took me only an hour to write out a reply to Atreat, and in the age of social media and attention spans measured in attoseconds, many readers will probably prefer that short reply to the huge post anyway.  So then I figured, why not promote it to a full post and be done with it?  So without further ado:


Dear Atreat,

It’s no coincidence that you haven’t seen me put my cards on the table with a favored interpretation of QM!

There are interpretations (like the “transactional interpretation”) that make no sense whatsoever to me.

There are “interpretations” like dynamical collapse that aren’t interpretations at all, but proposals for new physical theories.  By all means, let’s test QM on larger and larger systems, among other reasons because it could tell us that some such theory is true or—vastly more likely, I think—place new limits on it! (People are trying.)

Then there’s the de Broglie–Bohm theory, which does lay its cards on the table in a very interesting way, by proposing a specific evolution rule for hidden variables (chosen to match the predictions of QM), but which thereby opens itself up to the charge of non-uniqueness: why that rule, as opposed to a thousand other rules that someone could write down?  And if they all lead to the same predictions, then how could anyone ever know which rule was right?
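
(For concreteness: in the simplest single-particle case, the standard guiding rule is dQ/dt = (ℏ/m) Im(∇ψ/ψ), evaluated at the hidden position Q. The non-uniqueness complaint is that infinitely many alternative velocity fields would reproduce the Born-rule statistics just as well.)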

And then there are dozens of interpretations that seem to differ from one of the “main” interpretations (Many-Worlds, Copenhagen, Bohm) mostly just in the verbal patter.

As for Copenhagen, I’ve described it as “shut-up and calculate except without ever shutting up about it”!  I regard Bohr’s writings on the subject as barely comprehensible, and Copenhagen as less of an interpretation than a self-conscious anti-interpretation: a studied refusal to offer any account of the actual constituents of the world, and—most of all—an insistence that if you insist on such an account, then that just proves that you cling naïvely to a classical worldview, and haven’t grasped the enormity of the quantum revolution.

But the basic split between Many-Worlds and Copenhagen (or better: between Many-Worlds and “shut-up-and-calculate” / “QM needs no interpretation” / etc.), I regard as coming from two fundamentally different conceptions of what a scientific theory is supposed to do for you.  Is it supposed to posit an objective state for the universe, or be only a tool that you use to organize your experiences?

Also, are the ultimate equations that govern the universe “real,” while tables and chairs are “unreal” (in the sense of being no more than fuzzy approximate descriptions of certain solutions to the equations)?  Or are the tables and chairs “real,” while the equations are “unreal” (in the sense of being tools invented by humans to predict the behavior of tables and chairs and whatever else, while extraterrestrials might use other tools)?  Which level of reality do you care about / want to load with positive affect, and which level do you want to denigrate?

This is not like picking a race horse, in the sense that there might be no future discovery or event that will tell us who was closer to the truth.  I regard it as conceivable that superintelligent AIs will still argue about the interpretation of QM … or maybe that God and the angels argue about it now.

Indeed, about the only thing I can think of that might definitively settle the debate, would be the discovery of an even deeper level of description than QM—but such a discovery would “settle” the debate only by completely changing the terms of it.

I will say this, however, in favor of Many-Worlds: it’s clearly and unequivocally the best interpretation of QM, as long as we leave ourselves out of the picture!  I.e., as long as we say that the goal of physics is to give the simplest, cleanest possible mathematical description of the world that somewhere contains something that seems to correspond to observation, and we’re willing to shunt as much metaphysical weirdness as needed to those who worry themselves about details like “wait, so are we postulating the physical existence of a continuum of slightly different variants of me, or just an astronomically large finite number?” (Incidentally, Max Tegmark’s “mathematical multiverse” does even better than MWI by this standard.  Tegmark is the one waiting for you all the way at the bottom of the slippery slope of always preferring Occam’s Razor over trying to account for the specificity of the observed world.)  It’s no coincidence, I don’t think, that MWI is so popular among those who are also eliminativists about consciousness.

When I taught my undergrad Intro to Quantum Information course last spring—for which lecture notes are coming soon, by the way!—it was striking how often I needed to resort to an MWI-like way of speaking when students got confused about measurement and decoherence. (“So then we apply this unitary transformation U that entangles the system and environment, and we compute a partial trace over the environment qubits, and we see that it’s as if the system has been measured, though of course we could in principle reverse this by applying U⁻¹ … oh shoot, have I just conceded MWI?”)
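
If anyone wants to see that blackboard moment in the smallest possible case, here’s a toy NumPy sketch (mine, not anything from the actual course materials): one system qubit, one environment qubit, and a CNOT playing the role of U.

    import numpy as np

    # One system qubit starts in |+> = (|0>+|1>)/sqrt(2); one environment
    # qubit starts in |0>.  The joint state is indexed as 2*system + env.
    plus = np.array([1.0, 1.0]) / np.sqrt(2)
    env0 = np.array([1.0, 0.0])
    psi = np.kron(plus, env0)

    # The "measurement-like" unitary U: a CNOT copying the system qubit
    # into the environment qubit.
    U = np.array([[1, 0, 0, 0],
                  [0, 1, 0, 0],
                  [0, 0, 0, 1],
                  [0, 0, 1, 0]], dtype=float)

    def system_density_matrix(state):
        rho = np.outer(state, state.conj()).reshape(2, 2, 2, 2)
        return np.einsum('iaja->ij', rho)   # partial trace over the environment

    print(system_density_matrix(psi))       # coherent: every entry 0.5
    psi = U @ psi
    print(system_density_matrix(psi))       # "measured": diag(0.5, 0.5)
    psi = U @ psi                           # but U is its own inverse...
    print(system_density_matrix(psi))       # ...and the coherence comes back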

On the other hand, when (at the TAs’ insistence) we put an optional ungraded question on the final exam that asked students their favorite interpretation of QM, we found that there was no correlation whatsoever between interpretation and final exam score—except that students who said they didn’t believe any interpretation at all, or that the question was meaningless or didn’t matter, scored noticeably higher than everyone else.

Anyway, as I said, MWI is the best interpretation if we leave ourselves out of the picture.  But you object: “OK, and what if we don’t leave ourselves out of the picture?  If we dig deep enough on the interpretation of QM, aren’t we ultimately also asking about the ‘hard problem of consciousness,’ much as some people try to deny that? So for example, what would it be like to be maintained in a coherent superposition of thinking two different thoughts A and B, and then to get measured in the |A⟩+|B⟩, |A⟩-|B⟩ basis?  Would it even be like anything?  Or is there something about our consciousness that depends on decoherence, irreversibility, full participation in the arrow of time, not living in an enclosed little unitary box like AdS/CFT—something that we’d necessarily destroy if we tried to set up a large-scale interference experiment on our own brains, or any other conscious entities?  If so, then wouldn’t that point to a strange sort of reconciliation of Many-Worlds with Copenhagen—where as soon as we had a superposition involving different subjective experiences, for that very reason its being a superposition would be forevermore devoid of empirical consequences, and we could treat it as just a classical probability distribution?”

I’m not sure, but The Ghost in the Quantum Turing Machine will probably have to stand as my last word (or rather, last many words) on those questions for the time being.

Practicing the modus ponens of Twitter

Monday, January 29th, 2018

I saw today that Ryan Lackey generously praised my and Zach Weinersmith’s quantum computing SMBC comic on Twitter:

Somehow this SMBC comic is the best explanation of quantum computing for non-professionals that I’ve ever found

To which the venture capitalist Matthew Ocko replied, in another tweet:

Except Scott Aaronson is a surly little troll who has literally never built anything at all of meaning. He’s a professional critic of braver people.  So, no, this is not a good explanation – anymore than Jeremy Rifkin on CRISPR would be… 🙄

Now, I don’t mind if Ocko hates me, and also hates my and Zach’s comic.  What’s been bothering me is just the logic of his tweet.  Like: what did he have in his head when he wrote the word “So”?  Let’s suppose for the sake of argument that I’m a “surly little troll,” and an ax murderer besides.  How does it follow that my explanation of quantum computing wasn’t good?  To reach that spot in proposition-space, wouldn’t one still need to point to something wrong with the explanation?

But I’m certain that my inability to understand this is just another of my many failings.  In a world where Trump is president, bitcoin is valued at $11,000 when I last checked, and the attack-tweet has fully replaced the argument, it’s obvious that those of us who see a word like “so” or “because,” and start looking for the inferential step, are merely insufficiently brave.  For godsakes, I’m not even on Twitter!  I’m a sclerotic dinosaur who needs to get with the times.

But maybe I, too, could learn the art of the naked ad-hominem.  Let me try: from a Google search, we learn that Ocko is an enthusiastic investor in D-Wave.  Is it possible he’s simply upset that there’s so much excitement right now in experimental quantum computing—including “things of meaning” being built by brave people, at Google and IBM and Rigetti and IonQ and elsewhere—but that virtually none of this involves D-Wave, whose devices remain interesting from various physics and engineering standpoints, but still fail to achieve any clear quantum speedups, just as the professional critics predicted?  Is he upset that the brave system-builders who are racing finally to achieve quantum computational supremacy over the next year, are the ones who actually interacted with academic researchers (sorry: surly little trolls), and listened to what they said?  Who understood, for example, why scaling up to 50+ qubits only made a lot of sense once you had one or two qubits that at least behaved well enough in isolation—which, after years of heroic effort, many of these system-builders now do?

How’d I do?  Was there still too much argument there for the world of 2018?

Googatory

Thursday, December 7th, 2017

When I awoke with glowing, translucent hands, and hundreds of five-pointed yellow stars lined up along the left of my visual field, my first thought was that a dream must have made itself self-defeatingly obvious. I was a 63-year-old computer science professor. I might’ve been dying of brain cancer, but my mind was lucid enough that I’d refused hospice care, lived at home, still even met sometimes with my students, and most importantly: still answered my email, more or less. I could still easily distinguish dreams from waking reality. Couldn’t I?

I stared at the digital clock beside my bed: 6:47am. After half a minute it changed to 6:48. No leaping around haphazardly. I picked up the two-column conference paper by my nightstand. “Hash-and-Reduce: A New Approach to Distributed Proximity Queries in the Cloud.” I scanned the abstract and first few paragraphs. It wasn’t nonsense—at least, no more so than the other papers that I still sometimes reviewed. The external world still ticked with clockwork regularity. This was no dream.

Nervously, I got up. I saw that my whole body was glowing and translucent. My pajamas, too. A second instance of my body, inert and not translucent, remained in the bed. I looked into the mirror: I had no reflection. The mirror showed a bedroom unoccupied but for the corpse on the bed.

OK, so I was a ghost.

Just then I heard my nurse enter through the front door. “Bob, how you feeling this morning?” I met her in the foyer. “Linda, look what happened! I’m a ghost now, but interestingly enough, I can still…”

Linda walked right through me and into the bedroom. She let out a small gasp when she saw the corpse, then started making phone calls.

Over the following days, I accompanied my body to the morgue. I attended my little memorial session at the university, made note of which of my former colleagues didn’t bother to show up. I went to my funeral. At the wake, I stood with my estranged wife and grown children, who mostly remained none the wiser—except when they talked about how eerie it was, how it felt like I was still there with them. Or maybe I’d say something, and get no response from my family, but then five minutes later their conversation would mysteriously veer toward the topic I’d broached. It seemed that I still had full input from the world of the living, but that my only output channel was doing spooky haunted things that still maintained plausible deniability about my existence.

Questions flooded my mind: were there other ghosts? Why was I in this purgatory … or whatever it was? Would I be here forever? And: what was that column of yellow stars in the left of my visual field, the stars that followed me everywhere?

Once it seemed clear that I was here to stay, for some definition of “here,” I figured I might as well do the same stuff that filled my waking hours when I was alive. I pulled up a chair and sat at my laptop. I hit up The Washington Post, The Onion, xkcd, SMBC Comics, Slate Star Codex. They all worked fine.

Then I switched to the Gmail tab. Hundreds of new messages. Former students asking for recommendation letters, prospective students wanting to work with me, grant managers howling about overdue progress reports, none of them bothering to check if I was dead.

I replied to one randomly-chosen email:

Dear Ashish,
Thanks for your interest in joining our group. Alas, I’m currently dead and walking the earth as a translucent wraith. For that reason, I’m unable to take on new PhD students at this time.
Best of luck!
–Bob

I clicked “Send” and—part of me was expecting this—got an error. Message not sent. Email couldn’t cross the barrier from the dead to the living: too obvious.

Next I opened my “Starred” folder. I was greeted by 779 starred messages: each one a pressing matter that I’d promised myself I’d get to while alive but didn’t.

Dear Bob,
Hope you’re well. I think I’ve found another error in your 2002 paper ‘Cache-Oblivious Approximation Algorithms for Sparse Linear Algebra on Big Data.’ Specifically, in the proof of Lemma 4.2, you assume a spectral bound [har har, spectral], even though your earlier definition of the matrix A_i seems to allow arbitrary norm…

I chuckled. Well, I did spend most of my life on this stuff, didn’t I? Shouldn’t I sort this out, just for the sake of my intellectual conscience?

I opened up my old paper in Ghostview (what else?) and found the offending lemma. Then I took out pen and paper—they worked, luckily, although presumably my scribblings remained invisible to the living—and set to work. After an hour, I’d satisfied myself that the alleged error was nothing too serious, just a gap requiring a few sentences of clarification. I sadly had no direct way to tell my years-ago correspondent that, assuming the correspondent was still even alive and research-active and at the same email address. But still: good for my peace of mind, right?

Then something happened: the first intimation of what my life, or rather undeath, was to consist of from then on. Faintly but unmistakably, one of the tiny yellow stars in the left of my visual field became a blue-gray outline. It was no longer filled with yellow.

Excitedly, I clicked through more starred emails. Some I saw no easy way to deal with. But every time I could satisfy myself that an email was no longer relevant—whether it was an invitation to a long-ago workshop, a grant that I never applied for, a proposed research collaboration rendered moot by subsequent work—one of those yellow stars in my visual field lost its yellow filling. Before long there were ten blue-gray outline stars, then twenty.

One day, while I invisibly attended an old haunt (har har)—the weekly faculty lunch in my former department—I encountered a fellow ghost: a former senior colleague of mine, who’d died twenty years prior. He and I got to talking.

For the most part, my fellow specter confirmed what I’d already guessed. Yes, in some long-ago past, purgatory no doubt had a different character. Yes, it’s no doubt different for others, who lived different lives and faced different psychic burdens. For us, though, for the faculty, purgatory is neither more nor less than the place where you must reply to every last email that was still starred “important” when you died.

In the afterlife, it turns out, it doesn’t matter how “virtuous” you were, unless altruism happens to have been your obsession while alive. What matters is just that you free yourself from whatever burdened you every night when you went to sleep, that you finish what you started. Those unable to do so remain ghosts forever.

“So,” I asked the other polter-guest at the faculty lunch, “how long does it take a professor to finish answering a lifetime’s worth of emails?”

“Depends. I’ve been doing it for twenty years.  Hoping to finish in twenty more.”

“I see. And when you’ve dealt with the last email, what then?”

“You pass to another place. None of us know exactly where. But”—and here his voice dropped to a whisper, as if anyone else present could hear ghosts—“it’s said to be a place of breathtaking tranquility. Where researchers like us wear flowing robes, and sit under olive trees, and contemplate truth and beauty with Plato and Euclid, and then go out for lunch buffet. Where there’s no email, no deadlines, no journals, no grant applications, no responsibilities but one: to explore whatever has captured your curiosity in the present moment. Some call it the Paradise of Productivity.”

“Does everyone have to pass through purgatory first, before they go there?”

“It’s said that, among all the computer scientists who’ve lived, only Alan Turing went straight to Paradise. And he died before email was even invented. When his time comes, Donald Knuth might also escape purgatory, since he forswore email in 1990. But Knuth, alas, might spend tens of thousands of years in a different purgatory, finishing Volume 4 of The Art of Computer Programming.

“As for the rest of us, we all spend more or less time here with our wretched emails—for most of us, more. For one computer scientist—an Umesh Vazi-something, I believe, from Berkeley—it’s rumored that when he enters this place, even a trillion years won’t suffice to leave it. It’s said that the Sun will swallow the Earth, the night sky will go dark, and yet there Umesh will be, still clearing his inbox.”

After a few years, I’d knocked off all the easy stuff in my Starred folder. Then, alas, I was left with missives like this:

Hey, earth to Bob!
The rest of us have done our part in writing up the paper. We’re all waiting on you to integrate the TeX files, and to craft an introduction explaining why anyone cared about the problem in the first place. Also, would you mind making a detailed pass through Sections 4.3 and 5.2?

Ugh. There were so many slightly different TeX files. Which were the most recent? This could take a while.

Nevertheless, after weeks of … ghosting on the project, I got to work revising the paper. There was, of course, the practical difficulty that I couldn’t directly communicate my edits back to the world of the living. Fortunately, I could still do haunted stuff. One day, for example, one of my former coauthors opened her old TeX file, and “discovered” that I’d actually done way more work on the paper while I was alive than anyone remembered I had. The mysteries of when exactly I did that work, and why no one knew about it at the time, were never satisfactorily resolved.

Finally, after fourteen years, I’d succeeded in putting to rest 731 of my 779 starred emails. In the corner of my visual field was a vast array of blue-gray stars—but still, ominously, 48 yellow stars scattered among them.

“God in Heaven!” I cried. “Whoever you are! I can’t handle any of the remaining starred emails, and thereby pass to the Paradise of Productivity, without sending replies back into the world of the living. Please, I beg you: let me breach this metaphysical wall.”

A booming voice came down from on high. “YEA, BOB, WHAT THOU REQUESTETH IS POSSIBLE.  THOU WOULDST NOT EVEN BE THE FIRST GHOUL FOR WHOM I WOULDST GRANTETH THIS REQUEST: FAR FROM IT.  BUT I MUST WARN THEE: BREACHING THE WALL BETWEEN LIVING AND DEAD WILL BRINGETH FRUITS THAT THOU MAYST NOT LIKE.”

“I think I’ll take my chances with those fruits.”

“VERY WELL,” said God.

And that’s how it is that, half a century after my death, I remain in purgatory still, my days now filled with missives like the following:

Dear Bob,
Thanks for the reply! I’m sorry to hear that you’re now a ghost condemned to answer emails before he can pass to the next world. My sympathies. Having said that, I have to confess that I still don’t understand Section 4.2 of your paper. When you get a chance, could you please clarify? I’ve cc’ed my coauthors, who might have additional followup questions.


Note: To anyone who emailed me lately, I apologize for the delay in replying. I was writing this story. –SA

The Karp-Lipton Advice Column

Wednesday, November 15th, 2017

Today, Shtetl-Optimized is extremely lucky to have the special guest blogger poly: the ‘adviser’ in the computational complexity class P/poly (P with polynomial-sized advice string), defined by Richard Karp and Richard Lipton in 1982.

As an adviser, poly is known for being infinitely wise and benevolent, but also for having a severe limitation: namely, she’s sensitive only to the length of her input, and not to any other information about it.  Her name comes from the fact that her advice is polynomial-size—which is what prevents her from simply listing the answers to every possible question in a gigantic lookup table, the way she’d like to.
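
For readers who’d like poly’s handicap spelled out in code, here’s a purely illustrative toy sketch (the class is really defined in terms of Turing machines, and the advice table below is invented for the example):

    # Toy advice-taker: one fixed advice string per input length n,
    # chosen before any actual question arrives.
    ADVICE = {   # invented, purely hypothetical advice table
        11: "file for Chapter 11",
        221: "1110101001000100101011111100101111010110...",  # fixed "random" bits
    }

    def poly_answers(question: str) -> str:
        n = len(question)
        advice = ADVICE.get(n, "")   # depends on the length n and nothing else
        # A real P/poly machine would now run a polynomial-time computation
        # on (question, advice); poly herself only ever sees n.
        return f"My advice for any {n}-character question: {advice}"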

Without further ado, let’s see what advice poly is able to offer her respondents.


Dear poly,

When my husband and I first started dating, we were going at it like rabbits!  Lately, though, he seems to have no interest in sex.  That’s not normal for a guy, is it?  What can I do to spice things up in the bedroom?

Sincerely,
Frustrated Wife

Dear Frustrated Wife,

Unfortunately, I don’t know exactly what your question is.  All I was told is that the question was 221 characters long.  But here’s something that might help: whenever you’re stuck in a rut, sometimes you can “shake things up” with the use of randomness.  So, please accept, free of charge, the following string of 221 random bits:

111010100100010010101111110010111101011010001
000111100101000111111011101110100110000110100
0010010010000010110101100100100111000010110
111001011001111111101110100010000010100111000
0111101111001101001111101000001010110101101

Well, it’s not really “random,” since everyone else with a 221-character question would’ve gotten the exact same string.  But it’s random enough for many practical purposes.  I hope it helps you somehow … good luck!

Sincerely,
poly


Dear poly,

I’m a 29-year-old autistic male: a former software entrepreneur currently worth about $400 million, who now spends his time donating to malaria prevention and women’s rights in the developing world.  My issue is that I’ve never been on a date, or even kissed anyone.  I’m terrified to make an advance.  All I read in the news is an endless litany of male sexual misbehavior: Harvey Weinstein, Louis C. K., Leon Wieseltier, George H. W. Bush, Roy Moore, the current president (!), you name it.  And I’m consumed by the urge not to be a pig like those guys.  Like, obviously I’m no more likely to start stripping or masturbating or something in front of some woman I just met, than I am to morph into a koala bear.  But from reading Slate, Salon, Twitter, my Facebook news feed, and so forth, I’ve gotten the clear sense that there’s nothing I could do that modern social mores would deem appropriate and non-creepy—at least, not a guy like me, who wasn’t lucky enough to be born instinctively understanding these matters.  I’m grateful to society for enabling my success, and have no desire to break any of its written or unwritten rules.  But here I genuinely don’t know what society wants me to do.  I’m writing to you because I remember you from my undergrad CS classes—and you’re the only adviser I ever encountered whose advice could be trusted unconditionally.

Yours truly,
Sensitive Nerd

Dear Sensitive Nerd,

I see that your letter is 1369 characters long.  Based on that, here are a few things I can tell you that might be helpful:

  • The Riemann Hypothesis is true.
  • ZFC set theory is consistent.
  • The polynomial hierarchy is contained in PP.

Write me a 3592-character letter the next time, and I’ll give you an even longer list of true mathematical statements!  (I actually know how to solve the halting problem—no joke!—but am condemned to drip, drip, drip out the solutions, a few per input length.)

But I confess: no sooner did I list these truths than I reflected that they, or even a longer list, might not help much with your problem, whatever it might have been.  It’s even possible to have a problem for which no amount of truth helps in solving it.  So, I dunno: maybe try not worrying so much, and write back to let me know if that helped?  (Not that I expect to understand your reply, or would be able to change any of my advice at this point even if I did.)

Good luck!
–poly


Dear poly,

c34;c’y9v3x

Sincerely,
Unhappy in Unary

Dear Unhappy in Unary,

Finally, someone who writes to me in a language I can understand!  Your question is 11 characters long.  I understand that to be a code expressing that you’re bankrupt, and are filing for Chapter 11 bankruptcy protection.  Financial insolvency isn’t easy for anyone.  But here’s some advice: put everything you have into Bitcoin, and sell out a year from now.  Unfortunately, I don’t know exactly when you’re writing to me, but at least at the time my responses were hardwired in, this was some damn good advice.

You’re welcome,
poly


poly’s polynomial-sized advice column is syndicated in newspapers nationwide, and can also be accessed by simply moving your tape head across your advice tape. You’re welcome to comment on this post, but I might respond only to the lengths of the comments, rather than anything else about them. –SA

The problem with Uber

Thursday, October 19th, 2017

I just spent a wonderful and exhausting five days in the Bay Area: meeting friends, holding the first-ever combined SlateStarCodex/Shtetl-Optimized meetup, touring quantum computing startups, meeting with Silicon Valley folks about quantum computing, and giving a public lecture for the Simons Institute in Berkeley.  I’ll probably say more about some of these events in future posts, but for now: thanks so much to everyone who helped them happen!

Alas, my experiences getting around the Bay this week convinced me that there’s a real problem with Uber.  And no, I’m not talking about their corporate culture, or the personality of ousted CEO Travis Kalanick, or the hardball lobbying of municipalities to allow ride-sharing, or the taxi companies needing to adapt to survive, or even Uber having an unsustainable business model (they could charge more and I’d still use it…).

The problem is: when you order an Uber, like 2/3 of the time you and the driver can’t find each other without a lot of back and forth.

Firstly, because you can’t specify where you are with enough accuracy.  When you try, the app does this thing where it literally moves the “you are here” pointer to a place where you’re not. And then, even if the little dot correctly indicates your location, for some reason the driver will think you’re somewhere totally different.

Secondly, because Uber cars are typically unmarked.  Yes, the app tells you that it’s a white Ford or whatever—but there are a lot of white cars, and it’s hard (at least for me) to distinguish models at a distance, so you can then face a stressful “Where’s Waldo?” problem involving hundreds of cars.

Thirdly, because the drivers understandably have their phones mounted on their dashboards—the result being that, when you call to try to figure out where they are, nothing they say can be distinguished from “mmph hrmph mmph.”  And of course they can’t text while driving.

To be clear, these gripes arise only because ride-sharing apps generally work so damn well, and are such an advance over what preceded them, that they’ve changed our expectations about the convenience of getting from place to place.  Because of Uber and Lyft and so on, it’s tempting to plan your life around the assumption that you can be anywhere in a greater metro area, and within 3 minutes a car will magically arrive to take you to wherever else in that area you need to be—while your brain remains uncluttered with transportation logistics, among the most excruciating of all topics.  This is a problem borne of success.

But—good news, everyone!—I have an idea to solve the problem, which I hereby offer free of charge to any ride-sharing service that wants to adopt it.  Namely, when you order a ride, why doesn’t the app—with your explicit permission, of course—use your phone’s camera to send a selfie of you, together with the location where you’re waiting, to the driver?  Is there some obvious reason I’m missing why this wouldn’t work?  Have any ride-sharing companies tried it?  (I only learned today that I can update my Uber profile to include my photo.  Hopefully that will help drivers find me—but a photo of the intersection, or the side of the building where I am, etc. could help even more.)

Also against individual IQ worries

Sunday, October 1st, 2017

Scott Alexander recently blogged “Against Individual IQ Worries.”  Apparently, he gets many readers writing to him terrified that they scored too low on an IQ test, and therefore they’ll never be able to pursue their chosen career, or be a full-fledged intellectual or member of the rationalist community or whatever.  Amusingly, other Scott says, some of these readers have even performed their own detailed Bayesian analysis of what it might mean that their IQ score is only 90, cogently weighing the arguments and counterarguments while deploying the full vocabulary of statistical research.  It somehow reminds me of the joke about the talking dog, who frets to his owner that he doesn’t think he’s articulate enough to justify all the media attention he’s getting.

I’ve long had mixed feelings about the entire concept of IQ.

On the one hand, I know all the studies that show that IQ is highly heritable, that it’s predictive of all sorts of life outcomes, etc. etc.  I’m also aware of the practical benefits of IQ research, many of which put anti-IQ leftists into an uncomfortable position: for example, the world might never have understood the risks of lead poisoning without studies showing how they depressed IQ.  And as for the thousands of writers who dismiss the concept of IQ in favor of grit, multiple intelligences, emotional intelligence, or whatever else is the flavor of the week … well, I can fully agree about the importance of the latter qualities, but can’t go along with many of those writers’ barely-concealed impulse to lower the social status of STEM nerds even further, or to enforce a world where the things nerds are good at don’t matter.

On the other hand … well, have you actually looked at an IQ test?  To anyone with a scientific or mathematical bent, the tests are vaguely horrifying.  “Which of these pictures is unlike the others?”  “What number comes next in the sequence?”  Question after question that could have multiple defensible answers, but only one that “counts”—and that, therefore, mostly tests the social skill of reverse-engineering what the test-writer had in mind.  As a teacher, I’d be embarrassed to put such questions on an exam.

I sometimes get asked what my IQ is.  The truth is that, as far as I know, I was given one official IQ test, when I was four years old, and my score was about 106.  The tester earnestly explained to my parents that, while I scored off the chart on some subtests, I completely bombed others, and averaging yielded 106.  As a representative example of what I got wrong, the tester offered my parents the following:

Tester: “Suppose you came home, and you saw smoke coming out of your neighbor’s roof.  What would you do?”

Me: “Probably nothing, because it’s just the chimney, and they have a fire in their fireplace.”

Tester: “OK, but suppose it wasn’t the chimney.”

Me: “Well then, I’d either call for help or not, depending on how much I liked my neighbor…”

Apparently, my parents later consulted other psychologists who were of the opinion that my IQ was higher.  But the point remains: if IQ is defined as your score on a professionally administered IQ test, then mine is about 106.

Richard Feynman famously scored only 124 on a childhood IQ test—above average, but below the cutoff for most schools’ “gifted and talented” programs.  After he won the Nobel Prize in Physics, he reportedly said that the prize itself was no big deal; what he was really proud of was to have received one despite a merely 124 IQ.  If so, then it seems to me that I can feel equally proud, to have completed a computer science PhD at age 22, become a tenured MIT professor, etc. etc. despite a much lower IQ even than Feynman’s.

But seriously: how do we explain Feynman’s score?  Well, when you read IQ enthusiasts, you find what they really love is not IQ itself, but rather “g”, a statistical construct derived via factor analysis: something that positively correlates with just about every measurable intellectual ability, but that isn’t itself directly measurable (at least, not by any test yet devised).  An IQ test is merely one particular instrument that happens to correlate well with g—not necessarily the best one for all purposes.  The SAT also correlates with g—indeed, almost as well as IQ tests themselves do, despite the idea (or pretense?) that the SAT measures “acquired knowledge.”  These correlations are important, but they allow for numerous and massive outliers.
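
As a toy illustration of what “derived via factor analysis” means, here’s a sketch of mine (the loadings are invented, and the top principal component stands in for a proper factor extraction):

    import numpy as np

    rng = np.random.default_rng(0)
    n = 100_000
    g = rng.normal(size=n)                     # the latent factor itself
    loads = np.array([0.8, 0.7, 0.6, 0.5])     # invented: how strongly each
                                               # "test" tracks g
    noise = rng.normal(size=(n, 4)) * np.sqrt(1 - loads**2)
    tests = g[:, None] * loads + noise         # unit-variance test scores

    corr = np.corrcoef(tests, rowvar=False)
    vals, vecs = np.linalg.eigh(corr)
    g_hat = tests @ vecs[:, -1]                # crude stand-in for extracted g

    print(abs(np.corrcoef(g_hat, g)[0, 1]))    # ~0.9: high, but well short
                                               # of 1, so outliers abound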

So, not for the first time, I find myself in complete agreement with Scott Alexander, when he advises people to stop worrying.  We can uphold every statistical study that’s ever been done correlating IQ with other variables, while still affirming that fretting about your own low IQ score is almost as silly as fretting that you must be dumb because your bookshelf is too empty (a measurable variable that also presumably correlates with g).

More to the point: if you want to know, let’s say, whether you can succeed as a physicist, then surely the best way to find out is to start studying physics and see how well you do.  That will give you a much more accurate signal than a gross consumer index like IQ will—and conditioned on that signal, I’m guessing that your IQ score will provide almost zero additional information.  (Though then again, what would a guy with a 106 IQ know about such things?)

HTTPS / Kurtz / eclipse / Charlottesville / Blum / P vs. NP

Friday, August 25th, 2017

This post has a grab bag of topics, unified only by the fact that I can no longer put off blogging about them. So if something doesn’t interest you, just scroll down till you find something that does.


Great news, everyone: following a few reader complaints about the matter, the scottaaronson.com domain now supports https—and even automatically redirects to it! I’m so proud that Shtetl-Optimized has finally entered the technological universe of 1994. Thanks so much to heroic reader Martin Dehnel-Wild for setting this up for me.

Update 26/08/2017: Comments should now be working again; comments are now coming through to the moderated view in the blog’s control panel, so if they don’t show up immediately it might just be awaiting moderation. Thanks for your patience.


Last weekend, I was in Columbia, South Carolina, for a workshop to honor the 60th birthday of Stuart Kurtz, theoretical computer scientist at the University of Chicago.  I gave a talk about how work Kurtz was involved in from the 1990s—for example, on defining the complexity class GapP, and constructing oracles that satisfy conflicting requirements simultaneously—plays a major role in modern research on quantum computational supremacy: as an example, my recent paper with Lijie Chen.  (Except, what a terrible week to be discussing the paths to supremacy!  I promise there are no tiki torches involved, only much weaker photon sources.)

Coincidentally, I don’t know if you read anything about this on social media, but there was this total solar eclipse that passed right over Columbia at the end of the conference.

I’d always wondered why some people travel to remote corners of the earth to catch these.  So the sky gets dark for two minutes, and then it gets light again, in a way that’s been completely understood and predictable for centuries?

Having seen it, I can now tell you the deal, if you missed it and prefer to read about it here rather than 10^500 other places online.  At risk of stating the obvious: it’s not the dark sky; it’s the sun’s corona visible around the moon.  Ironically, it’s only when the sun’s blotted out that you can actually look at the sun, at all the weird stuff going on around its disk.

OK, but totality is “only” to eclipses as orgasms are to sex.  There’s also the whole social experience of standing around outside with friends for an hour as the moon gradually takes a bigger bite out of the sun, staring up from time to time with eclipse-glasses to check its progress—and then everyone breaking into applause as the sky finally goes mostly dark, and you can look at the corona with the naked eye.  And then, if you like, standing around for another hour as the moon gradually exits the other way.  (If you’re outside the path of totality, this standing around and checking with eclipse-glasses is the whole experience.)

One cool thing is that, a little before and after totality, shadows on the ground have little crescents in them, as if the eclipse is imprinting its “logo” all over the earth.

For me, the biggest lesson the eclipse drove home was the logarithmic nature of perceived brightness (see also Scott Alexander’s story).  Like, the sun can be more than 90% occluded, and yet it’s barely a shade darker outside.  And you can still only look up with glasses so dark that they blot out everything except the sliver of sun, which still looks pretty much like the normal sun if you catch it out of the corner of your unaided eye.  Only during totality, and a few minutes before and after, is the darkening obvious.
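
To put rough numbers on that, here’s a toy calculation (mine): “stops” are the photographer’s log scale, with each stop a halving of the light, and perception tracks stops far better than it tracks raw intensity.

    import math

    # Perception roughly tracks log(intensity), so measure dimming in "stops."
    for occluded in (0.50, 0.90, 0.99, 0.999):
        stops = -math.log2(1 - occluded)
        print(f"{occluded:6.1%} occluded -> {stops:4.1f} stops darker")
    # 90% occlusion is only ~3.3 stops, roughly like stepping from sun into
    # open shade; your eye re-adapts without your even noticing.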


Another topic at the workshop, unsurprisingly, was the ongoing darkening of the United States.  If it wasn’t obvious from my blog’s name, and if saying so explicitly will make any difference for anything, let the record state:

Shtetl-Optimized condemns Nazis, as well as anyone who knowingly marches with Nazis or defends them as “fine people.”

For a year, this blog has consistently described the now-president as a thug, liar, traitor, bully, sexual predator, madman, racist, and fraud, and has urged decent people everywhere to fight him by every peaceful and legal means available.  But if there’s some form of condemnation that I accidentally missed, then after Charlottesville, and Trump’s unhinged quasi-defenses of violent neo-Nazis, and defenses of his previous defenses, etc.—please consider Shtetl-Optimized to have condemned Trump that way also.

At least Charlottesville seems to have set local decisionmakers on an unstoppable course toward removing the country’s remaining Confederate statues—something I strongly supported back in May, before it had become the fully thermonuclear issue that it is now.  In an overnight operation, UT Austin has taken down its statues of Robert E. Lee, Albert Sidney Johnston, John Reagan, and James Stephen Hogg.  (I confess, the postmaster general of the Confederacy wouldn’t have been my #1 priority for removal.  And, genuine question: what did Texas governor James Stephen Hogg do that was so awful for his time, besides naming his daughter Ima Hogg?)


A final thing to talk about—yeah, we can’t avoid it—is Norbert Blum’s claimed proof of P≠NP.  I suppose I should be gratified that, after my last post, there were commenters who said, “OK, but enough about gender politics—what about P vs. NP?”  Here’s what I wrote on Tuesday the 15th:

To everyone who keeps asking me about the “new” P≠NP proof: I’d again bet $200,000 that the paper won’t stand, except that the last time I tried that, it didn’t achieve its purpose, which was to get people to stop asking me about it. So: please stop asking, and if the thing hasn’t been refuted by the end of the week, you can come back and tell me I was a closed-minded fool.

Many people misunderstood me to be saying that I’d again bet $200,000, even though the sentence said the exact opposite.  Maybe I should’ve said: I’m searching in vain for the right way to teach the nerd world to get less excited about these claims, to have the same reaction that the experts do, which is ‘oh boy, not another one’—which doesn’t mean that you know the error, or even that there is an error, but just means that you know the history.

Speaking of which, some friends and I recently had an awesome idea.  Just today, I registered the domain name haspvsnpbeensolved.com.  I’d like to set this up with a form that lets you type in the URL of a paper claiming to resolve the P vs. NP problem.  The site will then take 30 seconds or so to process the paper—with a status bar, progress updates, etc.—before finally rendering a verdict about the paper’s correctness.  Do any readers volunteer to help me create this?  Don’t worry, I’ll supply the secret algorithm to decide correctness, and will personally vouch for that algorithm for as long as the site remains live.

I have nothing bad to say about Norbert Blum, who made important contributions including the 3n circuit size lower bound for an explicit Boolean function—something that stood until very recently as the world record—and whose P≠NP paper was lucidly written, passing many of the most obvious checks.  And I received a bit of criticism for my “dismissive” stance.  Apparently, some right-wing former string theorist who I no longer read, whose name rhymes with Mubos Lotl, even accused me of being a conformist left-wing ideologue, driven to ignore Blum’s proof by an irrational conviction that any P≠NP proof will necessarily be so difficult that it will need to “await the Second Coming of Christ.”  Luca Trevisan’s reaction to that is worth quoting:

I agree with [Mubos Lotl] that the second coming of Jesus Christ is not a necessary condition for a correct proof that P is different from NP. I am keeping an open mind as to whether it is a sufficient condition.

On reflection, though, Mubos has a point: all of us, including me, should keep an open mind.  Maybe P≠NP (or P=NP!) is vastly easier to prove than most experts think, and is susceptible to a “fool’s mate.”

That being the case, it’s only intellectual honesty that compels me to report that, by about Friday of last week—i.e., exactly on my predicted schedule—a clear consensus had developed among experts that Blum’s P≠NP proof was irreparably flawed, and the consensus has stood since that time.

I’ve often wished that, even just for an hour or two, I could be free from this terrifying burden that I’ve carried around since childhood: the burden of having the right instincts about virtually everything.  Trust me, this “gift” is a lot less useful than it sounds, especially when reality so often contradicts what’s popular or expedient to say.

The background to Blum’s attempt, the counterexample that shows the proof has to fail somewhere, and the specifics of what appears to go wrong have already been covered at length elsewhere: see especially Luca’s post, Dick Lipton’s post, John Baez’s post, and the CS Theory StackExchange thread.

Very briefly, though: Blum claims to generalize some of the most celebrated complexity results of the 1980s—namely, superpolynomial lower bounds on the sizes of monotone circuits, which consist entirely of Boolean AND and OR gates—so that they also work for general (non-monotone) circuits, consisting of AND, OR, and NOT gates.  Everyone agrees that, if this succeeded, it would imply P≠NP.
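
To make the AND/OR restriction concrete, here’s a tiny brute-force check (my toy example, nothing from Blum’s paper): circuits of ANDs and ORs alone can compute only monotone functions, meaning that flipping an input bit from 0 to 1 can never flip the output from 1 to 0. NOT gates are exactly what break this.

    from itertools import product

    def monotone_example(x):               # (x0 AND x1) OR x2: ANDs and ORs only
        return (x[0] & x[1]) | x[2]

    def is_monotone(f, nbits):
        for x in product((0, 1), repeat=nbits):
            for i in range(nbits):
                if x[i] == 0:
                    y = x[:i] + (1,) + x[i+1:]
                    if f(x) > f(y):        # output fell while an input rose
                        return False
        return True

    print(is_monotone(monotone_example, 3))    # True
    print(is_monotone(lambda x: 1 - x[0], 1))  # False: one NOT gate breaks it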

Alas, another big discovery from the 1980s was that there are monotone Boolean functions (like Perfect Matching) that require superpolynomial-size monotone circuits, even though they have polynomial-size non-monotone circuits.  Why is that such a bummer?  Because it means our techniques for proving monotone circuit lower bounds can’t possibly work in as much generality as one might’ve naïvely hoped: if they did, they’d imply not merely that P doesn’t contain NP, but also that P doesn’t contain itself.

Blum was aware of all this, and gave arguments as to why his approach evades the Matching counterexample.  The trouble is, there’s another counterexample, which Blum doesn’t address, called Tardos’s function.  This is a weird creature: it’s obtained by starting with a graph invariant called the Lovász theta function, then looking at a polynomial-time approximation scheme for the theta function, and finally rounding the output of that PTAS to get a monotone function.  But whatever: in constructing this function, Tardos achieved her goal, which was to produce a monotone function that all known lower bound techniques for monotone circuits work perfectly fine for, but which is nevertheless in P (i.e., has polynomial-size non-monotone circuits).  In particular, if Blum’s proof worked, then it would also work for Tardos’s function, and that gives us a contradiction.

Of course, this merely tells us that Blum’s proof must have one or more mistakes; it doesn’t pinpoint where they are.  But the latter question has now been addressed as well.  On CS StackExchange, an anonymous commenter who goes variously by “idolvon” and “vloodin” provides a detailed analysis of the proof of Blum’s crucial Theorem 6.  I haven’t gone through every step myself, and there might be more to say about the matter than “vloodin” has, but several experts who are at once smarter, more knowledgeable, more cautious, and more publicity-shy than me have confirmed for me that vloodin correctly identified the erroneous region.

To those who wonder what gave me the confidence to call this immediately, without working through the details: besides the Cassandra-like burden that I was born with, I can explain something that might be helpful.  When Razborov achieved his superpolynomial monotone lower bounds in the 1980s, there was a brief surge of excitement: how far away could a P≠NP proof possibly be?  But then people, including Razborov himself, understood much more deeply what was going on—an understanding that was reflected in the theorems they proved, but also wasn’t completely captured by those theorems.

What was going on was this: monotone circuits are an interesting and nontrivial computational model.  Indeed for certain Boolean functions, such as the “slice functions,” they’re every bit as powerful as general circuits.  However, insofar as it’s possible to prove superpolynomial lower bounds on monotone circuit size, it’s possible only because monotone circuits are ridiculously less expressive than general Boolean circuits for the problems in question.  E.g., it’s possible only because monotone circuits can’t express pseudorandom functions, and therefore don’t engage the natural proofs barrier or most of the other terrifying beasts that we’re up against.

So what can we say about the prospect that a minor tweak to the monotone circuit lower bound techniques from the 1980s would yield P≠NP?  If, like Mubos Lotl, you took the view that discrete math and theory of computation are just a mess of disconnected, random statements, then such a prospect would seem as likely to you as not.  But if you’re armed with the understanding above, then this possibility is a lot like the possibility that the OPERA experiment discovered superluminal neutrinos: no, not a logical impossibility, but something that’s safe to bet against at 10,000:1 odds.

During the discussion of Deolalikar’s earlier P≠NP claim, I once compared betting against a proof that all sorts of people are calling “formidable,” “solid,” etc., to standing in front of a huge pendulum—behind the furthest point that it reached the last time—even as it swings toward your face.  Just as certain physics teachers stake their lives on the conservation of energy, so I’m willing to stake my academic reputation, again and again, on the conservation of circuit-lower-bound difficulty.  And here I am, alive to tell the tale.

Me at the Science March today, in front of the Texas Capitol in Austin

Saturday, April 22nd, 2017