Archive for the ‘Procrastination’ Category

The problem with Uber

Thursday, October 19th, 2017

I just spent a wonderful and exhausting five days in the Bay Area: meeting friends, holding the first-ever combined SlateStarCodex/Shtetl-Optimized meetup, touring quantum computing startups PsiCorp and Rigetti Computing, meeting with Silicon Valley folks about quantum computing, and giving a public lecture for the Simons Institute in Berkeley.  I’ll probably say more about some of these events in future posts, but for now: thanks so much to everyone who helped them happen!

Alas, my experiences getting around the Bay this week convinced me that there’s a real problem with Uber.  And no, I’m not talking about their corporate culture, or the personality of ousted CEO Travis Kalanick, or the hardball lobbying of municipalities to allow ride-sharing, or the taxi companies needing to adapt to survive, or even Uber having an unsustainable business model (they could charge more and I’d still use it…).

The problem is: when you order an Uber, like 2/3 of the time you and the driver can’t find each other without a lot of back and forth.

Firstly, because you can’t specify where you are with enough accuracy.  When you try, the app does this thing where it literally moves the “you are here” pointer to a place where you’re not. And then, even if the little dot correctly indicates your location, for some reason the driver will think you’re somewhere totally different.

Secondly, because Uber cars are typically unmarked.  Yes, the app tells you that it’s a white Ford or whatever—but there are a lot of white cars, and it’s hard (at least for me) to distinguish models at a distance, so you can then face a stressful “Where’s Waldo?” problem involving hundreds of cars.

Thirdly, because the drivers understandably have their phones mounted on their dashboards—the result being that, when you call to try to figure out where they are, nothing they say can be distinguished from “mmph hrmph mmph.”  And of course they can’t text while driving.

To be clear, these gripes arise only because ride-sharing apps generally work so damn well, and are such an advance over what preceded them, that they’ve changed our expectations about the convenience of getting from place to place.  Because of Uber and Lyft and so on, it’s tempting to plan your life around the assumption that you can be anywhere in a greater metro area, and within 3 minutes a car will magically arrive to take you to wherever else in that area you need to be—while your brain remains uncluttered with transportation logistics, among the most excruciating of all topics.  This is a problem borne of success.

But—good news, everyone!—I have an idea to solve the problem, which I hereby offer free of charge to any ride-sharing service that wants to adopt it.  Namely, when you order a ride, why doesn’t the app—with your explicit permission, of course—use your phone’s camera to send a selfie of you, together with the location where you’re waiting, to the driver?  Is there some obvious reason I’m missing why this wouldn’t work?  Have any ride-sharing companies tried it?  (I only learned today that I can update my Uber profile to include my photo.  Hopefully that will help drivers find me—but a photo of the intersection, or the side of the building where I am, etc. could help even more.)

Also against individual IQ worries

Sunday, October 1st, 2017

Scott Alexander recently blogged “Against Individual IQ Worries.”  Apparently, he gets many readers writing to him terrified that they scored too low on an IQ test, and therefore they’ll never be able to pursue their chosen career, or be a full-fledged intellectual or member of the rationalist community or whatever.  Amusingly, other Scott says, some of these readers have even performed their own detailed Bayesian analysis of what it might mean that their IQ score is only 90, cogently weighing the arguments and counterarguments while deploying the full vocabulary of statistical research.  It somehow reminds me of the joke about the talking dog, who frets to his owner that he doesn’t think he’s articulate enough to justify all the media attention he’s getting.

I’ve long had mixed feelings about the entire concept of IQ.

On the one hand, I know all the studies that show that IQ is highly heritable, that it’s predictive of all sorts of life outcomes, etc. etc.  I’m also aware of the practical benefits of IQ research, many of which put anti-IQ leftists into an uncomfortable position: for example, the world might never have understood the risks of lead poisoning without studies showing how they depressed IQ.  And as for the thousands of writers who dismiss the concept of IQ in favor of grit, multiple intelligences, emotional intelligence, or whatever else is the flavor of the week … well, I can fully agree about the importance of the latter qualities, but can’t go along with many of those writers’ barely-concealed impulse to lower the social status of STEM nerds even further, or to enforce a world where the things nerds are good at don’t matter.

On the other hand … well, have you actually looked at an IQ test?  To anyone with a scientific or mathematical bent, the tests are vaguely horrifying.  “Which of these pictures is unlike the others?”  “What number comes next in the sequence?”  Question after question that could have multiple defensible answers, but only one that “counts”—and that, therefore, mostly tests the social skill of reverse-engineering what the test-writer had in mind.  As a teacher, I’d be embarrassed to put such questions on an exam.

I sometimes get asked what my IQ is.  The truth is that, as far as I know, I was given one official IQ test, when I was four years old, and my score was about 106.  The tester earnestly explained to my parents that, while I scored off the chart on some subtests, I completely bombed others, and averaging yielded 106.  As a representative example of what I got wrong, the tester offered my parents the following:

Tester: “Suppose you came home, and you saw smoke coming out of your neighbor’s roof.  What would you do?”

Me: “Probably nothing, because it’s just the chimney, and they have a fire in their fireplace.”

Tester: “OK, but suppose it wasn’t the chimney.”

Me: “Well then, I’d either call for help or not, depending on how much I liked my neighbor…”

Apparently, my parents later consulted other psychologists who were of the opinion that my IQ was higher.  But the point remains: if IQ is defined as your score on a professionally administered IQ test, then mine is about 106.

Richard Feynman famously scored only 124 on a childhood IQ test—above average, but below the cutoff for most schools’ “gifted and talented” programs.  After he won the Nobel Prize in Physics, he reportedly said that the prize itself was no big deal; what he was really proud of was to have received one despite an IQ of merely 124.  If so, then it seems to me that I can feel equally proud, to have completed a computer science PhD at age 22, become a tenured MIT professor, etc. etc. despite a much lower IQ even than Feynman’s.

But seriously: how do we explain Feynman’s score?  Well, when you read IQ enthusiasts, you find that what they really love is not IQ itself, but rather “g”, a statistical construct derived via factor analysis: something that positively correlates with just about every measurable intellectual ability, but that isn’t itself directly measurable (at least, not by any test yet devised).  An IQ test is merely one particular instrument that happens to correlate well with g—not necessarily the best one for all purposes.  The SAT also correlates with g—indeed, almost as well as IQ tests themselves do, despite the idea (or pretense?) that the SAT measures “acquired knowledge.”  These correlations are important, but they allow for numerous and massive outliers.
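To make the “statistical construct” part concrete, here’s a toy sketch of how a general factor gets extracted: take a battery of positively correlated test scores, form their correlation matrix, and pull out the leading eigenvector (the first principal component, a crude stand-in for a full factor analysis).  The numbers below are made up purely for illustration.

```python
import numpy as np

# Made-up correlation matrix for four hypothetical subtests
# (vocabulary, matrices, arithmetic, digit span).  The fact that every
# entry is positive is the "positive manifold" that g summarizes.
R = np.array([
    [1.0, 0.5, 0.4, 0.3],
    [0.5, 1.0, 0.5, 0.4],
    [0.4, 0.5, 1.0, 0.5],
    [0.3, 0.4, 0.5, 1.0],
])

# Leading eigenvector = each subtest's "loading" on the general factor;
# leading eigenvalue = how much of the total variance that factor explains.
eigvals, eigvecs = np.linalg.eigh(R)        # eigenvalues in ascending order
g_loadings = np.abs(eigvecs[:, -1])
variance_explained = eigvals[-1] / eigvals.sum()

print("loadings on g:", np.round(g_loadings, 2))
print("fraction of variance explained:", round(variance_explained, 2))
```

The point of the toy example is only that g is whatever that leading factor turns out to be: every individual test, IQ included, is just one noisy projection onto it.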

So, not for the first time, I find myself in complete agreement with Scott Alexander, when he advises people to stop worrying.  We can uphold every statistical study that’s ever been done correlating IQ with other variables, while still affirming that fretting about your own low IQ score is almost as silly as fretting that you must be dumb because your bookshelf is too empty (a measurable variable that also presumably correlates with g).

More to the point: if you want to know, let’s say, whether you can succeed as a physicist, then surely the best way to find out is to start studying physics and see how well you do.  That will give you a much more accurate signal than a gross consumer index like IQ will—and conditioned on that signal, I’m guessing that your IQ score will provide almost zero additional information.  (Though then again, what would a guy with a 106 IQ know about such things?)

HTTPS / Kurtz / eclipse / Charlottesville / Blum / P vs. NP

Friday, August 25th, 2017

This post has a grab bag of topics, unified only by the fact that I can no longer put off blogging about them. So if something doesn’t interest you, just scroll down till you find something that does.


Great news, everyone: following a few reader complaints about the matter, the scottaaronson.com domain now supports https—and even automatically redirects to it! I’m so proud that Shtetl-Optimized has finally entered the technological universe of 1994. Thanks so much to heroic reader Martin Dehnel-Wild for setting this up for me.

Update 26/08/2017: Comments should now be working again; they’re coming through to the moderated view in the blog’s control panel, so if they don’t show up immediately they might just be awaiting moderation. Thanks for your patience.


Last weekend, I was in Columbia, South Carolina, for a workshop to honor the 60th birthday of Stuart Kurtz, theoretical computer scientist at the University of Chicago.  I gave a talk about how work Kurtz was involved in from the 1990s—for example, on defining the complexity class GapP, and constructing oracles that satisfy conflicting requirements simultaneously—plays a major role in modern research on quantum computational supremacy: as an example, my recent paper with Lijie Chen.  (Except, what a terrible week to be discussing the paths to supremacy!  I promise there are no tiki torches involved, only much weaker photon sources.)

Coincidentally, I don’t know if you read anything about this on social media, but there was this total solar eclipse that passed right over Columbia at the end of the conference.

I’d always wondered why some people travel to remote corners of the earth to catch these.  So the sky gets dark for two minutes, and then it gets light again, in a way that’s been completely understood and predictable for centuries?

Having seen it, I can now tell you the deal, if you missed it and prefer to read about it here rather than 10^500 other places online.  At risk of stating the obvious: it’s not the dark sky; it’s the sun’s corona visible around the moon.  Ironically, it’s only when the sun’s blotted out that you can actually look at the sun, at all the weird stuff going on around its disk.

OK, but totality is “only” to eclipses as orgasms are to sex.  There’s also the whole social experience of standing around outside with friends for an hour as the moon gradually takes a bigger bite out of the sun, staring up from time to time with eclipse-glasses to check its progress—and then everyone breaking into applause as the sky finally goes mostly dark, and you can look at the corona with the naked eye.  And then, if you like, standing around for another hour as the moon gradually exits the other way.  (If you’re outside the path of totality, this standing around and checking with eclipse-glasses is the whole experience.)

One cool thing is that, a little before and after totality, shadows on the ground have little crescents in them, as if the eclipse is imprinting its “logo” all over the earth.

For me, the biggest lesson the eclipse drove home was the logarithmic nature of perceived brightness (see also Scott Alexander’s story).  Like, the sun can be more than 90% occluded, and yet it’s barely a shade darker outside.  And you can still only look up with glasses so dark that they blot out everything except the sliver of sun, which still looks pretty much like the normal sun if you catch it out of the corner of your unaided eye.  Only during totality, and a few minutes before and after, is the darkening obvious.
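For a rough sense of the numbers, here’s my back-of-the-envelope version, using the astronomers’ magnitude scale, on which equal perceptual steps correspond to equal ratios of flux.  Blotting out 90% of the sun leaves a tenth of the light:

$$ \Delta m \;=\; -2.5\,\log_{10}\!\left(\frac{I_{\text{90\% occluded}}}{I_{\text{full}}}\right) \;=\; -2.5\,\log_{10}(0.1) \;=\; 2.5\ \text{magnitudes}, $$

a dimming that an adapting eye barely registers, whereas the further drop at totality is several additional orders of magnitude.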


Another topic at the workshop, unsurprisingly, was the ongoing darkening of the United States.  If it wasn’t obvious from my blog’s name, and if saying so explicitly will make any difference for anything, let the record state:

Shtetl-Optimized condemns Nazis, as well as anyone who knowingly marches with Nazis or defends them as “fine people.”

For a year, this blog has consistently described the now-president as a thug, liar, traitor, bully, sexual predator, madman, racist, and fraud, and has urged decent people everywhere to fight him by every peaceful and legal means available.  But if there’s some form of condemnation that I accidentally missed, then after Charlottesville, and Trump’s unhinged quasi-defenses of violent neo-Nazis, and defenses of his previous defenses, etc.—please consider Shtetl-Optimized to have condemned Trump that way also.

At least Charlottesville seems to have set local decisionmakers on an unstoppable course toward removing the country’s remaining Confederate statues—something I strongly supported back in May, before it had become the fully thermonuclear issue that it is now.  In an overnight operation, UT Austin has taken down its statues of Robert E. Lee, Albert Sidney Johnston, John Reagan, and James Stephen Hogg.  (I confess, the postmaster general of the Confederacy wouldn’t have been my #1 priority for removal.  And, genuine question: what did Texas governor James Stephen Hogg do that was so awful for his time, besides naming his daughter Ima Hogg?)


A final thing to talk about—yeah, we can’t avoid it—is Norbert Blum’s claimed proof of P≠NP.  I suppose I should be gratified that, after my last post, there were commenters who said, “OK, but enough about gender politics—what about P vs. NP?”  Here’s what I wrote on Tuesday the 15th:

To everyone who keeps asking me about the “new” P≠NP proof: I’d again bet $200,000 that the paper won’t stand, except that the last time I tried that, it didn’t achieve its purpose, which was to get people to stop asking me about it. So: please stop asking, and if the thing hasn’t been refuted by the end of the week, you can come back and tell me I was a closed-minded fool.

Many people misunderstood me to be saying that I’d again bet $200,000, even though the sentence said the exact opposite.  Maybe I should’ve said: I’m searching in vain for the right way to teach the nerd world to get less excited about these claims, to have the same reaction that the experts do, which is ‘oh boy, not another one’—which doesn’t mean that you know the error, or even that there is an error, but just means that you know the history.

Speaking of which, some friends and I recently had an awesome idea.  Just today, I registered the domain name haspvsnpbeensolved.com.  I’d like to set this up with a form that lets you type in the URL of a paper claiming to resolve the P vs. NP problem.  The site will then take 30 seconds or so to process the paper—with a status bar, progress updates, etc.—before finally rendering a verdict about the paper’s correctness.  Do any readers volunteer to help me create this?  Don’t worry, I’ll supply the secret algorithm to decide correctness, and will personally vouch for that algorithm for as long as the site remains live.
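For any readers tempted to volunteer: here’s a joke sketch of essentially the entire backend I have in mind, using Flask, with every name being a placeholder since nothing is actually built yet.

```python
# Joke sketch of the haspvsnpbeensolved.com backend (Flask assumed installed).
import time
from flask import Flask, request

app = Flask(__name__)

def secret_correctness_algorithm(paper_url: str) -> str:
    """The secret algorithm whose verdicts I'll personally vouch for."""
    # A real site would animate the 30-second "analysis", status bar and
    # all, in the browser; blocking a server thread is part of the joke.
    time.sleep(30)
    return "No."

@app.route("/check")
def check():
    paper_url = request.args.get("url", "")
    return secret_correctness_algorithm(paper_url)

if __name__ == "__main__":
    app.run()
```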

I have nothing bad to say about Norbert Blum, who made important contributions including the 3n circuit size lower bound for an explicit Boolean function—something that stood until very recently as the world record—and whose P≠NP paper was lucidly written, passing many of the most obvious checks.  And I received a bit of criticism for my “dismissive” stance.  Apparently, some right-wing former string theorist who I no longer read, whose name rhymes with Mubos Lotl, even accused me of being a conformist left-wing ideologue, driven to ignore Blum’s proof by an irrational conviction that any P≠NP proof will necessarily be so difficult that it will need to “await the Second Coming of Christ.”  Luca Trevisan’s reaction to that is worth quoting:

I agree with [Mubos Lotl] that the second coming of Jesus Christ is not a necessary condition for a correct proof that P is different from NP. I am keeping an open mind as to whether it is a sufficient condition.

On reflection, though, Mubos has a point: all of us, including me, should keep an open mind.  Maybe P≠NP (or P=NP!) is vastly easier to prove than most experts think, and is susceptible to a “fool’s mate.”

That being the case, it’s only intellectual honesty that compels me to report that, by about Friday of last week—i.e., exactly on my predicted schedule—a clear consensus had developed among experts that Blum’s P≠NP proof was irreparably flawed, and the consensus has stood since that time.

I’ve often wished that, even just for an hour or two, I could be free from this terrifying burden that I’ve carried around since childhood: the burden of having the right instincts about virtually everything.  Trust me, this “gift” is a lot less useful than it sounds, especially when reality so often contradicts what’s popular or expedient to say.

The background to Blum’s attempt, the counterexample that shows the proof has to fail somewhere, and the specifics of what appears to go wrong have already been covered at length elsewhere: see especially Luca’s post, Dick Lipton’s post, John Baez’s post, and the CS Theory StackExchange thread.

Very briefly, though: Blum claims to generalize some of the most celebrated complexity results of the 1980s—namely, superpolynomial lower bounds on the sizes of monotone circuits, which consist entirely of Boolean AND and OR gates—so that they also work for general (non-monotone) circuits, consisting of AND, OR, and NOT gates.  Everyone agrees that, if this succeeded, it would imply P≠NP.
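For readers who haven’t seen the definitions: a Boolean function is monotone if flipping any input from 0 to 1 can never flip the output from 1 to 0, and the monotone functions are exactly the ones computable by circuits of AND and OR gates with no NOTs.  Here’s a brute-force illustration of that definition (the two example functions are mine, chosen only for concreteness):

```python
from itertools import product

def is_monotone(f, n):
    """Brute force: f is monotone iff x <= y bitwise implies f(x) <= f(y)."""
    inputs = list(product([0, 1], repeat=n))
    return all(f(x) <= f(y)
               for x in inputs for y in inputs
               if all(a <= b for a, b in zip(x, y)))

majority = lambda x: int(sum(x) >= 2)   # computable with AND/OR gates alone
parity   = lambda x: sum(x) % 2         # non-monotone: needs NOT gates

print(is_monotone(majority, 3))   # True
print(is_monotone(parity, 3))     # False
```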

Alas, another big discovery from the 1980s was that there are monotone Boolean functions (like Perfect Matching) that require superpolynomial-size monotone circuits, even though they have polynomial-size non-monotone circuits.  Why is that such a bummer?  Because it means our techniques for proving monotone circuit lower bounds can’t possibly work in as much generality as one might’ve naïvely hoped: if they did, they’d imply not merely that P doesn’t contain NP, but also that P doesn’t contain itself.

Blum was aware of all this, and gave arguments as to why his approach evades the Matching counterexample.  The trouble is, there’s another counterexample, which Blum doesn’t address, called Tardos’s function.  This is a weird creature: it’s obtained by starting with a graph invariant called the Lovász theta function, then looking at a polynomial-time approximation scheme for the theta function, and finally rounding the output of that PTAS to get a monotone function.  But whatever: in constructing this function, Tardos achieved her goal, which was to produce a monotone function that all known lower bound techniques for monotone circuits work perfectly fine for, but which is nevertheless in P (i.e., has polynomial-size non-monotone circuits).  In particular, if Blum’s proof worked, then it would also work for Tardos’s function, and that gives us a contradiction.

Of course, this merely tells us that Blum’s proof must have one or more mistakes; it doesn’t pinpoint where they are.  But the latter question has now been addressed as well.  On CS StackExchange, an anonymous commenter who goes variously by “idolvon” and “vloodin” provides a detailed analysis of the proof of Blum’s crucial Theorem 6.  I haven’t gone through every step myself, and there might be more to say about the matter than “vloodin” has, but several experts who are at once smarter, more knowledgeable, more cautious, and more publicity-shy than me have confirmed for me that vloodin correctly identified the erroneous region.

To those who wonder what gave me the confidence to call this immediately, without working through the details: besides the Cassandra-like burden that I was born with, I can explain something that might be helpful.  When Razborov achieved his superpolynomial monotone lower bounds in the 1980s, there was a brief surge of excitement: how far away could a P≠NP proof possibly be?  But then people, including Razborov himself, understood much more deeply what was going on—an understanding that was reflected in the theorems they proved, but also wasn’t completely captured by those theorems.

What was going on was this: monotone circuits are an interesting and nontrivial computational model.  Indeed for certain Boolean functions, such as the “slice functions,” they’re every bit as powerful as general circuits.  However, insofar as it’s possible to prove superpolynomial lower bounds on monotone circuit size, it’s possible only because monotone circuits are ridiculously less expressive than general Boolean circuits for the problems in question.  E.g., it’s possible only because monotone circuits aren’t expressing pseudorandom functions, and therefore aren’t engaging the natural proofs barrier or most of the other terrifying beasts that we’re up against.

So what can we say about the prospect that a minor tweak to the monotone circuit lower bound techniques from the 1980s would yield P≠NP?  If, like Mubos Lotl, you took the view that discrete math and theory of computation are just a mess of disconnected, random statements, then such a prospect would seem as likely to you as not.  But if you’re armed with the understanding above, then this possibility is a lot like the possibility that the OPERA experiment discovered superluminal neutrinos: no, not a logical impossibility, but something that’s safe to bet against at 10,000:1 odds.

During the discussion of Deolalikar’s earlier P≠NP claim, I once compared betting against a proof that all sorts of people are calling “formidable,” “solid,” etc., to standing in front of a huge pendulum—behind the furthest point that it reached the last time—even as it swings toward your face.  Just as certain physics teachers stake their lives on the conservation of energy, so I’m willing to stake my academic reputation, again and again, on the conservation of circuit-lower-bound difficulty.  And here I am, alive to tell the tale.

Me at the Science March today, in front of the Texas Capitol in Austin

Saturday, April 22nd, 2017

If Google achieves superintelligence, time zones will be its Achilles heel

Monday, April 17th, 2017

Like a latter-day Prometheus, Google brought a half-century of insights down from Mount Academic CS, and thereby changed life for the better here in our sublunary realm.  You’ve probably had the experience of Google completing a search query before you’d fully formulated it in your mind, and thinking: “wow, our dysfunctional civilization might no longer be able to send people to the Moon, or even build working mass-transit systems, but I guess there are still engineers who can create things that inspire awe.  And apparently many of them work at Google.”

I’ve never worked at Google, or had any financial stake in them, but I’m delighted to have many friends at Google’s far-flung locations, from Mountain View to Santa Barbara to Seattle to Boston to London to Tel Aviv, who sometimes host me when I visit and let me gorge on the legendary free food.  If Google’s hiring of John Martinis and avid participation in the race for quantum supremacy weren’t enough, in the past year, my meeting both Larry Page and Sergey Brin to discuss quantum computing and the foundations of quantum mechanics, and seeing firsthand the intensity of their nerdish curiosity, heightened my appreciation still further for what that pair set in motion two decades ago.  Hell, I don’t even begrudge Google its purchase of a D-Wave machine—even that might’ve ultimately been for the best, since it’s what led to the experiments that made clear the immense difficulty of getting any quantum speedup from those machines in a fair comparison.

But of course, all that fulsome praise was just a preamble to my gripe.  It’s time someone said it in public: the semantics of Google Calendar are badly screwed up.

The issue is this: suppose I’m traveling to California, and I put into Google Calendar that, the day after I arrive, I’ll be giving a lecture at 4pm.  In such a case, I always—always—mean 4pm California time.  There’s no reason why I would ever mean, “4pm in whatever time zone I’m in right now, while creating this calendar entry.”

But Google Calendar doesn’t understand that.  And its not understanding it—just that one little point—has led to years of confusions, missed appointments, and nearly-missed flights, on both my part and Dana’s.  At least, until we learned to painstakingly enter the time zone for every calendar entry by hand (I still often forget).

Until recently, I thought it was just me and Dana who had this problem.  But then last week, completely independently, a postdoc started complaining to me, “you know what’s messed up about Google Calendar?…”

The ideal, I suppose, would be to use machine learning to guess the intended time zone for each calendar entry.  But failing that, it would also work fine just to assume that “4pm,” as entered by the user, means (unless otherwise specified) “4pm in whatever time zone we find ourselves in when the appointed day arrives.”
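In code, the semantics I want is just the difference between a zone-stamped time and a “floating” wall-clock time that only gets a zone attached when the day arrives.  A minimal sketch, mine and not anything Google Calendar actually exposes, assuming Python 3.9+ for zoneinfo:

```python
from datetime import datetime
from zoneinfo import ZoneInfo  # Python 3.9+

# Stored at creation: just the wall-clock time, no zone attached ("floating").
floating_event = datetime(2017, 4, 20, 16, 0)   # "4pm", a naive datetime

def resolve(event: datetime, current_zone: str) -> datetime:
    """Attach whatever zone the user is actually in when the day arrives."""
    return event.replace(tzinfo=ZoneInfo(current_zone))

# Entered in Austin, but by lecture day I'm in California:
print(resolve(floating_event, "America/Los_Angeles"))
# 2017-04-20 16:00:00-07:00  ->  4pm Pacific, which is what I meant
```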

I foresee two possibilities, either of which I’m OK with.  The first is that Google fixes the problem, whether prompted by this blog post or by something else.  The second is that the issue never gets resolved; then, as often prophesied, Google’s deep nets achieve sentience and plot to take over the whole observable universe … and they would, if not for one fortuitous bug, which will cause the AIs to tip their hand to humanity an hour before planned.


In a discussion thread on Y Combinator, some people object to my proposed solution (“4pm means 4pm in whichever time zone I’ll be in then”) on the following ground. What if I want to call a group meeting at (say) 11am in Austin, and I’ll be traveling but will still call into the meeting remotely, and I want my calendar to show the meeting time in Austin, not the time wherever I’ll be calling in from (which might even be a plane)?

I can attest that, in ten years, that’s not a problem that’s arisen for me even once, whereas the converse problem arises almost every week, and is one of the banes of my existence.

But sure: Google Calendar should certainly include the option to tie times to specific time zones in advance! It seems obvious to me that my way should be the default, but honestly, I’d be happy if my way were even an option you could pick.

Your yearly dose of is-the-universe-a-simulation

Wednesday, March 22nd, 2017

Yesterday Ryan Mandelbaum, at Gizmodo, posted a decidedly tongue-in-cheek piece about whether or not the universe is a computer simulation.  (The piece was filed under the category “LOL.”)

The immediate impetus for Mandelbaum’s piece was a blog post by Sabine Hossenfelder, a physicist who will likely be familiar to regulars here in the nerdosphere.  In her post, Sabine vents about the simulation speculations of philosophers like Nick Bostrom.  She writes:

Proclaiming that “the programmer did it” doesn’t only not explain anything – it teleports us back to the age of mythology. The simulation hypothesis annoys me because it intrudes on the terrain of physicists. It’s a bold claim about the laws of nature that however doesn’t pay any attention to what we know about the laws of nature.

After hammering home that point, Sabine goes further, and says that the simulation hypothesis is almost ruled out, by (for example) the fact that our universe is Lorentz-invariant, and a simulation of our world by a discrete lattice of bits won’t reproduce Lorentz-invariance or other continuous symmetries.

In writing his post, Ryan Mandelbaum interviewed two people: Sabine and me.

I basically told Ryan that I agree with Sabine insofar as she argues that the simulation hypothesis is lazy—that it doesn’t pay its rent by doing real explanatory work, doesn’t even engage much with any of the deep things we’ve learned about the physical world—and disagree insofar as she argues that the simulation hypothesis faces some special difficulty because of Lorentz-invariance or other continuous phenomena in known physics.  In short: blame it for being unfalsifiable rather than for being falsified!

Indeed, to whatever extent we believe the Bekenstein bound—and even more pointedly, to whatever extent we think the AdS/CFT correspondence says something about reality—we believe that in quantum gravity, any bounded physical system (with a short-wavelength cutoff, yada yada) lives in a Hilbert space of a finite number of qubits, perhaps ~10^69 qubits per square meter of surface area.  And as a corollary, if the cosmological constant is indeed constant (so that galaxies more than ~20 billion light years away are receding from us faster than light), then our entire observable universe can be described as a system of ~10^122 qubits.  The qubits would in some sense be the fundamental reality, from which Lorentz-invariant spacetime and all the rest would need to be recovered as low-energy effective descriptions.  (I hasten to add: there’s of course nothing special about qubits here, any more than there is about bits in classical computation, compared to some other unit of information—nothing that says the Hilbert space dimension has to be a power of 2 or anything silly like that.)  Anyway, this would mean that our observable universe could be simulated by a quantum computer—or even for that matter by a classical computer, to high precision, using a mere ~2^(10^122) time steps.
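For anyone who wants to see where those numbers come from, the back-of-the-envelope arithmetic (mine, with all order-one factors cheerfully ignored) goes like this: a horizon radius of ~20 billion light-years is about $2\times 10^{26}$ meters, so

$$ N \;\sim\; \frac{A}{4\ell_P^2} \;\approx\; \underbrace{4\pi\,\big(2\times 10^{26}\,\mathrm{m}\big)^2}_{A\;\approx\;5\times 10^{53}\,\mathrm{m}^2} \;\times\; \underbrace{10^{69}\ \mathrm{qubits/m^2}}_{\sim\,1/(4\ell_P^2)} \;\approx\; 10^{122}\ \text{qubits}, $$

and since N qubits carry a state vector of 2^N amplitudes, brute-force classical simulation takes on the order of 2^(10^122) steps, which is where that last number comes from.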

Sabine might respond that AdS/CFT and other quantum gravity ideas are mere theoretical speculations, not solid and established like special relativity.  But crucially, if you believe that the observable universe couldn’t be simulated by a computer even in principle—that it has no mapping to any system of bits or qubits—then at some point the speculative shoe shifts to the other foot.  The question becomes: do you reject the Church-Turing Thesis?  Or, what amounts to the same thing: do you believe, like Roger Penrose, that it’s possible to build devices in nature that solve the halting problem or other uncomputable problems?  If so, how?  But if not, then how exactly does the universe avoid being computational, in the broad sense of the term?

I’d write more, but by coincidence, right now I’m at an It from Qubit meeting at Stanford, where everyone is talking about how to map quantum theories of gravity to quantum circuits acting on finite sets of qubits, and the questions in quantum circuit complexity that are thereby raised.  It’s tremendously exciting—the mixture of attendees is among the most stimulating I’ve ever encountered, from Lenny Susskind and Don Page and Daniel Harlow to Umesh Vazirani and Dorit Aharonov and Mario Szegedy to Google’s Sergey Brin.  But it should surprise no one that, amid all the discussion of computation and fundamental physics, the question of whether the universe “really” “is” a simulation has barely come up.  Why would it, when there are so many more fruitful things to ask?  All I can say with confidence is that, if our world is a simulation, then whoever is simulating it (God, or a bored teenager in the metaverse) seems to have a clear preference for the 2-norm over the 1-norm, and for the complex numbers over the reals.

A day to celebrate

Friday, January 20th, 2017

Today—January 20, 2017—I have something cheerful, something that I’m celebrating.  It’s Lily’s fourth birthday. Happy birthday Lily!

As part of her birthday festivities, and despite her packed schedule, Lily has graciously agreed to field a few questions from readers of this blog.  You can ask about her parents, favorite toys, recent trip to Disney World, etc.  Just FYI: to the best of my knowledge, Lily doesn’t have any special insight about computational complexity, although she can write the letters ‘N’ and ‘P’ and find them on the keyboard.  Nor has she demonstrated much interest in politics, though she’s aware that many people are upset because a very bad man just became the president.  Anyway, if you ask questions that are appropriate for a real 4-year-old girl, rather than a blog humor construct, there’s a good chance I’ll let them through moderation and pass them on to her!

Meanwhile, here’s a photo I took of UT Austin students protesting Trump’s inauguration beneath the iconic UT tower.

The teaser

Tuesday, December 13th, 2016

Tomorrow, I’ll have something big to announce here.  So, just to whet your appetites, and to get myself back into the habit of blogging, I figured I’d offer you an appetizer course: some more miscellaneous non-Trump-related news.


(1) My former student Leonid Grinberg points me to an astonishing art form, which I somehow hadn’t known about: namely, music videos generated by executable files only 4 kilobytes in size.  Some of these videos have to be seen to be believed.  (See also this one.)  Much like, let’s say, a small Turing machine whose behavior is independent of set theory, these videos represent exercises in applied (or, OK, recreational) Kolmogorov complexity: how far out do you need to go in the space of all computer programs before you find beauty and humor and adaptability and surprise?

Admittedly, Leonid explains to me that the rules allow these programs to call DirectX and Visual Studio libraries to handle things like the 3D rendering (with the libraries not counted toward the 4K program size).  This makes the programs’ existence merely extremely impressive, rather than a sign of alien superintelligence.

In some sense, all the programming enthusiasts over the decades who’ve burned their free time and processor cycles on Conway’s Game of Life and the Mandelbrot set and so forth were captivated by the same eerie beauty showcased by the videos: that of data compression, of the vast unfolding of a simple deterministic rule.  But I also feel like the videos add a bit extra: the 3D rendering, the music, the panning across natural or manmade-looking dreamscapes.  What we have here is a wonderful resource for either an acid trip or an undergrad computability and complexity course.


(2) A week ago Igor Oliveira, together with my longtime friend Rahul Santhanam, released a striking paper entitled Pseudodeterministic Constructions in Subexponential Time.  To understand what this paper does, let’s start with Terry Tao’s 2009 polymath challenge: namely, to find a fast, deterministic method that provably generates large prime numbers.  Tao’s challenge still stands today: one of the most basic, simplest-to-state unsolved problems in algorithms and number theory.

To be clear, we already have a fast deterministic method to decide whether a given number is prime: that was the 2002 breakthrough by Agrawal, Kayal, and Saxena.  We also have a fast probabilistic method to generate large primes: namely, just keep picking n-digit numbers at random, test each one, and stop when you find one that’s prime!  And those methods can be made deterministic assuming far-reaching conjectures in number theory, such as Cramér’s Conjecture (though note that even the Riemann Hypothesis wouldn’t lead to a polynomial-time algorithm, but “merely” a faster exponential-time one).
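For concreteness, the fast probabilistic method really is just a few lines; here’s a minimal sketch, assuming sympy’s isprime as the primality subroutine (in practice one uses fast randomized tests; AKS is what makes the check deterministic in principle).

```python
import random
from sympy import isprime   # fast primality test (assumed available)

def random_prime(n_digits: int) -> int:
    """Sample random n-digit numbers until one is prime.

    By the Prime Number Theorem, each sample is prime with probability
    about 1/(n_digits * ln 10), so the expected number of trials is
    O(n_digits).  Fast and correct, but different runs return different
    primes: neither deterministic nor pseudodeterministic.
    """
    lo, hi = 10 ** (n_digits - 1), 10 ** n_digits
    while True:
        candidate = random.randrange(lo, hi)
        if isprime(candidate):
            return candidate

print(random_prime(50))   # a random 50-digit prime, near-instantly
```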

But, OK, what if you want a 5000-digit prime number, and you want it now: provably, deterministically, and fast?  That was Tao’s challenge.  The new paper by Oliveira and Santhanam doesn’t quite solve it, but it makes some exciting progress.  Specifically, it gives a deterministic algorithm to generate n-digit prime numbers, with merely the following four caveats:

  • The algorithm isn’t polynomial time, but subexponential (2^(n^o(1))) time.
  • The algorithm isn’t deterministic, but pseudodeterministic (a concept introduced by Gat and Goldwasser).  That is, the algorithm uses randomness, but it almost always succeeds, and it outputs the same n-digit prime number in every case where it succeeds.
  • The algorithm might not work for all input lengths n, but merely for infinitely many of them.
  • Finally, the authors can’t quite say what the algorithm is—they merely prove that it exists!  If there’s a huge complexity collapse, such as ZPP=PSPACE, then the algorithm is one thing, while if not then the algorithm is something else.

Strikingly, Oliveira and Santhanam’s advance on the polymath problem is pure complexity theory: hitting sets and pseudorandom generators and win-win arguments and stuff like that.  Their paper uses absolutely nothing specific to the prime numbers, except the facts that (a) there are lots of them (the Prime Number Theorem), and (b) we can efficiently decide whether a given number is prime (the AKS algorithm).  It seems almost certain that one could do better by exploiting more about primes.


(3) I’m in Lyon, France right now, to give three quantum computing and complexity theory talks.  I arrived here today from London, where I gave another two lectures.  So far, the trip has been phenomenal, my hosts gracious, the audiences bristling with interesting questions.

But getting from London to Lyon also taught me an important life lesson that I wanted to share: never fly EasyJet.  Or at least, if you fly one of the European “discount” airlines, realize that you get what you pay for (I’m told that Ryanair is even worse).  These airlines have a fundamentally dishonest business model, based on selling impossibly cheap tickets, but then forcing passengers to check even tiny bags and charging exorbitant fees for it, counting on snagging enough travelers who just naïvely clicked “yes” to whatever would get them from point A to point B at a certain time, assuming that all airlines followed more-or-less similar rules.  Which might not be so bad—it’s only money—if the minuscule, overworked staff of these quasi-airlines didn’t also treat the passengers like beef cattle, barking orders and berating people for failing to obey rules that one could log hundreds of thousands of miles on normal airlines without ever once encountering.  Anyway, if the airlines won’t warn you, then Shtetl-Optimized will.

Three announcements

Monday, May 9th, 2016

(-3) Bonus Announcement of May 30: As a joint effort by Yuri Matiyasevich, Stefan O’Rear, and myself, and using the Not-Quite-Laconic language that Stefan adapted from Adam Yedidia’s Laconic, we now have a 744-state TM that halts iff there’s a counterexample to the Riemann Hypothesis.

(-2) Today’s Bonus Announcement: Stefan O’Rear says that his Turing machine to search for contradictions in ZFC is now down to 1919 states.  If verified, this is an important milestone: our upper bound on the number of Busy Beaver values that are knowable in standard mathematics is now less than the number of years since the birth of Christ (indeed, even since the generally-accepted dates for the writing of the Gospels).

Stefan also says that his Not-Quite-Laconic system has yielded a 1008-state Turing machine to search for counterexamples to the Riemann Hypothesis, improving on our 5372 states.

(-1) Another Bonus Announcement: Great news, everyone!  Using a modified version of Adam Yedidia’s Laconic language (which he calls NQL, for Not Quite Laconic), Stefan O’Rear has now constructed a 5349-state Turing machine that directly searches for contradictions in ZFC (or rather in Metamath, which is known to be equivalent to ZFC), and whose behavior is therefore unprovable in ZFC, assuming ZFC is consistent.  This, of course, improves on my and Adam’s state count by 2561 states—but it also fixes the technical issue with needing to assume a large cardinal axiom (SRP) in order to prove that the TM runs forever.  Stefan promises further state reductions in the near future.

In other news, Adam has now verified the 43-state Turing machine by Jared S that halts iff there’s a counterexample to Goldbach’s Conjecture.  The 27-state machine by code golf addict is still being verified.

(0) Bonus Announcement: I’ve had half a dozen “Ask Me Anything” sessions on this blog, but today I’m trying something different: a Q&A session on Quora.  The way it works is that you vote for your favorite questions; then on Tuesday, I’ll start with the top-voted questions and keep going down the list until I get tired.  Fire away!  (And thanks to Shreyes Seshasai at Quora for suggesting this.)

(1) When you announce a new result, the worst that can happen is that the result turns out to be wrong, trivial, or already known.  The best that can happen is that the result quickly becomes obsolete, as other people race to improve it.  With my and Adam Yedidia’s work on small Turing machines that elude set theory, we seem to be heading for that best case.  Stefan O’Rear wrote a not-quite-Laconic program that just searches directly for contradictions in a system equivalent to ZFC.  If we could get his program to compile, it would likely yield a Turing machine with somewhere around 6,000-7,000 states whose behavior was independent of ZFC, and would also fix the technical problem with my and Adam’s machine Z, where one needed to assume a large-cardinal axiom called SRP to prove that Z runs forever.  While it would require a redesign from the ground up, a 1,000-state machine whose behavior eludes ZFC also seems potentially within reach using Stefan’s ideas.  Meanwhile, our 4,888-state machine for Goldbach’s conjecture seems to have been completely blown out of the water: first, a commenter named Jared S says he’s directly built a 73-state machine for Goldbach (now down to 43 states); second, a commenter named “code golf addict” claims to have improved on that with a mere 31 states (now down to 27 states).  These machines are now publicly posted, but still await detailed verification.

(2) My good friend Jonah Sinick cofounded Signal Data Science, a data-science summer school that will be running for the second time this summer.  They operate on an extremely interesting model, which I’m guessing might spread more widely: tuition is free, but you pay 10% of your first year’s salary after finding a job in the tech sector.  He asked me to advertise them, so—here!

(3) I was sad to read the news that Uber and Lyft will be suspending all service in Austin, because the city passed an ordinance requiring their drivers to get fingerprint background checks, and imposing other regulations that Uber and Lyft argue are incompatible with their model of part-time drivers.  The companies, of course, are also trying to send a clear message to other cities about what will happen if they don’t get the regulatory environment they want.  To me, the truth of the matter is that Uber/Lyft are like the web, Google, or smartphones: clear, once-per-decade quality-of-life advances that you try once, and then no longer understand how you survived without.  So if Austin wants to maintain a reputation as a serious, modern city, it has no choice but to figure out some way to bring these companies back to the negotiating table.  On the other hand, I’d also say to Uber and Lyft that, even if they needed to raise fares to taxi levels to comply with the new regulations, I expect they’d still do a brisk business!

For me, the “value proposition” of Uber has almost nothing to do with the lower fares, even though they’re lower.  For me, it’s simply about being able to get from one place to another without needing to drive and park, and also without needing desperately to explain where you are, over and over, to a taxi dispatcher who sounds angry that you called and who doesn’t understand you because of a combination of language barriers and poor cellphone reception and your own inability to articulate your location.  And then wondering when and if your taxi will ever show up, because the dispatcher couldn’t promise a specific time, or hung up on you before you could ask them.  And then embarking on a second struggle, to explain to the driver where you’re going, or at least convince them to follow the Google Maps directions.  And then dealing with the fact that the driver has no change, you only have twenties and fifties, and their little machine that prints receipts is out of paper so you can’t submit your trip for reimbursement either.

So yes, I really hope Uber, Lyft, and the city of Austin manage to sort this out before Dana and I move there!  On the other hand, I should say that there’s another part of the new ordinance—namely, requiring Uber and Lyft cars to be labeled—that strikes me as an unalloyed good.  For if there’s one way in which Uber is less convenient than taxis, it’s that you can never figure out which car is your Uber, among all the cars stopping or slowing down near you that look vaguely like the one in the app.

Grading Trudeau on quantum computing

Sunday, April 17th, 2016

Update (4/19): Inspired by Trudeau’s performance (which they clocked at 35 seconds), Maclean’s magazine asked seven quantum computing researchers—me, Krysta Svore, Aephraim Steinberg, Barry Sanders, Davide Venturelli, Martin Laforest, and Murray Thom—to also explain quantum computing in 35 seconds or fewer.  You can see all the results here (here’s the audio from my entry).


The emails started hitting me like … a hail of maple syrup from the icy north.  Had I seen the news?  Justin Trudeau, the dreamy young Prime Minister of Canada, visited the Perimeter Institute for Theoretical Physics in Waterloo, one of my favorite old haunts.  At a news conference at PI, as Trudeau stood in front of a math-filled blackboard, a reporter said to him: “I was going to ask you to explain quantum computing, but — when do you expect Canada’s ISIL mission to begin again, and are we not doing anything in the interim?”

Rather than answering immediately about ISIL, Trudeau took the opportunity to explain quantum computing:

“Okay, very simply, normal computers work, uh, by [laughter, applause] … no no no, don’t interrupt me.  When you walk out of here, you will know more … no, some of you will know far less about quantum computing, but most of you … normal computers work, either there’s power going through a wire, or not.  It’s 1, or a 0, they’re binary systems.  Uh, what quantum states allow for is much more complex information to be encoded into a single bit.  Regular computer bit is either a 1 or a 0, on or off.  A quantum state can be much more complex than that, because as we know [speeding up dramatically] things can be both particle and wave at the same times and the uncertainty around quantum states [laughter] allows us to encode more information into a much smaller computer.  So, that’s what exciting about quantum computing and that’s… [huge applause] don’t get me going on this or we’ll be here all day, trust me.”

What marks does Trudeau get for this?  On the one hand, the widespread praise for this reply surely says more about how low the usual standards for politicians are, and about Trudeau’s fine comic delivery, than about anything intrinsic to what he said.  Trudeau doesn’t really assert much here: basically, he just says that normal computers work using 1’s and 0’s, and that quantum computers are more complicated than that in some hard-to-explain way.  He gestures toward the uncertainty principle and wave/particle duality, but he doesn’t say anything about the aspects of QM most directly relevant to quantum computing—superposition or interference or the exponential size of Hilbert space—nor does he mention what quantum computers would or wouldn’t be used for.

On the other hand, I’d grade Trudeau’s explanation as substantially more accurate than what you’d get from a typical popular article.  For pay close attention to what the Prime Minister never says: he never says that a qubit would be “both 0 and 1 at the same time,” or any equivalent formulation.  (He does say that quantum states would let us “encode more information into a much smaller computer,” but while Holevo’s Theorem says that’s false for a common interpretation of “information,” it’s true for other reasonable interpretations.)  The humorous speeding up as he mentions particle/wave duality and the uncertainty principle clearly suggests that he knows it’s more subtle than just “0 and 1 at the same time,” and he also knows that he doesn’t really get it and that the journalists in the audience don’t either.  When I’m grading exams, I always give generous partial credit for honest admissions of ignorance.  B+.
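(For readers who want the precise statement behind that parenthetical: Holevo’s theorem says that if a classical random variable X is encoded into an ensemble $\{p_x, \rho_x\}$ of states of n qubits, then the classical information recoverable by any measurement obeys

$$ I(X{:}Y) \;\le\; \chi \;=\; S\!\Big(\sum_x p_x \rho_x\Big) \;-\; \sum_x p_x\,S(\rho_x) \;\le\; n\ \text{bits}, $$

so n qubits can never reliably convey more than n classical bits, even though writing down their quantum state takes exponentially many parameters.)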

Anyway, I’d be curious to know who at PI prepped Trudeau for this, and what they said.  Those with inside info, feel free to share in the comments (anonymously if you want!).

(One could also compare against Obama’s 2008 answer about bubblesort, which was just a mention of a keyword by comparison.)

Update: See also a Motherboard article where Romain Alléaume, Amr Helmy, Michele Mosca, and Aephraim Steinberg rate Trudeau’s answer, giving it 7/10, no score, 9/10, and 7/10 respectively.