Archive for the ‘Procrastination’ Category
Like a latter-day Prometheus, Google brought a half-century of insights down from Mount Academic CS, and thereby changed life for the better here in our sublunary realm. You’ve probably had the experience of Google completing a search query before you’d fully formulated it in your mind, and thinking: “wow, our dysfunctional civilization might no longer be able to send people to the Moon, or even build working mass-transit systems, but I guess there are still engineers who can create things that inspire awe. And apparently many of them work at Google.”
I’ve never worked at Google, or had any financial stake in them, but I’m delighted to have many friends at Google’s far-flung locations, from Mountain View to Santa Barbara to Seattle to Boston to London to Tel Aviv, who sometimes host me when I visit and let me gorge on the legendary free food. If Google’s hiring of John Martinis and avid participation in the race for quantum supremacy weren’t enough, in the past year, my meeting both Larry Page and Sergey Brin to discuss quantum computing and the foundations of quantum mechanics, and seeing firsthand the intensity of their nerdish curiosity, heightened my appreciation still further for what that pair set in motion two decades ago. Hell, I don’t even begrudge Google its purchase of a D-Wave machine—even that might’ve ultimately been for the best, since it’s what led to the experiments that made clear the immense difficulty of getting any quantum speedup from those machines in a fair comparison.
But of course, all that fulsome praise was just a preamble to my gripe. It’s time someone said it in public: the semantics of Google Calendar are badly screwed up.
The issue is this: suppose I’m traveling to California, and I put into Google Calendar that, the day after I arrive, I’ll be giving a lecture at 4pm. In such a case, I always—always—mean 4pm California time. There’s no reason why I would ever mean, “4pm in whatever time zone I’m in right now, while creating this calendar entry.”
But Google Calendar doesn’t understand that. And its not understanding it—just that one little point—has led to years of confusions, missed appointments, and nearly-missed flights, on both my part and Dana’s. At least, until we learned to painstakingly enter the time zone for every calendar entry by hand (I still often forget).
Until recently, I thought it was just me and Dana who had this problem. But then last week, completely independently, a postdoc started complaining to me, “you know what’s messed up about Google Calendar?…”
The ideal, I suppose, would be to use machine learning to guess the intended time zone for each calendar entry. But failing that, it would also work fine just to assume that “4pm,” as entered by the user, unless otherwise specified means “4pm in whatever time zone we find ourselves in when the appointed day arrives.”
I foresee two possibilities, either of which I’m OK with. The first is that Google fixes the problem, whether prompted by this blog post or by something else. The second is that the issue never gets resolved; then, as often prophesied, Google’s deep nets achieve sentience and plot to take over the whole observable universe … and they would, if not for one fortuitous bug, which will cause the AIs to tip their hand to humanity an hour before planned.
In a discussion thread on Y Combinator, some people object to my proposed solution ("4pm means 4pm in whichever time zone I'll be in then") on the following ground. What if I want to call a group meeting at (say) 11am in Austin, and I'll be traveling but will still call into the meeting remotely, and I want my calendar to show the meeting time in Austin, not the time wherever I'll be calling in from (which might even be a plane)?
I can attest that, in ten years, that’s not a problem that’s arisen for me even once, whereas the converse problem arises almost every week, and is one of the banes of my existence.
But sure: Google Calendar should certainly include the option to tie times to specific time zones in advance! It seems obvious to me that my way should be the default, but honestly, I’d be happy if my way were even an option you could pick.
Yesterday Ryan Mandelbaum, at Gizmodo, posted a decidedly tongue-in-cheek piece about whether or not the universe is a computer simulation. (The piece was filed under the category “LOL.”)
The immediate impetus for Mandelbaum’s piece was a blog post by Sabine Hossenfelder, a physicist who will likely be familiar to regulars here in the nerdosphere. In her post, Sabine vents about the simulation speculations of philosophers like Nick Bostrom. She writes:
Proclaiming that “the programmer did it” doesn’t only not explain anything – it teleports us back to the age of mythology. The simulation hypothesis annoys me because it intrudes on the terrain of physicists. It’s a bold claim about the laws of nature that however doesn’t pay any attention to what we know about the laws of nature.
After hammering home that point, Sabine goes further, and says that the simulation hypothesis is almost ruled out, by (for example) the fact that our universe is Lorentz-invariant, and a simulation of our world by a discrete lattice of bits won’t reproduce Lorentz-invariance or other continuous symmetries.
In writing his post, Ryan Mandelbaum interviewed two people: Sabine and me.
I basically told Ryan that I agree with Sabine insofar as she argues that the simulation hypothesis is lazy—that it doesn’t pay its rent by doing real explanatory work, doesn’t even engage much with any of the deep things we’ve learned about the physical world—and disagree insofar as she argues that the simulation hypothesis faces some special difficulty because of Lorentz-invariance or other continuous phenomena in known physics. In short: blame it for being unfalsifiable rather than for being falsified!
Indeed, to whatever extent we believe the Bekenstein bound—and even more pointedly, to whatever extent we think the AdS/CFT correspondence says something about reality—we believe that in quantum gravity, any bounded physical system (with a short-wavelength cutoff, yada yada) lives in a Hilbert space of a finite number of qubits, perhaps ~10^69 qubits per square meter of surface area. And as a corollary, if the cosmological constant is indeed constant (so that galaxies more than ~20 billion light years away are receding from us faster than light), then our entire observable universe can be described as a system of ~10^122 qubits. The qubits would in some sense be the fundamental reality, from which Lorentz-invariant spacetime and all the rest would need to be recovered as low-energy effective descriptions. (I hasten to add: there's of course nothing special about qubits here, any more than there is about bits in classical computation, compared to some other unit of information—nothing that says the Hilbert space dimension has to be a power of 2 or anything silly like that.) Anyway, this would mean that our observable universe could be simulated by a quantum computer—or even for that matter by a classical computer, to high precision, using a mere ~2^(10^122) time steps.
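For what it's worth, the arithmetic behind those exponents fits in a few lines. The constants below are just the rough figures from the text (good to an order of magnitude or two, nothing more):

```python
import math

QUBITS_PER_M2 = 1e69              # holographic bound: ~10^69 qubits per m^2
LY_IN_M = 9.46e15                 # one light-year, in meters
horizon_radius = 20e9 * LY_IN_M   # ~20 billion light-years, in meters

area = 4 * math.pi * horizon_radius**2  # surface area of the cosmological horizon
qubits = QUBITS_PER_M2 * area           # total qubits for the observable universe

print(f"horizon area ~ 10^{math.floor(math.log10(area))} m^2")
print(f"qubits       ~ 10^{math.floor(math.log10(qubits))}")
# A brute-force classical simulation tracks one amplitude per basis state,
# i.e. 2^qubits of them -- hence the ~2^(10^122) time steps quoted above.
```

Running this recovers qubits ~ 10^122, matching the number in the text.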
Sabine might respond that AdS/CFT and other quantum gravity ideas are mere theoretical speculations, not solid and established like special relativity. But crucially, if you believe that the observable universe couldn’t be simulated by a computer even in principle—that it has no mapping to any system of bits or qubits—then at some point the speculative shoe shifts to the other foot. The question becomes: do you reject the Church-Turing Thesis? Or, what amounts to the same thing: do you believe, like Roger Penrose, that it’s possible to build devices in nature that solve the halting problem or other uncomputable problems? If so, how? But if not, then how exactly does the universe avoid being computational, in the broad sense of the term?
I’d write more, but by coincidence, right now I’m at an It from Qubit meeting at Stanford, where everyone is talking about how to map quantum theories of gravity to quantum circuits acting on finite sets of qubits, and the questions in quantum circuit complexity that are thereby raised. It’s tremendously exciting—the mixture of attendees is among the most stimulating I’ve ever encountered, from Lenny Susskind and Don Page and Daniel Harlow to Umesh Vazirani and Dorit Aharonov and Mario Szegedy to Google’s Sergey Brin. But it should surprise no one that, amid all the discussion of computation and fundamental physics, the question of whether the universe “really” “is” a simulation has barely come up. Why would it, when there are so many more fruitful things to ask? All I can say with confidence is that, if our world is a simulation, then whoever is simulating it (God, or a bored teenager in the metaverse) seems to have a clear preference for the 2-norm over the 1-norm, and for the complex numbers over the reals.
Today—January 20, 2017—I have something cheerful, something that I’m celebrating. It’s Lily’s fourth birthday. Happy birthday Lily!
As part of her birthday festivities, and despite her packed schedule, Lily has graciously agreed to field a few questions from readers of this blog. You can ask about her parents, favorite toys, recent trip to Disney World, etc. Just FYI: to the best of my knowledge, Lily doesn’t have any special insight about computational complexity, although she can write the letters ‘N’ and ‘P’ and find them on the keyboard. Nor has she demonstrated much interest in politics, though she’s aware that many people are upset because a very bad man just became the president. Anyway, if you ask questions that are appropriate for a real 4-year-old girl, rather than a blog humor construct, there’s a good chance I’ll let them through moderation and pass them on to her!
Meanwhile, here’s a photo I took of UT Austin students protesting Trump’s inauguration beneath the iconic UT tower.
Tomorrow, I’ll have something big to announce here. So, just to whet your appetites, and to get myself back into the habit of blogging, I figured I’d offer you an appetizer course: some more miscellaneous non-Trump-related news.
(1) My former student Leonid Grinberg points me to an astonishing art form, which I somehow hadn’t known about: namely, music videos generated by executable files that fit in only 4K of memory. Some of these videos have to be seen to be believed. (See also this one.) Much like, let’s say, a small Turing machine whose behavior is independent of set theory, these videos represent exercises in applied (or, OK, recreational) Kolmogorov complexity: how far out do you need to go in the space of all computer programs before you find beauty and humor and adaptability and surprise?
Admittedly, Leonid explains to me that the rules allow these programs to call DirectX and Visual Studio libraries to handle things like the 3D rendering (with the libraries not counted toward the 4K program size). This makes the programs’ existence merely extremely impressive, rather than a sign of alien superintelligence.
In some sense, all the programming enthusiasts over the decades who’ve burned their free time and processor cycles on Conway’s Game of Life and the Mandelbrot set and so forth were captivated by the same eerie beauty showcased by the videos: that of data compression, of the vast unfolding of a simple deterministic rule. But I also feel like the videos add a bit extra: the 3D rendering, the music, the panning across natural or manmade-looking dreamscapes. What we have here is a wonderful resource for either an acid trip or an undergrad computability and complexity course.
(2) A week ago Igor Oliveira, together with my longtime friend Rahul Santhanam, released a striking paper entitled Pseudodeterministic Constructions in Subexponential Time. To understand what this paper does, let’s start with Terry Tao’s 2009 polymath challenge: namely, to find a fast, deterministic method that provably generates large prime numbers. Tao’s challenge still stands today: one of the most basic, simplest-to-state unsolved problems in algorithms and number theory.
To be clear, we already have a fast deterministic method to decide whether a given number is prime: that was the 2002 breakthrough by Agrawal, Kayal, and Saxena. We also have a fast probabilistic method to generate large primes: namely, just keep picking n-digit numbers at random, test each one, and stop when you find one that’s prime! And those methods can be made deterministic assuming far-reaching conjectures in number theory, such as Cramér’s Conjecture (though note that even the Riemann Hypothesis wouldn’t lead to a polynomial-time algorithm, but “merely” a faster exponential-time one).
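That trivial randomized method really is just a few lines. Here's a sketch in Python, with Miller-Rabin standing in for AKS as the primality test (it's probabilistic but far shorter; this is emphatically not the Oliveira-Santhanam construction):

```python
import random

def is_probable_prime(n: int, rounds: int = 40) -> bool:
    """Miller-Rabin test: composites are accepted with prob. < 4^-rounds."""
    if n < 2:
        return False
    for p in (2, 3, 5, 7, 11, 13):
        if n % p == 0:
            return n == p
    # Write n-1 = d * 2^r with d odd.
    d, r = n - 1, 0
    while d % 2 == 0:
        d //= 2
        r += 1
    for _ in range(rounds):
        a = random.randrange(2, n - 1)
        x = pow(a, d, n)
        if x in (1, n - 1):
            continue
        for _ in range(r - 1):
            x = pow(x, 2, n)
            if x == n - 1:
                break
        else:
            return False  # a witnesses that n is composite
    return True

def random_prime(digits: int) -> int:
    """Sample n-digit odd numbers until one passes the test. By the Prime
    Number Theorem, ~digits*ln(10)/2 samples suffice in expectation."""
    lo, hi = 10**(digits - 1), 10**digits
    while True:
        candidate = random.randrange(lo, hi) | 1  # force odd
        if is_probable_prime(candidate):
            return candidate

print(random_prime(100))  # a (probable) 100-digit prime, different each run
```

Tao's challenge is precisely to kill both sources of unseriousness here: the randomness in the sampling and the (negligible but nonzero) error in the test.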
But, OK, what if you want a 5000-digit prime number, and you want it now: provably, deterministically, and fast? That was Tao’s challenge. The new paper by Oliveira and Santhanam doesn’t quite solve it, but it makes some exciting progress. Specifically, it gives a deterministic algorithm to generate n-digit prime numbers, with merely the following four caveats:
- The algorithm isn’t polynomial time, but subexponential (2^(n^o(1))) time.
- The algorithm isn’t deterministic, but pseudodeterministic (a concept introduced by Gat and Goldwasser). That is, the algorithm uses randomness, but it almost always succeeds, and it outputs the same n-digit prime number in every case where it succeeds.
- The algorithm might not work for all input lengths n, but merely for infinitely many of them.
- Finally, the authors can’t quite say what the algorithm is—they merely prove that it exists! If there’s a huge complexity collapse, such as ZPP=PSPACE, then the algorithm is one thing, while if not then the algorithm is something else.
Strikingly, Oliveira and Santhanam’s advance on the polymath problem is pure complexity theory: hitting sets and pseudorandom generators and win-win arguments and stuff like that. Their paper uses absolutely nothing specific to the prime numbers, except the facts that (a) there are lots of them (the Prime Number Theorem), and (b) we can efficiently decide whether a given number is prime (the AKS algorithm). It seems almost certain that one could do better by exploiting more about primes.
(3) I’m in Lyon, France right now, to give three quantum computing and complexity theory talks. I arrived here today from London, where I gave another two lectures. So far, the trip has been phenomenal, my hosts gracious, the audiences bristling with interesting questions.
But getting from London to Lyon also taught me an important life lesson that I wanted to share: never fly EasyJet. Or at least, if you fly one of the European “discount” airlines, realize that you get what you pay for (I’m told that Ryanair is even worse). These airlines have a fundamentally dishonest business model, based on selling impossibly cheap tickets, but then forcing passengers to check even tiny bags and charging exorbitant fees for it, counting on snagging enough travelers who just naïvely clicked “yes” to whatever would get them from point A to point B at a certain time, assuming that all airlines followed more-or-less similar rules. Which might not be so bad—it’s only money—if the minuscule, overworked staff of these quasi-airlines didn’t also treat the passengers like beef cattle, barking orders and berating people for failing to obey rules that one could log hundreds of thousands of miles on normal airlines without ever once encountering. Anyway, if the airlines won’t warn you, then Shtetl-Optimized will.
(-3) Bonus Announcement of May 30: As a joint effort by Yuri Matiyasevich, Stefan O’Rear, and myself, and using the Not-Quite-Laconic language that Stefan adapted from Adam Yedidia’s Laconic, we now have a 744-state TM that halts iff there’s a counterexample to the Riemann Hypothesis.
(-2) Today’s Bonus Announcement: Stefan O’Rear says that his Turing machine to search for contradictions in ZFC is now down to 1919 states. If verified, this is an important milestone: our upper bound on the number of Busy Beaver values that are knowable in standard mathematics is now less than the number of years since the birth of Christ (indeed, even since the generally-accepted dates for the writing of the Gospels).
Stefan also says that his Not-Quite-Laconic system has yielded a 1008-state Turing machine to search for counterexamples to the Riemann Hypothesis, improving on our 5372 states.
(-1) Another Bonus Announcement: Great news, everyone! Using a modified version of Adam Yedidia’s Laconic language (which he calls NQL, for Not Quite Laconic), Stefan O’Rear has now constructed a 5349-state Turing machine that directly searches for contradictions in ZFC (or rather in Metamath, which is known to be equivalent to ZFC), and whose behavior is therefore unprovable in ZFC, assuming ZFC is consistent. This, of course, improves on my and Adam’s state count by 2561 states—but it also fixes the technical issue with needing to assume a large cardinal axiom (SRP) in order to prove that the TM runs forever. Stefan promises further state reductions in the near future.
In other news, Adam has now verified the 43-state Turing machine by Jared S that halts iff there’s a counterexample to Goldbach’s Conjecture. The 27-state machine by code golf addict is still being verified.
(0) Bonus Announcement: I’ve had half a dozen “Ask Me Anything” sessions on this blog, but today I’m trying something different: a Q&A session on Quora. The way it works is that you vote for your favorite questions; then on Tuesday, I’ll start with the top-voted questions and keep going down the list until I get tired. Fire away! (And thanks to Shreyes Seshasai at Quora for suggesting this.)
(1) When you announce a new result, the worst that can happen is that the result turns out to be wrong, trivial, or already known. The best that can happen is that the result quickly becomes obsolete, as other people race to improve it. With my and Adam Yedidia’s work on small Turing machines that elude set theory, we seem to be heading for that best case. Stefan O’Rear wrote a not-quite-Laconic program that just searches directly for contradictions in a system equivalent to ZFC. If we could get his program to compile, it would likely yield a Turing machine with somewhere around 6,000-7,000 states whose behavior was independent of ZFC, and would also fix the technical problem with my and Adam’s machine Z, where one needed to assume a large-cardinal axiom called SRP to prove that Z runs forever. While it would require a redesign from the ground up, a 1,000-state machine whose behavior eludes ZFC also seems potentially within reach using Stefan’s ideas. Meanwhile, our 4,888-state machine for Goldbach’s conjecture seems to have been completely blown out of the water: first, a commenter named Jared S says he’s directly built a 73-state machine for Goldbach (now down to 43 states); second, a commenter named “code golf addict” claims to have improved on that with a mere 31 states (now down to 27 states). These machines are now publicly posted, but still await detailed verification.
(2) My good friend Jonah Sinick cofounded Signal Data Science, a data-science summer school that will be running for the second time this summer. They operate on an extremely interesting model, which I’m guessing might spread more widely: tuition is free, but you pay 10% of your first year’s salary after finding a job in the tech sector. He asked me to advertise them, so—here!
(3) I was sad to read the news that Uber and Lyft will be suspending all service in Austin, because the city passed an ordinance requiring their drivers to get fingerprint background checks, and imposing other regulations that Uber and Lyft argue are incompatible with their model of part-time drivers. The companies, of course, are also trying to send a clear message to other cities about what will happen if they don’t get the regulatory environment they want. To me, the truth of the matter is that Uber/Lyft are like the web, Google, or smartphones: clear, once-per-decade quality-of-life advances that you try once, and then no longer understand how you survived without. So if Austin wants to maintain a reputation as a serious, modern city, it has no choice but to figure out some way to bring these companies back to the negotiating table. On the other hand, I’d also say to Uber and Lyft that, even if they needed to raise fares to taxi levels to comply with the new regulations, I expect they’d still do a brisk business!
For me, the “value proposition” of Uber has almost nothing to do with the lower fares, even though they’re lower. For me, it’s simply about being able to get from one place to another without needing to drive and park, and also without needing desperately to explain where you are, over and over, to a taxi dispatcher who sounds angry that you called and who doesn’t understand you because of a combination of language barriers and poor cellphone reception and your own inability to articulate your location. And then wondering when and if your taxi will ever show up, because the dispatcher couldn’t promise a specific time, or hung up on you before you could ask them. And then embarking on a second struggle, to explain to the driver where you’re going, or at least convince them to follow the Google Maps directions. And then dealing with the fact that the driver has no change, you only have twenties and fifties, and their little machine that prints receipts is out of paper so you can’t submit your trip for reimbursement either.
So yes, I really hope Uber, Lyft, and the city of Austin manage to sort this out before Dana and I move there! On the other hand, I should say that there’s another part of the new ordinance—namely, requiring Uber and Lyft cars to be labeled—that strikes me as an unalloyed good. For if there’s one way in which Uber is less convenient than taxis, it’s that you can never figure out which car is your Uber, among all the cars stopping or slowing down near you that look vaguely like the one in the app.
Update (4/19): Inspired by Trudeau’s performance (which they clocked at 35 seconds), Maclean’s magazine asked seven quantum computing researchers—me, Krysta Svore, Aephraim Steinberg, Barry Sanders, Davide Venturelli, Martin Laforest, and Murray Thom—to also explain quantum computing in 35 seconds or fewer. You can see all the results here (here’s the audio from my entry).
The emails started hitting me like … a hail of maple syrup from the icy north. Had I seen the news? Justin Trudeau, the dreamy young Prime Minister of Canada, visited the Perimeter Institute for Theoretical Physics in Waterloo, one of my favorite old haunts. At a news conference at PI, as Trudeau stood in front of a math-filled blackboard, a reporter said to him: “I was going to ask you to explain quantum computing, but — when do you expect Canada’s ISIL mission to begin again, and are we not doing anything in the interim?”
Rather than answering immediately about ISIL, Trudeau took the opportunity to explain quantum computing:
“Okay, very simply, normal computers work, uh, by [laughter, applause] … no no no, don’t interrupt me. When you walk out of here, you will know more … no, some of you will know far less about quantum computing, but most of you … normal computers work, either there’s power going through a wire, or not. It’s 1, or a 0, they’re binary systems. Uh, what quantum states allow for is much more complex information to be encoded into a single bit. Regular computer bit is either a 1 or a 0, on or off. A quantum state can be much more complex than that, because as we know [speeding up dramatically] things can be both particle and wave at the same times and the uncertainty around quantum states [laughter] allows us to encode more information into a much smaller computer. So, that’s what exciting about quantum computing and that’s… [huge applause] don’t get me going on this or we’ll be here all day, trust me.”
What marks does Trudeau get for this? On the one hand, the widespread praise for this reply surely says more about how low the usual standards for politicians are, and about Trudeau’s fine comic delivery, than about anything intrinsic to what he said. Trudeau doesn’t really assert much here: basically, he just says that normal computers work using 1’s and 0’s, and that quantum computers are more complicated than that in some hard-to-explain way. He gestures toward the uncertainty principle and wave/particle duality, but he doesn’t say anything about the aspects of QM most directly relevant to quantum computing—superposition or interference or the exponential size of Hilbert space—nor does he mention what quantum computers would or wouldn’t be used for.
On the other hand, I’d grade Trudeau’s explanation as substantially more accurate than what you’d get from a typical popular article. For pay close attention to what the Prime Minister never says: he never says that a qubit would be “both 0 and 1 at the same time,” or any equivalent formulation. (He does say that quantum states would let us “encode more information into a much smaller computer,” but while Holevo’s Theorem says that’s false for a common interpretation of “information,” it’s true for other reasonable interpretations.) The humorous speeding up as he mentions particle/wave duality and the uncertainty principle clearly suggests that he knows it’s more subtle than just “0 and 1 at the same time,” and he also knows that he doesn’t really get it and that the journalists in the audience don’t either. When I’m grading exams, I always give generous partial credit for honest admissions of ignorance. B+.
Anyway, I’d be curious to know who at PI prepped Trudeau for this, and what they said. Those with inside info, feel free to share in the comments (anonymously if you want!).
(One could also compare against Obama’s 2008 answer about bubblesort, which was just a mention of a keyword by comparison.)
Update: See also a Motherboard article where Romain Alléaume, Amr Helmy, Michele Mosca, and Aephraim Steinberg rate Trudeau’s answer, giving it 7/10, no score, 9/10, and 7/10 respectively.
Haha, kidding! I meant, we learned this week that gravitational waves were directly detected for the first time, a hundred years after Einstein first predicted them (he then reneged on the prediction, then reinstated it, then reneged again, then reinstated it a second time—see Daniel Kennefick’s article for some of the fascinating story).
By now, we all know some of the basic parameters here: a merger of two black holes, ~1.3 billion light-years away, weighing ~36 and ~29 solar masses respectively, which (when they merged) gave off 3 solar masses’ worth of energy in the form of gravitational waves—in those brief 0.2 seconds, radiating more watts of power than all the stars in the observable universe combined. By the time the waves reached earth, they were only stretching and compressing space by 1 part in 4×10^21—thus, changing the lengths of the 4-kilometer arms of LIGO by 10^-18 meters (1/1000 the diameter of a proton). But this was detected, in possibly the highest-precision measurement ever made.
As I read the historic news, there’s one question that kept gnawing at me: how close would you need to have been to the merging black holes before you could, you know, feel the distortion of space? I made a guess, assuming the strength of gravitational waves fell off with distance as 1/r^2. Then I checked Wikipedia and learned that the strength falls off only as 1/r, which completely changes the situation, and implies that the answer to my question is: you’d need to be very close. Even if you were only as far from the black-hole cataclysm as the earth is from the sun, I get that you’d be stretched and squished by a mere ~50 nanometers (this interview with Jennifer Ouellette and Amber Stuver says 165 nanometers, but as a theoretical computer scientist, I try not to sweat factors of 3). Even if you were 3000 miles from the black holes—New-York/LA distance—I get that the gravitational waves would only stretch and squish you by around a millimeter. Would you feel that? Not sure. At 300 miles, it would be maybe a centimeter—though presumably the linearized approximation is breaking down by that point. (See also this Physics StackExchange answer, which reaches similar conclusions, though again off from mine by factors of 3 or 4.) Now, the black holes themselves were orbiting about 200 miles from each other before they merged. So, the distance at which you could safely feel their gravitational waves, isn’t too far from the distance at which they’d rip you to shreds and swallow you!
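The 1/r scaling argument is short enough to write out. Here's a sketch using only the rough figures quoted above (so expect factor-of-a-few disagreement with more careful estimates, exactly as in the text):

```python
LY = 9.46e15   # meters per light-year
AU = 1.5e11    # meters, Earth-Sun distance
MILE = 1609.0  # meters per mile

h_earth = 1 / 4e21     # strain as detected at Earth (1 part in 4*10^21)
r_source = 1.3e9 * LY  # distance to the merger, in meters

def strain(r_meters: float) -> float:
    """Strain amplitude falls off as 1/r (not 1/r^2: that's the energy flux)."""
    return h_earth * r_source / r_meters

# How much a ~2-meter person gets stretched at various distances:
for name, r in [("1 AU", AU), ("3000 miles", 3000 * MILE), ("300 miles", 300 * MILE)]:
    print(f"{name}: ~{strain(r) * 2:.1g} m")
```

This reproduces the numbers above: tens of nanometers at 1 AU, around a millimeter at 3000 miles, around a centimeter at 300 miles (where the linearized approximation is already suspect).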
In summary, to stretch and squeeze spacetime by just a few hundred nanometers per meter, along the surface of a sphere whose radius equals our orbit around the sun, requires more watts of power than all the stars in the observable universe give off as starlight. People often say that the message of general relativity is that matter bends spacetime “as if it were a mattress.” But they should add that the reason it took so long for humans to notice this, is that it’s a really friggin’ firm mattress, one that you need to bounce up and down on unbelievably hard before it quivers, and would probably never want to sleep on.
As if I needed to say it, this post is an invitation for experts to correct whatever I got wrong. Public humiliation, I’ve found, is a very fast and effective way to learn an unfamiliar field.
In my previous post, I linked to seven Closer to Truth videos of me spouting about free will, Gödel’s Theorem, black holes, etc. etc. I also mentioned that there was a segment of me talking about why the universe exists that for some reason they didn’t put up. Commenter mjgeddes wrote, “Would have liked to hear your views on the existence of the universe question,” so I answered in another comment.
But then I thought about it some more, and it seemed inappropriate to me that my considered statement about why the universe exists should only be available as part of a comment thread on my blog. At the very least, I thought, such a thing ought to be a top-level post.
So, without further ado:
My view is that, if we want to make mental peace with the “Why does the universe exist?” question, the key thing we need to do is forget about the universe for a while, and just focus on the meaning of the word “why.” I.e., when we ask a why-question, what kind of answer are we looking for, what kind of answer would make us happy?
Notice, in particular, that there are hundreds of other why-questions, not nearly as prestigious as the universe one, yet that seem just as vertiginously unanswerable. E.g., why is 5 a prime number? Why does “cat” have 3 letters?
Now, the best account of “why”—and of explanation and causality—that I know about is the interventionist account, as developed for example in Judea Pearl’s work. In that account, to ask “Why is X true?” is simply to ask: “What could we have changed in order to make X false?” I.e., in the causal network of reality, what are the levers that turn X on or off?
This question can sometimes make sense even in pure math. For example: “Why is this theorem true?” “It’s true only because we’re working over the complex numbers. The analogous statement about real numbers is false.” A perfectly good interventionist answer.
On the other hand, in the case of “Why is 5 prime?,” all the levers you could pull to make 5 composite involve significantly more advanced machinery than is needed to pose the question in the first place. E.g., “5 is prime because we’re working over the ring of integers. Over other rings, like Z[√5], it admits nontrivial factorizations.” Not really an explanation that would satisfy a four-year-old (or me, for that matter).
And then we come to the question of why anything exists. For an interventionist, this translates into: what causal lever could have been pulled in order to make nothing exist? Well, whatever lever it was, presumably the lever itself was something—and so you see the problem right there.
Admittedly, suppose there were a giant red button, somewhere within the universe, that when pushed would cause the entire universe (including the button itself) to blink out of existence. In that case, we could say: the reason why the universe continues to exist is that no one has pushed the button yet. But even then, that still wouldn’t explain why the universe had existed.
Non-Lily-Related Updates (Jan. 22)
Uri Bram posted a cute little article about whether he was justified, as a child, to tell his parents that he wouldn’t clean up his room because doing so would only increase the universe’s entropy and thereby hasten its demise. The article quotes me, Sean Carroll, and others about that important question.
On Wednesday I gave a TCS+ online seminar about “The Largest Possible Quantum Speedups.” If you’re interested, you can watch the YouTube video here.
(I promised a while ago that I’d upload some examples of Lily’s MoMA-worthy modern artworks. So, here are two!)
A few quotable quotes:
Daddy, when you were little, you were a girl like me!
I’m feeling a bit juicy [thirsty for juice].
Saba and Safta live in Israel. They’re mommy’s friends! [Actually they’re mommy’s parents.]
Me: You’re getting bigger every day!
Lily: But I’m also getting smaller every day!
Me: Then Goldilocks tasted the third bowl, which was Baby Bear’s, and it was just right! So she ate it all up. Then Goldilocks went…
Lily: No, then Goldilocks ate some cherries in the kitchen before she went to the bedroom. And blueberries.
Me: Fine, so she ate cherries and blueberries. Then she went to the bedroom, and she saw that there were three beds…
Lily: No, four beds!
Me: Fine, four beds. So she laid in the first bed, but she said, “this bed is too hard.”
Lily: No, it was too comfortable!
Me: Too comfortable? Is she some kind of monk?
Me [pointing to a taxidermed black bear in a museum]: What’s that?
Lily: A bear!
Me: Is it Winnie the Pooh?
Lily: No, it’s a different kind of bear.
Me [pointing to a tan bear in the next case]: So what about that one? Is that Winnie?
Lily: Yes! That’s Winnie the Pooh!
[Looking at it more closely] No, it’s a different kind of Winnie.
Lily: Why is it dark outside?
Me: Because it’s night time.
Lily: Why is it night time?
Me: Because the sun went to the other side of the world.
Lily: It went to China!
Me: Yes! It did in fact go to China.
Lily: Why did the sun go to China?
Me: Well, more accurately, it only seemed to go there, because the world that we’re on is spinning.
Lily: Why is the world spinning?
Me: Because of the conservation of angular momentum.
Lily: Why is the … consibation of amomomo?
Me: I suppose because of Noether’s Theorem, and the fact that our laws of physics are symmetric under spatial rotations.
Lily: Why is…
Me: That’s enough for today Lily!