Coming to Nerd Central

While I’m generally on sabbatical in Tel Aviv this year, I’ll be in the Bay Area from Saturday Oct. 14 through Wednesday Oct. 18, where I look forward to seeing many friends new and old.  On Wednesday evening, I’ll be giving a public talk in Berkeley, through the Simons Institute’s “Theoretically Speaking” series, entitled Black Holes, Firewalls, and the Limits of Quantum Computers.  I hope to see at least a few of you there!  (I do have readers in the Bay Area, don’t I?)

But there’s more: on Saturday Oct. 14, I’m thinking of having a first-ever Shtetl-Optimized meetup, somewhere near the Berkeley campus.  Which will also be a Slate Star Codex meetup, because Scott Alexander will be there too.  We haven’t figured out many details yet, except that it will definitely involve getting fruit smoothies from one of the places I remember as a grad student.  Possible discussion topics include what the math, CS, and physics research communities could be doing better; how to advance Enlightenment values in an age of recrudescent totalitarianism; and (if we’re feeling really ambitious) the interpretation of quantum mechanics.  If you’re interested, shoot me an email and let me know which times don’t work for you; then the other Scott and I will figure out a plan and make an announcement.

On an unrelated note, some people might enjoy my answer to a MathOverflow question about why one should’ve expected number theory to be so rife with ridiculously easy-to-state yet hard-to-prove conjectures, like Fermat’s Last Theorem and the Goldbach Conjecture.  As I’ve discussed on this blog before, I’ve been deeply impressed with MathOverflow since the beginning, but never more so than today, when a decision to close the question as “off-topic” was rightfully overruled.  If there’s any idea that unites all theoretical computer scientists, I’d say it’s the idea that what makes a given kind of mathematics “easy” or “hard” is, itself, a proper subject for mathematical inquiry.

54 Responses to “Coming to Nerd Central”

  1. Shecky R Says:

    So will you and Scott Alexander actually be in the same room at the same time (or does one of you have to step inside a phone booth for the other one to appear)?

  2. Scott Says:

    Shecky #1: LOL! I have no clue how Scott Alexander does all of his stuff as one person (blogging, writing Unsong, being a practicing psychiatrist…), let alone my stuff as well. And there are even dozens of people who can attest to our having been in the same room (I attended some of his events in Michigan and in Cambridge MA).

  3. Haribo Freak Says:

    Can y’all make sure Donald Knuth shows up to the party?

  4. Scott Says:

    Haribo Freak #3: I’d email him an invite, but … oh, wait, he doesn’t use email, does he? 🙂

  5. anon Says:

    The strangest thing about MathOverflow is that there seems to be some kind of fundamentalist sect that has taken as its task defending the purity of MathOverflow from hordes of ‘off topic’ questions. I don’t think you would’ve been the first notable contributor to leave because of that. Luckily, the decision to close the question was overruled in this case.

  6. Sniffnoy Says:

    I can personally attest that Scott A. and Scott A. are not the same person!

    It seems odd to me for the bay area to be considered as “nerd central” even if these days there are more nerds there than somewhere I’d consider more centrally nerdy like, say, Boston/Cambridge. Because like on the one hand, like, the bay area’s kind of, well, fuzzy, isn’t it? And then on the other hand you’ve got all the startup people bringing a lot of suit-ish qualities.

  7. Scott Says:

    Sniffnoy #6: I suppose its status as Nerd Central was cemented when a large fraction of the world’s rationalist and effective altruist communities moved there—and not just to the Bay, but (apparently, I haven’t seen it firsthand yet) to one specific street in Berkeley.

    Yes, Boston/Cambridge would be the main other entrant in this competition.

  8. william e emba Says:

    Regarding your answer on MO: 3-manifolds have not been classified.

    Geometrization is a major step, but it doesn’t address the question of which hyperbolic 3-manifolds can show up in geometrization, and thus is not a classification. About ten years ago, a few years after Perelman, Agol-Calegari-Gabai classified hyperbolic 3-manifolds that have finitely generated fundamental groups. The full classification problem is unsolved.

    You may have been thinking of the Homeomorphism Problem (when are two 3-manifolds homeomorphic?), which Thurston claimed was in fact computable modulo geometrization. A year or two ago, Kuperberg explicitly confirmed Thurston’s statement (using Thurston-era techniques) and then improved it using newer methods, getting a bound that is a bounded tower of exponentials.

    For a striking example of the difference between the two, there is obviously a trivial algorithm for showing when two finite simple groups, as given by their multiplication tables, are isomorphic, but their classification is famously far from obvious.
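
    (To spell out the “trivial algorithm”: just try every bijection between the two element sets and check it against the multiplication tables. Exponential, of course, but manifestly an algorithm. A toy sketch, purely illustrative, in Python:)

        from itertools import permutations

        def tables_isomorphic(T1, T2):
            """Brute-force isomorphism test for groups given by multiplication tables."""
            n = len(T1)
            if len(T2) != n:
                return False
            for perm in permutations(range(n)):        # candidate bijection i -> perm[i]
                if all(perm[T1[i][j]] == T2[perm[i]][perm[j]]
                       for i in range(n) for j in range(n)):
                    return True
            return False

        # Z/3 (a simple group) with two different labelings of its elements.
        Z3  = [[0, 1, 2], [1, 2, 0], [2, 0, 1]]        # addition mod 3, identity 0
        Z3b = [[2, 0, 1], [0, 1, 2], [1, 2, 0]]        # same group, elements 0 and 1 swapped
        print(tables_isomorphic(Z3, Z3b))              # True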

  9. Scott Says:

    William #8: Thanks so much for that super-interesting clarification (even if it doesn’t materially change my argument)! Yes, the distinction between having a decidable homeomorphism problem and having a full classification is clear once you’ve pointed it out. Questions:

    – Can we give a formal characterization of what it means for a set of manifolds (or other objects) to have been “classified”? Can we at least say that a decidable homeomorphism problem is necessary for this, if not sufficient?

    – Is the classification problem for 3-manifolds recognized as a major open problem that people continue to work on?

  10. Shmi Says:

    You point out that “short computer programs already give rise to absurdly complicated behavior,” and that number theory is a lot like a universal computer. So… I wonder if you can offer any insights into why the field of computer science (and, by extension, number theory) has these very long complexity filaments stretching out who knows how far, and starting so close to the simple-to-understand “center”?

  11. I beg to differ on nerd central Says:

    Scott,

    I don’t know when you were last in the Bay Area for an extended period of time (i.e., beyond a visit lasting a few days), but calling the Bay Area “Nerd Central” doesn’t seem appropriate anymore, at least not in 2017. It lost that status during the 2007-2008 financial crisis.

    Up until then, it was truly a nerd’s paradise, filled with people interested in technology and hard science for their own sake. Then the rise of Google, Facebook, and Apple happened. Currently the Bay Area is “Hipster Central”. The kind of people who came here after the financial crisis are the kind who, during the 1990s or early 2000s, would rather have had careers as investment bankers or management consultants. While some see that as a positive development (particularly those who worship IQ testing), I think the influx of such people killed the Bay Area as I had known it, because they also brought with them the “investment banking values” that put greed ahead of everything else.

    At first I thought it was going to be a passing fad, but now I fear that the shift in the Bay Area’s demographics has been large enough to make the change, if not permanent, at least long-lasting. To be clear, I am not claiming that all nerd culture has disappeared, but if you go to the events, conferences, and meetups that one would call “geeky”, you’ll find mostly middle-aged and older people there. The hipsters are nowhere to be found. They are off somewhere plotting the next “get-rich-quick scheme”, a far cry from the times of the Homebrew Computer Club.

  12. Scott Says:

    Shmi #10: Well, we know that the Busy Beaver function grows uncomputably rapidly—which implies that the “maximum complexity” of proving mathematical statements of length n must grow in a “totally uncontrolled” way as a function of n. Indeed, Gödel already observed in 1936 that there can be no computable function f such that every theorem of ZFC that’s at most n symbols long has a proof at most f(n) symbols long, anticipating Rado’s definition of the Busy Beaver function by a few decades.
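
    In symbols (roughly, and glossing over how formulas and proofs are encoded): there’s no computable f such that

        \forall n \, \forall \varphi \, \bigl[ \, |\varphi| \leq n \;\wedge\; \mathrm{ZFC} \vdash \varphi \;\Longrightarrow\; \exists \pi \, ( \pi \text{ proves } \varphi \text{ in ZFC} \;\wedge\; |\pi| \leq f(n) ) \, \bigr].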

    I don’t know if that’s a full explanation for your “very long complexity filaments stretching who knows how far,” but it seems clearly relevant to it.

  13. Scott Says:

    beg to differ #11: Thanks for the useful insight! I’m actually happy to hear everything that’s screwed up about the Bay Area nowadays (of which there’s a lot, I know), because it makes me feel less regret about living somewhere else.

  14. Sniffnoy Says:

    Can we give a formal characterization of what it means for a set of manifolds (or other objects) to have been “classified”? Can we at least say that a decidable homeomorphism problem is necessary for this, if not sufficient?

    I honestly don’t think you can. How would you formally state the problem of classifying finite simple groups, for instance, in a way that doesn’t render it trivial?

    (And conversely, any computation-based definition will have the problem of being unable to handle classifications that contain a parameter varying over uncountably many possibilities.)

  15. Sniffnoy Says:

    Let me add on to my comment above, because I don’t think I wrote it clearly the first time. You have a set S of things you are trying to classify. I’d say a classification of S is a set T that we understand, and a surjection f:T->S that we understand, whose fibers we understand (I’d say further that ideally the fibers should be finite, though that might be asking too much in the general case). But I don’t think there’s any way you can formalize the “that we understand” condition without trivializing things. If we assume that we “understand” things that are computable, we mislabel hard problems as trivial (and mislabel as impossible problems involving uncountables).

    Maybe it’s possible to handle the countable case by restricting “what we understand” to a smaller class of functions than all computable functions. But I’m a bit doubtful of this; polynomial time already seems too expansive, considering all the things that can be computed in polynomial time. Something more like linear time seems more fitting. I’m doubtful you can even make this approach work, though (and then we get into issues of what that would even mean when the set we’re trying to classify is something like finite simple groups; the implicit “up to isomorphism” may be a significant and tricky point here…)

  16. atreat Says:

    Through the link to MathOverflow I found an old review you did of Wolfram’s book, in which there was a curious quote that seemed to imply you think physical observables must be fundamentally discrete, as opposed to the quantum state describing them. Do you still believe so? Do you think our world is non-continuous at the fundamental scale?

  17. Scott Says:

    atreat #16: I do still think physical observables should have a discrete set of possible values. The main reason to think that is the Bekenstein bound and related results from quantum gravity, while a secondary reason is that continuous observables would seem to open up the possibility of physical hypercomputers.

    But even assuming that’s right, I don’t know whether to say it means that “reality is fundamentally discrete”—that’s simply the old chestnut of whether you regard the wavefunction or the measurement results as “more fundamental.”

  18. DavidC Says:

    Sniffnoy #15: Linear time feels right to me too. A “classification” sounds like something you should just be able to write down and then read off the page – it shouldn’t take any more real work.

    So if we’re describing groups by generators and relations, maybe we say that you need to give each group a description whose size is linear in the smallest such description, and you need to produce that description in time linear in its size.

  19. Peter Morgan Says:

    If you do get ambitious enough to discuss the interpretation of the quantum, let it be about quantum fields, not yet more recycling of QM. If you like the math in the paper enough, though it may be too Nerd-y even for here, you might let your discussion be guided by my https://arxiv.org/abs/1709.06711.

    Abstract: “Manifestly Lorentz covariant representations of the algebras of the quantized electromagnetic field and of the observables of the quantized Dirac spinor field are constructed that act on Hilbert spaces that are generated using classical random fields acting on a vacuum state, allowing a comparatively classical interpretation of the states of the theory. For the quantized Dirac spinor field, the Lie algebra D of globally U(1) invariant observables can be constructed as a subalgebra of a bosonic raising and lowering algebra D⊂B (as well as the usual construction as a subalgebra of a fermionic raising and lowering algebra D⊂F). The usual vacuum state over D, constructed as a restriction of the usual vacuum state over F, can be extended (here, trivially) to act over B, and some elementary properties are presented.”
    How aggressively one interprets the math I present in this paper is very much up for discussion. I vacillate between the hard line of saying that QFT can be understood, even rather well, as a classical stochastic signal processing formalism (that’s an engineering-y terminology; in more math physics-y terminology, one would say a random field formalism), and the much more conservative realization that one works with exactly the same Hilbert space over the complex field, so there’s really not much to see here. In either case, however, I hope that in a few years people will feel it necessary to say about QFT only stuff and nonsense that is consistent with or at least informed by the math in this paper.

  20. Ashley Says:

    Scott,

    Mathematicians do not arbitrarily build up sentences and then check out their difficulty level. Is there something about the manner in which they actually come up with problems that particularly helps them in discovering such easy-to-state but difficult-to-prove problems? For example, if A => B is a famous problem, will B => A be more likely to be famous than an ‘arbitrarily picked’ one? Does the question-forming process improve the ‘probability distribution’? What would your experience tell you?

    (Note: I am not a mathematician, hence the question :-)).

  21. Martijn Says:

    Hi Scott! I would love to see your talk on Black Holes, Firewalls, and the Limits of Quantum Computers, but I cannot physically make it there.

    Do you know whether the Simons Institute will record and archive the event on their YouTube channel?

  22. Scott Says:

    Ashley #20: There’s always the trial-and-error method: if you happen on such a question, tell your colleagues about it, who will then (if they like it) tell their colleagues, etc., as with a dance spreading through a beehive that encodes the location of a promising new food source.

  23. Scott Says:

    Martijn #21: Yes, I’m pretty sure they’ll record it, but I don’t know details.

  24. william e emba Says:

    In general, “classification” within mathematics means one specifies a “moduli” or “parameter” space, along with a map onto the objects of interest, and with known multiplicities. If the map and space are easier to work with than the collection of objects, one has achieved progress.

    For example, computational complexity of finite simple groups is either done by working with the general notion, which sometimes works, or by invoking the Classification and assuming the group is alternating or of Lie type, as the complications of the sporadics are non-asymptotic. Meanwhile, people do come up with specialized algorithms for the sporadics.

    Computability is usually considered a bonus, and not a requirement for classification. (Usually.) Jordan canonical form classifies square matrices, and its utility is not lessened by questions of matrices with non-computable entries. Meanwhile, classifying two matrices together, now that is extremely difficult, and asking for computability here is missing the point. There are theorems to the effect that problems in algebra are either “tame” or “wild”, based on whether they turn on the two-matrix problem. Most mathematicians consider the wild cases to be unclassifiable, but Mumford in his Geometric Invariant Theory disagrees, saying he would be happy with a complicated moduli space and a reasonable map.

    In this vein, I do not like the common wisdom that 4-manifolds, which can have unsolvable word problems in their fundamental groups, are “unclassifiable”. A clearcut reduction to blackbox pathologies would be very informative, even though it can’t inform us of everything. Complexity theorists would love to map out the zone between P and NP and so on up the polynomial hierarchy and beyond. Why topologists just give up when faced with their own complexity challenge baffles me.

    A more extreme example is the Mandelbrot set, which classifies certain connected Julia sets. The Mandelbrot set is immensely complicated, but working with it instead of general Julia sets seems to be the correct approach.

    I don’t think topologists consider classification, per se, as their goal. They just find out more and more about 3-manifolds of more kinds as they go on. Classification would simply be an obvious corollary of the “important” stuff.

    For example, finite group theorists study the “extension problem”: what do you need to recover G from H and G/H? Solving that is equivalent to classifying finite groups.

    (I should probably have also credited Brock-Canary-Minsky along with A-C-G above. There were two deep theorems that together implied the indicated classification, and were proved at roughly the same time.)

  25. Ashley Says:

    Scott,

    My question was probably stated in a bit of a confusing manner. What I meant to ask was this:

    An individual (human) mathematician may come up with a number of problems ‘at his desk’. Some of them he/she would find are not so easy, and would share with colleagues, etc.; the others do not become as popular.

    Now imagine a computer program which, when presented with a list of definitions from some field of mathematics, lists out all sensible statements, up to some maximum size, that could be formed from them. Again a number of them would turn out to be easy, and the others not.

    My question was: will the distribution of the difficulty of the problems be about the same in both cases? Or will a higher percentage of difficult problems be discovered in the human case?

  26. Scott Says:

    Ashley #25: I don’t know, but I conjecture that yes, the human would generate a higher fraction of hard questions, because the computer could easily spend immense amounts of time enumerating trivial questions that a human would never even consider.

  27. Douglas Knight Says:

    When you threatened to boycott Mathoverflow, did you consider the asymmetry between questions and answers? If your alternative to posting questions on MO is to post them on your blog, there is little cost and little to lose by (also) posting them on MO. If they get closed, you just copy them to your blog. And you can copy answers, if there were any risk of deletion, or just to collect everything in one place.

    I suspect you have a moral principle enforcing symmetry: that you should not post questions, and thus encourage people to answer, if you would not answer yourself. Do you?

  28. Scott Says:

    Douglas #27: To be honest, I didn’t think it through that far! If there’s a moral principle involved, it’s simply that I no longer do free labor for any entity that acts like I need to justify my privilege of volunteering for it. Life’s too short.

  29. william e emba Says:

    Computers have been generating graph-theoretic conjectures for over thirty years now. That’s actually rather easy, and there are dozens of papers based on these conjectures. One considers a set of graph invariants and finds all relations that are satisfied by them on a large database of graphs. The hard part is that perhaps half of them are trivial.
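
    As a toy illustration of that pipeline (just a sketch, assuming Python with networkx; the invariants and the random-graph “database” are made up for the example, not taken from any of those papers):

        import itertools
        import networkx as nx

        # A small "database" of graphs.
        graphs = [nx.gnp_random_graph(n, p, seed=s)
                  for n in range(3, 9)
                  for p in (0.2, 0.5, 0.8)
                  for s in range(5)]

        # A few cheap invariants.
        invariants = {
            "edges": lambda G: G.number_of_edges(),
            "max_degree": lambda G: max(dict(G.degree()).values()),
            "triangles": lambda G: sum(nx.triangles(G).values()) // 3,
            "components": lambda G: nx.number_connected_components(G),
        }

        # Candidate relations "a <= b"; keep those no graph in the database violates.
        survivors = [f"{a} <= {b}"
                     for a, b in itertools.permutations(invariants, 2)
                     if all(invariants[a](G) <= invariants[b](G) for G in graphs)]

        # Most survivors are trivial (e.g. max_degree <= edges holds for every
        # simple graph) -- which is exactly the hard part mentioned above.
        print(survivors)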

  30. Foster Boondoggle Says:

    “I no longer do free labor for any entity that acts like I need to justify my privilege of volunteering for it.”

    There is no “entity”. There are a bunch of people with different interests, goals, values, agendas, etc. Your willingness and ability to explain complicated ideas in clear terms to the (interested) masses is widely appreciated. But there will always be some jerks in any group who do their best to spoil it for the rest. (Viz, campus no-platformers, Stein voters, purity trolls, etc.) No reason to give them a veto.

    Looking forward to seeing your talk in Berkeley next week. And as someone who’s been here since 1981, I can testify that the Bay Area is more crowded, with all that implies (homelessness, high housing costs, impossible commutes), but it’s still better than anywhere else, so you should still feel bad about not being here. I mean, where else can you be hiking in Tilden in the morning and hear a talk on black holes and computation in the afternoon?

  31. Scott Says:

    Foster Boondoggle #30: Thanks! Incidentally, I didn’t want to say this too loudly lest I give anyone ideas, but I really hope my talk in Berkeley won’t get disrupted by protesters, as other talks at Berkeley and elsewhere have been!

    Conditioned on that happening, I have a very hard time guessing whether the protesters would be feminists, antifa, anti-Zionists, white-nationalist antisemites, Trump supporters, quantum mechanics deniers, D-Wave investors, anti-Platonists, or people who believe that P=NP or that black hole evaporation is non-unitary.

  32. No need to worry about Trump supporters Says:

    Scott #31,

    Obviously Trump supporters are the last people you should be worried about.

    We Silicon Valley dwellers who voted for Trump, and who are delighted with everything he has been doing since he assumed office, are a very peaceful bunch. In fact, so peaceful that we have learned to be invisible to the irascible Hillary Clinton voters (who happen to be a majority around here) who remind us every day how much they hate living with him as president, and who would like to deport us from our own land. For some reason, none of the Hillary Clinton voters I know who promised to move to Canada if Trump ever became president followed through on their pledge. Too bad we are stuck with them for the foreseeable future. I think that if all of them left for Canada, the Bay Area would again become Nerd Central.

  33. Joshua Zelinsky Says:

    I may have to show up to protest. I have a lot of things to protest. Top of the list is you having too busy a schedule for us to be able to have you up for the seminar at UMaine. I’m also going to protest that you supported latkes over hamantashen at that MIT debate. And finally, I’m going to protest that you are organizing this event at a time and location where I’m not able to show up. Hmm, I guess I’m going to have to do my protesting remotely.

  34. William Hird Says:

    @Scott #31:
    (people who believe P=NP)

    Scott, here’s a hypothetical for you. Suppose that a thousand years from now humans have evolved to the point where, let’s say, there are half a dozen people in the world who can solve the TSP for 200 cities in their heads, instantly and without error, every time you give them an instance of the problem. Almost like the five or six people in the world now who can recall every day of their life (the actress Marilu Henner is one of them, I believe). Would you have to say that P=NP, because the brains of these people obviously have some algorithm for solving the TSP in polynomial time, even though non-biological computers still cannot?

  35. Scott Says:

    Joshua #33: LOL! I promise you I’ll get to UMaine one of these days, so you can protest me in person.

  36. Scott Says:

    William #34: No, P vs NP (by definition) is the strictly mathematical question of whether a deterministic, classical Turing machine can solve all NP problems in polynomial time—not the even bigger, broader question of whether all NP problems are efficiently solvable in the physical world. In your hypothetical scenario, I suppose we’d have strong evidence for the latter—but it would be up to us whether to interpret that as evidence for P=NP, or for human brains (or at least some human brains! 🙂 ) not being classical Turing machines.

  37. Joshua Zelinsky Says:

    Scott, unfortunately I’m no longer at UMaine; this year and next year I’m at Iowa State. Also, I don’t have input into the seminar and colloquium organizing here. But next year you’ll actually be in Texas, yes? In that case, I could talk to our organizers, since you won’t be so incredibly far away, and maybe we can work something out.

  38. Raoul Ohio Says:

    Three of our favorite topics — Trump, IQ tests, and Mensa — might collide in a perfect storm:

    https://www.usatoday.com/story/news/politics/onpolitics/2017/10/10/trump-says-hes-willing-compare-iq-tests-rex-tillerson/749112001/

    You can’t make this stuff up!!

  39. Raoul Ohio Says:

    #3 and #4: The best way to communicate with Donald Knuth is to find a few math mistakes in TAOCP, write up the corrections, mail them to him, and put your message in a PS at the end of the letter.

  40. Sniffnoy Says:

    Conditioned on that happening, I have a very hard time guessing whether the protesters would be feminists, antifa, anti-Zionists, white-nationalist antisemites, Trump supporters, quantum mechanics deniers, D-Wave investors, anti-Platonists, or people who believe that P=NP or that black hole evaporation is non-unitary.

    It’ll be Bell’s-Inequality-violation-deniers, for sure. 😛

  41. Raoul Ohio Says:

    Learning that Michael Cohen chose to work on hard problems in practical linear algebra encourages me to ask the theorists out there to consider one such problem, which seems hard to me.

    Definitions: Matrix B majorizes matrix A iff A \leq B entrywise, and the spectral radius of B, \rho(B), is the maximum of the absolute values of the eigenvalues of B.

    Raoul Ohio Challenge Problem: For a given n x n nonnegative matrix A, and a positive integer r less than n, find or otherwise characterize the rank r matrix B of minimum spectral radius that majorizes A.

    Note 1: Rank 1 is of special interest.

    Note 2: An easy solution to the related question, “Find the L2 closest B to A”, is given in Volume 1 of Horn & Johnson in terms of the Singular Value Decomposition.

    Note 3: If you attack with Lagrange multipliers, you will find “Matrix Differential Calculus” by Magnus & Neudecker to be useful. This invaluable reference is available as a .pdf at Magnus’s web site.

    Note 4: Alternative and similar versions of this problem are of interest.
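
    Note 5: Not part of the challenge, just a rough numerical sketch (Python with numpy, purely illustrative) of the easy bounds for the rank 1 case: since A \geq 0, any majorizer B \geq A has \rho(B) \geq \rho(A), while max(A) times the all-ones matrix is always a feasible rank 1 majorizer.

        import numpy as np

        def spectral_radius(M):
            return np.abs(np.linalg.eigvals(M)).max()

        rng = np.random.default_rng(0)
        n = 5
        A = rng.random((n, n))              # a random nonnegative test matrix

        lower = spectral_radius(A)          # any B >= A >= 0 entrywise has rho(B) >= rho(A)
        B = A.max() * np.ones((n, n))       # rank 1, majorizes A entrywise
        upper = spectral_radius(B)          # equals n * A.max()

        assert np.all(B >= A) and np.linalg.matrix_rank(B) == 1
        print(lower, upper)                 # brackets the optimal rank-1 value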

  42. atreat Says:

    Scott, “continuous observables would seem to open up the possibility of physical hypercomputers” … could you sketch this argument out or have a handy link?

  43. Scott Says:

    atreat #42: If you could measure time to infinite precision, you’d then need to explain why you couldn’t construct a computer that did the first step in 1 second, the second in 1/2 second, the third in 1/4 second, etc., so that it would do infinitely many steps in 2 seconds. Likewise, if physical observables were fundamentally continuous, you’d need to explain why infinitely many bits couldn’t be stored in a finite region of space.
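
    Explicitly, the arithmetic behind the “2 seconds” is just a geometric series:

        1 + 1/2 + 1/4 + \cdots = \sum_{k=0}^{\infty} 2^{-k} = 2 seconds,

    so the machine would have performed infinitely many steps by the 2-second mark.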

  44. Peter Morgan Says:

    Scott #43: Doesn’t the free energy cost of constructing computers at progressively smaller scales, specifically relative to human length, free energy, and entropy scales, rule out *us* making and being able to use infinitely finely constructed computers? Alternatively, one can just say that we talk about models, not about “how the world really is”, to resolve the problem. Alternatively again, even if the entropy of a finite region of space-time is infinite, one can still talk about other measures that might be finite, distinguishing, say, between the number of words in this comment and the perhaps infinite details of the chemistry and nuclear physics and turtles-all-the-way-down in my head that were wasted to make it (then we can talk about how much smaller my thought processes are than other people’s, and how IQ is or is not a good measure).

  45. Scott Says:

    Peter #44: Saying that we only talk about models, not the world as it really is, is obviously a cop-out, which contributes nothing whatsoever to resolving the problem. If your model says hypercomputing is possible, but you don’t think it’s possible in reality, then you need a better model, because your current model fails to explain something that it needs to explain.

    Yes, energy limitations and so forth are possible ways to resolve the problem, even in a universe with continuous observables. But crucially, the burden is on the advocate of such a theory to explain why the problem is not a mere engineering limitation, and why it makes their theory consistent with the physical Church-Turing Thesis even in principle. Either that, or else come out and clearly say they deny the CT Thesis, and ideally suggest how to build a physical hypercomputer. But don’t pussyfoot around the issue with circumlocutions.

  46. Atreat Says:

    Wow, I hadn’t expected the comp sci version of Zeno’s paradox!

    “But crucially, the burden is on the advocate of such a theory to explain why the problem is not a mere engineering limitation”…

    Forget comp-sci, why not just harken back to Zeno and place the burden on someone to explain how we walk from point a to point b since we are forced to do a binary search of the infinitely many points in between?

  47. Peter Morgan Says:

    Scott #45: I would say my approach to continuum physics more-or-less does not allow the axiom of choice. One can have infinite sequences, but they have to have finite constructions. One can have \pi, \sqrt{2}, closure of finitely constructed sequences in a norm, but nothing that makes measure theory fall apart. This doesn’t preclude always being able to introduce a more elaborate, better model (or a better theory). In particular in QFT, where measure theory is essential to allow a probability interpretation, allowing the axiom of choice is problematic. My apologies if this still seems like pussyfooting. I’m very much out of my depth, but does hypercomputing require the axiom of choice to get off the ground? I would take storing infinite information in a finite (or infinite) volume of space-time to require the axiom of choice.

  48. none Says:

    Don’t travel to the Bay Area this month. The air is full of smoke because of the wildfires and it’s at unhealthy levels everywhere. Wait for it to clear up.

  49. jonas Says:

    See also Scott Alexander’s announcement of the meeting at http://slatestarcodex.com/2017/10/12/ssc-meetup-bay-area-1014/ .

  50. Scott Says:

    none #48: Indeed, but alas everything is already scheduled! Maybe I should carry around a wet cloth to breathe through?

  51. Daniel Ziegler Says:

    Scott #50: Apparently N95 masks are the way to go. http://www.sfgate.com/bayarea/article/Will-your-face-mask-protect-you-from-wildfire-12267359.php

  52. Yovel Says:

    Scott-
    I’ve been a lurker on your blog for some months and I really like it.
    Do you intend to give lectures at Tel Aviv University in the next year? I would really like to hear one of your lectures in person (I’m an undergrad student at the Hebrew University, but I could easily make the trip).

  53. Scott Says:

    Yovel #52: Not only will I be lecturing at Tel Aviv University, I’ll also be doing so at Hebrew University! Currently scheduled for Wednesday Nov. 8 in the CS department — look it up!

  54. Yovel Says:

    Cool!
    I actually can’t find it. Do you happen to know where they published it?
