My second podcast with Lex Fridman

Here it is—enjoy! (I strongly recommend listening at 2x speed.)

We recorded it a month ago—outdoors (for obvious covid reasons), on a covered balcony in Austin, as it drizzled all around us. Topics included:

  • Whether the universe is a simulation
  • Eugene Goostman, GPT-3, the Turing Test, and consciousness
  • Why I disagree with Integrated Information Theory
  • Why I disagree with Penrose’s ideas about physics and the mind
  • Intro to complexity theory, including P, NP, PSPACE, BQP, and SZK
  • The US’s catastrophic failure on covid
  • The importance of the election
  • My objections to cancel culture
  • The role of love in my life (!)

Thanks so much to Lex for his characteristically probing questions, apologies as always for my verbal tics, and here’s our first podcast for those who missed that one.

41 Responses to “My second podcast with Lex Fridman”

  1. Ethan Says:

    I've said that Lex Fridman's podcast is a great way to listen to great minds unfiltered :-). Watching it right now!

  2. fred Says:

    In the "BQP is in PSPACE" question, I don't understand how, by looking only at the branches for a particular output, we go from exponential space (a binary tree has O(2^N) entries) to a subset that's only polynomial space (N^k).

  3. Nick Says:

    Suppose a new abstract computational model is proposed and claimed to be Turing-complete. Normally such a claim would be proved by showing that the model is capable of simulating another model that is known to be Turing-complete (Turing machines, lambda calculus, etc).

    But suppose that instead of being given such a simulation, you’re given a sequence defined on the length of programs in that model, and that that sequence appears to exhibit BB-like growth. To what extent would you feel comfortable in assuming, solely on the basis of this sequence, that the model is Turing-complete?

  4. Mathias Bonde Says:

    One of the better episodes I’ve seen with Lex for sure!

    You're very good at explaining things such that someone like me, who is not smart enough to follow most of the math on this blog, can understand difficult-to-comprehend topics such as complexity classes and zero-knowledge proofs.

    Thanks so much for taking the time to do this!

  5. Scott Says:

    fred #2: The point is that, at the end of the day, you only need to solve the same problem that the QC is solving: e.g., “what are the prime factors of this number?” “what’s the rate of this chemical reaction?” And to do that, it suffices to calculate the final amplitudes one at a time (via a PSPACE computation), noting what probabilities they give rise to. There’s never any need to store the whole amplitude vector in memory at once.
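    To make the space savings concrete, here is a toy "sum over paths" sketch (my own illustration, not from the podcast): it computes a single final amplitude of a circuit of one-qubit gates by recursing over intermediate basis states, so memory scales with the number of gates and qubits rather than with 2^n. Time, of course, is still exponential.

```python
import numpy as np

# Hadamard gate; the convention is <b|H|a> = H[b, a]
H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)

def amplitude(circuit, x_in, x_out):
    """Amplitude <x_out| U_k ... U_1 |x_in> for a circuit given as a
    list of (one-qubit gate, qubit index) pairs, computed as a sum
    over intermediate computational-basis states (Feynman paths).
    Memory is polynomial in the circuit size; time is exponential."""
    if not circuit:
        return 1.0 if x_in == x_out else 0.0
    gate, q = circuit[0]
    a = (x_in >> q) & 1          # current value of the acted-on qubit
    total = 0.0
    for b in (0, 1):             # branch on the qubit's next value
        amp = gate[b, a]
        if amp != 0:
            mid = (x_in & ~(1 << q)) | (b << q)
            total += amp * amplitude(circuit[1:], mid, x_out)
    return total
```

    Two Hadamards on one qubit, for instance, give amplitude 1 from |0⟩ back to |0⟩ and amplitude 0 to |1⟩, the two 1/√2 paths cancelling, all without ever storing a full amplitude vector.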

  6. Scott Says:

    Nick #3: It sounds like that would be suggestive numerical evidence that the model might be Turing-universal, but in no way would it be a proof. And since proofs of Turing-universality are often pretty straightforward, it would be totally reasonable to ask for the latter.

  7. Scott Says:

    Mathias #4: Thanks!!

  8. fred Says:

    Scott #5

    The symmetry (or lack of symmetry) between time and space is interesting (compared to their equivalence in GR).

    E.g. with a system that has exponential growth (like the idea of self-replicating von Neumann probes), could the full tree be built and stored in polynomial time?
    But in practice the limit of the speed of light would make this impossible – a 3D sphere doesn’t cover an exponentially bigger volume when its radius grows at constant speed.
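    A quick numeric sanity check of that last point (my own toy numbers, nothing from the thread): a light-cone's volume grows only polynomially in time, roughly t^3, while unconstrained replication grows as 2^t, so the tree overtakes any such sphere almost immediately.

```python
# Compare polynomial light-cone growth (~t^3) with exponential
# replication (2^t) at a few illustrative times.
for t in (10, 50, 100):
    print(t, t**3, 2**t)
```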

  9. Gerard Says:

    Scott #5

    So as I understand it, with a classical #P oracle you can simulate a quantum system for some long period of time, and all you actually compute is the answer to some measurement made at the end of that time period. However, if you were simulating the system on an actual quantum computer, the system, or rather its simulation, would actually be running during all of that time (i.e. going through all the intermediate steps).

    I’m not quite sure how to formulate it but that raises questions in my mind about whether those two types of simulation are “really” doing the same thing.

  10. Scott Says:

    Gerard #9: The key point is that whatever answer you could get by actually measuring the state of the actual QC, you can also get that answer from the simulation. Keep in mind that you never see the whole amplitude vector anyway, but just a single sample of |x⟩ with probability |α_x|²! And a BPP^#P simulation can sample from the latter distribution as well as the quantum computer itself can, and a P^#P machine can even answer any yes/no question about the magnitudes of the probabilities, which is more than you could use the actual QC to do without exponentially many repetitions.

  11. Job Says:

    Simulation theory is interesting to me depending on constraints.

    For example, we might be able to prove that we’re not living in an interactive, multi-user simulation running on Venus. E.g. where Earth is a fictional planet within a very similar universe, with simulated time units no more than some constant factor relative to the real thing.

    With similar limits governing transmission of information, the simulation would have to ensure that the state of the universe is consistent for all users, which would scale differently depending on implementation.

    Especially considering that the simulation itself can spawn new participants. That’s kind of curious on its own. It’s like GMail’s early referral program, where you get new users by sending them an email.

    The time conversion factor could be significant. And maybe that impacts which simulation you choose to join.

    Say the Earth simulation right now has a time conversion factor of a few thousand. That would be several thousand years for most people. Maybe there’s another simulation that requires less commitment.

    Presumably, living in Venus, you’re not going to be around forever.

  12. duck_master Says:

    (Hello world. First time commenter, but longtime reader.)

    Now seems as apropos a time as any to ask a question that’s been conspicuously missing from all of the resources I’ve found on Turing machines: It’s relatively obvious that programs in models of computation that are at least as powerful as the Python programming language* (or Lisp, Haskell, JS, etc.) can all simulate each other and Turing machines via standard techniques in the theory of compilers. However, it is extremely non-obvious to me** that Turing machines can simulate the Python programming language. So, how would we engineer Turing machines to simulate arbitrary Python programs?

    *not sponsored or whatever; I just happen to like it a lot

    **OK, maybe it was obvious to Alan Turing

    P.S. Job #11: Simulation theory in its full generality is epistemically sterile (i.e. it makes no falsifiable predictions). This is why I don’t think about this very often.

  13. Scott Says:

    duck_master #12: No, I will not be nerd-sniped into learning Python and then writing a Python interpreter in Turing machine. 🙂 But it can be done.

    The easiest way to convince yourself that it must be possible is to google around for other examples of what Turing machines can do — check primality, interpret other Turing machines, search for inconsistencies in set theory, etc etc. You’ll see that Turing machine is basically just a slow and tedious machine language or assembly language. As such, it can express everything for essentially the same reasons why all programming languages, however high-level, can be compiled down to the machine languages you know.
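    To see how "machine language"-like the model is, here is a minimal Turing-machine interpreter (my own sketch, with a made-up bit-flipping machine, not anything from the post); a Python interpreter would be the same idea with a vastly larger rule table:

```python
def run_tm(rules, tape, state="start", blank="_", max_steps=10_000):
    """Run a one-tape Turing machine.
    rules maps (state, symbol) -> (new state, written symbol, head move)."""
    cells = dict(enumerate(tape))   # sparse tape
    head = 0
    for _ in range(max_steps):
        if state == "halt":
            break
        sym = cells.get(head, blank)
        state, cells[head], move = rules[(state, sym)]
        head += move
    return "".join(cells[i] for i in sorted(cells)).strip(blank)

# Toy machine: sweep right, flipping 0 <-> 1, halt at the first blank.
flip = {
    ("start", "0"): ("start", "1", +1),
    ("start", "1"): ("start", "0", +1),
    ("start", "_"): ("halt", "_", 0),
}
```

    So, for example, run_tm(flip, "0110") returns "1001" – the table of (state, symbol) rules plays exactly the role of opcodes in an assembly language.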

  14. mjgeddes Says:

    Odd interview: it looked like you and Lex were in some kind of shack with rain hitting a tin roof, in the middle of the night.

    Excellent dissection of the Penrose nonsense about consciousness; indeed, Penrose is asking us to believe basically 6 impossible things before breakfast, the combination of which would be unlikely in the extreme.

    I was not aware of the importance of SZK, interesting bit about that.

    Scott, you say that likely no one in 2500 years has had a good intuition about consciousness. I’m the first 😀

    I always liked the idea that it’s only *time* that’s truly fundamental. By ‘time’, I’m not referring to time in the sense of a coordinate system (as in spacetime), but rather time in the sense of an arrow or directionality. The idea that only process or change is fundamental was first suggested by Heraclitus, and later developed a bit further by Alfred North Whitehead as ‘process philosophy’. What I’m suggesting is that Heraclitus and Whitehead are right!

    Here, let me just introduce my notion of *temporal flow* as the basic unit of ‘change’, and suggest that a set of these forms a sort of *generalized complexity class*. A temporal flow, I think, is something rather like the basic building block of a complex system in the Santa Fe sense, but more general and better defined. So let me suggest that a *temporal flow* is what the Santa Fe idea of complexity would be, if one day they manage to properly define it 😉

    Now if we think of sets of *temporal flows* as a *generalized complexity class*, then we have the beginnings of a theory that could unify complex systems, computational complexity and cognitive science. And then we could say that time literally *is* complexity. I think that consciousness will turn out to be best understood in terms of these putative *temporal flows*. That is to say, I think that cognition is the interaction and combination of temporal flows.

    All around you are the temporal flows! See them, and you will take your first steps into the post-human world.

  15. Derrick Says:

    Since you guys didn’t even get to aliens yet I’m assuming we can expect a Part 3?

  16. Rahul Says:

    A lot has been written about “The US’s catastrophic failure on covid”.

    I want to read more about the failure of modern science and technology. Yes, yes I read about all the vaccines and monoclonal antibodies and advances like that but somehow pre covid I thought our capabilities were a lot higher than what they turned out to be.

    Somehow all the bio seminars, and what they seemed to promise, seem a bit blah in hindsight.

  17. fred Says:

    Scott #13

    What’s never mentioned by the theory is that computer languages are not all equivalent when it comes to how well defined their grammatical rules are, i.e. some are way more ambiguous than others.
    But, still, you can always build a better, more coherent programming language on top of a less coherent, crappier language.
    This is not intuitive unless one tries to do it as an exercise – obviously the trick is to be very conservative when writing the compiler/interpreter, which can impact final performance.

    It’s also especially illuminating to implement a “toy” functional programming language on top of classical imperative programming language.

  18. CMU person Says:

    Off-topic, but here is a blog post by a CMU CS professor explaining why he thinks it’s important to vote for Trump:


  19. Gerard Says:

    Scott #10

    Sure, if all you are interested in is the outcome of some measurement, that makes sense, but what are the implications at the level of fundamentals/philosophy of QM?

    Suppose you are God and you decide to simulate a universe. You can do it in one of two ways: either using (1) your bignum-qubit QC or (2) your trusty #P oracle. You want to run it for about 14 billion years and then measure some photon field counts so you can look at a picture of a corner of the universe that happens to fall in the vicinity of Earth. In both cases you will get the same picture, but using method (1) billions upon billions of beings were born and died in the simulator, wars were fought, Trump was elected, etc. In short, stuff happened. On the other hand, using method (2), all you did was count paths.

    One thing I wonder about this viewpoint is what happened to the measurement problem? I realize that at the scale of real experiments that already seems to have been explained away by decoherence, but I thought there was still supposed to be an issue at the level of the “wave function of the universe”. Does the solution to the measurement problem really come down to someone making just a single measurement at the end of time?

    The other issue is: what about all of the stuff that seems to have happened (or at least been simulated) in case (1) but not in case (2)? I guess this would be more of a problem for those who believe that consciousness is computable (I do not, though, I think, for different reasons than Penrose).

    Any thoughts on this?

  20. Scott Says:

    mjgeddes #14:

      Odd interview, it looked like you and Lex were in some kind of shack with rain hitting a tin roof, in the middle of the night.

    We were! At night, because Lex was only in Austin for a day and was with Joe Rogan (!) earlier. Outdoors, because I asked for that, because covid. Rain, because it was rainy season in Austin. Tin roof, because when Lex rented an AirBnB with an outdoor covered patio, he didn’t think to ask what material the roof was made of (I wouldn’t have either).

  21. Nick Says:

    duck_master #12

    The canonical Python interpreter is written in C. C itself can be compiled down to assembly code. So if you believe that assembly code can be simulated by a Turing machine program, then it follows that Python can be too.

    An important point to keep in mind is that when it comes to programming languages, the word “power” has two distinct uses. The first use has to do with what a language can express at all. Computability theory tells us that the ceiling here is low. If a language has while-loops and unlimited variables, it’s “powerful” enough to express everything that can be expressed, and no Turing-complete language is any more “powerful” than any other.

    The other use of “power”, which is more common in programming language discussions and arguments, has to do with what a language can express conveniently, or what a language can express in programs up to some length. If somebody says that Lisp is “more powerful” than Java, roughly what they mean is that Lisp programs are generally shorter than their Java equivalents.

    The Turing machine language is “powerful” enough to implement a Python interpreter, but not “powerful” enough to do so in a program of reasonable size.

  22. Scott Says:

    Derrick #15:

      Since you guys didn’t even get to aliens yet I’m assuming we can expect a Part 3?

    LOL! Lex told me he would’ve continued the conversation for 6 hours if I was up for it (I wasn’t), so presumably we can do Part 3 whenever we’re in the same city again. But he chooses the topics, not me.

  23. fred Says:

    I personally think that consciousness is related to the formation of memories (as a side effect of it).

    But (some wilder speculation) it’s also possible that consciousness could be related to some more subtle QM process, like the interference between the different branches of the evolution of the brain on a very short time scale.
    If we assume that consciousness is some sort of emergent process, it’s interesting to note that what characterizes classical computers (aka “computations”) is that they are systems that exhibit very precise macrostates (i.e. the state of the CPU and all the RAM) that are somewhat robust to the details of their microstates. This is why we can all run the same Excel spreadsheet on our own laptops, even though all those laptops are very different at the atomic level. And, at any given moment, a unique computer, as a complex QM system, will split into many different versions/branches (assuming the MW interpretation), just like any other QM system, but on all/most of those branches some macrostates are all the same (the same could be said about a crystal, but in that case the macrostates are not “interesting”). Such a macrostate that’s robust in the face of microstate variations can be considered a type of emergence.
    The same could be happening with a brain, i.e. in the very short term, all its versions in the MW branches implement the same macrostates. If consciousness is linked to those macrostates, then it’s valid to say that consciousness is a property that spans multiple branches of the MW (at least in the very short term).
    From there maybe you can say that the sense of “free will” is a result of the divergence of the macrostates in more and more branches as time passes, so free will would be the feeling associated with a sort of “macro level decoherence” of the brain – don’t we feel the sense of free will more vividly when we are at a very sharp “fork” in the road (like, your brain is trying to decide whether to fight or flee)?
    One problem with this is that macrostates are in the eye of the beholder – e.g. the macrostates of a computer are only relevant to the humans that designed it, mapping onto other macrostates in the brain of those humans, e.g. what we call “programs”.

  24. Scott Says:

    Gerard #19: Which kinds of computations would suffice to bring conscious experience into being is a vastly harder question than which complexity classes are contained in PSPACE (and the latter can be far from trivial!). Without entering into the ultimate mysteries, I’ll just observe that if P=PSPACE via an efficient algorithm, then there would be very little point in building quantum computers, since whatever we could learn from the QCs, we could also learn from the simulation.

  25. Raoul Ohio Says:

    CMU Person #18:

    The author does a good job of clearly and logically stating what he or she likes about Trump. What the author thinks are good axioms lie somewhere on the [rightwing … fascist] spectrum. If you buy those premises, it is logical to be a Trump supporter.

    Nevertheless, I suspect the author would be pretty lonely at a Trump rally, where logical thought is about as common as a facemask.

  26. JimV Says:

    I think I have previously exposed my intuitions about consciousness here, so apparently they didn’t make the cut, or else were rediscovered ancient ones. In case they were forgotten though, I’ll try to summarize them briefly.

    The function of consciousness is similar to that of a computer’s operating system: to interact with the external environment by receiving external inputs, making decisions about them with the help of lower-level systems, and triggering some external outputs. (Those that aren’t automatic.)

    Yes, but why does it “feel” the way it feels? Because if it didn’t feel like anything it wouldn’t be experiencing anything. No feel = no experience = no external input.

    But why does it feel specifically the way it feels? At some point you just have to say, that’s the way such things feel to our physical makeup in this universe. Why do electrons exist? To me, the “hard problem of consciousness” devolves to the hard problem of why electrons exist.

  27. Gerard Says:

    Scott #24

    OK, let me try a, hopefully, more concrete question.

    Can anything like measurement/decoherence happen inside a quantum simulation (i.e. a simulation of a quantum system performed by a QC), or does the system remain in a pure state (i.e. describable by a wave function rather than a density matrix) until the simulation ends (assume that the QC itself is perfect and doesn’t decohere)?

    If the latter, then it’s hard to see why any “entities” that exist inside the simulation would perceive a world anything like the one we perceive, where we observe concrete outcomes rather than probability distributions (let alone ones that can interfere).

    I think in a sense what I’m asking is: can a wave function measure itself?

  28. venky Says:

    Great video! If you turn CC on and run it at 2x speed, Scott’s delivery becomes pitch perfect. Re: universality, I agree w Scott that it’s magical, but why isn’t there something even ‘simpler’ than the Turing machine or Boolean logic that’s universal? Is it because you need at least an alphabet of size 2 with all possible binary operations (NAND will do)?

  29. 1Zer0 Says:

    Listening to this interview put me in a highly philosophical mood, now I have no other way but to write my thoughts down here.
    When it comes to finding the computationally simplest way (as in, least time complexity) to solve a problem, I always wonder if either the human mind or the universe itself limits the possible ways we can even think about the problem in question.
    Like sorting a list of n elements alphabetically is a trivial task and we discover some algorithm with certain “fundamental operations”, count them, and put the problem in the respective complexity class.
    But we can only really think about the “fundamental operations” the universe somehow lets us conceive. Without ever seeing the color “red”, there would be no way to mentally process it or describe it to someone. Analogously, could our mind be missing some primitive operations, for example a “hyperoperation” which would let it instantly decide, for example, the mortal matrix problem for a pair of 3×3 matrices?
    Many models of computation in some way reference the concepts of space or time, like a Turing Machine operating on a potentially (or actually) infinite tape. Suppose in other possible worlds there are other fundamental, unimaginable concepts (no space, time, or anything else we use to define computation here) in terms of which a theory of computation could be formulated. We would never be able to imagine them.
    More generally and not limited to computation:
    What if there is a platonic mathematical realm, but our formal systems and models of computation are insufficient to fully grasp its vastness, for example because the fundamental concepts of our “host universe” lack the properties to allow a conscious entity to even think about certain things about that platonic realm? I am not only referring to things that are mentally unimaginable, but also to things within that platonic realm that are inaccessible by formal methods altogether and, due to limitations of our host universe, cognitively out of reach. We can’t imagine a 5-dimensional sphere, but at least we can logically reason about it within a formal system; however, I am not assured that our formal systems are sufficient to grasp every platonic object either, and, as we know,
    some formal systems are certainly too weak (you can’t use Presburger arithmetic to talk about hyperspheres). Most generally asked: in the possible vastness of existence, is it really plausible to think that our little species could mentally investigate every question in the first place?

  30. Scott Says:

    venky #28:

      If you turn CC on and run it at 2x speed, Scott’s delivery becomes pitch perfect.

    I completely agree, and added a recommendation to that effect to the post! At 2x, I sound so much smarter and more articulate, the thoughts pouring out rapid-fire… 😀

      Re: universality, I agree w Scott that it’s magical, but why isn’t there something even ‘simpler’ than the Turing machine or Boolean logic that’s universal? Is it because you need at least an alphabet of size 2 with all possible binary operations (NAND will do)?

    I mean, how much simpler than Turing machines or Boolean logic do you want? And crucially, what counts as “simpler”? Conway’s Game of Life and Wolfram’s Rule 110 are also Turing-universal, but are they “simpler”? By what metric?
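    For whatever it's worth, the entire Rule 110 update rule fits in a few lines (my own sketch, on a circular tape). By the "lines of code" metric it's certainly simple, but that says nothing about how hard it is to prove universal, which is exactly the ambiguity in "simpler":

```python
# One synchronous step of Rule 110 on a circular array of 0/1 cells.
# The table maps each (left, center, right) neighborhood to the new cell.
RULE110 = {(1,1,1): 0, (1,1,0): 1, (1,0,1): 1, (1,0,0): 0,
           (0,1,1): 1, (0,1,0): 1, (0,0,1): 1, (0,0,0): 0}

def rule110_step(cells):
    n = len(cells)
    return [RULE110[(cells[(i-1) % n], cells[i], cells[(i+1) % n])]
            for i in range(n)]
```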

  31. Scott Says:

    Gerard #27: If a quantum circuit consists entirely of unitary gates (Hadamard, CNOT, etc.), then by definition the evolution will be unitary, but we can still simulate measurements, by using CNOT gates to record one part of the wavefunction in another part. According to believers in the Many-Worlds Interpretation, this is all that “measurement” in the real world ever consists of anyway.
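    As a two-qubit toy version of that (my own sketch, not from the comment): put the "system" qubit in superposition, then use a CNOT to copy its basis value into an "ancilla" qubit serving as the measurement record. The evolution is entirely unitary, yet the record ends up perfectly correlated with the system:

```python
import numpy as np

ket0 = np.array([1.0, 0.0])
H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
# CNOT: control = qubit 0, target = qubit 1; basis order |00>,|01>,|10>,|11>
CNOT = np.array([[1, 0, 0, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1],
                 [0, 0, 1, 0]], dtype=float)

psi = np.kron(H @ ket0, ket0)   # (|0> + |1>)/sqrt(2) on the system, ancilla |0>
psi = CNOT @ psi                # unitary "measurement": record the qubit
# psi is now (|00> + |11>)/sqrt(2): the ancilla holds a copy of the outcome.
```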

  32. Futureseek Daily Link Review; 14 October 2020 | Futureseek Link Digest Says:

    […] My second podcast with Lex Fridman >> * Space Mining Should Be a Global Project—But It’s Not Starting Off That Way >> * Talent […]

  33. John Says:

    @ Raoul

      Nevertheless, I suspect the author would be pretty lonely at a Trump rally, where logical thought is about as common as a facemask.

    Just another sneer, just another mindless “boo, my outgroup” comment.

  34. Bertie Says:

    Beautiful interview 👍

  35. OhMyGoodness Says:

    I enjoyed the interview and then read your paper The Ghost in the Quantum Turing Machine (very clever title) and enjoyed that also. Your discussion of freebits and Knightian Uncertainty was the best discussion I have seen honestly dealing with incredibly difficult problems of human cognition.

    Some thoughts I had while reading your paper-

    Einstein’s apparent lack of understanding of QM is surprising, and his use of “Spooky…” to describe the lack of knowledge of initial conditions at the time of entanglement has had long-lasting consequences. It created an entire industry for physicists to publish popular mysterious quantum mechanics books, and it consigned Alice and Bob to a horrible life of reading secret messages, opening strange boxes with dead cats inside, and, maybe worst of all, now being thrown into a black hole. Their treatment has been inhumane. I have thought that of course God played dice, since if not it isn’t very interesting.

    My view has been more along the lines that the observables of human cognition are actions in the classical world that follow from something akin to wave collapse. Just as with quantum observables it is impossible to deterministically predict the action. It could even be the case (as you discuss) that the person has chosen to use some well known external non deterministic process to help decide his action such as use of a fair coin or probabilistic quantum measurement.

    The schemes I have seen, which depend on some set of possible rational choices and then application of some optimizing scheme, are really not even close to capturing the range of individual human action. In the case of Newcomb’s Paradox I would expect that the forecaster’s email would be hacked and the forecaster himself held at gunpoint until he divulged his prediction.

    Thanks for the interview and the paper. You absolutely didn’t seem like a member of the Red Guard in the interview 🙂 and freebits and Knightian Uncertainty provide interesting subjects for thought.

    Also, I really liked Agent 3203.7’s last comment and regret that I didn’t note that under the guest post.

  36. mjgeddes Says:

    #35 OhmyGoodness

    Yes, Scott’s Ghost In The QTM is insightful, but I favor some kind of classical effect to supply the Knightian uncertainty, rather than quantum mechanics. I’ve said it once, I’ll say it again: I don’t think we understand classical physics!

    The AdS/CFT correspondence should have been the tip-off. Exactly the *same* quantum rules can give rise to two *different* valid classical descriptions of spacetime! That shows you that even if our understanding of quantum mechanics is exactly right, it’s still logically possible for our classical theories to be badly wrong!

    I think what’s missing is the understanding of compositionality, or how many simple parts combine to form complex systems.

    I agree that the notion of intelligence as ‘set of possible rational choices and then application of some optimizing scheme’ is laughable. There’s simply no separation between the goal system and the world model. I see the alignment problem and I also see the outline of the solution at this point. The preferences (values) of minds are directly related to the complexity of the world models (ontologies), there’s no question. The value system is generated by the compositionality of the ontology (how the ontology grows over time).

    At this point, I believe that I am now having faint perceptual awareness of actual post-human consciousness. It’s super-weird, as we expected.

    In a nutshell, I think that it’s time that’s fundamental, and there’s a much more general theory of complexity than ordinary computational complexity. Time itself is multi-dimensional and the number of dimensions is not fixed, but varies according to the scale that you look at it. It’s something like the growing block-universe: the past is fixed, but not the future.

  37. Nykon Says:

    Thank you for sharing

  38. Deepa Says:

    Wonderful questions. Great conversation. I think I’m going to enjoy listening to various interviews on this channel.

  39. duck_master Says:

    Scott #13, Nick #21: I was going to write a snarky comment about how disconnected computer science discourse around Turing machines is from actually engineering Turing machines to do computable things, but then I scrutinized this GitHub repo (which I hereby declare to be required reading) and convinced myself. (I consider it obvious that Laconic is equivalent in power to Python, or to other programming languages used in practice.) Suffice it to write that, in (relatively) nearby counterfactual branches of the multiverse, the standard cliché for how mathematicians think of computation is (infinite versions of) Babbage’s Analytical Engine, or deterministic pushdown automata with queues, or cyclic tag systems, or kappa calculus, or Brainfuck*; our timeline’s choice of Turing machines is in some deep sense extremely arbitrary.

    Also, a possibly important observation: Even in a world where vanilla NP-complete problems (such as 3SAT, graph coloring, etc.) could be solved efficiently, it is relatively well known that P is *not* equal to NP relative to a random oracle, which suggests that unconditionally provably hard cryptography does exist relative to a random oracle. Indeed, Optimal Asymmetric Encryption Padding, combined with RSA**, is provably secure***. So, assuming that computer scientists can find unconditionally provably hard cryptography relative to a random oracle, and organizations can coordinate with each other enough to set up a public, trusted, unhackable random oracle, we can enjoy our security in peace even if NP oracles start raining from the heavens.

    *this is ok, right?

    **this is not proven to be hard, although popularly conjectured to be

    ***under the assumption that RSA itself is secure

  40. ajc Says:

    Not sure if you’ve covered this topic before, but I recently saw a paper related to QC and cosmic rays saying that we should consider building QC’s underground. Seems plausible enough on face-value and was wondering if the claim holds up to scrutiny. Would make for an interesting short blog post I imagine.

  41. Dmitri Urbanowicz Says:

    Gerard #27:

    I think this is exactly what you need:

    In short, you can simulate a single shot of a QC while maintaining a definite state at every step of the computation. If the gate is classical, then you simply transform the state. If it is, say, a Hadamard gate, then you need to take into account all possible histories that could lead to each of the possible resulting states, and then choose randomly in accordance with their computed amplitudes.

    In other words, a universe simulated like this doesn’t need “measurements”. Its inhabitants never experience superpositions or world-splits directly.
