## My video interview with Lex Fridman at MIT about philosophy and quantum computing

Here it is (about 90 minutes; I recommend the 1.5x speed)

I had buried this as an addendum to my previous post on the quantum supremacy lecture tour, but then decided that a steely-eyed assessment of what’s likely to have more or less interest for this blog’s readers probably militated in favor of a separate post.

Thanks so much to Lex for arranging the interview and for his questions!

### 85 Responses to “My video interview with Lex Fridman at MIT about philosophy and quantum computing”

1. New top story on Hacker News: My video interview with Lex Fridman (MIT) about philosophy and quantum computing – protipsss Says:

[…] My video interview with Lex Fridman (MIT) about philosophy and quantum computing 4 by furcyd | 0 comments on Hacker News. […]

2. fred Says:

Hi Scott, thanks for the link!

A question – do we have some understanding of why so many problems that seem tractable in 2D (2SAT, Matrix operations) are intractable once we add one more dimension (3SAT, Tensor operations)?
If not, could this be at the core of proving P!=NP?

thanks!

3. Vikram Chandrasekhar Says:

Hi Scott, fascinating interview. The best part, to me, was your beautiful overview of the difficulties in scaling up quantum computing capabilities while simultaneously ensuring good fidelity of qubits through error-correcting codes. I was wondering if a key missing piece today is someone like Shannon providing a (theoretical) blueprint of the minimum required error correction (e.g., the ratio of physical to logical qubits) at a given noise level. This would be a quantum analogue of Shannon’s pioneering work on information theory, which created a clear “can versus cannot” map for communication systems designers to build modern digital communication systems. Without this road map, it seems to me the current situation is akin to being in a dark room searching for a light switch.

4. fred Says:

Scott,
Regarding free will, you often go down the line that QM always injects some core randomness that makes the brain impossible to simulate even in principle.

But the irony is that computer science has achieved the exact opposite of that: computers are made of trillions and trillions of atoms subject to QM chaos, but we somehow figured out how to assemble them so that some of their macro states are perfectly reproducible (that’s why you and I can run the same Excel spreadsheet without worrying about whether every atom in our laptops is in the same state).

So, either the same actually holds for the human brain (only macro states matter), or classical computers will never have free will? …but maybe QC will?

5. William Hird Says:

Hi Scott,
You haven’t mentioned the most important part of your talk at MIT: when Daniela Rus was introducing you, she (unwittingly for her, humorously for us) misnamed your blog “Shtetl Optimization”, and you graciously glossed over her blooper by not publicly correcting her! So now we have a new term in computer science, “shtetl optimization”, but no phenomenon to receive the newly minted name. Should we have a contest to see who can find a computer science problem that we can call “shtetl optimization”? The winner should receive an all-expenses-paid trip to… President Donald Trump’s Second Inaugural Ball. 🙂

6. arch1 Says:

“Regarding free will, you often go down the line that QM always injects some core randomness that makes the brain impossible to simulate even in principle.”

Fred #4: This is straying OT, but since you teed it up so well – I’ve never understood how either deterministic or random behavior (or any mixture thereof) can give rise to anything deserving the label “free will.”

7. Scott Says:

fred #2: For each NP-complete problem, one can give an explanation for the “3,” although it will be slightly different explanations for different problems. For 3SAT, for example, the explanation is that we’re encoding universal Boolean circuits, and gates in Boolean circuits (like NAND or NOR) have 2 input bits and 1 output bit, and 2+1=3. A different, complementary explanation would arise from the algorithm that solves 2SAT in linear time (namely, depth-first or breadth-first search to look for a contradiction in the directed graph of implications).

No, I don’t think that this is a key to the P vs NP problem. I think it’s a property of certain NP-complete problems that arises in them for well-understood reasons and that doesn’t even scratch the surface of P vs NP.
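For concreteness, here’s one standard way to implement that linear-time 2SAT algorithm (a sketch in Python, using the strongly-connected-components formulation of the implication graph; the function and variable names are just illustrative):

```python
def solve_2sat(n, clauses):
    """n variables 0..n-1; each clause is a pair of literals, where
    literal 2*x means x and literal 2*x+1 means NOT x.
    Returns True iff the instance is satisfiable."""
    N = 2 * n
    adj = [[] for _ in range(N)]
    for a, b in clauses:
        adj[a ^ 1].append(b)  # NOT a  =>  b
        adj[b ^ 1].append(a)  # NOT b  =>  a

    # Iterative Tarjan SCC over the implication graph.
    index = [None] * N
    low = [0] * N
    comp = [None] * N
    on_stack = [False] * N
    stack, counter, ncomp = [], 0, 0
    for root in range(N):
        if index[root] is not None:
            continue
        work = [(root, 0)]
        while work:
            v, pi = work[-1]
            if pi == 0:
                index[v] = low[v] = counter
                counter += 1
                stack.append(v)
                on_stack[v] = True
            descended = False
            for i in range(pi, len(adj[v])):
                w = adj[v][i]
                if index[w] is None:
                    work[-1] = (v, i + 1)   # remember where to resume
                    work.append((w, 0))
                    descended = True
                    break
                elif on_stack[w]:
                    low[v] = min(low[v], index[w])
            if descended:
                continue
            if low[v] == index[v]:          # v is the root of an SCC
                while True:
                    w = stack.pop()
                    on_stack[w] = False
                    comp[w] = ncomp
                    if w == v:
                        break
                ncomp += 1
            work.pop()
            if work:
                u, _ = work[-1]
                low[u] = min(low[u], low[v])

    # Contradiction iff some x shares a component with NOT x.
    return all(comp[2 * x] != comp[2 * x + 1] for x in range(n))
```

The point of the implication graph is exactly the one above: a 2SAT clause (a ∨ b) is equivalent to the two implications ¬a ⇒ b and ¬b ⇒ a, so satisfiability reduces to a reachability/SCC question, with no universal circuit hiding anywhere.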

8. mjgeddes Says:

Excellent interview Scott,

Reading between the lines, and based on what you’ve said in the past, I sense that you’re fairly skeptical that big philosophical problems are soluble. But as you point out (correctly, I think), a lot of philosophy puzzles are tied in with the puzzle of consciousness, which is my pet philosophy topic 😉 So clear progress would be made if consciousness could be explained.

As I commented on Twitter, I now think that consciousness is explainable within a physicalist framework, and I’m quite confident of this. Chalmers’s ‘Hard Problem’, I think, is misleading, because it sets an unrealistic standard for what constitutes an explanation.

Philosophy does have its place, but what I now think it’s about is how we *model* reality: I think philosophy is about *ontology* – the concepts we use to describe reality, the different levels of model abstraction, and the mappings between them. In a nutshell, I’ll say that philosophical explanations are *translations* (mappings) between different levels of model abstraction. From this perspective, Chalmers’s ‘hard problem’ dissolves into a problem of ontology, rather than being about a fundamental aspect of reality.

You know I’d been kicking around some ideas about consciousness in the comments on this blog, gradually becoming more coherent over a period of time; now my ideas have suddenly sharpened quite substantially… I might just have the drop on consciousness 😀

I see a clear approach to a possible solution; although a proper scientific theory of consciousness is still not in sight, I feel I can state some basic principles that might put us in the ballpark of some sort of answer. In a recent tweet thread I stated 10 hypotheses, each individually quite plausible, which together constitute a sketch answer:

(1) Consciousness is a physical process; it’s a form of information processing. As the brain is a dynamical system, it’s complex systems theory I think we should look at. A dynamical system is *quasi-abstract*: it extends through *time* rather than space.

(2) The properties of a dynamical system are best described using statistical mechanics. And as a dynamical system extends through time, we want to look at the *temporal* properties, relating to the time evolution of the system.

(3) See ‘Thermodynamic Theory of the brain’ at link below:

https://technologynetworks.com/neuroscience/news/thermodynamic-theory-of-the-brain-aims-to-understand-consciousness-330383

“a general framework of how changes in free energy – the amount of energy available inside a system – help temporarily synchronize the activity in neural networks.”

“Energy is dissipated as more neurons become connected,”

(4) An intelligent system forms representations, so I go with higher-order theory here:

“consciousness consists in perceptions or thoughts about first-order mental states – phenomenal consciousness is thought to be higher-order representation of perceptual contents”

(5) The other feature of an intelligent system is that there are low-level representations, constituting sensory data, and high-level representations, constituting prior expectations.

Predictive coding is based on comparing high-level priors with low-level sensory info

(6) Combine the previous ideas: complex systems theory, statistical mechanics and the thermodynamic brain, higher-order theory and predictive coding:

Some dynamical systems (like the brain) will create a *self-representation* – a model of their own time-evolution.

(7) Low-level sensory data tells the brain what’s happening in the here-and-now (the physical state of the world).

But the brain can also generate *counterfactuals* – abstract models of the past and future (memory and imagination), of what *could* happen – generative models.

(8) I think the self-model is a model of time that combines the low-level sensory data about what’s happening right now (present moment) with the high-level counterfactual info (possible pasts and futures).

(9) What (1)-(8) imply is that consciousness is a new measure of *time* (an arrow of time) for dynamical systems.
It is an internal self-model or representation that describes the relationship between the present moment (the here-and-now), and the past&future.

(10) Think of your current self, and compare that with your past and future selves. Consciousness is what integrates these self-models together over time. It pinpoints the relationship between your *current* self, and your *past* and *future* selves.

Relating this back to the thermodynamic brain discussed at link above, I leave you with this quote:

“Energy is dissipated as more neurons become connected …Models using thermodynamic equations show that healthy and conscious states have a tendency toward greater dissipation.”

Consciousness is an arrow of time!

9. Scott Says:

Vikram #3: Yes, there’s still a gap of several orders of magnitude between the performance of the best known quantum fault tolerance schemes, and the known limits on what’s theoretically possible with such schemes. Closing that gap is widely recognized as one of the central open problems of the entire field, certainly among problems at the interface between theory and practice.

Having said that, the lack of a quantum analogue of Shannon’s theory is not the relevant holdup here. Since the 1990s, we have had a quantum analogue of Shannon’s theory; it’s just that it doesn’t answer this particular question. The brief reason why is that, in determining the overall performance of a quantum fault tolerance scheme, a lot matters besides the “Shannon quantities” like distance and rate: most notably, the computational difficulty of error detection and correction, and of applying quantum gates to the encoded qubits.

10. Scott Says:

fred #4 (and arch1 #6): Dude, did you even read The Ghost in the Quantum Turing Machine? 🙂

If you did, you’ll remember that the relevant issue is not the injection of randomness, which surely occurs but is surely insufficient for anything one could call libertarian free will. Instead, the relevant issue is the possible injection of Knightian uncertainty. That might or might not occur—one could regard that as a currently-open, partly-empirical question at the interface of neurobiology, chemistry, quantum mechanics, and cosmology (!!)—but if it did occur, I argue that it would very plausibly be relevant.

11. fred Says:

mjgeddes #8

(don’t want to be hijacking this thread with constant back and forth, so I’m not gonna post after this)

The difficulty about consciousness is that it’s entirely subjective (only consciousness is aware of itself, but it can’t probe very far into itself…), whereas the scientific method is based on building things up from basic measurable components and postulates, and there’s never anything “emergent” at all (in the way consciousness is emergent): every “macro” effect can always be reduced to, and explained by, the effects of its components, until you reach down to some basic “given” postulates, by definition.

So, as soon as you tie consciousness to something “physical”, say “vortices of a certain size in the quantum wave function” or “amount of self-similarity within a lump of atoms” or “consciousness is the physical process of forming memories”,… all such physical systems/measures can be simulated by a computer, so, regardless of what they are, it always all boils down to manipulating numbers, and then we’re just saying “some ways of manipulating numbers give rise to consciousness, as a magical emergent process”.

Another way to phrase this is that information processing is in the eye of the beholder.
The very notion of a computation can’t be absolute. We can’t measure the amount of computation going on in an arbitrary system.
Only humans recognize their own computers as such because these machines are an extension of their own consciousness, in the sense that we picture “software” as running “the hardware”, but it’s just so because we identify software as an extension of abstractions/concepts in our own minds. From the outside, it’s all atoms following the rules of physics in a “dumb” deterministic way – no matter how complex a system is, according to science, it’s still following the “dumb” deterministic rules of physics.

Maybe once we have advanced AIs, and once they claim to be conscious, they may be able to probe further into their own nature because they’ll be able to manipulate their own brain in ways we can’t.

12. Mark Says:

Hi Scott, I really enjoyed this interview so checked out your blog and read all the IIT stuff. Thanks!

I’m posting because I see you presented at a recent FQXi meeting at which Karl Friston was present. I know this isn’t the focus of your work but I am sure I would not be the only one interested to hear your thoughts on his ideas (if you’ve got any! If you’re so inclined!).

If not thanks for the stimulation anyway and I look forward to following your work.

13. Scott Says:

Mark #12: I attended Karl Friston’s talk, but I confess that I didn’t understand it well. With some things I don’t understand well (nuclear physics, representation theory), there are nevertheless short “certificates / NP witnesses of importance” that prove to me that the effort to understand them would be amply repaid. With other dense bodies of thought (orthodox Marxism, post-structuralism, much of theology, superdeterminist physics, IIT…) there have been “anti-certificates,” in the sense of clear implications drawn by the principals that struck me as so transparently off-base as to convince me that any months or years that I spent studying the theories would not be repaid, except possibly with “meta-insights” about where everything went off the rails. And then, alas, there are bodies of thought for which I’ve found neither certificates nor anti-certificates—like category theory, or programming language semantics, or Friston’s ideas. For those I simply wish the theorizers well, and wait around for someone who will show me why I can’t not study what they found.

By far the closest I got to “engaging” Friston’s ideas was to read Scott Alexander’s sequence of blog posts about them—but I don’t know whether Friston himself would regard that as having even landed me on the same continent.

14. fred Says:

Scott #7,

but how come, for many problems that have efficient solutions in 2 dimensions (like 2D matching, or NxN matrix eigenvalues), adding one more dimension (3D matching, or NxNxN tensor eigenvalues) causes such a massive jump in complexity?

Do we know of any type of problem that would be efficient in, say, 5 dimensions, but where going to 6 dimensions makes it intractable? Or does the jump always happen going from 2D to 3D?
But maybe I’m being misled by the name “dimension”; many problems only have solutions for some parameter smaller than k (like Fermat’s equation, which only has solutions for exponents smaller than 3).

Scott #10,
Hah, I did read The Ghost In the Quantum Turing Machine, but it was many years ago… time to read it again I guess! 🙂

My general point about computers and AIs is that, so far, we’ve never built one that we couldn’t replicate as many times as we wanted (we can always reverse engineer any system made of transistors), and as soon as we can replicate a system we can basically “simulate” it by just running its clone ahead of time for the same inputs… that will probably be very disturbing to the sense of free will of our future AI overlords!

15. Sandro Says:

arch1 #6:

I’ve never understood how either deterministic or random behavior (or any mixture thereof) can give rise to anything deserving the label “free will.”

That’s because most people hold either a physicist’s or a Christian’s understanding of this term, which has unfortunately muddied the philosophical debate around this topic.

The free will debate is roughly about whether “choice implies moral responsibility” has a coherent interpretation. This typically takes two main forms: incompatibilism (our choices must not always follow from antecedent causes for us to have free will), and compatibilism (whether our choices follow from antecedent causes is irrelevant to whether or not we have free will).

The trouble is, incompatibilism seems to reduce to requiring random choice, which violates the “will” part of “free will”, and compatibilism would seem to violate the “free” part, since it’s compatible with determinism.

However, it’s clearly nonsense to require absolute freedom in order to qualify as free will. We’ve never used the term in this fashion, as we must always make our choices subject to various constraints. One thing that we do always require, however, is the “will” – as in, we deliberately made a particular choice.

This is of course a crude summary of the subtle debates around this topic, but these days the majority of philosophers are compatibilists, and so believe that free will is compatible with determinism.

16. Sandro David Magi Says:

mjgeddes #8:

As I commented on Twitter, I now think that consciousness is explainable within a physicalist framework, and I’m quite confident of this.

You might be interested in this work: The attention schema theory: a mechanistic account of subjective awareness.

17. fred Says:

Sandro #15

Hah, but even if free will were a thing, one could always make the argument that nobody can be truly held responsible for anything, because no one controls what they’re handed at birth – which includes their own brain, and its ability to make the right choices, or to learn from the wrong ones, or its amount of willpower to change itself.

Some religions try to account for that, like Buddhism, which says that whatever bad cards you’re dealt in this life are a punishment for whatever poor choices you made in the prior life (which is really perverse, because it makes people born with disabilities responsible for their own bad luck)… but it really doesn’t solve anything besides moving the problem backwards in time, over and over… until you’ve worked your way back to the very first living cell that’s the ancestor of all the other living cells currently alive.

Although, if you interpret this broadly, there’s some truth to it:
Every “choice” you make in this life will influence your subsequent incarnations (it doesn’t have to be you in a “next” life, but simply you 10 years from now), because everything is connected, and basically what goes around comes around.
So you’re better off doing good deeds than committing bad deeds.
Of course, like anything else, you can’t be held accountable for being sensitive to this argument or not.

In the end it’s all about the quality of the ideas themselves, whether they propagate or not.
Ideas/philosophies are like seeds blowing in the wind, and as humans, we’re the temporary soil in which they land, and they will either blossom within us and create more seeds (which then propagate and bloom in more and more humans) or just die out.
Ideas themselves are expressions of the basic fundamental forces of the universe.

18. Aspect Says:

Scott I’m curious, what do you consider as a “certificate of importance” for a certain field of study? To me, it seems like a good sign when the insights generated in one field help lead to discoveries in other (adjacent) fields, but I’m not sure if this criterion is general enough.

BTW, about halfway through the interview I noticed that there was hardly any stutter or pause in your speech (at least none that stood out to me). Personally I’m not bothered when they happen, however it seems like there has been a vast improvement in your flow so some props are in order 🙂

19. Mark Says:

Scott #13: Thanks for the reply, the “certificate” concept is interesting and I realize I hoped you would be mine! Regards.

20. Scott Says:

Aspect #18: There are many possible “certificates of importance,” but the one you mentioned is an excellent example.

21. Gerard Says:

Scott #10

Why do you believe that randomness is not sufficient for free will, while “Knightian uncertainty” might be?

My understanding of the latter is that it refers to cases where it isn’t even possible to define a probability distribution for some variable (it might help my understanding if someone could provide an actual example of such a case).

Certainly the randomness in QM doesn’t behave that way, because one sometimes has enough information about a system to know its quantum state, which implies a probability distribution. If you consider an electron bound to an atom and in its ground state, for example, it seems reasonable to see the wave function as a kind of “soft constraint” on the behavior of the electron. However, as long as that soft constraint is met, the electron behaves randomly, meaning in a way that is unpredictable by any known theory. Why isn’t that enough to entertain the idea that the electron has some kind of free will? If you measure it at a certain position at a certain time, it’s because that’s where the electron “wanted” to be. That seems to me to be as reasonable a way as any other to give “meaning” to the kind of fundamental unpredictability we find in quantum mechanics.

Of course if you accept that notion the next problem is to explain how a large number of electrons with free-will “add up” to a human that thinks it has free will. I won’t attempt to address that question now but I’m curious about your response to the first question.

22. mjgeddes Says:

Sandro #16

Yes, the Attention-Schema Theory (AST) is an interesting theory by Michael Graziano, who is a very good neuroscientist. The idea is that the brain constructs a model of attention, and that’s the basis for consciousness. It could be argued that it has the virtue of simplicity, which is in its favor, but note that it’s enthusiastically endorsed by some members of the ‘Less Wrong’ rationalist community, which should immediately set off red flags 😉

I don’t believe AST myself. The trouble is, although attention is *correlated* with consciousness, from what I understand, the correlation is just not good enough to make me think that attention is the basis for consciousness. And the simplicity of the theory is also its weakness… consciousness is multi-faceted… I’d be amazed if something as simple as AST could account for all aspects of it.

23. Scott Says:

Gerard #21:

Why do you believe that randomness is not sufficient for free will, while “Knightian uncertainty” might be?

Probably the simplest answer is to read GIQTM—it might or might not convince you, but it certainly explains why I think what I do!

Note that once a quantum state is specified, the probability distribution over outcomes of any possible measurement on the state is completely determined by the Born rule, so is not subject to Knightian uncertainty. The latter arises only in situations where we don’t know which quantum state we’re dealing with—owing (if you trace it back far enough…) to our lack of full knowledge of the initial state of the universe, or of the part of the universe that we inhabit.

24. Will Sawin Says:

Fred #14: In mathematics you have a lot of things that work for small n and not large n. Computing things is generally very hard and the problems are basically allowed to take every chance they can get to work against you. So they tend to exhibit bad behavior for smaller values of n than in other fields of mathematics.

Another explanation might go by graphs. 3-way gates (2-in, 1-out) are enough to make a general circuit because trivalent graphs can have quite a general structure among graphs (in particular they can make arbitrarily branching trees). On the other hand, 2-way gates do not suffice because bivalent graphs can only be unions of cycles. And as Scott pointed out, the fact that you can make a universal computer is key to the problem’s difficulty.

I don’t know what problems of tensors you think are hard, but one problem is to get a complete set of invariants. Invariants of N x N tensors (up to the orthogonal group action, say) can be expressed as bivalent graphs, and invariants of N x N x N tensors can be expressed as trivalent graphs. So you can see that this problem is much harder for the same reason.

Scott #13: I am confused on the relationship between category theory and nuclear physics in your example. I guess it all hinges on the meaning of “amply repaid”.

I can think of three interpretations:

(1) – you want to be paid in information which is useful to you in your work
(2) – you want to be paid in information which is interesting
(3) – you want to be paid in information which is useful to someone for something

I guess (1) is a higher price than (2) and (2) is a higher price than (3).

It seems to me that you have strong evidence that nuclear physics meets (3) (nuclear power plants et al.), I’m not sure about (2), and probably not (1).

It also seems to me that you have strong evidence that category theory meets (3), and you lack evidence whether it meets (1) or (2).

Is the source of the discrepancy that you are sure nuclear physics meets (2), or that you aren’t sure category theory meets (3), or am I totally off base?

25. Scott Says:

Will Sawin #24: On reflection, you raise an excellent point about my choice of examples.

Neither nuclear physics nor category theory are likely to be useful for my work anytime soon. Meanwhile, both are obviously useful to someone for something.

So I suppose the key differentiator between them is indeed your (2). I feel confident that, if I studied nuclear physics, I’d learn a lot that I found personally extremely interesting, whereas I lack that confidence for category theory.

26. Plus Pi Says:

Nice interview! I recommend 1x viewing speed, to enjoy the conversation and get the most out of it. If you have to watch it at 1.5x, you don’t really have time to watch it – do your job or something else.

27. fred Says:

Will #24
Thanks! Very interesting; I will look into those gate concepts more closely (thanks to Scott also, who mentioned the same thing).

About most tensor properties being NP-Hard, I just ran into this paper
https://arxiv.org/abs/0911.1393
For example, eigenvalues are only “easy” to compute for a very small subset of them (so-called “odeco” tensors).

28. alpha Says:

@Scott #7: 2SAT involves clauses of the form $x \vee \neg y$, which also need universal gates (we need a NOT and an OR).

29. alpha Says:

If P=NP holds, is it still possible that P≠PP=BQP holds? Any reason to believe P=NP would not give P=PP? Any reason to believe P vs NP is a distinct problem from P vs PP?

30. Scott Says:

alpha #28: Right, but the relevant question for the reduction is not what logic operations you need to represent a k-SAT clause, but the opposite: how large k needs to be before a k-SAT clause can represent both the inputs and the output of a logic operation.
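To make the “2+1=3” point concrete, here’s a small sanity check (a sketch of mine, not something from the interview): the constraint z = NAND(x, y) is captured exactly by three clauses of width at most 3, which is why width-3 clauses suffice to relate both inputs and the output of a universal gate.

```python
from itertools import product

def clauses_hold(x, y, z):
    """The three 3SAT-style clauses encoding z = NAND(x, y)."""
    return ((not z or not x or not y)   # z  =>  not(x AND y)
            and (x or z)                # not x  =>  z
            and (y or z))               # not y  =>  z

# Brute-force check over all 8 assignments: the clauses hold
# exactly when z really equals NAND(x, y).
for x, y, z in product([False, True], repeat=3):
    assert clauses_hold(x, y, z) == (z == (not (x and y)))
```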

31. Scott Says:

alpha #29: Those are excellent questions. The short answer is that all the possibilities you mention—even P=NP≠BQP=PP—are theoretically consistent with current knowledge. And, building on the breakthrough by Raz and Tal, we even now know how to give oracles relative to which these bizarre possibilities hold, which is one way of saying that no “elementary” argument is going to rule them out. (I have a paper on my back burner that will make this implication of Raz-Tal explicit, along with other implications. In any case, if you just want an oracle relative to which P=NP≠PP, then you don’t need Raz-Tal; results from the 1980s will give what you need.)

In practice, if P=NP were actually shown, I’d probably throw my hands up and say “who the hell knows? Why not P=PP=PSPACE too?” But you’re talking to someone willing to bet good money that P≠NP≠BQP≠PP≠PSPACE. 🙂

32. Gerard Says:

Scott #31

“But you’re talking to someone willing to bet good money that P≠NP≠BQP≠PP≠PSPACE.”

I’m curious if anything that has happened “recently”, say in the last 10 years but feel free to expand that definition a bit if necessary, has changed your bets on any of those inequalities.

33. Scott Says:

Gerard #32: Not really.

34. alpha Says:

Well, for BQP, PP, and PSPACE there is the number-of-quantifiers barrier, which NP does not have. So even if P=NP, so what? That BQP and PP probably have a special place doesn’t seem that bizarre, no?

35. Scott Says:

alpha #34: Sorry, I didn’t understand the question. If P=NP, formally that means nothing for (e.g.) P vs PSPACE as far as we know. Informally, it means that our whole intuitive picture of complexity theory was wildly wrong.

36. Gerard Says:

Scott #35

If I understood correctly your response to one of my questions a while back, it is known that BQP ⊆ P^#P.

If that’s true P^#P seems like kind of a key link to include in the chain.

I also find it rather amusing that someone who knows nothing about quantum complexity theory could have a huge impact on the field by proving NP = P^#P, because that would imply BQP ⊆ NP = P^#P.

37. alpha Says:

I am just saying that perhaps P=NP is different from P=PP or P=BQP or P=PSPACE, which all have to be defined with a quantifier complexity that conjecturally grows as a monotone function $f(n)$ of $n$. P=NP says nothing about the growth of $f(n)$, which even at $\omega(1)$ fails to give a polynomial-time algorithm (https://cstheory.stackexchange.com/questions/5463/can-one-amplify-p-np-beyond-p-ph/37272#37272), and so perhaps P=NP is okay and might be an anomaly in the complexity-theoretic picture.

38. Scott Says:

Gerard #36: Keep in mind that P^PP = P^#P, meaning that PP and P^#P have “almost” the same power (some people even suspect exactly the same; I’m agnostic).

And yes, the famous open problems of quantum complexity theory are “sandwiched between” even famouser open problems of classical complexity theory!
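To spell out the sandwich (these are the standard containments, written out for reference; none of the inclusions is currently known to be strict):

```latex
\mathsf{P} \subseteq \mathsf{BPP} \subseteq \mathsf{BQP} \subseteq \mathsf{PP} \subseteq \mathsf{P}^{\#\mathsf{P}} \subseteq \mathsf{PSPACE},
\qquad \mathsf{NP} \subseteq \mathsf{PP}.
```

So separating (say) P from BQP would in particular separate P from PSPACE, which is why the quantum questions are at least as hard as the famous classical ones.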

39. alpha Says:

Would you entertain the view that quantifiers, and not nondeterminism, are of prime importance?

40. Gerard Says:

Scott #38

OK, I don’t know anything about probabilistic complexity theory either, but a priori it makes sense to me that PP and #P would be related because the mathematics of probability is really all about counting things.

41. Scott Says:

alpha #39: I don’t understand the question. In CS theory, “nondeterminism” means nothing other than a single existential quantifier. And you can define PSPACE as “like NP but with polynomially many quantifiers,” but that’s just one of many ways to define it. And it seems weird to debate which of these absolutely primitive concepts is of “prime importance”—why not just admit that they all are?

42. alpha Says:

Well, I think you are opinionated toward P≠NP. What I am saying is that quantified nondeterminism is inherently harder, not because of the nondeterminism part but because of the number of quantifications. P=PH might be possible while P=PSPACE is impossible, because of the number of quantifications necessary; and by that logic P=PP might be possible if the number of quantifications is O(1).

43. Scott Says:

alpha #42: OK. I mean, NP also involves polynomially many quantified Boolean variables—it’s just that the quantifiers are all of the same kind, rather than alternating universal and existential. And PP doesn’t involve universal or existential quantifiers at all, but only “counting quantifiers” (like with NP and unlike with PSPACE, the same kind for every variable).

44. fred Says:

I have a NP-Hard related question.

For 3SAT, with N variables, the number of assignments to the variables is 2^N.
The number of possible solution sets is 2^(2^N) (the number of boolean functions of N variables).
The number of possible clauses is 8*C(N,3) = 8*N*(N-1)*(N-2)/6.

So for N = 4, there are 2^16 possible solution sets, and a total of 8*C(4,3) = 32 possible clauses.
So you can create 2^32 possible sets of clauses (with lots of repetitions due to symmetries).
I was wondering how many of the 2^16 sets of solutions can be represented by those 2^32 possible sets of clauses.
Each clause excludes 2^(N-3) assignments at once; for N = 4, that’s 2. So there are clearly some solution sets you can’t cover with 3SAT (like accepting all assignments except just one).
I’m sure there’s a way to compute this precisely, but I just generated all the possibilities for N=4 and found that you can express 43,145 out of the possible 65,536 solution sets (66%).
I guess that ratio is decreasing exponentially fast for N > 4, because we have 2^(2^N) possible solution sets vs only 2^(O(N^3)) possible clause sets.
Is this “coverage” typical for all NP-hard problems?
I guess it makes sense that you can only express a tiny portion of all boolean functions because an NP-hard instance is a way to compress a boolean function into a size that’s much smaller than its truth table.
But what if you allow a generic boolean expression of N variables (instead of strict 3SAT-style clauses)? Isn’t that still NP-hard while covering all the possible solution sets? But I’m guessing the size of the expressions would be (on average) as big (in bits) as the truth table?
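For the curious, the enumeration can be done by closing the clause “satisfying sets” under intersection, since adding a clause to a CNF intersects its solution set with that clause’s. This is a sketch of mine (it should reproduce the 43,145 figure if my clause conventions — three distinct variables per clause — match fred’s):

```python
from itertools import combinations, product

def threesat_reachable_sets(n):
    """Return the set of solution sets (as bitmasks over the 2^n assignments)
    expressible as a conjunction of 3-literal clauses on distinct variables."""
    full = (1 << (1 << n)) - 1  # the empty CNF accepts every assignment
    clause_masks = []
    for vars3 in combinations(range(n), 3):
        for signs in product([False, True], repeat=3):  # True = negated literal
            mask = 0
            for a in range(1 << n):  # bit v of 'a' is the value of x_v
                if any((((a >> v) & 1) == 1) != s for v, s in zip(vars3, signs)):
                    mask |= 1 << a   # assignment 'a' satisfies the clause
            clause_masks.append(mask)
    reachable = {full}
    for c in clause_masks:  # adding a clause = intersecting satisfying sets
        reachable |= {s & c for s in reachable}
    return reachable

# n = 3: every clause excludes exactly one assignment, so all 256 subsets
# of the 8 assignments are expressible (100% coverage).
r3 = threesat_reachable_sets(3)
# n = 4: fred's reported figure is 43,145 of the 65,536 subsets.
r4 = threesat_reachable_sets(4)
```

Note that no reachable set for n = 4 has exactly 15 satisfying assignments: each clause excludes two assignments, so “all but one” is never expressible, matching the observation above.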

45. alpha Says:

That is also what I mean: PSPACE is polynomially many quantifications on polynomially many non-deterministic bits. Doesn’t the Raz-Tal oracle result signify an almost-linear number of universal and existential alternations for PP? Also, doesn’t NP=PP ask whether we can singly existentially quantify PP with polynomially many non-deterministic bits, and PH=PP ask the same with O(1) quantifications?

46. alpha Says:

It would be nice to know the difference between Bernie-style socialism and communism. You need to write a detailed blog post about this. You know your posts appear on news sites sometimes (perhaps you could connect it to classical versus quantum computing to highlight the differences, if you wish ;))

47. Gerard Says:

fred #44

You can take an arbitrary boolean circuit (which is typically much smaller than the truth table for the function it implements) and reduce the question of whether that circuit outputs 1 for at least one input to the same question for some 3-SAT instance whose size is polynomial in that of the boolean circuit. If I’m not mistaken you get 3 clauses per binary AND/OR gate (one with k=3 and two with k=2) and 2 k=2 clauses per NOT gate. You also get an extra variable per gate in addition to the input variables of the original circuit.

Hence the solution set of the 3-SAT instance is clearly not identical to that of the original boolean circuit (they can’t be because the number of inputs isn’t the same).
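That gate-by-gate reduction is the standard Tseitin transformation; here is a brute-force sketch (helper names and the toy circuit are mine):

```python
from itertools import product

def tseitin(gates, n_inputs):
    """Tseitin encoding: one fresh variable and O(1) clauses per gate.
    gates: list of ('AND'|'OR', a, b) or ('NOT', a) over earlier variables.
    Variables 1..n_inputs are circuit inputs; gate i gets variable n_inputs+i.
    Returns (clauses, output_var)."""
    clauses, v = [], n_inputs
    for g in gates:
        v += 1
        if g[0] == 'AND':   # v <-> (a AND b)
            _, a, b = g
            clauses += [[a, -v], [b, -v], [-a, -b, v]]
        elif g[0] == 'OR':  # v <-> (a OR b)
            _, a, b = g
            clauses += [[-a, v], [-b, v], [a, b, -v]]
        else:               # v <-> NOT a
            _, a = g
            clauses += [[a, v], [-a, -v]]
    return clauses, v

def satisfiable(clauses, n_vars, forced_true=()):
    """Brute-force SAT check, optionally forcing some variables to True."""
    for bits in product([False, True], repeat=n_vars):
        if all(bits[t - 1] for t in forced_true) and \
           all(any(bits[abs(l) - 1] == (l > 0) for l in c) for c in clauses):
            return True
    return False

# Circuit: out = x1 AND (NOT x2). NOT gate gets variable 3, AND gets variable 4.
clauses, out = tseitin([('NOT', 2), ('AND', 1, 3)], n_inputs=2)
```

Forcing the output variable true checks circuit satisfiability; as noted above, the extra variables mean the 3-SAT solution set only projects onto the circuit’s, rather than equaling it.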

48. Scott Says:

alpha #46: Socialism and communism are both about the aspiration to appropriate the means of production from corporations and wealthy individuals, and to put them into the hands of “the workers” or “the people.” But, like, there are aspirations and then there are aspirations. There’s: “we aspire to this, but once the reality sets in of how unachievable it is, we’ll gladly settle for Scandinavian-style capitalism with a strong social safety net and nationalized healthcare and free college; even that will be vociferously resisted by our political opponents who will still be around.” And then there’s: “we aspire to this, and when people resist us, we will shoot them or put them in gulags until they no longer resist.” While I’m neither a socialist nor a communist, one reason why I’d vote for Bernie over Trump in a heartbeat is that he’s pretty obviously in the former category.

49. Shmi Says:

Scott, I thought I remembered you talking about a petition for open access some time back, but your signature is not on http://oaintheusa.com/signatures.php. I assume there is a good reason for it.

50. Scott Says:

Shmi #49: I just signed it; thanks! The good reason why I hadn’t yet signed is that I hadn’t seen it.

It feels bizarre to cheer on the Trump administration to go ahead and do something that, in their minds, is probably “smashing the power of some arrogant scientific elites.” I feel like a bear cheering a hunter who happened to set his sights on a really evil bear. On the other hand, Trump will eventually be gone (please god, let it be in January 2021), whereas Elsevier et al. will maintain their stranglehold over scientific publishing forever until someone breaks the equilibrium.

The petition website really needs to clean up the list of signatories to eliminate duplicate entries, etc., and also make it possible to search by institution or field.

51. Gerard Says:

Scott #23

I have looked at GIQTM, though unfortunately I don’t have the time/energy to get through all 85 pages.

Nonetheless I do have one comment and one question.

You don’t seem to provide anything resembling a formal definition of what you call “Knightian uncertainty”.

There is this statement: “An agent in a state of Knightian uncertainty might describe
its beliefs using a convex set of probability distributions, rather than a single distribution.”

I must admit that a “convex set of probability distributions” is an object that I have no intuition for whatsoever. For example, how does normalization come into play there? I see that there is a reference on that sentence, but searching the corresponding document for the word “convex” found no matches.

Now for the question.

You are only the second person I have ever heard mention Dempster-Shafer theory.

I think you’ve done a lot of work with probabilistic algorithms and probabilistic analysis of algorithms. Is Dempster-Shafer theory something that you have ever found useful in that work (or anything else related to computational science)? Are you aware of any problem Dempster-Shafer theory solves better than more well-known approaches?

52. Scott Says:

Gerard #51: I do attempt a formal definition of Knightian uncertainty in the relevant sense, in Sections 12 and 14 (appendices), which it sounds like you didn’t get to. Other definitions are surely possible. “Convex sets of probability distributions” is just saying in math what might be rendered in English, “states of knowledge like the one where you know that the probability of some event is at least 60% and at most 80%, but you don’t know anything beyond that. Where the worst odds at which you’ll bet on some event don’t quite match the worst odds at which you’ll bet against it, there being an interval where you’ll bet neither for nor against.”
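To make that betting picture concrete, here’s a minimal sketch (my own illustration, not the formalism from the essay; a finite grid of distributions stands in for the convex set, whose extremes are what matter for the interval):

```python
def lower_upper_probability(distributions, event):
    """Given a set of distributions (dicts outcome -> prob) representing
    Knightian uncertainty, return the interval of probabilities that the
    set assigns to an event (a collection of outcomes)."""
    probs = [sum(p for o, p in d.items() if o in event) for d in distributions]
    return min(probs), max(probs)

# "The probability of rain is at least 60% and at most 80%, and nothing more":
beliefs = [{'rain': p / 100, 'dry': 1 - p / 100} for p in range(60, 81)]
lo, hi = lower_upper_probability(beliefs, {'rain'})
# A risk-averse agent bets on rain only at odds better than 0.6, bets
# against it only at odds better than 0.2, and in between takes no bet.
```

Each distribution in the set is individually normalized; the “Knightian” part is only that the agent refuses to single one of them out.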

No, I haven’t ever seen a need for Dempster-Shafer theory in my own work. But that theory came up at the first place I ever attended to give a talk (the ACM SIGIR’97 conference), and in any case I mentioned it only in passing in my essay, as another candidate formalism for Knightian uncertainty.

53. mjgeddes Says:

I read GIQTM all the way through. I’m fairly neutral as to whether it might be true or not – no strong views either way. I do think the ideas in GIQTM are entirely compatible with my own ideas on consciousness that I outlined in points (1)-(10) in post #8 above. In fact I think the freebits idea would fit right in, in a natural way.

The concept of ‘Knightian Uncertainty’ seems rather unclear, but appears to be related to imprecise probabilities? There’s a number of different ideas about various forms of uncertainty but no consensus. But I’m all for trying to extend probability theory, since I don’t believe Bayesianism and I don’t think that ‘understanding’ is reducible to ‘prediction’.

As I suggested above, “In a nutshell, I’ll say that philosophical explanations are *translations* (mappings) between different levels of model abstraction”. So is there some extension to probability theory that could capture this notion of ambiguities in translation between levels of abstraction?

The most promising candidate I can see currently is ‘Possibility Theory’, which is an extension of fuzzy logic developed by Lotfi Zadeh.

“Whereas probability theory uses a single number, the probability, to describe how likely an event is to occur, possibility theory uses two concepts, the possibility and the necessity of the event.”

https://en.wikipedia.org/wiki/Possibility_theory

54. Gerard Says:

What is missing from probability theory that requires it to be extended?

If you have quantifiable (in some way) uncertainty about which probability distributions describe a certain system, can’t you always create a new probability distribution that is a weighted average of all distributions in the set you’re considering?

For example, if you believe that a certain data set is described by a gaussian distribution but you aren’t sure about the sigma, having only some estimate of its probability distribution, can’t you just integrate (or sum, if the sigma distribution is discrete) P(x|sigma) over sigma, weighting by P(sigma)?
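That integration is exactly what Bayesians do, and one can check its effect numerically: the marginal of a zero-mean gaussian with uncertain sigma is no longer gaussian but heavier-tailed. A sketch (the grid and moment formulas are mine; for N(0, sigma^2), E[x^2] = sigma^2 and E[x^4] = 3 sigma^4):

```python
def mixture_moments(sigmas):
    """Even moments of an equal-weight mixture of zero-mean N(0, sigma^2):
    E[x^2] = mean(sigma^2), E[x^4] = 3 * mean(sigma^4)."""
    m2 = sum(s ** 2 for s in sigmas) / len(sigmas)
    m4 = 3 * sum(s ** 4 for s in sigmas) / len(sigmas)
    return m2, m4

# Uncertain sigma, uniform over a grid from 0.5 to 2.0:
sigmas = [0.5 + 1.5 * i / 100 for i in range(101)]
m2, m4 = mixture_moments(sigmas)
excess_kurtosis = m4 / m2 ** 2 - 3
# A single gaussian has excess kurtosis 0; the mixture's is strictly
# positive, i.e. averaging over P(sigma) fattens the tails.
```

So the averaging is always possible, but the result can look qualitatively different from any member of the family you started with.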

55. Scott Says:

Gerard #54: What you say is fine as long as your uncertainty is always quantifiable—or as some would put it, as long as you’re willing to take at least one side of any possible bet. But some of us are more risk-averse! There are bets that we don’t want to take either side of! When you try to model such situations formally, you end up with the convex sets of probability distributions that I was talking about.

56. Gerard Says:

Scott: #54

But if you have enough information to define such a convex set don’t you automatically have enough to reduce it to a single distribution ?

For example if all you know about a certain parameter are upper and lower bounds isn’t it clear that you should assume a uniform distribution over that range, and if it’s a parameter that controls some family of distributions, to integrate them over that range for the parameter ?

57. Scott Says:

Gerard #56: A billion times no! No, just because you’re ignorant about something, doesn’t mean you get to assume a uniform distribution over all the possible values it could have (which, ironically, would be asserting an extremely specific kind of knowledge). On that question, ironically, I’m in total agreement with the hardcore Bayesians. I merely go one step further than them, in holding that you might not get to assume any other specific probability distribution either.

Any time one forgets this, it might help to remember the activist worried that the LHC was going to destroy the world, who when interviewed by John Oliver, earnestly explained that since the LHC might destroy the world and might not destroy it, the only rational conclusion was that it had a 50% chance of destroying the world. “I … don’t think that’s how probability works,” Oliver responded.

58. Peter P. Says:

The nice thing about rejecting free will (i.e., endorsing superdeterminism) is that Bell experiments no longer have any implications for nonlocality or realism, which demystifies quantum mechanics.

59. Scott Says:

Peter #58: Right, but once you’ve accepted “superdeterminism,” you’ve bought into a cosmic nonlocality that’s a billion times worse than the one you were trying to explain away, so who cares anymore about the Bell inequality?

60. Gerard Says:

Scott #57

Your example doesn’t really convince me that uniform priors are an unreasonable way to represent actual ignorance.

It seems to me that what’s happening in that example is that the person is choosing to ignore a huge amount of available evidence that pointed to the probability of that particular coin coming up tails being vanishingly small and therefore arrived at a very poor performing predictive model for the outcome of the experiment.

61. Sniffnoy Says:

Gerard: I don’t think your idea makes any sense if you try to formalize it. If all you have is some convex set of probability distributions (over some fixed set), how do you define a uniform probability distribution over these? (And what notion of distance are you using in order to make your one-parameter family special case work?)

62. maline Says:

I also just tried re-reading GIQTM, and failed to make heads or tails of what “Knightian uncertainty” is supposed to mean, or why it would be relevant to anything.

As far as I can tell, economists use the term for situations where you can’t apply things like the Central Limit Theorem, because there is no way to divide things into a sum of many statistically independent components. These situations are dangerous because the tails of the probability distribution you will end up having to use are much heavier than in the comfortable situations where the CLT applies. If they say things like “there is no probability distribution”, I can only assume they are speaking imprecisely.

As a Bayesian (not committed as to whether priors can be made objective), I understand probability, in most contexts, to mean “degree of belief”. Every uncertainty that someone may have is a probability distribution, by definition! “Uncertainty with no probability distribution” makes as little sense to me as “colorless green ideas”.

And yes, if you have absolutely no evidence relevant to some question, then your distribution will just be your prior, which will usually be roughly uniform. That is not “asserting an extremely specific kind of knowledge”, it is identically the admission of ignorance.

But the case of “absolutely no evidence” is extremely rare. In general, if we are considering some system, we do have some level of experience with other systems that are at least a bit analogous. For instance, if we see a string of ones and zeros printed on a slip of paper, we should not use a uniform distribution on all bit strings – we know that machines that print out stuff usually do so for some programmatic reason, and so the probability of a string should be negatively related to some measure like its Kolmogorov complexity. On the other hand, machine-generated bit strings usually encode information, so extremely simple strings should have a lower probability. The correct Bayesian distribution takes all such issues into account. In particular, we have good reason not to treat the bits as statistically independent – in sharp contrast with the case where we truly knew nothing!

So what we can say is that using uniform probability is not an assertion of knowledge, but it is a confident, perhaps hubristic, assertion of true, pristine ignorance.

Anyway, bottom line, you seem to be using the concept of “probability” in a way that I’m not familiar with. Would you mind trying again to explain what you mean?

63. maline Says:

[Continuing my previous comment]

Come to think of it, perhaps we can just drop the issue of “types of uncertainty”. The central claim of GIQTM seems to be that the predictions made by an agent that learns through a specified class of “nondestructive interventions” will necessarily be less informative than those made by a Laplace Demon (even if the demon is limited to knowing the quantum state and not any Bohmian-style hidden variables, which would make its predictions perfect). “Less informative” can probably be taken to mean “boundedly higher entropy”, if both predictors present their results as (conditional) probability distributions over detailed sets of behaviors.

Does this correctly capture the main idea, or does the “Knightian probability” issue play a more critical role?

The question of relevance still eludes me, though. It would probably help if we had at least one suggestion for what “free will” might mean, so that we could try to decide whether it has anything to do with “the impossibility for a physical agent to make the best conceivable predictions”. But I don’t really see even a potential connection there.

Specifically, if being uploaded to cyberspace makes it possible for someone to predict my behavior because the “initial condition quantum randomness” is no longer relevant, I see that as a security issue but little more. It’s not as if this would have any effect on what thinking and making choices with my simulated neurons would feel like, as long as the quantum randomness is simulated by RNG’s with distributions that are similar, on a very coarse-grained level, to the empirical distributions taken from the past history of such events in my meat brain. If being simulated did have some effect on experience, it would have to be because of a change in metaphysical “mind stuff”, unrelated to predictability.

64. Gerard Says:

Sniffnoy #61

“If all you have is some convex set of probability distributions (over some fixed set), how do you define a uniform probability distribution over these?”

By defining a probability distribution over the parameters of the distributions in the set.

As a concrete example consider a family of binomial distributions, defined by a single parameter, the probability of success: p. Suppose that the only information we have about p is that it lies in the interval [0.3,0.7]. Now despite Scott’s objections let’s assume for sake of argument that it makes sense to represent our uncertainty about the value of p as the uniform distribution over that interval.

I claim that the outcomes of this experiment will be described by a single binomial distribution represented by p* = mean( [pmin,pmax] ) = 0.5.

This should be very easy to simulate and check. For each trial first choose a value for p from the uniform distribution then sample from the binomial distribution using that value for p.

I think that’s the exact same way you would simulate sampling from a mixture distribution, though the words used to describe the two situations appear different.

Of course I’m certainly open to the possibility that everything I just said here is completely wrong, I’m certainly no expert on probability theory or statistics.
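The check can in fact be done exactly rather than by sampling (a sketch of mine, with a fine grid standing in for the uniform distribution on p). It suggests the reduction works for a single trial, but that for n > 1 trials per experiment the mixture is overdispersed relative to Binomial(n, 0.5):

```python
from math import comb

def mixed_binomial_pmf(n, k, p_grid):
    """P(k successes in n trials) when p is first drawn uniformly from p_grid."""
    return sum(comb(n, k) * p ** k * (1 - p) ** (n - k) for p in p_grid) / len(p_grid)

p_grid = [0.3 + 0.4 * i / 1000 for i in range(1001)]  # uniform on [0.3, 0.7]

# n = 1: the mixture collapses to a single Bernoulli with p* = mean(p) = 0.5,
# so the claim holds exactly for one trial.
one = mixed_binomial_pmf(1, 1, p_grid)

# n = 2: Binomial(2, 0.5) gives P(k=1) = 0.5, but the mixture gives ~0.473;
# averaging over p pushes mass toward the extreme counts.
two = mixed_binomial_pmf(2, 1, p_grid)
```

So the mixture and the single distribution agree on the mean but not on higher moments once there is more than one trial per draw of p.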

65. Raoul Ohio Says:

Ramen noodles, a “not very hard” example of “three is a lot harder than two”:

Most readers will be familiar with Ordinary Differential Equations (ODE’s), where you study systems governed by an ODE of form x’ = f(x), where x is a time dependent vector, and x’ (the time derivative) tells how the vector is changing as time moves forward.

For x in 2D, solutions to ODE’s can be thought of as paths on a 2D surface. At any point (x,y), the ODE determines the direction of motion, so each point follows the path it is on. Therefore one solution path cannot cross over another solution path, and the entire batch of paths is like a single layer of uncooked ramen noodles. Since they do not cross, any path is pretty simple, and you can qualitatively solve the system by sketching some curves.

For x in 3D, paths can cross over each other, tie themselves in knots, etc. This is like ramen that has been cooked and stirred up, so paths can be arbitrarily complicated, i.e., there are “chaotic solutions”.

66. Raoul Ohio Says:

Oops! Ramen noodle example is slightly confusing for beginners due to both x and (x,y) being used to represent the point in 2D. Improved version:

w’ = f(w), with w = (x,y) in 2D and w = (x,y,z) in 3D.
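The cooked-ramen case is easy to see numerically with the Lorenz system, the classic 3D chaotic ODE (this sketch and its parameter choices are mine, using the standard sigma=10, rho=28, beta=8/3):

```python
def lorenz_step(w, dt, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    """One RK4 step of w' = f(w) for the Lorenz system, w = (x, y, z)."""
    def f(w):
        x, y, z = w
        return (sigma * (y - x), x * (rho - z) - y, x * y - beta * z)
    def add(w, k, h):
        return tuple(wi + h * ki for wi, ki in zip(w, k))
    k1 = f(w)
    k2 = f(add(w, k1, dt / 2))
    k3 = f(add(w, k2, dt / 2))
    k4 = f(add(w, k3, dt))
    return tuple(wi + dt / 6 * (a + 2 * b + 2 * c + d)
                 for wi, a, b, c, d in zip(w, k1, k2, k3, k4))

# Two trajectories starting 1e-8 apart: in 3D, paths can wind around each
# other, and tiny differences get stretched apart exponentially.
a, b = (1.0, 1.0, 20.0), (1.0 + 1e-8, 1.0, 20.0)
for _ in range(2000):  # integrate to t = 20 with dt = 0.01
    a, b = lorenz_step(a, 0.01), lorenz_step(b, 0.01)
separation = sum((ai - bi) ** 2 for ai, bi in zip(a, b)) ** 0.5
```

In 2D no such behavior is possible: by the noodle argument (formally, the Poincaré-Bendixson theorem), bounded planar trajectories can only settle into fixed points or closed orbits.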

67. fred Says:

Peter #58

I don’t think that rejecting free will means accepting superdeterminism in the way Scott usually presents it. That type of superdeterminism also states that the initial conditions of the universe were such as to make every QM experiment produce an outcome (through correlation) that happens to follow the rules of QM. Meanwhile, de Broglie and Bohm created a deterministic framework that supposedly produces the same outcomes as QM without the need for that sort of “conspiracy” of correlation.

Rejecting free will simply recognizes that the brain is a quantum system and nothing more. I.e. all outputs of the brain depend on its prior state(s) and the inputs. So, a brain doesn’t have any real “choices”, because none of its fundamental particles have any choice either.
There’s the question of micro vs macro system in QM: if two particles taken in isolation follow a certain type of causality, and then considering those two particles as one system makes it behave in a different manner (like with entanglement), that bigger (macro) system still follows causality. I don’t think that entanglement means that a macro system can “override” the dynamic evolution of its parts. And of course any “system” is in the eye of the beholder, hence some confusion around systems within systems in QM.

68. fred Says:

Raoul #66

Thanks!
What’s also interesting is that when you look at the list of Platonic solids and their analogues (regular polytopes) in different dimensions, the pattern is pretty counterintuitive (more dimensions don’t always mean more richness):

1 dim: 1
2 dim: infinity
3 dim: 5
4 dim: 6
5+ dim: 3

69. Sniffnoy Says:

Gerard #64: What parameters?? Who said anything about parameters? All I see is you coming up with ways to handle special cases where everything can be thought of as lying in R^n when suitably interpreted. What about when that’s not the case?? All we have to work with here is a set S, I guess a sigma algebra on S determining what’s measurable?, and a convex set of probability distributions.

70. Scott Says:

fred #67:

I don’t think that rejecting free will means accepting superdeterminism in the way Scott usually presents it.

That’s an understatement. 🙂 Free will and determinism are questions that human beings have argued about for thousands of years, and that I could imagine them arguing about for thousands more. Whereas “superdeterminism,” I hope and expect, will someday be no more argued about than (say) the wrong ideas of Immanuel Velikovsky.

71. Gerard Says:

Sniffnoy: #69

I haven’t even seen a formal definition of what a “convex set of probability distributions” is. The only convex sets I’m familiar with are defined in R^n. My example was just based on Scott’s informal description of what he had in mind.

I was only asking questions to get a better understanding of some of these concepts.

So maybe you’re right and what I’ve said makes no sense whatsoever.

72. Sniffnoy Says:

Gerard #71: Ah, yeah, “convex set” is a notion that makes sense in way more generality than just R^n. Like, just directly extending the sense of the word “convex” as applied to R^n, it makes sense in any vector space — or, more generally, affine space — over R. (And then the word is used in a bunch of other related senses too…)

If we fix a set S (and a sigma-algebra of measurable subsets), then probability distributions over it naturally form an affine space over R. So, convex sets of probability distributions make sense in broad generality. It’s not something restricted to the case where S is finite so that everything lives in R^n. If it were, I think the correct response to Scott would not be “well here’s how you could put a uniform distribution”, but rather, “why on earth is your theory of Knightian uncertainty only handling finite sets??”.

73. Sniffnoy Says:

Errr, what am I saying. They’re not an affine space. But they’re a convex subset of an affine space. (Or a vector space, I guess.) Anyway, the point is that convex subsets of them make sense.

74. Gerard Says:

fred: #67

“Rejecting free will simply recognizes that the brain is a quantum system and nothing more. I.e. all outputs of the brain depend on its prior state(s) and the inputs. So, a brain doesn’t have any real “choices”, because none of its fundamental particles have any choice either.”

It seems to me there is a huge difference in character between deterministic and probabilistic predictions and that the lessons of QM are just about the opposite of what you say here.

Physicists who lived through the quantum revolution saw it as a very radical departure from what had come before. If Einstein had thought there was no fundamental difference between deterministic and (intrinsically) probabilistic systems I don’t think he would have been so insistent that “God does not play dice with the universe”.

75. Scott Says:

Gerard #74: While this is not directly related to the free will debate, I can’t resist commenting … yes, the physicists who lived through the quantum revolution saw it as a radical departure from what came before, and they were clearly right. Indeed, they probably even underestimated how radical it was, because many imagined it could be contained to “the microworld,” rather than changing the rules for the entire universe. It’s about as radical a revolution as there ever was in human knowledge, up there with Darwin or Galileo or anything else.

But we should forgive the founding generation if they didn’t always put their finger on why it was such a radical departure. Bohr’s writings always struck me as very far from the core of the matter—indeed, a physics historian once told me that it’s not clear to what extent Bohr ever even learned post-1926 QM, as opposed to the earlier patchwork of ideas leading up to it. Einstein got much closer but was off as well—IMHO, his bit about “God playing dice” contributed to a century of misconceptions about why QM is revolutionary. No, it’s not probability per se, it’s more specifically the new way we need to calculate probabilities—using complex-valued amplitudes, which have fundamentally different behavior from probabilities, and which are not just some calculational trick but (in some sense) the deepest layer of reality that’s ever been found, and which are what make it impossible to interpret the probabilities as merely reflecting human ignorance of some underlying classical state of affairs. (To be fair to Einstein, he had other passages where he understood all this, but then he’d go back to grumbling about his desire for a return to determinism…)

Anyway, I’d say that it was Schrödinger, von Neumann, Dirac, and then especially Feynman who demonstrated that it was possible to articulate the core of QM in plain English. Namely, that it’s all about the amplitudes, and how (unlike classical probabilities) they can interfere and cancel each other out, thereby giving rise to a different calculus of probability than the one we’re used to.
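That cancellation fits in a few lines (my own toy: a balanced two-path interferometer, where each beamsplitter contributes an amplitude factor of 1/sqrt(2) and one arm adds a relative phase):

```python
import cmath

def interferometer_port0_probability(phase):
    """Quantum rule: add the amplitudes of the two paths to the detector,
    then take the squared magnitude of the sum."""
    r = 1 / 2 ** 0.5
    amp = r * r + r * cmath.exp(1j * phase) * r  # path A + path B
    return abs(amp) ** 2

# phase = 0:  the paths reinforce, detection probability 1 (constructive).
# phase = pi: the amplitudes cancel exactly, probability 0 (destructive) —
# impossible if the 1/2-1/2 "which path" uncertainty were mere classical
# ignorance, which would predict probability 1/2 regardless of the phase.
```

The zero at phase = pi is the whole point: probabilities can only add, while amplitudes can subtract.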

76. Gerard Says:

Scott: #75

I don’t disagree with anything you say there, but I think you are perhaps underestimating the importance of probability. I think both the deterministic evolution of quantum states and their non-deterministic translation into observations are equally important. It is perhaps the only way to have a universe in which some kind of freedom exists but which nonetheless contains something other than total chaos.

77. Bennett Standeven Says:

@maline:

It looks like the thing Scott is calling “Knightian uncertainty” is actually ‘Knightian’/’Ellsbergian’ risk, caused by the initial conditions of the model being uncertain (but not actually random, so conventional risk theory wouldn’t apply). Of course, to an economist, this is just normal risk.

78. fred Says:

Gerard #76

Related to this, the latest PBS Space Time episode on decoherence is pretty interesting

79. STEM Caveman Says:

FYI – according to his Wikipedia page, Boris Tsirelson died recently, on Jan 21, a few days after the resolution of his eponymous conjecture as a consequence of the MIP*=RE breakthrough.

80. Gerard Says:

fred #78

It’s certainly a very interesting short video.

My impression is that the notion that decoherence explains some aspects of the “measurement problem” is a fairly “recent” development. I took several courses on QM in both the physics and philosophy departments at Cornell in the late-80’s or early 90’s and I don’t recall it coming up very much, but I did start hearing more about it only a few years after that.

However, as far as I know there still doesn’t exist a single “coherent” theory of QM in which decoherence fully explains measurement, everything becomes fully deterministic and we don’t need the Born rule anymore.

81. fred Says:

Scott,

many physics articles state things like:

“It’s widely thought that information, like energy, should obey a conservation rule: The total amount of information in the universe will always stay the same.
Hawking initially believed that even if a black hole fully evaporated, the information it had consumed would remain lost forever.”

As if information is something that’s absolute and unambiguous, like counting stars in a particular region of space.

At the same time, the CS/Math view of information seems way more elusive:

“The Kolmogorov complexity of an object, such as a piece of text, is the length of the shortest computer program that produces the object as output. It is a measure of the computational resources needed to specify the object.
In particular, for almost all objects, it is not possible to compute even a lower bound for its Kolmogorov complexity, let alone its exact value.”

How to reconcile those two views of the nature of information?
Are we only really talking about changes in information? (like changes in energy level)

82. Scott Says:

fred #81: There’s no contradiction. Kolmogorov complexity is just one specific type of information: namely, the number of bits in the optimal computable compression of a given string. But even within math and CS, the concept of “information” is much, much broader than that—e.g., the entire theories of channel capacity and error correction can be developed (and were, by Shannon) without ever making reference to computability.

In the context of the black hole information problem, “information is neither created nor destroyed” should be interpreted as just a crude way of expressing a much more precise statement: namely, that the evolution of the universe’s quantum state is perfectly reversible and unitary, and never maps pure states to mixed states. That’s the principle that was (and in some people’s view, still is) challenged by the existence of black holes.
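In code, the preserved quantity is the overlap between states: a unitary never erases the distinction between two states. A toy 2x2 sketch (the particular unitary, a rotation with a phase, is my choice):

```python
import cmath, math

def apply(U, psi):
    """Apply a 2x2 matrix (list of rows) to a 2-component state vector."""
    return [sum(U[i][j] * psi[j] for j in range(2)) for i in range(2)]

def inner(a, b):
    """Inner product <a|b> with the conjugate on the first argument."""
    return sum(x.conjugate() * y for x, y in zip(a, b))

# A sample unitary: a rotation by 0.7 with a phase on the second row.
c, s, ph = math.cos(0.7), math.sin(0.7), cmath.exp(0.3j)
U = [[c, -s], [ph * s, ph * c]]

psi, phi = [1, 0], [1 / 2 ** 0.5, 1j / 2 ** 0.5]
# Unitary evolution preserves norms and overlaps: |<phi|psi>| is unchanged,
# which is the precise sense in which "information is neither created nor
# destroyed" — and exactly what a pure-to-mixed transition would violate.
```

Black-hole evaporation seemed to map a pure state to a thermal mixed state, which no unitary can do; that is the tension Scott describes.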

83. fred Says:

Thanks Scott!

84. alpha Says:

I think P is NP. I have some solid leads. If successful may I ask for clarification?

85. Scott Says:

alpha #84: Sorry, no, you’ll need to ask someone else.
