## “Closer to Truth”

Two years ago, when I attended the FQXi conference on a ship from Norway to Denmark, I (along with many other conference participants) was interviewed by Robert Lawrence Kuhn, who produces a late-night TV program called “Closer to Truth.”  I’m pleased to announce (hat tip: Sean Carroll) that four videos from my interview are finally available online:

• Is the Universe a Computer? (like a politician, I steer the question toward “what kind of computer is the universe?,” then start talking about P vs. NP, quantum computing, and the holographic principle)

• What Does Quantum Theory Mean? (here I mostly talk about the idea of computational intractability as a principle of physics)

• Quantum Computing Mysteries (basics of quantum mechanics and quantum computing)

• Setting Time Aright (about the differences between time and space, the P vs. PSPACE problem, and computing with closed timelike curves)

(No, I didn’t choose the titles!)

For regular readers of this blog, there’s probably nothing new in these videos, but for those who are “just tuning in,” they provide an extremely simple and concise introduction to what I care about and why.  I’m pretty happy with how they came out.

Once you’re finished with me (or maybe even before then…), click here for the full list of interviewees, which includes David Albert, Raphael Bousso, Sean Carroll, David Deutsch, Rebecca Goldstein, Seth Lloyd, Marvin Minsky, Roger Penrose, Lenny Susskind, Steven Weinberg, and many, many others who might be of interest to Shtetl-Optimized readers.

### 25 Responses to ““Closer to Truth””

1. Ungrateful_Person Says:

Hi Scott,

While I have not thought much about it, and I may be unclear or making a fool of myself, is there no ‘simple sounding’ resolution to this paradox (one that is probability-free)? I understand that “simple sounding” may vary from person to person, so I will try to be more concrete.

For instance, can you tell me if the following is a valid resolution? Say person X kills his grandfather by going back in time. He leaves his current world at time t_1 and kills his grandfather at some past time t_0.

Now, 2 different timelines begin to evolve (or 2 different copies of the “original” universe come into being).

In one copy, person X simply disappeared from the world. His grandfather never died at time t_0 in that universe, and person X’s disappearance was a sudden event which possibly remains unexplained.

In the other copy of the universe, person X (an extra person) suddenly appears. This person, “entirely new” to this universe, pays a visit to his target _in this universe_ and kills him. But this does not kill X, as he is from a different universe.

I hope the above makes sense.

Now onto a 2nd question.

You never said it directly — but I just want to confirm if the following is implied in your interview.

Say the grandfather paradox remains unresolved (i.e., let us pretend that it is a perfectly valid paradox). Then can it be considered a thought experiment which prohibits time travel to the past?

I mean, given the small amount of science I know, something like “it would incur an infinitely large amount of energy to do so” counts as a good obstacle to some physical process. As does violation of the 2nd law of thermodynamics, and a few other things.

But this logical argument against time travel (if it is an argument at all, assuming the grandfather paradox) somehow makes me uneasy. I have never heard such an argument before and just wanted to check if it is valid here.

2. Ungrateful_Person Says:

To belabor the second point in the above post.

Say, we want to prove the following theorem

Theorem: Time travel into past is impossible.

Proof: If allowed, someone can kill his grandfather. This leads to a contradiction.

The above argument, even if the grandfather paradox holds true, makes me uneasy.

Let me know if the question still does not look well framed.

3. Scott Says:

Ungrateful_Person:

Yes, the Grandfather Paradox has often been put forward as a “proof” that time travel into the past is logically impossible. But there are several loopholes in that “proof.” One of them is the possibility of resolving the paradox probabilistically or quantumly (as Deutsch proposed). Another loophole is that maybe Nature simply always finds a consistent deterministic evolution, no matter how unlikely it seemed a priori. (E.g., if you went back in time and tried to kill your grandfather, you’d always discover that the gun jammed, or you forgot to load it, or he recovered from the gunshot wound and went on to sire your parent, etc. etc.) So really the Grandfather Paradox should be seen as a central, obvious difficulty that any account of closed timelike curves needs to overcome.

Your resolution of the paradox, in your first comment, is actually a good way to describe or visualize what happens in Deutsch’s resolution. (Indeed, since Deutsch avidly believes in the Many-Worlds Interpretation, he would regard it not just as a convenient way to visualize, but as a literal description of what happens in his proposal.)

However, one can also invent more complicated time-travel scenarios: for example, what happens if you flip a fair coin, and go back in time and kill your grandfather if and only if the coin lands heads? The beauty of Deutsch’s proposal is that it gives you an automatic way to compute a consistent story for any possible such scenario.

(Spoiler alert: in the above example, the solution is that you’re born with probability 2/3 and not born with probability 1/3. Or if you prefer, you’re born in 2 of 3 parallel universes, and not born in 1 of them.)
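[Editor's note: Deutsch's consistency condition for the coin-flip scenario can be sketched numerically. Model the time traveler's effect as a stochastic map on the states "born"/"not born" and find its stationary distribution. This is a minimal classical-probabilistic sketch; the two-state encoding is my own illustration, not Deutsch's notation.]

```python
import numpy as np

# States: 0 = "you are born", 1 = "you are never born".
# If born, you flip a fair coin and kill your grandfather iff heads:
#   born -> born with prob 1/2, born -> not-born with prob 1/2.
# If never born, nobody travels back, so your grandfather survives:
#   not-born -> born with prob 1.
T = np.array([[0.5, 0.5],
              [1.0, 0.0]])   # T[i, j] = Pr(next state j | current state i)

# Deutsch's consistency condition: a distribution p with p @ T == p,
# i.e. a left eigenvector of T with eigenvalue 1.
eigvals, eigvecs = np.linalg.eig(T.T)
p = np.real(eigvecs[:, np.argmax(np.isclose(eigvals, 1.0))])
p /= p.sum()

print(p)  # ≈ [2/3, 1/3]: born with probability 2/3, not born with 1/3
```

The unique fixed point reproduces the 2/3 vs. 1/3 split in the spoiler above; a stochastic map always has at least one such stationary distribution, which is why Deutsch's prescription never gets stuck.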

I’d write more, but I suspect you won’t be grateful.

4. Davide Says:

Scott,
These videos are great. You are a great speaker, and the interviewer is also far ahead of many other popular-science reporters.
Cheers,
Davide

5. Scott Says:

Thanks so much, Davide! I was just enjoying watching Robert’s interviews with some other people this morning. Even if his questions sometimes feel cheesy, they’re quite effective at getting people to say what they really think about big issues.

6. Ungrateful_Person Says:

Hey Scott,

Do not be so hasty
I kept this name since I usually do not contribute anything useful and just steal ideas from your blog posts (and in some cases your courses).

Coming back to the topic, can you write more? Please?
I would be grateful, I promise

7. Miquel Says:

Thank you for sharing the videos; I found them most inspiring. I’ve recommended the first of them to some colleagues teaching undergrad Computational Complexity courses. It might help to further motivate the interest of those courses for reluctant students.

On the other hand, I’d like to make a more technical comment which sort of has to do with the third – and the most “esoteric” – of the three talks.

For some time I’ve been wondering about Feynman Diagrams, a tool which I understand to be the “computational framework” particle physicists use to determine probabilities of particle interactions – under the assumptions entailed by adopting a particular model of such interactions, like the Standard Model – in order to either design experiments or make sense of the signals detected by the sensors deployed in a particle accelerator.

One connection – or analogy – which I’ve come to recently is that Feynman Diagrams can be understood as a Natural Deduction system of sorts, where particle-interaction “rules” play the part of axioms in a proof theory.

Besides just being a bit of a “curiosity” best left for philosophers of science, I think this connection is quite interesting and relevant to the whole idea of “the universe as a computer”.

By restricting in a suitable manner the number of particles being considered – and other purely notational “tricks” which would probably rile people like your Czech friend – I think it’s quite possible that it can be shown that the kind of computations done on Feynman Diagrams are not easier than solving a PSPACE-complete class of problems (and I’d say possibly harder, as computing probabilities involves “model counting” as in “counting possible sequences of interactions with the same outcome”).

While I don’t buy at all the metaphor of the Universe being a “computer”, I do buy the idea of physicists doing “computations” in their heads to make sense of the phenomena they observe, according to some theory – i.e., very much like when constructing a proof, one is playing an abstract game following certain “rules”.

* Can you recommend a good book on QFT and manipulation of Feynman Diagrams which you think a PhD in CompSci can get into? After reading Lubos’s blog through your links, I can’t help feeling a bit out of my depth.

* Have you ever heard or read about anything like the above? Coming from a CompSci background, my Arxiv-fu is very weak.

* And the hardest of all: does this strike you as an interesting line of research?

8. Scott Says:

Miquel #7: If you want to evaluate a particular Feynman path integral (or equivalently: evaluate some particular amplitude in a quantum computation), that’s a #P-complete problem. So it’s easier than PSPACE, though not much easier!

The reason we think quantum mechanics does not give us the ability to solve #P-complete problems, despite the above, is that the value of a Feynman path integral is not directly observable in QM. Instead, you “just” get to sample from a probability distribution, where the probability of each outcome is given by the absolute square of a suitable Feynman path integral.
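[Editor's note: the "sum over paths" behind both claims can be made concrete for a toy circuit. An amplitude is a sum, over all intermediate basis-state paths, of products of transition amplitudes, and it is this exponentially large, heavily cancelling sum that is #P-hard to evaluate in general. A minimal sketch, using a two-Hadamard one-qubit circuit of my own choosing:]

```python
import itertools
import numpy as np

H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)  # Hadamard gate
gates = [H, H]  # toy circuit: two Hadamards in a row (equals the identity)

def amplitude(gates, start, end):
    """Feynman-style sum over all intermediate basis-state paths."""
    total = 0.0
    for path in itertools.product([0, 1], repeat=len(gates) - 1):
        states = (start,) + path + (end,)
        contrib = 1.0
        for gate, i, j in zip(gates, states, states[1:]):
            contrib *= gate[j, i]  # transition amplitude <j| gate |i>
        total += contrib
    return total

print(amplitude(gates, 0, 0))  # ≈ 1: the two paths interfere constructively
print(amplitude(gates, 0, 1))  # ≈ 0: the two paths cancel exactly
```

The number of paths grows like 2^(number of gates − 1); a quantum computer natively samples outcomes with probability |amplitude|², but never hands you the value of any single amplitude computed this way.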

I don’t know of any good book about QFT for computer scientists, and I’ve been searching for one. If you find such a book, please let me know!

On the other hand, I do know a book that explains the logical connection between the Feynman path integral and quantum computing: namely, Quantum Computing Since Democritus!

You might also want to check out “This Week’s Finds” by John Baez—he’s also developed the analogies between Feynman diagrams and many other things (I can’t remember whether formal deductive systems were among them).

9. Miquel Ramirez Says:

Scott,

I’m surprised that there is a #P-completeness reduction, as that would imply that the number of steps in the path needs to be bounded by some constant $k$.

Do you have a reference for that reduction? Google Scholar seems to be very confused if I put a ‘#’ or a lonely ‘P’ in the search query box.

[…]The reason we think quantum mechanics does not give us the ability to solve #P-complete problems, despite the above, is that the value of a Feynman path integral is not directly observable in QM. Instead, you “just” get to sample from a probability distribution, where the probability of each outcome is given by the absolute square of a suitable Feynman path integral.[…]

In that respect, we’re on the same page. As far as I know there’s no way to ‘access’ those other possibilities in a useful way.

That’s quite interesting, indeed, but I’m more interested in finding out how those path integrals are computed – with a classical computer – and if the techniques we’ve learnt in the last decade that enable solving effectively certain #P and PSPACE classes of problems are readily applicable to the problem of evaluating path integrals along Feynman Diagrams.

Regarding your book: I ordered it quite a while ago, but Amazon has been weird (they told me they regretted they couldn’t honor my purchase and delayed shipping to July, then backpedaled and shipped it anyway, and finally I got a 1.38\$ discount).

My comment regarding a “QFT textbook amenable for computer scientists” was more about a QFT textbook which assumes zero knowledge and infinite intelligence on the reader’s part. I’ll be checking out the John Baez material (I almost wrote ‘Joan’ instead of John, by the way). I’m delighted to find out that someone has seen something along the same lines as I think I have: that means I’m not too cranky (yet).

10. Carl Says:

I haven’t watched the “Is the Universe a Computer?” video yet, but I wish to register a terminological objection:

If in the 16th century, someone had given a talk about “Is Nature a Machine?” we would see that this was an advance in certain respects from prior views of nature which tended to be qualitative rather than quantitative, but it also contains a misleading anthropomorphism. A “machine” is a device made by a person to accomplish some goal. Thinking of nature as a machine then leads one to think of it as a device made by an anthropomorphic God in order to achieve some goal or another. It would be better to say instead that “Nature operates like a machine,” so that we retain the sense of determinism but lose the anthropomorphic deity.

The same objection applies to calling the universe a “computer.” A computer is a device used by a person for computing. It’s a specific type of machine, so all the objections above apply. However, on top of that there is the further objection that “computation” itself is fundamentally anthropocentric. To “compute” is to use one rule governed system to make rational inferences about another rule governed system. Hence to say that the universe is doing “computation” is to say that Someone is using it to compute something; that is, to relate one thing to another. Even if one believes in God, it seems silly to suppose that He would need to do computation, since He is supposed to be omniscient.

It is better therefore to ask “Does the universe operate like a computer?” or “What kinds of computation are possible in our universe?” so as to remove the anthropocentrisms.

11. Chris Says:

Carl #10 – your objections are dealt with in the very first part of the video.

12. Scott Says:

Carl: Yes! If you watch the video, the very first thing I do is dismiss the question “Is the universe a computer?” as a bad one, and try to replace it by better questions.

13. Scott Says:

Miquel #9: In my previous answer, I simply assumed that the number of interaction vertices (in the Feynman diagrams we wanted to sum over) was specified in advance. If we can’t impose any cutoff on the number of vertices, then there’s an excellent chance that our Feynman integral is divergent, and poorly-defined even at a physics level!

Maybe a better way to answer your question is this. At some level, a Feynman sum is “merely” a Taylor series expansion, and as such, it doesn’t always converge on the right answer—especially if the theory is strongly interacting. For that reason, if you’re trying to simulate QFT on a computer, then often the better thing to do is to switch to a better-defined “nonperturbative” formulation, whenever the latter exists. See for example this paper by Jordan, Lee, and Preskill.
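[Editor's note: the "Taylor series that doesn't converge" point has a standard zero-dimensional illustration: expanding Z(g) = ∫ exp(−x²/2 − g·x⁴) dx in powers of g gives a series with zero radius of convergence. The sketch below is my own toy example, not from the Jordan–Lee–Preskill paper; it shows the partial sums first approaching the true value and then blowing up.]

```python
import math
import numpy as np

g = 0.02  # a small quartic coupling in the toy "path integral" Z(g)

# "Exact" answer by brute-force numerical integration of
#   Z(g) = integral of exp(-x^2/2 - g*x^4) dx over the real line.
xs = np.linspace(-8.0, 8.0, 400001)
exact = np.sum(np.exp(-xs**2 / 2 - g * xs**4)) * (xs[1] - xs[0])

def dfact(k):
    """Double factorial k!! (defined as 1 for k <= 0)."""
    return math.prod(range(k, 0, -2))

def term(n):
    # n-th perturbative term: (-g)^n/n! * integral of x^{4n} e^{-x^2/2} dx,
    # using the Gaussian moment formula: that integral is sqrt(2*pi)*(4n-1)!!.
    return (-g) ** n / math.factorial(n) * math.sqrt(2 * math.pi) * dfact(4 * n - 1)

def partial_sum(N):
    return sum(term(n) for n in range(N + 1))

# The terms shrink at first, then grow factorially: zero radius of convergence.
# Truncating near the smallest term gives a decent answer; going further ruins it.
print(abs(partial_sum(4) - exact))   # small
print(abs(partial_sum(12) - exact))  # large: the series has started to diverge
```

Truncating near the smallest term (here around order 4) is the best the perturbative expansion can do; past that point the factorial growth of the coefficients takes over, which is the qualitative reason nonperturbative formulations are preferred for simulation whenever they exist.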

14. Carl Says:

Don’t blame me! PHP Error! (Seriously, their site was down. Looks like it’s back now.)

15. Carl Says:

OK, I saw it now. I’m glad to see that you tried to move the interviewer away from the bad question and toward better ones. He seemed determined to move back to the bad one though!

16. John Sidles Says:

That the themes of this Shtetl-Optimized essay have been extant for at least the past 178 years is documented in Z. K. Silagadze’s recent preprint “arXiv in the classical Russian literature” (arXiv:1203.6789), which surveys the literary context of one of Vladimir Arnold’s favorite poems:

——————–
A Hog under an Oak

A Hog under a mighty Oak
Had glutted tons of tasty acorns, then, supine,
Napped in its shade; but when awoke,
He, with persistence and the snoot of real swine,
The giant’s roots began to undermine.

“The tree is hurt when they’re exposed.”
A Raven on a branch arose.
“It may dry up and perish — don’t you care?”
“Not in the least!” The Hog raised up its head.

“Why would the prospect make me scared?
The tree is useless; be it dead
Two hundred fifty years, I won’t regret a second.
Nutritious acorns — only that’s what’s reckoned!” —

“Ungrateful pig!” the tree exclaimed with scorn.
“You’d have a chance to figure out
Where your beloved fruit is born.”

An ignoramus, likewise, in defiance
Is scolding scientists and science,
And all preprints at lanl_dot_gov,
Oblivious of his partaking fruit thereof.

I. A. Krylov (1825)
Translated by Alexander Givental and Elysee Wilson-Egolf
——————–

Kudos especially to (UC Berkeley student) Elysee Wilson-Egolf for this apt translation!

17. Rahul Says:

As an aside here’s one related but somewhat wacky book:

http://www.nytimes.com/2013/05/05/books/review/time-reborn-by-lee-smolin.html?pagewanted=2&ref=books&pagewanted=all&_r=0

‘Time Reborn,’ by Lee Smolin

Comments? Has anyone read it? I’m not sure whether this is serious work or somewhat crackpottish.

18. Miquel Ramirez Says:

Scott #13:

Thank you for the pointer, Scott. I’ll need to get into the Jordan et al. paper in order to be able to comment on your observation that “often the better thing to do is to switch to a better-defined nonperturbative formulation”.

Regarding the assumption that the number of interaction vertices is bounded:

[…]
In my previous answer, I simply assumed that the number of interaction vertices (in the Feynman diagrams we wanted to sum over) was specified in advance.
[…]

That’s very interesting. The natural question that comes to my mind is: by whom, and how, is such a set of vertices determined? How hard is it to “pre-process” the input theory so as to circumscribe the set of potential particle interactions to those where the path integral converges? That makes me think somebody (or something) is playing the part of an extremely powerful – computationally speaking – oracle here.

[…]
If we can’t impose any cutoff on the number of vertices, then there’s an excellent chance that our Feynman integral is divergent, and poorly-defined even at a physics level!
[…]

Indeed, there needs to be some sort of “terminating condition” for the whole thing to make sense. As I understand the physics – and here I might be dead wrong – particles (which are low-entropy forms of energy) in an infinite vacuum – and I think a region of ever-expanding space-time can indeed be taken as infinite for all practical purposes – will end up in an overall state with higher entropy, since interactions tend to result in stuff which doesn’t interact at all with other particles. So I’d say that there isn’t an explicit cutoff per se: we reach a certain state where further interactions are so improbable, taking amounts of time so large, that they are no longer “interesting” or “useful”.

In an experimental setting, it’s quite reasonable to expect that those particles will be interacting eventually with some detector, which for all practical purposes I think is a well-defined “terminal condition”.

That the unbounded “generation” of possible system histories, evolving through possible, valid, energy-conserving interactions, can end up in an “inconsistent” state is even more interesting. From a purely logical standpoint, consistent theories do indeed allow for “evaluations” which aren’t “true” according to the theory.

This sort of reinforces my intuition that it makes sense to understand the Standard Model (encoded as Feynman Diagrams + parameters over a finite state space) as a proof theory of sorts. That by following “valid” steps we end up with a “bad” state doesn’t seem to me to be a problem in the formulation of the problem, but rather in the computational approach to it.

If one takes the space of possible system evolutions to be a directed acyclic graph, with the edges corresponding to possible sets of particle interactions, the fact that you are able to follow a path ending up in an inconsistent state isn’t “bad news”. Actually, I’d say it’s “good news”, as it would imply that the theory (from which you describe states and transitions) isn’t tautological.

I hope this doesn’t sound like pure crackpottery.

19. Miquel Ramirez Says:

When I said “unbounded” I actually meant “not explicitly bounded”. Too bad I can’t edit comments.

20. Peter Shor Says:

I think the most frightening follow-up question to “is the universe a computer?” is “what happens when there’s a bug in the program?”

On the other hand, given that the universe has run bug-free for nearly 14 billion years, the Bayesian probability that the universe is a computer is extremely close to 0.

21. Miquel Ramirez Says:

Regarding introductory materials for CS people who might be interested (besides Scott’s book) I’ve found:

* The appendices to Sean Carroll’s “The Particle at the End of the Universe”

* The article “Physics and Feynman’s Diagrams” (Kaiser, D., American Scientist, Vol. 93, 2003), which gives a quite well-written and accessible historical overview of FDs’ impact and how their usage by the physics community has evolved over the years.

* I found a book with a quite promising title

“Quantum Field Theory in a Nutshell”
A. Zee
http://press.princeton.edu/titles/9227.html

22. Rahul Says:

Peter Shor Comment #20 :

On the other hand, given that the universe has run bug-free for nearly 14 billion years, the Bayesian probability that the universe is a computer is extremely close to 0.

How do we know the universe has run bug-free? What would a bug in the universe-program look like to us?

23. Miquel Ramirez Says:

Rahul #22:

Latkes growing on trees?

24. Rahul Says:

Miquel Ramirez #23:

Right. If 50 generations grew up digging cherries like potatoes we’d think that’s natural too.

Latkes growing on trees might look like a feature and not a bug.

25. Kenneth W. Regan Says:

As far as “closer to truth” goes, I was right there—X marks the spot—in 1978. The US Championship was played at Ambassador College (WWCG HQ in Pasadena) in June 1978, just when Kuhn was delivering his Systematic Theology for Bobby Fischer’s church—and when the “Garner Ted in Bed” scandal broke. Place felt like chickens with heads cut off (we chessplayers too, as Walter Browne walked out in the first round), but the only thing I minded was the lawn always being mowed at 6am outside where I and the other players were staying.