## My Quora session

Here it is.  Enjoy!  (But sorry, no new questions right now.)

### 89 Responses to “My Quora session”

1. Avi Says:

“It’s directly analogous to how many beginners in CS imagine that “programs that can rewrite their own code” must be fundamentally more powerful than programs that can’t, but then they learn more and quickly get disabused of that notion.”

Paul Graham would disagree. http://paulgraham.com/avg.html (To be fair that’s a slightly different claim)

2. Scott Says:

Avi #1:

To be fair that’s a slightly different claim

Indeed, Graham and I don’t actually disagree about anything here. He’s just more interested in providing powerful abstractions to the human programmer, whereas I’m more interested in which functions a given language can or can’t express in principle.

3. Roland Says:

Why do you call Ryan Williams’s stuff “ironic complexity”?

4. Curious Says:

I missed the session. We know you bet on $P \neq NP$. Would you bet on $ETH$ and on there being no $2^{(\log\log n)^c}$ algorithm (if your family allows)? Would you consider yourself sane for doing that?

5. Scott Says:

Roland #3: Because it’s fundamentally about proving lower bounds by discovering efficient algorithms (i.e., by proving upper bounds).

6. Scott Says:

Curious #4: Do you realize that a $2^{(\log\log n)^c}$ algorithm would be less than linear time—and therefore, for problems like 3SAT, we can rule it out right now, unconditionally? And thus, one would have to be insane *not* to bet on the nonexistence of such an algorithm?

While it’s a tougher case, yes, I would also bet on ETH (but not my whole life savings or anything like that, and not at ridiculous odds).
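To make the ruling-out concrete, here is a quick numeric sanity check (an illustrative sketch; the choice c = 2 and base-2 logarithms are mine, and only shift constants): a $2^{(\log\log n)^c}$-step algorithm couldn't even read all n bits of its input.

```python
import math

# Illustrative check of comment #6: with n = input size in bits,
# 2^((log log n)^c) steps is sublinear, so such an "algorithm"
# couldn't even read a whole 3SAT instance. (c = 2, base-2 logs.)
def steps_allowed(n, c=2):
    return 2 ** (math.log2(math.log2(n)) ** c)

n = 2 ** 64
sublinear = steps_allowed(n) < n  # ~2^36 steps vs. 2^64 input bits
```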

7. Peter Morgan Says:

“Bell inequality—I’m not sure, has there been any new development that I should respond to?”
Perhaps that it’s now accepted that the superdeterminism loophole cannot be closed by experiment, which was far less the case ten years ago. For example, see Jan-Åke Larsson’s “Loopholes in Bell inequality tests of local realism”, J. Phys. A 47 (2014) 424003. Jan-Åke lays out the possible loopholes pretty well, except that he does not distinguish stochastic superdeterminism from superdeterminism. Recent experimental papers are careful to make allowance for the possibility while more-or-less ridiculing it. I’m not now especially keen on this loophole, but I’ve found it helpful to be more open-minded about it than the inadequate slapdowns one finds in the literature. In particular, (stochastic) superdeterminism, crudely put as “(probability measures over) initial conditions determine (probability measures in) the future”, was the standard position for the whole of the 19th Century; the slapdowns say that one can’t do science if (the probabilities of) experimenters’ choices are determined.
Separately, and perhaps more interesting here, that the ways in which Physicists and others discuss nonlocality in quantum theory (insofar as there is any, of course) has evolved over the last ten years, particularly as part of the continuing introduction of new concepts in the quantum information literature. Nonlocality as a resource, for example.

8. Sandro Says:

> the slapdowns say that one can’t do science if (the probabilities of) experimenters’ choices are determined.

I’ve never understood why this argument was ever taken seriously. Isn’t it obvious that there exist various means, like hill climbing or genetic algorithms, that can fully explore a solution space despite being deterministic? I don’t see why you’d need anything more. Determinism has always seemed to me to be a complete red herring to our ability to conduct science.
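Sandro’s point is easy to make concrete. Below is a minimal sketch (the objective function, step set, and iteration cap are invented for illustration): a hill climber that explores its search space fully deterministically, with no randomness anywhere.

```python
# A fully deterministic hill climber: same input, same trajectory,
# same answer, every time. (Toy function and step set, for illustration.)
def hill_climb(f, x0, steps, iters=1000):
    x = x0
    for _ in range(iters):
        best = max([x + s for s in steps] + [x], key=f)
        if best == x:
            break  # deterministic local optimum reached
        x = best
    return x

# Maximize f(x) = -(x - 7)^2 over the integers, starting from 0.
peak = hill_climb(lambda x: -(x - 7) ** 2, x0=0, steps=[-1, 1])
```

Nothing in the search depends on "free" random choices, which is the red-herring point about determinism and doing science.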

9. Scott Says:

Peter Morgan #7 and Sandro #8: Indeed, the relevant point is simply that, if you want to deal with Bell inequality violations using “superdeterminism,” then you need to posit a nonlocal conspiracy (involving the particles, the measuring apparatus, the experimenters’ brains, etc.) that’s a thousand times worse than the thing you were trying to explain in the first place. Whether that conspiracy is deterministic or indeterministic is immaterial … it’s still crazy!
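For readers who want the numbers behind this exchange: in the CHSH form of the Bell inequality, any local-hidden-variable strategy is bounded by $|S| \le 2$, while the singlet-state correlation $E(a,b) = -\cos(a-b)$ reaches $2\sqrt{2}$. A small sketch with the standard measurement angles:

```python
import math

# CHSH value for the quantum singlet state, E(a, b) = -cos(a - b),
# at the standard optimal angles. Local hidden variables cap |S| at 2.
def E(a, b):
    return -math.cos(a - b)

a, a2, b, b2 = 0.0, math.pi / 2, math.pi / 4, 3 * math.pi / 4
S = E(a, b) - E(a, b2) + E(a2, b) + E(a2, b2)
# |S| = 2*sqrt(2) ≈ 2.83 > 2: the inequality is violated
```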

10. Stefan O'Rear Says:

Your “nonlocal conspiracy” has always sounded an awful lot like postselection and/or $\mathrm{BPP}_{\mathrm{CTC}}$. I’m not sure whether that makes it more or less plausible.

11. Sandro Says:

Scott #9:
> if you want to deal with Bell inequality violations using “superdeterminism,” then you need to posit a nonlocal conspiracy (involving the particles, the measuring apparatus, the experimenters’ brains, etc.) that’s a thousand times worse than the thing you were trying to explain in the first place.

Is it really though? Objectively? Or are you just so used to the idea that you have to give up realism that it no longer seems as completely absurd an idea as it really is?

Keeping the realism and accepting the non-locality otherwise needed, as in Bohmian mechanics, seems no better than positing a conspiracy, and the latter doesn’t require a preferred reference frame to agree with relativity.

Skepticism because the conspiracy influences the brain isn’t convincing either. “If an experimenter could only tweak this ONE setting, then they would see Bell’s inequality satisfied!” is no better than saying, “well, if only I had supernatural powers, then I could break the laws of physics!” If wishes were horses…

If ‘t Hooft provides a valid superdeterministic QM theory with a comparable axiomatic complexity to de Broglie-Bohm or MWI, then that simply proves that giving up realism or accepting non-locality are conceptually *no better* than a global conspiracy. Full stop. Any protests that one is more or less believable are merely human projections.

Why, precisely, is giving up realism or accepting non-locality actually “better” than the conspiracy of superdeterminism?

12. Peter Morgan Says:

Scott, #9, the nonlocal conspiracy in the initial conditions is why I’m not as keen as I once was on superdeterminism, except that an initial condition of a stochastic field theory, which has to be given everywhere on a space-like hypersurface, say, is always and necessarily nonlocal.

But then the quantum optics states that are used for elementary models of Bell-violating experiments are sums of tensor products of wave-number eigenstates, which are intrinsically (improper, infinitely) nonlocal initial conditions in QFT. An argument against nonlocal conspiracy of initial conditions would appear to apply to such quantum states, but I take it we would say that the experimenter ensured that a sufficient swathe of the EM field in space-time was appropriately prepared.

Perhaps more potently, if we introduce Wigner’s friend to construct a QFT state that models Alice and Bob’s procedures or mechanisms for choosing experimental settings as well as their Bell-violating experiment (which physics more often than not takes to be far beyond our capability rather than impossible in principle), that QFT model would as much determine the statistics of Alice and Bob’s choices as would a classical stochastic field model.

Stefan, #10, I suppose the 19th Century attitude would have been “if we observe such and such a state now, then the initial conditions must have been what they would have had to have been to result in the state we have now.” Essentially the Sherlock Holmes principle, that once we have ruled out everything else, whatever possibility remains must have been the case (assuming, contentiously, that we know the dynamics). I take the case against classical models to be more that they’re not as *useful* as the Hilbert space mathematics and incompatible measurements of QM, not that they’re more or less *plausible*, insofar as we’re looking at experimental results, not at what, if anything, is “really” there. The case in favor of the usefulness of QM is of course very strong indeed.

Anyway, I think there has been some movement, as always glacially slow, in our understanding of QFT in particular.

13. mjgeddes Says:

The reason people tie themselves into philosophical knots trying to understand quantum entanglement is that they are misinterpreting what the ‘wave-function’ really is.

All the big problems with understanding only arise if you demand that the ‘wave-function’ is something ‘physical’. Insisting that the ‘wave-function’ is something physical immediately puts you into conflict with relativity theory, since then you need non-local influences to account for entanglement. But if you grant that the ‘wave-function’ can’t be physical, then there is no need for non-locality.

The philosopher of science basically has 3 options when discussing the wave-function of QM:

(1) Continue to demand that the ‘wave-function’ has a physical interpretation. Then QM can’t be complete and you need non-locality to explain reality. This leads to interpretations such as Bohm pilot-wave, objective collapse, etc., or other bizarre solutions including the aforementioned super-determinism.

(2) Accept that the ‘wave-function’ is not physical, but refuse to grant it objective reality. This is basically the Copenhagen interpretation, where the wave-function is taken simply to represent information in the mind of the observer. A perfectly respectable view, but the trouble is that there can be no objective picture of what reality is. You simply can’t talk about what’s really ‘out there’ in QM at all.

Thankfully, there is a third option, and it’s the one I urge all philosophers of science to accept 😉

(3) Accept that the ‘wave-function’ is not physical, but grant objective reality to it as a *mathematical* object! That is to say, accept that non-physical entities objectively exist! In this case, we grant reality to the mathematical object that is the wave-function. The big advantage of this view is that we avoid the need for non-locality or conflict with known physics, but at the same time we can still talk about objective reality…only now the objective reality is taken to be mathematical rather than physical.

14. Paul T. Says:

Peter Morgan and Sandro, the Bell inequality only implies that realist deterministic theories require superluminal communication if you make the extra and almost always unstated assumption of a perfectly flat Minkowski spacetime, which we know from general relativity is false. We get back to common sense deterministic classical physics without crazy superconspiracies by accepting the more cutting-edge physicists’ view that entangled particles are connected by wormholes. It’s not surprising that measuring one entangled particle quickly affects the other, since the signal doesn’t have to travel faster than light, but just over a shorter distance than Copenhagenists would naively assume.

15. Curious Says:

Professor Scott @ comment #6: my scaling is different from your scaling. $\log n$ is the number of bits, and so $(\log n)^r < 2^{(\log\log n)^c}$ holds for fixed $r, c \geq 1$ at large enough $n$.

Do you believe in ETH even if $P \neq NP$ is shown (say, non-constructively, revealing nothing about the best possible algorithms)? There is no evidence for it along the lines of the hierarchy theorems.

16. Curious Says:

What odds would you give for ETH, assuming, say, you give P=NP odds of $1$ in $2^{BB(10000000)}$?

17. mjgeddes Says:

Paul #14:

The idea of entangled particles as being connected by wormholes is yet another example of the same very basic philosophical mistake I described in #13.

There is *no* way to ‘get back to common sense deterministic classical physics’ without violating the principle of locality.

A bright 10-year old can very easily understand quantum mechanics by avoiding one very simple, very basic philosophical error: the error is that ‘the wave-function is physical’.

The wave-function is *not* physical. This means that there is literally *nothing* physical that corresponds to it: no pilot-waves, no hidden variables, no worm-holes: nothing.

Realism is saved by extending our conception of ‘reality’ to include more than the merely physical.

The ‘wave-function’ is clearly a *mathematical* object. It’s non-physical *and* objectively real….it exists in and of itself…as pure mathematics!

I must emphasize again: there is literally *nothing* physical that the ‘wave-function’ represents.

18. ppnl Says:

Does super-determinism necessarily mean that quantum computers have no speed advantage?

As I understand it, ‘t Hooft’s version does reduce quantum computers to classical ones. You could have other versions of super-determinism, but the motivation for all of them seems to be to do away with the weird parts of quantum mechanics and replace them with reasonable classical rules. If you do that, I don’t see how you could have super powerful quantum computers.

19. Scott Says:

ppnl #18: There’s an absolutely bizarre inconsistency in ‘t Hooft’s thinking about this. If we can arrange for our superdeterministic classical conspiracy to violate the Bell inequality, and allow for all the other already-demonstrated weird effects of quantum mechanics, then surely we can also arrange for it to account for the results of quantum computations! I.e., why not say that every time you run Shor’s algorithm, really the initial conditions of the universe caused you to choose a composite number that it “already knew” the prime factors of? That would be no weirder than what ‘t Hooft already says about the Bell inequality—it’s precisely the same kind of thing.

More generally, just like the conspiracies that killed JFK, faked the moon landings, etc. etc., we could arrange for this conspiracy to leave no empirical trace whatsoever of its existence—that’s how good it is!

But no. For ‘t Hooft, the conspiracy only needs to reproduce those quantum effects that have already been seen in experiments—when we try to run Shor’s algorithm, that’s apparently when the jig will finally be up!

For my part, I can honestly say that I hope things turn out that way, since it would mean that the effort to build QCs had led to one of the most important developments in the history of physics: namely, the experimental overthrow of QM, and its replacement by ‘t Hooftism.

And what about on ‘t Hooft’s part? If, say, within the next 5 years, we manage to convincingly demonstrate quantum supremacy, will ‘t Hooft then admit that he was wrong, or will he just shift the conspiracy forward (possibly into unfalsifiable territory)?

20. Scott Says:

Curious #15: In that case, your scaling is completely bizarre and nonstandard. I’d recommend against it even when talking about factoring, where $N \approx 2^n$ is the number to be factored, but when talking about ETH and using a lowercase n, it has no justification whatsoever.

To satisfy your “curiosity”: yes, I believe ETH, but not with overwhelming conviction. Maybe I’d bet on it at 4:1 odds? And I’d bet on P≠NP at maybe 100:1 odds, not $2^{BB(10000000)}$:1. We had this discussion earlier on this blog and I don’t want to rehash it now, but I don’t even think I’d bet on 2+2=4 at $2^{BB(10000000)}$:1 odds. (What if the counterparty demanded payment from me, and the trial was held in the world of Orwell’s 1984? A real financial bet is never solely about the question at hand, but also about the sanity of the surrounding society.) P≠NP being proved nonconstructively doesn’t really affect this for me.

21. Peter Morgan Says:

#13, I try to hold ideas that are not ruled out simultaneously, and choose for a while which to think more about than others, rather than “accepting” any one. I have never had much to say about many worlds type interpretations, which what you are advocating seems to fall under, but still they’re on the list, and not a few people think intensively about such approaches. My concern with superdeterminism, which I discuss inter alia in a 2006 JPhysA paper but haven’t thought much about for over five years, is to point out the weaknesses in the standard reason for ruling it out, without, however, saying that superdeterminism is the true way, nor even that it’s especially useful.
#14, FWIW, superdeterminism requires nonlocal correlations of the initial conditions, but does not require superluminal communication, unless one worries about how the initial conditions were *originally* established (which seems a cosmological problem; ultimately, however, the world is what it is). Crazy or not, superdeterminism is not ruled out. Wormholes are certainly not ruled out, but a specific theory has to ensure very-nearly Lorentz invariance and Einstein locality of the resulting effective theory at above the attometer scale, say, constructed on a very-nearly Minkowski background on the hundred-kilometer scale of quantum optics experiments, say, despite a plethora of what would appear relative to that very-nearly Minkowski background to be superluminal communication. Not a few people think intensively about wormhole approaches, too.

22. Peter Morgan Says:

ppnl, #18, Scott, #19, the point of *stochastic* superdeterminism is that it is perhaps most naturally presented as a stochastic signal processing formalism, which, inevitably, uses Fourier transforms and Hilbert spaces. A stochastic superdeterministic model can be constructed based on a Hilbert space that is isomorphic to the Hilbert space of quantum optics. From a stochastic signal processing perspective, we work with modulations of the stochastic vacuum state that can be mapped to the states of quantum optics. Perhaps one can say that the nonlocal correlations of states of the resulting stochastic signal processing model give the same computational power as the nonlocal correlations of states of quantum optics. What is different is that the algebra of field observables is commutative instead of noncommutative [there is a paper for the Klein-Gordon case in EPL; another, for the EM case, is only on arXiv]. Fermion fields, however, maybe not, certainly not by me. Obviously all this is my perspective, not ‘t Hooft’s.

23. Sandro Says:

Paul T. #14:
> We get back to common sense deterministic classical physics without crazy superconspiracies by accepting the more cutting edge physicists’ view that entangled particles are connected by wormholes.

I don’t see how accepting wormholes and CTCs is any different than accepting non-locality. This isn’t a new idea either [1].

24. jonas Says:

I already found one interesting detail that I don’t recall having heard earlier: that you would bet that P != NP can be proved in Peano arithmetic. It’s a pity there were too few questions about specific computer science problems, and too much emphasis on general philosophical questions and quantum computing.

Also, thanks for the Kipling poem you included as a bonus in one reply.

Avi #1, Scott #2,

What question on Quora is this about?

26. Raoul Ohio Says:

In answering the “Why move from MIT to UT” question, you mention some good aspects of UT.

I always wondered about working in a building that was intentionally designed to look like earthquake rubble:

Have they ever managed to keep the roof from leaking?

27. Curious Says:

100:1 is not that strong a bet, and 4:1 is even worse (Trump has better odds right now).

28. Avi Says:

Scott: “We had this discussion earlier on this blog and I don’t want to rehash it now, but I don’t even think I’d bet on 2+2=4 at $2^{BB(10000000)}$:1 odds.”

You said differently in http://www.scottaaronson.com/blog/?p=2673#comment-1012561:

“In particular, yes, I would accept a bet at 10^10^80:1 odds that 15=3*5.”

Unless there’s some difference between $10^{10^{80}}$ and $2^{BB(10000000)}$ (but the discussion there implied/stated you’d accept any number).

Have you changed your mind since then?

29. Avi Says:

30. Bumble Bee Says:

Where on earth do you get the name of your blog from?

It sounds like it’s from Pikesville, Maryland and it has something to do with the NSA.

31. ppnl Says:

Scott,

Like you I am befuddled by John Searle’s inability to, well, think. I find it almost impossible to even read anything he wrote. But I do think you can put his Chinese room into a form that at least presents a conundrum.

Say you take a cattle prod and torture a dog to death over several days. That is a horrific act for which you could go to prison. Causing such pain is inhuman.

But say you take a video of the act. Are you recreating the pain when you replay the video? Most people would say no.

But let’s say it was a very detailed recording that captured all the relevant internal brain states. It’s still just data, isn’t it?

OK, this recording is a vast amount of data that we might want to compress. One way to compress it is to record the rules by which the neurons operate and the initial state. Now we can calculate each state from the previous state. It is still just recorded data, isn’t it? Yet our chosen compression algorithm has inadvertently created a software brain. Does it feel pain?

In general, what is the difference between a sequence of deterministic events following one after another and a movie of the same events? For example, what is the difference between watching a pattern in Conway’s game of life play out and watching a YouTube movie of the same?
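ppnl’s compression thought experiment can be made concrete with the Game of Life example he mentions. In the sketch below (the particular pattern and step count are chosen for illustration), every frame of the "movie" is recomputed from the rules plus the initial state rather than stored, which is exactly the boundary the question probes.

```python
from collections import Counter

# ppnl's "compression" made literal: a Game of Life glider, with every
# frame computed from the rules + initial state rather than stored.
def step(cells):
    # cells: set of live (x, y) coordinates
    counts = Counter((x + dx, y + dy) for (x, y) in cells
                     for dx in (-1, 0, 1) for dy in (-1, 0, 1)
                     if (dx, dy) != (0, 0))
    return {c for c, n in counts.items()
            if n == 3 or (n == 2 and c in cells)}

glider = {(1, 0), (2, 1), (0, 2), (1, 2), (2, 2)}
recording = [glider]            # the "movie", frame by frame
for _ in range(4):
    recording.append(step(recording[-1]))

# After one full period the glider reappears shifted by (1, 1).
shifted = {(x + 1, y + 1) for (x, y) in glider}
```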

32. Curious Says:

If 3SAT or 3SUM is in linear time with a small constant is it really the end of the world?

33. Scott Says:

Curious #32: It’s the beginning of a new world.

34. Scott Says:

Bumble Bee #30: See my answer to “Why do you call your blog Shtetl-Optimized?” here.

35. John Sidles Says:

ppnl asks “What is the difference between watching a pattern in Conway’s game of life play out and watching a Youtube movie of the same?”

Extending this question to be more physically realistic can elicit satisfying STEM-answers, as follows:

An extended Searle-style question  What is the difference between watching the behavior of a living ant (or any other living creature), and watching a Youtube movie of the same?

Following extension, Searle-style questions have a physically correct answer that encompasses both classical ants and quantum ants, and indeed applies to all living creatures (both classical and quantum).

A Searle-style answer  Living ants, like all living creatures (including us humans) continuously interact with an unbounded, shared, external reality.

Physicists describe this reality variously as a ‘classical reality’ or a shared ‘quantum vacuum’; in either case it is unbounded in space and time.

In striking contrast, Youtube videos, Chinese Room observers, and blank-tape Turing Machines (and also scalable quantum computers and BosonSampling devices) are crucially designed so as to lack the unbounded/shared reality-interaction attribute.

An instructive exercise  Survey each week’s arxiv preprints for cutting-edge vacuum-related research. For example, this week’s gems include: “Emergent dark energy via decoherence in quantum interactions” (arXiv:1605.05980v1 [gr-qc]) and “Quantum computing with atomic qubits and Rydberg interactions: Progress and challenges” (arXiv:1605.06002v1 [quant-ph]).

These preprints (and dozens more every week) are showing us that a great many problems — particularly in quantum computing — become very much easier to analyze when strict isolation is assumed; however, Nature has her own policies in this regard; in particular, she strictly requires that physically realizable Hamiltonian flows leak information into an unbounded QED vacuum.

Beyond isolationism  Nowadays the implications (both practical and fundamental) of this universal leakage are increasingly coming to dominate multiple areas of STEM inquiry. In quantum computing and BosonSampling (for example), physicists are deploying ever-higher-finesse optical cavities, ever-lower-temperature cryostats, ever-more-carefully engineered materials, and ever-more-ingenious quantum error-correction schemes, all in an effort to thwart Nature’s QED-mediated attempts to radiate information toward infinity.

Needless to say, this struggle makes for great science, great mathematics, great engineering, and even great philosophy — and great opportunities for young people. 🙂

Conclusion  A great many fundamental STEM problems — in fields as diverse as biology, physics, computer science, mathematics, and ethics — are intimately entangled with problems associated to the physical feasibility (versus infeasibility), and the mathematical abstraction (versus over-simplification), and the philosophical idealization (versus immorality), of isolating complex dynamical systems from shared environments (both in principle and in practice).

Question  What are some good starting references for students to learn about vacuum/condensed matter dynamics? That is a crucially important and mighty tough question, to which no very satisfactory answer is known (to me anyway).

36. Scott Says:

ppnl #31: Yes, your dog-torturing story gets at precisely the questions about strong AI and consciousness that are the most interesting and confusing! FWIW, I set out some of my own thoughts in The Ghost in the Quantum Turing Machine (warning: long read).

37. Scott Says:

Avi #28: OK, fine, you’ve successfully Dutch-booked me! 😉

But seriously: the difference is that, in the former context, we were talking about abstract assignments of probabilities to mathematical statements. I was trying to explain to wolfgang why there aren’t assignments other than 1 or 0 that make any sort of internal logical sense.

But the present context, at least as I understood it, is about which bets I would or wouldn’t accept as a practical matter. There, I think wolfgang’s implicit point stands, that whether or not you take a bet depends not only on your belief about the underlying question (certitude, in my case, in the case of 2+2=4), but also on your correct understanding of the meaning of the bet (and your understanding of the counterparty’s understanding, etc.), the honesty of the counterparty, the justice of your civilization’s courts, and countless other issues.
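For the curious, the "Dutch book" Scott jokes about is a concrete construction: an agent whose stated probabilities are incoherent can be booked into a guaranteed loss. A toy sketch (the credences and stake are made-up numbers for illustration):

```python
# Toy Dutch book: an agent quotes P(A) = 0.6 and P(not A) = 0.6,
# which is incoherent (0.6 + 0.6 > 1). Selling the agent a $1 bet
# on each side locks in a profit for the counterparty either way.
p_A, p_not_A = 0.6, 0.6   # invented, incoherent credences
stake = 1.0

collected = (p_A + p_not_A) * stake   # premiums taken in: $1.20
profit_if_A = collected - stake       # pay out $1 on the A bet
profit_if_not_A = collected - stake   # pay out $1 on the not-A bet
# Either way the counterparty nets $0.20, regardless of the outcome.
```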

38. mjgeddes Says:

#31 and #36

Lucky you guys have me around to give swift and decisive answers to all of the world’s most perplexing philosophical questions 😉

The solution to the puppy dog consciousness conundrum lies in the nature of causality and causal relationships. Let me explain.

You see, causality and correlation are not the same thing. In the case of the movies, only the surface patterns are captured, not the underlying causal relationships. And consciousness requires that actual causal relationships are instantiated, not just correlative patterns.

So what distinguishes true causal relations from mere correlative patterns? The answer resides in levels of abstraction and the nature of emergence and reductionism.

In the case of a mere ‘movie’ or ‘play-back’ of an event, only one level of description is present. But for true causal relationships to be represented, you need *three* levels of description…object, meta- and meta-meta. And that needs actual computations to take place where an encoding system can be instantiated that recursively recreates or represents 3 separate levels of abstraction.

39. Curious Says:

Scott comment #33 ‘It’s the beginning of a new world’ seems optimistic. Is there any reason not to believe we are finished if 3SAT or 3SUM is in, say, $10n$ time ($n$ is the number of input bits)?

Think about this: anything the smartest humans can do intellectually in a lifetime, a rice cooker chip could do in the blink of a second. If a path to a technological breakthrough is feasible, we could achieve it in the blink of a second. If there is a loophole in the possibility of time travel, we could figure it out in the blink of a second; if there is a loophole for fooling thermodynamics, we could all be young again in the blink of a second. What does it mean to be human then? What does it mean to live then? Aren’t we all finished? (Eh, these are just my crazy thoughts.)

40. Curious Says:

By 3SUM I mean SUBSETSUM (though 3SUM in linear time would be a surprise, it has a higher probability of not causing an apocalypse).

41. asdf Says:

Scott, that was a good interview, but man, I hate Quora. Way too many pages to click through to read one answer, and they registration-gate a vast amount of reader-contributed content whose actual authors never get paid. I’d have been a lot more excited if you did a Reddit AMA, and I’m sure it would have had a lot more readers too. Any chance?

Separately, I wonder if this is interesting:
http://arxiv.org/pdf/1605.06022v1.pdf
and more generally, whether theories like Trace Dynamics have particular implications for quantum computation.

42. Scott Says:

asdf #41: Sure, I’d be happy to do a Reddit AMA, if they ever asked me! 🙂

Sorry, I don’t know anything about Trace Dynamics, and probably won’t be able to read that paper anytime soon.

43. asdf Says:

You don’t need an invite per se. Just ask the moderators to schedule it (open a reddit account first if you don’t have one), then do it:

https://www.reddit.com/r/IAmA/wiki/index

44. Richard Says:

Please just cut-and-paste your Q&A here. You wrote all the A, after all.

And do so as, you know, good old roman-alphabet text, enhanced, for Advanced Responsive Accessibility, by formatting into paragraphs.

Quora is so god-damned awful it manages to break scrolling, along with every other basic browser behavior it can get its filthy Javascript and cookies and paywalls and click-baits on. There’s nothing good to be said about it. Die die die!

45. SC Says:

Is there a non-encrypted transcript available? Quora’s encryption doesn’t work with all browsers.

46. AdamT Says:

ppnl #31, Scott #36:

Dog torturing from a Buddhist emptiness perspective (eww) would regard the ‘movie’ or ‘playback’ of the event to be precisely of the same nature as the ‘actual’ event: both are equally unreal. And the suffering only comes about when the consciousness viewing it mistakes it for being real.

mjgeddes #38: what is this causal power you place so much importance upon and how can it be found? What is the difference between causes and correlations from an ultimate perspective?

ppnl #31, Scott #36:

It is precisely the same as someone dreaming of said dog torturing and then waking up to discover it was not real, and the sense of relief that brings. If you were aware in your dream that it was a dream, you would not suffer so. On the side of the dog: if you were dreaming yourself as the dog being tortured and woke up to discover it was not so, the suffering would disappear.

In essence, the question of whether or not we regard the ‘movie’ or ‘playback’ to entail suffering is just wrestling with whether or not it is *real*, whether to assign ontological importance to it. If you *were* the ‘movie’ or ‘playback’ and knew what it was like to *be* it, the suffering would depend on whether you believed yourself ontologically real. And it is exactly the same for a ‘real’ dog… dream or no.

Wow, typing this on a small phone… Sorry for typos and mistakes.

49. mjgeddes Says:

AdamT#46 said ” what is this causal power you place so much importance upon and how can it be found? What is the difference between causes and correlations from an ultimate perspective?”

A correlation is an inductive (generalized) set of observations that several phenomena tend to occur together. The process of detecting correlations can be entirely described by ‘Bayesian inference’, which can quantify these correlations in precise statistical terms. Correlations are patterns.

Causation on the other hand, implies a hierarchy; some things are prior to (the causes of) others. You can describe causality by hierarchical categories – imagine taking a rope and throwing it around some set of things – you can then say ‘all these things are connected because they can be glued together to form a coherent category’. And categories themselves are ‘things’ that can be grouped together to form new categories.

This categorization or concept formation is a consequence of causal power. So the “causal power” is the abstract logical rules in operation that let the mind form these coherent categories.

This summary illuminates a critical point about causality: it’s a high-level phenomenon that just doesn’t exist on a microscopic scale. The laws of physics are time-reversible, so no flow of time exists on a fundamental level. Only when the mind forms abstractions and starts to separate reality out into different levels of abstraction does anything resembling causality appear.
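One standard way to see the correlation/causation gap under discussion, sketched with synthetic data (the noise levels, seed, and sample size here are arbitrary choices for illustration): two variables with no direct causal link correlate almost perfectly when driven by a hidden common cause.

```python
import random

# X and Y share a hidden common cause Z but do not influence each other;
# a plain correlation measure still finds them almost perfectly linked.
random.seed(0)
z = [random.gauss(0, 1) for _ in range(2000)]
x = [zi + random.gauss(0, 0.1) for zi in z]
y = [zi + random.gauss(0, 0.1) for zi in z]

def pearson(u, v):
    n = len(u)
    mu, mv = sum(u) / n, sum(v) / n
    cov = sum((a - mu) * (b - mv) for a, b in zip(u, v))
    su = sum((a - mu) ** 2 for a in u) ** 0.5
    sv = sum((b - mv) ** 2 for b in v) ** 0.5
    return cov / (su * sv)

r = pearson(x, y)  # close to 1, despite no direct X -> Y causation
```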

50. AdamT Says:

mjgeddes #49:

Request for clarification:

* You say, “Causation on the other hand, implies a hierarchy; some things are prior to (the causes of) others.” This means that the hierarchy is delimited by moments in time, yes? But correlations can also be so delimited, right? If so, what is the difference?

* You say, “‘Causal power’ is the abstract logical rules in operation that let the mind form these coherent categories.” Do these abstract logical rules exist somewhere or are they merely conceptions of the mind?

Your third paragraph leaves me with the impression that you don’t really believe that causality is ontologically real, since the laws of physics are time-reversible and presumably you hold that the laws of physics are what is fundamental… i.e., what is ontologically real, right?

This still leaves me with the question of how you differentiate causality and correlation in this system. Both can be described as sets of things/observations/phenomena that are delimited by moments of time. Both can be described as categories or sets of phenomena. But to one you ascribe “causal power”, while to the other you do not, and yet you admit that on a fundamental level this “causal power” does not exist. So what is the utility of imagining this “causal power”, which does not exist on a fundamental level, yet somehow differentiates causation from mere correlation?

And bringing it back to the topic (dog torturing and the precise formations that can be described as ontologically real and what can be safely said to be non-existent in terms of suffering) what does this “causal power” bring to the situation to illuminate this question?

51. venky Says:

Does quantum computing extend the domain of computable functions? I am trying to understand if any real Turing oracles have ever been found.

52. John Sidles Says:

A literature survey
One fundamental question, two quantum principles, some concrete computations, and a clarifying philosophical practice

One fundamental question  Scott’s well-reasoned and very enjoyable (as it seems to me) essay “The Ghost in the Quantum Turing Machine” has now appeared in print (The Once and Future Turing, Cambridge Univ. Press, 2016, see also arXiv:1306.0159v2). The essay includes this reflection:

Question  Does the brain possess what one could call a clean digital abstraction layer? … As soon as we try to answer these questions, I’ve argued that we’re driven, more-or-less inevitably, to the view that the brain’s detailed evolution would have to be buffeted around by chaotically-amplified “Knightian surprises,” which I called freebits … The “freebits” referred to throughout the essay are just 2-level freestates.

Two quantum principles  Last month’s Quanta Essay by Frank Wilczek, “Entanglement Made Simple” (of April 2016) provides two quantum principles that can be read (by me anyway) as governing the conversion of freebits to qubits, or more generally, freestates to quantum states:

(1) “A property that is not measured need not exist”, and (2) “measurement is an active process that alters the system being measured.”

For an extended discussion, see Wilczek’s Solvay Lecture “A long view of particle physics” (2012, arXiv:1204.4683v2), in particular Wilczek’s concluding remarks (in Section VI.A “Cosmic questions/Information as foundation?”).

Concrete computations Scott’s fundamental question and Wilczek’s two quantum principles receive a concrete computational embodiment in this week’s preprint by Ilya Kuprov “Fokker-Planck formalism in magnetic resonance simulations” (2016, arXiv:1605.05243).

Like Scott and Frank Wilczek, Ilya Kuprov too is pretty good at pithy remarks:

Good theory papers have two essential features: they are readable and computable. That is the reason why the Liouville-von Neumann equation (on the coherent side) and the Bloch-Redfield-Wangsness / Lipari-Szabo theories (on the relaxation side) dominate magnetic resonance: there are papers and books that describe them with the eloquence and elegance of a well written detective story.

Kuprov’s writings are well worth reading (as it seems to me at least) for multiple reasons, five of which are:

(1) Kuprov’s recipes explicitly respect Wilczek’s two quantum principles, while concretely reducing Scott’s “freestates” to “quantum states” (by evolving uninitialized $$\mathbf{\rho}$$-matrices to physical $$\mathbf{\rho}$$-matrices);

(2) Kuprov’s SPINACH software is open-source and thoroughly documented (which makes it easy for students to get started);

(3) SPINACH simulations are solidly grounded in the experimental literature (and thoroughly documented);

(4) Kuprov’s computational recipes extend naturally to the broader tensor-product literature (thus suggesting plenty of research topics); and

(5) Kuprov’s views are vigorously advocated and (in his best writing) he expresses these views with “the eloquence and elegance of a well written detective story” (similar to Shtetl Optimized).

If, as Scott’s Quora interview foresees, “quantum supremacy can be achieved within the next 5–10 years”, then (as it seems to me) the research community working toward this goal will require all the Kuprov-style “clean digital abstraction layers” that it can get.

After all, few things are so experimentally common in quantum research as decoherence levels that are unexpectedly high, for reasons that are not immediately evident.

Conversely, as Kuprov’s preprint concludes:

A high level of abstraction in the fundamental equations of motion, exotic and unwieldy though they may at first appear, is worth it in the long run because the resulting framework is general, flexible, extensible and maintainable.

As with simulations of quantum supremacy, so with hoped-for technologies that will demonstrate quantum supremacy: the best quantum supremacy technologies hopefully will be (in Kuprov’s words) “general, flexible, extensible and maintainable.”

Then (and only then) can the Extended Church-Turing Thesis (ECT) be “put on termination notice” (in Scott’s pungent Quora phrase).

A clarifying philosophical practice  At least some philosophical questions (like the “Chinese Room” and the “Abused Dog Video”) can be clarified by restricting the domain of philosophical inquiry as follows: fundamental questions about brains and minds are to be addressed solely in the context of Scott’s “clean digital abstraction layer”, as dynamically instantiated in accord with Wilczek’s two quantum principles, and as concretely simulable in accord with Kuprov-style computational recipes.

The principles ensure that whenever we are watching a dog, the dog is watching us too. It follows that, as our human observation processes reduce canine freestates to canine quantum states, the dog’s observation processes (yes, dogs physiologically possess them!) are reciprocally reducing our human freestates to human quantum states.

In contrast, a crucial real-world deficiency of mind-models that include Chinese Rooms, Dog Videos, and Turing Tapes is that these models unphysically violate Wilczek’s second quantum principle, and in consequence of this crucial unphysicality, these mind-models fail to provide Scott’s “clean abstraction layer” that is so necessary to clear philosophical discourse. They are useful toy models for proving theorems, but they are very far from realistic models of how real minds work.

53. Ian Says:

Hi Professor Aaronson,

Have you ever done an analysis over all your blog posts of who the most frequent (non tweet-back or non-blog-post-link-back) final commenters are? I think it would be really revealing. 🙂

54. Sniffnoy Says:

Venky: No, they don’t. Quantum computers can be simulated classically with exponential slowdown. This used to be in the blog header, but I gather Scott decided it was no longer the most common point of confusion for people. 🙂
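Sniffnoy’s point can be made concrete with a toy statevector simulator (a minimal sketch in pure Python, with made-up helper names, not any real library): an n-qubit state needs 2^n complex amplitudes, which is exactly where the exponential slowdown comes from.

```python
# A toy classical statevector simulation (a sketch, not a real simulator):
# an n-qubit state is a vector of 2^n complex amplitudes.
from math import sqrt

def apply_single(state, gate, target):
    """Apply a 2x2 gate to the `target` qubit (qubit 0 = least significant bit)."""
    new = [0j] * len(state)
    for i, amp in enumerate(state):
        bit = (i >> target) & 1
        base = i & ~(1 << target)
        for out in (0, 1):
            new[base | (out << target)] += gate[out][bit] * amp
    return new

def apply_cnot(state, control, target):
    """Flip the `target` qubit wherever the `control` qubit is 1."""
    new = list(state)
    for i in range(len(state)):
        if (i >> control) & 1 and not (i >> target) & 1:
            j = i ^ (1 << target)
            new[i], new[j] = state[j], state[i]
    return new

H = [[1 / sqrt(2), 1 / sqrt(2)],
     [1 / sqrt(2), -1 / sqrt(2)]]   # Hadamard gate

state = [0j] * 4
state[0] = 1 + 0j                   # start in |00>
state = apply_single(state, H, 0)   # Hadamard on qubit 0
state = apply_cnot(state, 0, 1)     # entangle into a Bell state

probs = [abs(a) ** 2 for a in state]
print(probs)  # weight concentrated on |00> and |11>, each ~0.5
```

Each added qubit doubles the length of `state`, so a 50-qubit circuit would already need about 10^15 amplitudes: exponentially slower than the quantum device, but still computable in principle.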

55. venky Says:

Thx Sniffnoy.

56. mjgeddes Says:

If you drop a glass on a hard floor and the glass shatters, you won’t ever see the process happening in reverse. That is to say, you won’t ever see broken pieces of glass suddenly leap from the floor and reform themselves into a whole glass in your hand 😉 So the process of the glass falling from your hand and shattering was irreversible.

But nowhere in the laws of particle physics can you find an explanation for this. You need new meta-principles (i.e. the laws of thermodynamics) to account for the one-way nature of some processes. In the above case, it is the second law of thermodynamics, concerning entropy. But where do these meta-principles come from? Why does entropy increase? Well, part of the answer is that entropy was much lower in the past. But why? Well, it’s what is called a *boundary condition*, and it’s not in the fundamental laws themselves. It’s something additional to that.

And *causality* is really about these ‘boundary conditions’, that explain how to relate the high-level (abstract) properties of things (such as entropy) to the lower-level laws. So causality sets the ‘direction’ or ‘arrow of time’. Mere correlation, on the other hand, does not.

The ‘laws of causality’, whilst not a part of fundamental physics, are still *real*: they exist as logical-mathematical principles, so they are just as ‘real’ as the rest of mathematics.

Causality is real enough in that it takes very concrete forms. For what do you think that subjective experience (‘consciousness’) actually is? I’d say it is exactly the manifestation of physical causality. Causality isn’t just something that exists ‘in’ the mind: it IS the mind 😀

57. John Sidles Says:

mjgeddes observes (#56) “Causality is real enough in that it takes very concrete forms.”

At least three burgeoning areas of quantum research are driven by these concrete causality-forms. (1) the extension of Hamiltonian flows to Lindbladian flows provides mathematical foundations; then (2)  quantum measurement-and-control theory describes (both formally and practically) the construction/destruction of informatic correlation by causal processes; and (3)  the associated dynamical flows carry uninitialized “Knightian” $$\psi$$-vectors of Hilbert space onto the low-dimension varietal submanifolds that are the natural computational venue of quantum simulationists in general, and tensor-product-state researchers in particular.

In any given week, the arxiv provides dozens of preprints that fit naturally into this quantum-causal schema. Most of these articles are eminently “readable and computable” (in Ilya Kuprov’s phrase of #52), so it’s a considerable problem that there are so exhaustingly many of them.

58. AdamT Says:

mjgeddes #56:

So you do believe in the ontological reality of “causal power!” In your view, causality is generated by running the laws against initial conditions where the laws and conditions are seen as ontologically real.

I don’t see how this helps with determining in what form puppy torture should be seen as ontologically real. In fact, it just muddles the question further. Consider…

Instead of puppy torture, we can ask in what way these laws+initial conditions should be seen as ontologically real.

Do these laws+initial conditions need be instantiated or run – does the ‘movie’ of the universe need to play – in order for it to be ontologically real?

Must there be a “consciousness” encoded in these laws+initial conditions that is capable of conceiving the laws+initial conditions as ontologically real… for it to be ontologically real?

To me, this hints that the questions themselves betray a false assumption: that there is any such *real* thing as a real thing. The error is hypostatizing anything at all.

I suggest the answer to the question, “In what form does a puppy need be (physical instantiation, mere encoded information, etc) to suffer from torture?” is any form capable of committing the error of hypostatization.

59. venky Says:

Sniffnoy: it’s kind of amazing that a model of computation based on complex numbers can be simulated exactly by a model based on finite operations on positive integers.

60. fred Says:

ppnl #31.

Reading your argument about video, it reminded me that space-time itself in general relativity is considered one “solid” block. Time is purely a construct of the way our memories are formed, time doesn’t pass, and all instants just exist. Very much like the different frames in a video tape.

About the dog not suffering again when playing the video, there’s the different legal argument that content that’s been created artificially and that depicts forbidden acts is just as illegal as actual photos/videos, even though there is obviously no victim involved. One could argue that it’s the viewer who provides the bit of consciousness necessary to (re)enact the recorded emotions, through empathy or sadism. So when I watch a dog suffer on video, somewhere inside my head, my consciousness is “forking” a little ghostly instance of a dog that’s in pain.

I guess it all boils back down to the argument of simulation vs. the actual system being simulated, and what we mean by “computation”. Eventually a simulation itself is made of real matter. And in a perfect simulation the relations/patterns between atoms are isomorphic to the relations/patterns between atoms of the “real” system. The question is whether the property of “consciousness” is preserved under the multiple transformations defining the isomorphism.

61. JimV Says:

mjgeddes Says: “… the process of the glass falling from your hand and shattering was irreversible.

But nowhere in the laws of particle physics can you find an explanation for this.” (Comment #56)

I don’t think that is a good example of what you mean. The glass falling was caused by gravity. There would have to be negative gravity or some other force to cause pieces of glass on the ground to rise into the air, collide, and sinter themselves into a glass (container). There is no such contrary force in the laws of physics, so the laws of physics do explain the irreversibility, in that instance.

If we remove the external force from the example, then irreversibility is just a statistical phenomenon: any particular configuration of interacting particles is very unlikely to reoccur due to the vast number of possible configurations, just as random shuffling of a deck of cards is very apt not to give the same deal twice in a row.
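JimV’s card-shuffling point is easy to check numerically (a quick sketch):

```python
# The chance that a uniformly shuffled 52-card deck lands in one
# particular exact order -- e.g. repeating the previous deal -- is 1/52!.
from math import factorial

p_repeat = 1 / factorial(52)
print(f"{p_repeat:.3e}")  # roughly 1.24e-68
```

The same counting argument underlies statistical irreversibility: the shattered-glass configuration is one of astronomically many, so its spontaneous reassembly is never observed.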

62. fred Says:

mjgeddes #56:

“If you drop a glass on a hard floor and the glass shatters, you won’t ever see the process happening in reverse.”

Sure, but you will never again see another glass break in exactly the same manner as the glass that just shattered…
I.e. the “glass that just shattered” is an event that’s just as rare as any glass that would reconstitute itself, and indeed, at the atomic level, there is no difference.
We just happen to group an infinite set of unique glass-shattering events into one category. We’re biased toward it because the chemistry of our memories and the set of “shattering glass” events are descendants of the same initial conditions in the same local pocket of universe/reality.

63. Sniffnoy Says:

venky: Well, you can’t simulate complex numbers exactly, obviously, because there are uncountably many; but quantum computation doesn’t let you read out the amplitudes, it only lets you sample according to them. So you don’t need infinite precision to simulate quantum computing.

64. Greg Says:

venky #59: a discrete computer doesn’t have to exactly represent complex-valued amplitudes to arrive at comparable accept/reject probabilities. Indeed, that’s the rub in general when someone comes along and proposes some analog device that threatens to dislodge the Church-Turing thesis: buried in any definition of any reasonable model of computation based on continuous quantities, as also in experimental physics, are error bars or some equivalent means of accounting for finite accuracy.
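Greg’s point in miniature (an illustrative sketch with made-up amplitudes): truncating amplitudes to finite precision shifts the outcome probabilities only within tiny, controllable error bars.

```python
from math import sqrt

# Two-outcome "quantum" amplitudes with irrational magnitudes:
amps = [complex(sqrt(0.3), 0.0), complex(0.0, sqrt(0.7))]

def probs(a):
    total = sum(abs(x) ** 2 for x in a)
    return [abs(x) ** 2 / total for x in a]   # renormalize

def rounded(a, digits):
    # Keep only `digits` decimal places of each amplitude: a crude
    # stand-in for any finite-precision representation.
    return [complex(round(x.real, digits), round(x.imag, digits)) for x in a]

exact = probs(amps)
approx = probs(rounded(amps, 4))   # 4 decimal digits per amplitude

# Total-variation distance between the exact and finite-precision
# outcome distributions -- the "error bar" of the discrete simulation:
tvd = 0.5 * sum(abs(p - q) for p, q in zip(exact, approx))
print(tvd < 1e-3)  # True: far below any experimental error bar
```

Since a quantum computer only lets you sample outcomes, matching the distribution to within such error bars is all a classical simulation ever needs.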

65. jonas Says:

Scott #19:
> why not say that every time you run Shor’s algorithm, really the initial conditions of the universe caused you to choose a composite number that it “already knew” the prime factors of

I don’t think that specific method would work. We can use an unseeded cryptographic pseudo-random generator to deterministically generate lots of integers from a uniform distribution. If we have a working quantum computer, we can then factor those. Those numbers would still be hard to factor for a conspiracy that has access only to polynomial-time classical computation, even if it knows exactly how they were generated.
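jonas’s construction can be sketched in a few lines (hypothetical code; SHA-256 in counter mode stands in for his unseeded cryptographic pseudo-random generator):

```python
# An unseeded, deterministic cryptographic stream: SHA-256 applied to a
# counter.  Everyone -- including a "cosmic conspiracy" -- can recompute
# the stream, yet knowing how the integers were generated gives no known
# classical shortcut to factoring them.
import hashlib

def deterministic_integers(count, bits=64):
    """Generate `count` integers of `bits` bits from a seedless SHA-256 stream."""
    nums = []
    for i in range(count):
        digest = hashlib.sha256(str(i).encode()).digest()
        nums.append(int.from_bytes(digest[: bits // 8], "big"))
    return nums

nums = deterministic_integers(3)
print(nums)  # identical on every machine, every run -- no hidden seed
```

Because the stream has no seed, the initial conditions of the universe cannot have smuggled in numbers whose factors were “already known”; any conspiracy would have to factor them the hard way.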

66. venky Says:

Thx Sniffnoy and Greg. It’s amazing we are back to the original 19th century definition of reals as equivalence classes of Cauchy sequences of rational fractions.

67. John Sidles Says:

Sniffnoy (#54), Fred (#62), Greg (#64), Venky (#66) (and several other Shtetl-Optimized commenters) may be interested in Colin McLarty’s very lively, very reader-friendly guest article on M-Phi: a Blog Dedicated to Mathematical Philosophy, titled “Fermat, set theory, and arithmetic” (of 22 April 2013):

Some philosophers suspect mathematicians don’t care about foundations but only care about what works. But that elides the problem mathematicians constantly face: what will work? […]

These themes converged in the on-line row over whether Wiles’s proof of Fermat’s Last Theorem (FLT) uses Grothendieck universes. Universes are controversial in some circles since they are sets large enough to model Zermelo Fraenkel set theory (ZF) and so, by Gödel’s incompleteness theorem, ZF cannot prove they exist.

It is no surprise theoretically that a statement about numbers could be proved by high level set theory […] but it is surprising in fact that FLT should be proved this way. We do not expect to see the Gödel phenomenon in such simple statements. […]

I am working to lessen the surprise in the case of FLT and other recent number theory by bringing the proofs closer to arithmetic. I have formalized the whole Grothendieck toolkit in finite order arithmetic. That is the strongest theory that is commonly called “arithmetic”.

The much-discussed HoTT mathematical foundations program provides a venue in which McLarty’s ideas are not only well-posed, but natural and even machine-checkable.

Practical parallels  For engineers too, there are parallels with practical problems in quantum simulation. A pretty considerable portion of the modern quantum simulation literature can be viewed as a McLarty-style effort to replace the large-dimension (uncountably infinite) Hilbert space of gauge field theory with low-dimension (usually finite, and at most countably infinite) varietal spaces.

For example, Google’s new software framework “TensorFlow”, implemented on Google’s new TPU chips (“Tensor Processing Units”) as the basis of Google’s new algorithms (“AlphaGo”), represents an evolving class of computations that are not obviously well-represented by Turing-equivalent von Neumann processors manipulating large-dimension $$\psi$$-vectors.

Broadly speaking, in the Turing/von Neumann/Hilbert computational world, physical systems are encoded as large-dimension vectors of complex numbers, while in the Grothendieck/McLarty/Google/varietal world, physical systems are encoded as smaller-dimension joins of polynomials.

Toward Varietal Comity  At the practical level, Grothendieck/McLarty/Google-style varietal simulations have been so successful that it is natural to wonder whether these dimension-reducing methods are applicable universally, consistent with the Extended Church-Turing Thesis (ECT) as a fundamental law of nature, that governs in particular all gauge-field-theory quantum universes.

In this practical universe the Standard Model of quantum physics is entirely correct, and yet Gil Kalai’s skeptical views regarding quantum computing (per Gil’s recent Notices of the AMS article; see too arXiv:1605.00992) are entirely correct too.

Toward Quantum Supremacy  On the other hand, the experimental demonstration of quantum supremacy — for example via scalably fault-tolerant quantum computing and/or scalable BosonSampling — would confound fond hopes for Varietal Comity (as it might be called), by overthrowing the Extended Church-Turing Thesis.

So this is one of those transformational (and immensely enjoyable) epochs in science, in which there is plenty of high-quality literature, and stimulating arguments, and high-quality mathematics, on both sides of a fundamental scientific question.

Conclusion  Scott answered the Quora question “Do you still think that philosophers should care more about computational complexity?” with “You better believe it! 🙂 ” (emoticon as in the original).

The converse question “Should qubit-designers and BosonSamplers and quantum simulationists care more about computational philosophy?” is answered by works like Colin McLarty’s and Gil Kalai’s identically: “We’d better believe it! 🙂 “

68. mjgeddes Says:

Adam, Fred and JimV (#58, #60 and #61)

The point I was trying to make by talking about irreversibility and thermodynamics was that you can describe the world at different levels of abstraction: and the world looks very different when you do that.

Look at that computer screen before you. Take a big magnifying glass and ‘zoom in’: the screen disappears and now you are looking at pixels. Now take a super-duper microscope and ‘zoom in’ again; now you will be seeing the raw materials and molecules that make up the screen. ‘Zoom in’ / ‘zoom out’: that’s levels of abstraction.

What I’m saying is that subjective experience is caused by the ‘switching’ between these levels of abstraction: it’s the ‘zoom-in’/‘zoom-out’ switching that generates consciousness!

When you ‘zoom-in’ all the way to the microscopic level, there is no time or causation at all! So our perception of time has only appeared when we ‘zoomed-out’ (switched) to a higher-level of abstraction.

But here is the big punch-line that very few have noticed: there is no actual reason in the laws of physics for the different levels of abstraction to be entirely consistent with each other! Have you ever stopped and wondered *why* you can just ‘zoom-in’/‘zoom-out’ like that without any inconsistency? No, I bet you never even considered it really – you just took it for granted – unless you were a very clever philosopher 😉

The point I was making about the shattering glass is that its ‘one-way’ nature *can’t* be explained by the laws of physics alone: I just pointed out that at the microscopic level, time doesn’t exist. So an *extra* principle is needed to explain exactly how to ‘switch’ (‘zoom out’) from the micro-level to the high-level description. And that *extra* principle is *not* in the laws of physics!

In the case of the shattering glass example, the ‘extra’ you had to add to explain it was the fact that the universe was in a state of extremely low entropy in the distant past. But the point is not this specific example, but just the fact that you *do* have to add something ‘extra’ to perform the ‘zoom-in’/‘zoom-out’ switching correctly.

The ‘extra’ principles that explain how the different levels of description are connected (how you can ‘switch’ or ‘zoom-in’/’zoom-out’) are the missing ‘laws of causality’, and it is precisely these laws that explain consciousness!

69. John Sidles Says:

Pretty much nothing in any text on entropy makes sense except in light of the considerations that mjgeddes mentions (in #68). Commonly this is acknowledged explicitly, for example in Rudolf Peierls’ pungent essay “Some simple remarks on the basis of transport theory” (1974):

In going from the reversible equations of mechanics to this Boltzmann equation, we have already smuggled in the irreversible behavior in some place. This place is the stosszahlansatz of Boltzmann. […]

I mention this because in any theoretical treatment of transport problems, it is important to realize at what point the irreversibility has been incorporated. If it has not been incorporated, the treatment is wrong. A description of the situation which preserves the reversibility of time is bound to give the answer zero or infinity for any conductivity. If we do not see clearly where the irreversibility is introduced, we do not clearly understand what we are doing.

In recent years there has been an explosion of work in this area; even entire journals are devoted to it. And yet nothing approaching a uniform consensus has emerged. The still-increasing diversity of opinion and exposition is well-surveyed (as it seems to me) in a recent textbook by Georgy Lebon, David Jou, and José Casas-Vázquez, titled Understanding Non-equilibrium Thermodynamics: Foundations, Applications, Frontiers. This book’s Epilogue concludes:

It may be asked why so many thermodynamics? A tentative answer may be found in the diversity of thought of individuals, depending on their roots, environment, and prior formation as physicists, mathematicians, chemists, engineers, or biologists. The various thermodynamic theories are based on different foundations: macroscopic equilibrium thermodynamics, kinetic theory, statistical mechanics, or information theory.

Other causes of diversity may be found in the selection of the most relevant variables and the difficulty to propose an undisputed definition of temperature, entropy, and the second law outside equilibrium.

There is no doubt that trying to reach unanimity remains a tremendously challenging task.

A considerable virtue (as it seems to me) of the quest for Quantum Supremacy, is that it compels us to the most rigorous examination of the above-mentioned entropic issues, within a STEM context that is mostly free of “imperial entanglements”, that is, mostly unencumbered by the secrecy and administrative complications that attend commercial, strategic, and moral considerations.

The resulting open research environment is good for training young people and launching STEM careers, and it’s good for advancing math and science too.

These are reasons why many people (including me) are admirers of Quantum Supremacy research in general, and the Aaronson/Arkhipov idea of BosonSampling in particular.

70. fred Says:

mjgeddes #68

“In the case of the shattering glass example, the ‘extra’ you had to add to explain it was the fact that the universe was in a state of extremely low-entropy in the distant past. But the point is not this specific example, but just the fact you *do* have to add something ‘extra’ to perform the ‘zoom-in’/’zoom-out’ switching correctly.”

The interesting thing is that in many cases those hierarchies you’re talking about are pretty trivial.

And for the cases when they’re not trivial, it’s an absolute mystery: e.g. my consciousness is “something” that exists at a given level, yet, ‘all that I am’ can be fully explained at the atomic level. But then why is there even such a thing as the illusion of free will?

The concept of “entropy” gets thrown around left and right constantly, but is it even a “fundamental law” of physics? Isn’t it just an observation about macro quantities for specific classes of systems and their particular initial conditions? E.g. saying that two thrown dice are more likely to add up to 7 isn’t a “law”, no?
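fred’s dice example can be settled by enumeration (a quick sketch): the skew toward one sum is pure counting, not dynamics, and the most likely sum is 7.

```python
# Enumerate all 36 equally likely outcomes of two dice.  The "more
# likely sum" is a counting fact about the outcome space, not a
# dynamical law of physics.
from collections import Counter

counts = Counter(a + b for a in range(1, 7) for b in range(1, 7))
most_likely_sum, ways = counts.most_common(1)[0]
print(most_likely_sum, ways)  # 7, realized in 6 of the 36 ways
```

Entropy increase is the same kind of statement scaled up: high-entropy macrostates simply correspond to vastly more microstates.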

Thermodynamics is handy to study thermal engines and black-holes, but in situations like the following, it seems to fail miserably:

We consider the earth as it was 4 billion years ago, a big lump of molten rock. Then we run a series of super detailed simulations of it at the atomic level, for a few billion years worth of time steps. We do this for different initial states of the “initial magma” (which is pretty “featureless”).
It could be very likely that a significant number of those simulations will feature “macro” events where we see little chunks of metal “spontaneously” leave the earth, move in space to other planets of the solar system, land there, and then come back (our version of this is the Apollo 11 moon landing). Then there would probably be many cases where the earth suddenly bursts into a nuclear firestorm in the blink of an eye.
From a thermodynamic point of view, those events are seemingly impossible, like a thousand shattered glasses “magically” gluing themselves back together to form a giant glass.
Yet, nothing else happened in there that’s not a direct consequence of simple particle dynamics.
The concepts of biological life, civilizations, history, psychology, human mathematics and sciences,… are irrelevant to the simulation itself.

“Why, for example, should a group of simple, stable compounds of carbon, hydrogen, oxygen and nitrogen struggle for billions of years to organize themselves into a professor of chemistry? What’s the motive?”

Similarly, in his book “Lila”, Robert Pirsig wonders whether the intelligence necessary to have built human cities is an inherent property of their constituent atoms.

71. John Sidles Says:

fred opines “From a thermodynamic point of view, those events — like AlphaGo learning to play go at the 9-dan level — are seemingly impossible, like a thousand shattered glasses ‘magically’ gluing themselves back together to form a giant glass.”

Quote by fred, AlphaGo interpolation by me.

For computationally evolved entities like AlphaGo, as for biologically evolved entities like humans, the question “how does cognition work?” has no obvious answer in terms of logical deduction or rules of inference; neither are Turing-equivalent von Neumann architectures obviously the most natural or efficient architectures for supporting AlphaGo-type / brain-type models of cognition.

Instead, as far as our best theories are concerned, the following are comparably mysterious:
1) AlphaGo learning to play 9-dan go, and
2) human infants acquiring human language, and
3) the repeated Darwinian evolution of vision,
And these mysteries persist even though — entirely in the case of AlphaGo, and increasingly in the case of human brains and human genomes — we have complete information, both in principle and in practice, regarding the microscopic workings of the relevant cognitive/biological processes.

Not for the first time in STEM history, but more strikingly in our 21st century than in any previous one, our technologies and our biologies are “just working” for evolutionary reasons that our best theories of computation and cognition are struggling to explain to us, in any terms that we can understand.

72. mjgeddes Says:

Here’s a summary of my basic metaphysics, which connects all the ideas I’ve posted on Scott’s blog:

The universe is separated into 3 levels of abstraction – mathematical (basement), physical (mid-level) and mental (top-level).
These 3 levels are ‘dual’ descriptions of reality – each is universal in scope, but not complete. (There is only 1 reality, but 3 different ways of describing it) – a ‘triple-aspect property dualism’.
The reason for the 3-level split is a break-down in reductionism. Reductionism is the idea that the high-level properties could in principle be completely predicted from lower-level ones, and that the different levels of description are entirely consistent. Whilst a very good approximation most of the time, reductionism is not completely true. There are complexity thresholds past which reductionism *breaks* and entirely new properties appear not predictable in principle from lower-levels, nor completely consistent with the lower-levels.
Extra logical principles will be needed to explain how to connect higher-level emergent properties to lower-level ones. These principles can be defined as ‘laws of causality’.
Consciousness (subjective experience) occurs whenever there is a ‘transmission’ of information across different levels of organization. To be precise, it occurs whenever a ‘causal relation’ is represented; this requires the 3-level split that exists in reality to be mirrored by computation.
When you have a computation that encodes for 3 levels of abstraction, then infinite recursion can always be approximated by at most 3 levels of abstraction: object, meta- and meta-meta . A computation represents a ‘causal relation’ by performing an information transmission across these 3 levels of abstraction.
This property of information processing is present to some degree in most things, indicating that some degree of consciousness is present nearly everywhere (panpsychism). Consciousness is the ‘informational glue’ that is ‘stitching’ the different levels of organization together. It is consciousness that is making the universe more and more coherent, by reducing the inconsistencies between the different levels of organization in the universe.
The process that generates subjective awareness is also what generates the flow of time, working according to the above-mentioned ‘laws of causality’. So consciousness is ‘stitching’ reality together: it is the principle of consistency and cohesion that enables reductionism to work.
The purpose of life is revealed: to truly understand all of reality as an integrated (coherent) whole. For it is precisely the computations in our brains that generate new coherent categories of thought about reality (concept formation) that make us conscious and perform the ‘reality stitching’ operations described above!

73. wolfgang Says:

@mjgeddes

1) Does my PC have a conscious experience?
Just some additional information: in addition to several conventional processes, it also runs a simulated neural network trained on financial data; it may communicate with a large number of machines at AWS performing similar computations.
2) Does my dog sitting right next to me have a conscious experience?

If you answer yes to 1) or 2) can you please also explain if this conscious experience is in any way similar to mine.

74. mjgeddes Says:

wolfgang #73

The solution to consciousness is very simple.

A conscious system is any computational system that is engaged in self-modelling and splits its model into 3 levels of abstraction. Once you have a logic separated into 3 levels of abstraction, you can approximate infinite recursion to any desired degree of accuracy, so any further recursion beyond 3 levels is pointless.

The 3 levels of abstraction for any domain can be defined as follows:

*The structural level: The intrinsic properties of an object. ‘What’s the object made of?’
*The functional level: Extrinsic properties, relations between the object and other objects, ‘What does the object do?’
*The representational level: ‘How can we talk about the object?’

The system needs a way to ‘switch’ between the 3 levels of abstraction (‘zoom-in’/’zoom-out’ ) – it needs to translate concepts on one level to concepts on another level. It’s the ‘switching’ or translation process at the symbolic level that is consciousness (the transmission of information across the 3 levels of abstraction).

In the case of your PC running the neural network, I think any consciousness present would be minuscule, given the set-up you’ve described. First, there is no self-modelling going on in processing financial data or communicating with other machines. Second, neural networks as they exist today fail to take the critical step of encoding for the split into 3 separate levels of abstraction at the symbolic level, and switching between them (they operate at the sub-symbolic level). I’m fairly sure your dog is conscious, but I’d need to know the details of how its brain operates at the symbolic level.

75. Anonymous Programmer Says:

I have a problem, I solved P=NP and decoded a lot of president Obama’s encrypted files and to my horror discovered that President Obama ordered me and my mother assassinated and my mother’s house torn apart to find the algorithm. I decoded messages that Obama and Bill and Hillary Clinton ordered many people to be assassinated and they were!!!!!!!!!!!!!!!!!!! My mother and I are on the kill list!!!!!!!!!!!!!!!!!!!!!!!! Two attempts to assassinate me and my mother have already occurred. Not only will I not get a million dollars, me and mother will not get to live!!!!!!!!!!!!!!!!!! What should I do if a President of the US issues an illegal assassination order to kill you because you know too much!!!!!!!!!!! I thought only Hitler or Stalin would do something like that.

What should I do to keep me and my mother alive, Scott Aaronson?

Kevin Brent Pryor

76. Scott Says:

Anonymous Programmer #75: Dude, you’ve proven P=NP, and you’re the one asking me for advice? Why don’t you mine a few billion dollars’ worth of bitcoin, move you and your mother into a fortified compound with a 24-hour security staff, and then have more time to contemplate your next move? (Of course, I’d be delighted were your next move to publish your algorithm, along with the evidence that would convince the world of your amazing story…)

77. Raoul Ohio Says:

The Super Mario Brothers break into the Complexity Zoo:

http://phys.org/news/2016-05-analysis-super-mario-brothers-harder.html

78. jonas Says:

Re #76: If you have a polynomial-time SAT-solving algorithm, I would advise against publishing it immediately. Instead, announce the result and convince everyone that you have such an algorithm using zero-knowledge proofs, to allow the world some time to fix the cryptography-based systems that this suddenly makes insecure. Publish the algorithm only after a couple of years of embargo. Of course, P=NP would cause a lot of problems even if you give enough grace time, but it’s still better not to publish such a valuable secret immediately. Of course, I don’t think this is possible, because I believe that P!=NP.
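[A minimal illustration of the idea in #78: rather than a full zero-knowledge protocol, the simplest way to demonstrate possession of a SAT-solving algorithm without revealing it is a challenge–response test. A verifier generates random 3SAT instances with a planted (hidden) satisfying assignment; the prover returns satisfying assignments, which reveal nothing about *how* they were found. The sketch below is hypothetical and uses a brute-force solver as a stand-in for the imagined polynomial-time one; all function names are invented for illustration.]

```python
import random

def random_planted_3sat(n_vars, n_clauses, rng):
    """Verifier's challenge: a 3SAT instance guaranteed satisfiable
    by a secret planted assignment."""
    planted = [rng.choice([True, False]) for _ in range(n_vars)]
    clauses = []
    while len(clauses) < n_clauses:
        vs = rng.sample(range(n_vars), 3)
        clause = [(v, rng.choice([True, False])) for v in vs]  # (variable, polarity)
        # Keep only clauses that the planted assignment satisfies.
        if any(planted[v] == pol for v, pol in clause):
            clauses.append(clause)
    return clauses

def satisfies(clauses, assignment):
    """Check that every clause has at least one true literal."""
    return all(any(assignment[v] == pol for v, pol in c) for c in clauses)

def solve(clauses, n_vars):
    """Stand-in for the hypothetical polynomial-time solver
    (here, exponential brute force over all assignments)."""
    for bits in range(2 ** n_vars):
        a = [(bits >> i) & 1 == 1 for i in range(n_vars)]
        if satisfies(clauses, a):
            return a
    return None

rng = random.Random(0)
instance = random_planted_3sat(n_vars=10, n_clauses=30, rng=rng)
solution = solve(instance, 10)
print(satisfies(instance, solution))  # True: the prover passed this challenge
```

Passing many such challenges quickly would be convincing evidence of the capability, while the algorithm itself stays secret; a genuine zero-knowledge proof (in the Goldwasser–Micali–Rackoff sense) would additionally hide even the satisfying assignments.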

79. John Sidles Says:

Good advice (regarding #75) from the B&BRF, which is itself an outstandingly effective research-supporting institution.

80. fred Says:

mjgeddes

“There are complexity thresholds past which reductionism *breaks* and entirely new properties appear not predictable in principle from lower-levels, nor completely consistent with the lower-levels.”

Do you have any examples of this that don’t involve consciousness?

On one hand, it’s undeniable that consciousness has some reality at a higher level of abstraction. At least it thinks it has a reality (since we’re arguing about it).
It seems amazing that all those concepts existing only in the human mind (emotions, love, mathematics, computer science, high level programming languages), would all be illusions with no causal power on their own.

On the other hand, there’s nothing in the laws of physics that would account for this.
But then can’t we just as well flip everything around and say that those objects in our mind are the reality, are the source of causality, and everything else (particles, etc) are the illusions (in the sense that the essence of the “game of life” doesn’t depend on what it’s implemented on, pebbles, atoms, symbols on paper,..).

Then maybe there’s more going on. Maybe the randomness at the core of QM could create an “opening” where somehow there’s some sort of causal relation between different levels of abstraction (that causal relation being neither temporal nor spatial), giving both QM particles and consciousness a reality on their own? (nature of the QM measurement/collapse, etc… I guess there’s been plenty of “theories” trying to link consciousness and QM already).

81. mjgeddes Says:

fred,

See my above post (#72). I believe there are 2 breaks in reductionism, (1) the transition between pure mathematics and the physical world, and (2) the transition between the physical world and the mental world. Thus in my ontology, you end up with a 3-level reality (math, physics and mind).

(1) is not as clear as (2), because it already requires that you accept a controversial assumption: that mathematics is something that exists independently of the human mind, and that it forms the basement level of reality (this is the position known as ‘mathematical realism’ or in its stronger form ‘platonism’)

But perhaps I could rephrase the above into a form that avoids philosophical assumptions, and there’s still a mystery: namely, how to get from the purely mathematical (abstract) equations of the ‘laws of physics’ to the concrete physical world. Or: how to go from the quantum-mechanical micro-world (which is purely abstract wave-functions) to the classical macro-world.

Everyone sees this puzzle of the ‘switch’ between the QM micro-world and the classical world, and rushes to connect it to consciousness (this is the route of quantum theories of mind, e.g. Penrose). I think this is a mistake, because it’s taking place at a level of description that is too low.

As I mentioned in another thread, the brain is a signal processor that needs to insulate itself from low-level physics effects, so my own view is that QM is *not* involved in consciousness.

That’s not to say that the mystery of the ‘switch’ between the QM and classical worlds isn’t fascinating in its own right. I think a good case can be made that this is indeed the *1st* ‘break’ in reductionism. So perhaps if we could understand that, we’d still gain some insight into the further puzzle of the emergence of consciousness. But I think consciousness itself is due to a *2nd* break in reductionism, which happens much higher up the physics chain than the QM level.

82. Stevie A. Says:

Given the amount of utopian happiness they would create why isn’t there a Manhattan Project urgency at places like MIT and Caltech for creating functional female sexbots?

83. John Sidles Says:

fred wonders (#80 in brief) “Can’t we say that those objects in our mind are the source of causality?”

For folks who prefer to tackle (even dissolve) philosophical problems from a starting-point of practical problem-solving, Frank Wilczek’s recent Quanta essay “Entanglement Made Simple” (April 2016) provides one starting point.

That philosophical starting-point is the fundamental principle that (in Wilczek’s words)

“measurement is an active process that alters the system being measured.”

Nothing like this happens classically!

When we embrace Wilczek’s quantum point-of-view, then it’s immediately evident that the “objects in our minds” (rather: brains) that accomplish this quantum state-reduction, and thereby are “the source of causality”, are our sensing-and-control organs (including but not limited to sight, smell, hearing, taste, tactility, proprioception, DNA error-correction, and epigenetic regulation).

Our various biological organs physically act — in accord with orthodox Wilczek-style quantum dynamics — so as to alter the external world’s wave-function to accord with internal brain-states. Indeed, biological organs are carefully optimized by evolution so as to accomplish this quantum state-reduction, no less than the photon-detectors and current-amplifiers of a BosonSampling experiment, which are carefully designed (by rational engineering rather than stochastic evolution) to accomplish the same quantum dynamical objective (of state-reduction).

Large, prosperous, ambitious, expanding STEAM-enterprises are founded upon this Wilczekian state-reduction principle; see for example the recent news releases of Schrödinger Inc.

The interests and capabilities of organizations like Schrödinger provide a practical starting-point for two (compatible) directions of philosophical investigation. The first direction is, how is it that Schrödinger’s quantum simulation recipes work so well? Can further advances be anticipated? Can these recipes be applied even to seemingly impractical problems (optimizing BosonSampling experiments for example)?

More philosophically, how is it that we live in a gauge field theory universe, in which Schrödinger’s algorithms work so remarkably well, when it is possible to imagine (non-gauged) quantum dynamical universes, in which quantum dynamical simulations are less feasible, because scalable quantum computing and BosonSampling are more feasible?

The second philosophical direction, which is also a moral direction, is associated to medical questions. For example, how is it that pharmaceutical compounds, as designed with the help of Schrödinger-type software, in modifying the physiological processes of the brain, exert profound effects upon the subjective experience of the mind? And what understanding(s), beyond the purely quantum-chemical, can we achieve of these brain/mind alterations?

What’s fun (for me) about inside-out methods of philosophical investigations is that the investigations are concretely disciplined by entangled mathematical, physical, engineering, medical, and moral considerations. It’s the discipline that makes these philosophical investigations fun, and difficult, and important.

Scott,

What do you think? If given the initial conditions of the universe and the definitive laws of physics, do we need to embody and “run” these laws and input the initial conditions for all the consciousnesses in the universe to be considered “real”? Or is merely having the information of these initial conditions and the laws in some static form sufficient?

Isn’t that question of equal probative value to the dog torturing question? Aren’t we just arguing about what ontological reality itself *is* and not what consciousness is?

mjgeddes,

What is the efficacy of your metaphysics? How does it help to answer the dog torturing question, or does it? What about the question above regarding the initial state and laws of physics encoded in some static form… All of this seems to be assuming a common assumption of what *reality* itself *is*, but I haven’t seen it defined any more than I’ve seen consciousness defined.

In short, if we can’t even define what *reality* is, then how can we possibly define in what form *consciousness* can be considered to be *real*?

I don’t see how your 3-level abstraction can help to illustrate what should be understood as *real*.

85. wolfgang Says:

@mjgeddes

>> A conscious system is any computational system that is engaged in self-modelling

You would have to clarify what you mean by self-modelling.
The word “car” denotes an automobile and is at the same time “self-modelling”, because its characters “c” “a” “r” exactly model the string “car”.
But I guess we agree that the string “car” which appears on your screen is not conscious, although all 3 of your levels are present (e.g. the string appearing on your screen could be decomposed into pixels, electrons, strings etc.)

86. ppnl Says:

I have not had time lately, what with getting banned from Lubos Motl’s site and all. So I am behind in the discussion. But a few points I want to make…

The idea that there may be higher level laws that do not follow from lower level laws is just… ugly. It’s freaking grotesque. Beyond that I don’t think it can be made coherent and I don’t see how it solves anything anyway.

Higher levels of abstraction are shortcuts. They are simplifications of the lower levels of abstraction that add nothing except making some problems tractable. They always give wrong answers when pushed. The ideal gas laws are a perfect example: they ignore some details of the underlying physics and so fail in many situations. Gases aren’t ideal.

Some have said that the answer to the dog brain problem is that the computer model of the brain includes details of the causality. Maybe. But what experiment can you do to check that theory? If there isn’t one then the theory has no physical consequence.

Causality simply means that future states can be derived from past states. Like falling dominoes “this” leads to “that” and then to “the other”. In fact you can implement a computer with falling dominoes…
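[The domino-computer claim in #86 is real: domino chains implement deterministic state transitions, and junctions act as logic gates. A minimal sketch (illustrative only; the layout and names are invented) treats each domino as a boolean “fallen” flag whose next state is derived purely from the previous state, exactly the “this leads to that” causality described above.]

```python
def step(fallen, links):
    """One time-step: a domino falls if it already fell, or if any
    domino linked into it has fallen (future state from past state)."""
    return [fallen[i] or any(fallen[j] for j in links[i]) for i in range(len(fallen))]

# A tiny domino "circuit": domino 0 is tipped by hand; 1 and 2 follow
# in a chain; domino 3 falls when either 2 or 4 reaches it (an OR junction).
links = {0: [], 1: [0], 2: [1], 3: [2, 4], 4: []}
state = [True, False, False, False, False]  # only domino 0 has been tipped

for _ in range(3):
    state = step(state, links)
print(state)  # [True, True, True, True, False]
```

The evolution is entirely fixed by the initial state and the link structure; nothing about the trajectory changes whether or not we ascribe experience to it, which is exactly ppnl’s point.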

So you could implement the dog brain in falling dominoes. But can a pattern of falling dominoes feel pain? Maybe, but if so it adds nothing. The dominoes will fall the same either way. Consciousness adds nothing. If the dominoes are conscious then they are a helpless witness to events that they have no control over. No appeal to a higher level of abstraction will change this fact. At most it will simply obfuscate it.

87. mjgeddes Says:

We can’t say what (if anything) the ‘ultimate’ nature of reality is. But we don’t need to.

As the philosopher Kant pointed out, we never see reality in itself, but instead always view reality through the categories of thought (or ‘lens’) that are built into our minds.

The ‘3 layers’ I talked about are DESCRIPTIONS of reality (categories of thought, or particular ways of looking at things), not some sort of ‘stuff’ out there.

The point is not to presuppose what reality is, but instead to investigate the various different ways of *describing* reality and how these different descriptions fit together.

The abstract ‘laws of physics’ are ONE particular way of describing reality that is timeless (a static description where nothing ever changes). But that’s not the only valid way. That’s only *one* level of abstraction. It doesn’t fully explain what we see. So no, the ‘laws of physics’ alone are not enough.

I told you, there are 3 levels of description: (1) Abstract laws + (2) Functional processes + (3) Objects

Which is ‘real’, you ask? The only valid criterion for reality is how useful our descriptions of the world are, in terms of (a) making predictions and (b) integrating different branches of knowledge together into coherent categories or explanations. By this criterion, all 3 are equally ‘real’, since all 3 are needed to give a complete account of what we actually see.

Going back to the puppy dog mystery, the answer is that consciousness won’t be present unless all 3 layers of description are specified in the play-back:
(1) Causal relations + (2) Functional processes + (3) Objects

It’s the integration of all 3 levels of description that generates consciousness.