## Three things that I should’ve gotten around to years ago

Updates (11/8): Alas, video of Eliezer’s talk will not be available after all. The nincompoops who we paid to record the talk wrote down November instead of October for the date, didn’t show up, then stalled for a month before finally admitting what had happened. So my written summary will have to suffice (and maybe Eliezer can put his slides up as well).

In other news, Shachar Lovett has asked me to announce a workshop on complexity and coding theory, which will be held at UC San Diego, January 8-10, 2014.

Update (10/21): Some readers might be interested in my defense of LessWrongism against a surprisingly-common type of ad-hominem attack (i.e., “the LW ideas must be wrong because so many of their advocates are economically-privileged but socially-awkward white male nerds, the same sorts of people who might also be drawn to Ayn Rand or other stuff I dislike”). By all means debate the ideas—I’ve been doing it for years—but please give beyond-kindergarten arguments when you do so!

Update (10/18): I just posted a long summary and review of Eliezer Yudkowsky’s talk at MIT yesterday.

Update (10/15): Leonard Schulman sent me the news that, according to an article by Victoria Woollaston in the Daily Mail, Google hopes to use its D-Wave quantum computer to “solve global warming,” “develop sophisticated artificial life,” and “find aliens.”  (No, I’m not making any of this up: just quoting stuff other people made up.)  The article also repeats the debunked canard that the D-Wave machine is “3600 times faster,” and soberly explains that D-Wave’s 512 qubits compare favorably to the mere 32 or 64 bits found in home PCs (exercise for those of you who aren’t already rolling on the floor: think about that until you are).  It contains not a shadow of a hint of skepticism anywhere, not one token sentence.  I would say that, even in an extremely crowded field, Woollaston’s piece takes the cake as the single most irresponsible article about D-Wave I’ve seen.  And I’d feel terrible for my many friends at Google, whose company comes out of this looking like a laughingstock.  But that’s assuming that this isn’t some sort of elaborate, Sokal-style prank, designed simply to prove that media outlets will publish anything whatsoever, no matter how forehead-bangingly absurd, as long as it contains the words “D-Wave,” “Google,” “NASA,” and “quantum”—and thereby, to prove the truth of what I’ve been saying on this blog since 2007.

1. I’ve added MathJax support to the comments section!  If you want to insert an inline LaTeX equation, surround it with \( and \), while if you want to insert a displayed equation, surround it with \[ and \].  Thanks very much to Michael Dixon for prodding me to do this and telling me how.
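As an illustration, a comment written with those delimiters might look like the following (a made-up example, not from any actual comment):

```latex
The inline form \( e^{i\pi} + 1 = 0 \) sits within a sentence,
while the displayed form gets its own line:
\[
  e^{i\pi} + 1 = 0.
\]
```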

2. I’ve also added upvoting and downvoting to the comments section!  OK, in the first significant use of comment voting, the readers have voted overwhelmingly, by 41 – 13, that they want the comment voting to disappear.  So disappear it has!

3. Most importantly, I’ve invited Eliezer Yudkowsky to MIT to give a talk!  He’s here all week, and will be speaking on “Recursion in Rational Agents: Foundations for Self-Modifying AI” this Thursday at 4PM in 32-123 in the MIT Stata Center.  Refreshments at 3:45.  See here for the abstract.  Anyone in the area who’s interested in AI, rationalism, or other such nerdy things is strongly encouraged to attend; it should be interesting.  Just don’t call Eliezer a “Singularitarian”: I’m woefully out of the loop, but I learned yesterday that they’ve dropped that term entirely, and now prefer to be known as machine intelligence researchers who talk about the intelligence explosion.

(In addition, Paul Christiano—former MIT undergrad, and my collaborator on quantum money—will be speaking today at 4:30 at the Harvard Science Center, on “Probabilistic metamathematics and the definability of truth.”  His talk will be related to Eliezer’s but somewhat more technical.  See here for details.)

Update (10/15): Alistair Sinclair asked me to post the following announcement.

The Simons Institute for the Theory of Computing at UC Berkeley invites applications for Research Fellowships for academic year 2014-15.

Simons-Berkeley Research Fellowships are an opportunity for outstanding junior scientists (up to 6 years from PhD by Fall 2014) to spend one or two semesters at the Institute in connection with one or more of its programs. The programs for 2014-15 are as follows:

* Algorithmic Spectral Graph Theory (Fall 2014)
* Algorithms and Complexity in Algebraic Geometry (Fall 2014)
* Information Theory (Spring 2015)

Applicants who already hold junior faculty or postdoctoral positions are welcome to apply. In particular, applicants who hold, or expect to hold, postdoctoral appointments at other institutions are encouraged to apply to spend one semester as a Simons-Berkeley Fellow subject to the approval of the postdoctoral institution.

Further details and application instructions can be found at http://simons.berkeley.edu/fellows2014. Information about the Institute and the above programs can be found at http://simons.berkeley.edu.

Deadline for applications: 15 December, 2013.

### 143 Responses to “Three things that I should’ve gotten around to years ago”

1. Raoul Ohio Says:

Eliezer might be a “recovering Singularitarian”, having stepped back from the singularity. Or, he might be “on the other side” (of the singularity), and thus really negative. Or jumpy, logarithmic, or even essential. If he is continuously essential, he is entitled to sport a $$x \sin(1/x)$$ tattoo.

2. razvan Says:

Is it possible to sort comments by their score?
And on the superficial side of things, does the up/downvote box have to be that ugly?

3. Bill Kaminsky Says:

First, on the superficial side of things, are all comments boldfaced by default now?

Second, on that note, if all comments are boldfaced by default, then what does invoking HTML boldface tags do? Let’s find out! The following quote will be in HTML boldface tags:
“This is a test.”

Third, and more substantively, if there are other people reading this thread who, like me, can’t make either talk but are interested in the nitty-gritty of what Yudkowsky and Christiano are talking about, then please note there are pertinent papers posted by them at their research institute’s website, namely:

a) The first portion of Yudkowsky’s talk seems to me to pertain to this paper:

Patrick LaVictoire, Mihaly Barasz, Paul Christiano, Benja Fallenstein, Marcello Herreshoff, and Eliezer Yudkowsky. “Robust Cooperation in the Prisoner’s Dilemma: Program Equilibrium via Provability Logic.” (draft dated May 31, 2013) http://intelligence.org/files/RobustCooperation.pdf

b) The second portion of Yudkowsky’s talk as well as Christiano’s more technical talk seems to me to pertain to this paper draft:

Paul Christiano, Eliezer Yudkowsky, Marcello Herreshoff, and Mihaly Barasz. “Definability of Truth in Probabilistic Logic (Early draft).” (draft dated June 10, 2013) http://intelligence.org/wp-content/uploads/2013/03/Christiano-et-al-Naturalistic-reflection-early-draft.pdf

Finally, in addition to the talks Scott highlighted, there’s an informal meetup with Yudkowsky (and Christiano too?) at MIT at 7pm in Bldg. 6-120 on Friday (i.e., 18-Oct-2013).

4. Sniffnoy Says:

Also apparently all comments are now bold?

5. Mike Says:

And, even if you press the thumbs up button, the number appears to the left of the thumbs down button — what’s that supposed to mean? 😉

6. Sniffnoy Says:

…also hitting “submit comment” now takes you to a weird page with only your comment instead of back to the blog post?

7. Mike Says:

I meant to the right of the thumbs up button, and my happy face emoticon doesn’t compute either — but I do see that the counter is cumulative so that takes care of that comment.

8. Bill Kaminsky Says:

Also, in terms of nitpicks about rendering, it appears that spacing between paragraphs is no longer WYSIWYG. As a test, let me try the following:

This paragraph #1 is probably run together with…

…this paragraph #2, I imagine (though hope not)

Now trying with HTML paragraph tags:

This paragraph #1 is hopefully set off from…

… this paragraph #2.

9. Bill Kaminsky Says:

Hmmm… judging from the tests in my last comment (#8), there’s no simple way anymore to have your comment have paragraphs set apart by line breaks. Sad. 🙁

10. Vadim Says:

Meh, personally, I don’t like the upvote/downvote thing. This blog is too sophisticated for that kind of populism. Not that I feel strongly about it either way; this definitely rates as a first world problem.

Anything you can do about the bold comments? Comments should be bold in their content, not their font.

The properly formatted math is awesome!

Oh yeah, the lack of paragraph breaks is strange, too. I wonder if HTML’s P or BR tags would work.

This is a test
This is another test<br/>
Also a test.<br/>

I hate to say it since I’m not one to complain about an excellent, free blog, but I liked it better before (except the math formatting).

12. Michael Dixon Says:

I think I found the culprit causing the bold behavior in http://www.scottaaronson.com/blog/wp-content/themes/default/style.css

.commentlist li {
font-weight: bold;
}

.commentlist li is already included elsewhere in that css file. For some reason, this line got added (the bodies of comments are within HTML list tags).

As for spacing, the HTML break tags, <br>, are getting escaped somehow. It clearly is not MathJax, since the problem only affects comments. It could be the Zaki Like-Dislike WP Plugin, but it doesn’t seem to have this issue on a test WP setup.

Anti-Bold Test: “This text should really not be in bold”.

13. Danny Torrance Says:

Scott, seeing as how you’re hosting Yudkowsky, I trust you will also make sure his talk is videoed (in HD) and uploaded onto the net?

14. Scott Says:

Michael: Thanks very much for pinpointing the issue! I deleted that line and now, as you can see, the text is no longer bold.

But it looks like the paragraph breaking in comments is still screwed up! Any ideas about that? Thanks!

15. Scott Says:

OK, let me try uninstalling the Zaki Plugin and see if that fixes things…

16. Scott Says:

Yes, it fixed things! Let me try some other upvote/downvote plugin, just to see if it works better (before making the “editorial decision” about whether to use it or not).

17. ramsey Says:

Thanks for taking the time to make the blog experience better!

18. Michael Dixon Says:

@Scott: You should consider a couple of plugin choices that allow for a comment-edit time window. This would give users the opportunity to correct mistakes they might have made in their $$\LaTeX$$. For some of them, the user might have to refresh the page in order to re-render the MathJax. I imagine that this could be fixed with an added line of javascript somewhere.

I spent some time looking for a live WYSIWYG editor for comments that might handle MathJax correctly. Apparently, there are no live comment editors period for the more recent versions of WordPress. Tough luck for now.

19. Scott Says:

Danny #13: We will indeed have video made of the talk. After that, it will be up to Eliezer and MIRI what they want to do with the video.

Nice looking blog you’ve got here, Scott, thanks from countless nerds like myself!

21. Curious Programmer Says:

I don’t suppose it would occur to you that linking to a particularly bad article, out of many excellent ones about Google’s support of quantum computing, is doing any good to anything except the continuation of your cognitive dissonance? I note you didn’t link to the originating content. I suspect that’s because you haven’t seen it.

I feel sorry for the people you think are your friends at Google. I’m sure they really aren’t.

22. Sniffnoy Says:

Just testing where the “submit comment” button now lands me; hope that’s OK.

23. The Other Mike Says:

Yudkowsky? How about you invite someone actually interesting who knows what they’re talking about like your good friend Geordie Rose — check this interview out. http://www.singularityweblog.com/geordie-rose-d-wave-quantum-computing/.

24. Joshua Zelinsky Says:

I couldn’t make it to Paul’s talk unfortunately, but will hopefully be there for Eliezer’s.

As to the Daily Mail: I mean, this is the Daily Mail. We should be happy that the article is at least talking about a company that actually exists.

25. Raoul Ohio Says:

Following the first remark of Vadim, #10:

I read somewhere about how much Facebook knows about FB members by what they like-vote. If I recall correctly, it mentioned it was easy to tell who was gay.

It is kind of creepy that any book I look at on Amazon turns up in a pop up advertisement the next time I check a story in The Register.

I am averaging one like-vote per 66 years, namely to #10 above.

26. Lubos Motl Says:

Congratulations that your field will finally solve global warming, right after it jumps the shark. After these two achievements, it will be complete.

The solution to global warming is
$$J_\alpha(x) = \sum\limits_{m=0}^\infty \frac{(-1)^m}{m! \, \Gamma(m + \alpha + 1)}{\left({\frac{x}{2}}\right)}^{2 m + \alpha}$$
I am testing whether you have successfully followed my blog to become the 2nd one with a functional MathJax $$\rm\LaTeX$$.

27. Rahul Says:

Curious Programmer #21

“I don’t suppose it would occur to you that linking to a particularly bad article, out of many excellent ones about Google’s support of quantum computing, is doing any good to anything except the continuation of your cognitive dissonance? “

Can you share some links? Which are these excellent articles?

28. Rahul Says:

“As to the Daily Mail: I mean, this is the Daily Mail. We should be happy that the article is at least talking about a company that actually exists.”

Reminds me of this memorable exchange from Yes Minister;

Minister Hacker: Don’t tell me about the press. I know exactly who reads the papers: The Daily Mirror is read by people who think they run the country; The Guardian is read by people who think they ought to run the country; The Times is read by the people who actually do run the country; The Daily Mail is read by the wives of the people who run the country; The Financial Times is read by people who own the country; The Morning Star is read by people who think the country ought to be run by another country; And The Daily Telegraph is read by people who think it is.

Sir Humphrey: Oh and Prime Minister, what about the people who read The Sun?

Bernard: Sun readers don’t care who runs the country, as long as she’s got big tits.

29. aram Says:

I like the comment changes! But what I really find myself wanting, especially when doing latex/html formatting where I’m not sure if it’s supported, is a comment preview feature.

30. Scott Says:

Curious Programmer #21: I don’t agree that my posting served no purpose. If I can use this blog to make the author, Victoria Woollaston, feel a tiny bit of embarrassment over what she wrote (or the editors over what they published), then my life will not have been in vain.

I second, third, and fourth the challenge of Rahul #27: why don’t you show me some of these excellent articles about Google’s support of quantum computing. I’d be genuinely interested to know what, if anything, is really going on.

Yes, I did suffer through the “originating content,” if by that you mean the slick, contentless video that started this latest charade. But that wasn’t exactly a great help in understanding what, if anything, is new scientifically since my last post. And I thought I was doing Google and D-Wave a favor by not linking to it.

31. wolfgang Says:

Scott,

the best part of the D-Wave article has to be “Google’s video also explains the basics behind the multi-verse…”
with a guy turning into a lobster …
I assume this is the medium lobster of fafblog?

32. Mike Says:

Curious Programmer @21 said:

“I feel sorry for the people you think are your friends at Google. I’m sure they really aren’t.”

Wow, what a sad, pathetic, little comment . . . get a life my friend.

33. Scott Says:

wolfgang #31: fafblog! Oh man, I remember that…

34. asdf Says:

Congrats on the mathjax but boo to the like/dislike button, which seems to turn the blog into facebook.

35. Scott Says:

Curious Programmer #21:

I feel sorry for the people you think are your friends at Google. I’m sure they really aren’t.

Just today, I received some intelligence that there was a recent tech talk by Hartmut Neven at Google about the “quantum AI lab,” and that Neven received a barrage of skeptical questions from his Google colleagues, including one about why he thinks “Troyer and Aaronson” hold views so diametrically opposed to his.

(While it was nice to get explicit confirmation, I wouldn’t have imagined anything else: from my own experience visiting Google and giving a couple tech talks there, they have too many inquisitive people for things to have possibly gone down any other way.)

36. Scott Says:

OK everyone, time for some Shtetl democracy.

Please vote this comment up if you want comment upvoting/downvoting to stay, and please vote it down if you want the voting to disappear.

Thank you!

37. Rahul Says:

#36 Comment Voting:

I want it to stay but can the two buttons become a lil’ unobtrusive / smaller / less prominent?

38. Scott Says:

OK, I just installed comment previewing! But:

(1) It doesn’t show previews for MathJax equations.

(2) The last time I installed a comment previewing plugin, I seem to remember that it slowed the blog to a crawl, and I eventually had to uninstall it. Fingers crossed that the same won’t happen this time.

39. Vadim Says:

Scott, FYI about comments, since there’s no login here, it seems to track whether someone’s voted by cookies. I was able to downvote #36 three times. Since my goal was just to see if I could and not to stuff the ballot box, just be aware.

40. ESRogs Says:

41. Michael Dixon Says:

I’m willing to bet that the comment preview script can be modified to re-render MathJax on the client side upon a change (probably just a single line of code). If you tell me what preview plugin you used, I will look into it. Alternatively, you could allow for comment edits for a time period. They require a manual refresh, but still does the job.

You might want to consider an upvotes only plugin. Maybe people will prefer it. Just curious, is there a purpose that you have in mind for voting? As for the people complaining about the button design, that can be changed.

42. Scott Says:

OK, in the first significant use of comment voting, the readers have voted overwhelmingly, by 41 – 13, that they want the comment voting to disappear. So disappear it has!

43. razvan Says:

I will just keep on bitching.
Apparently the blog URL is case sensitive, i.e. http://www.scottaaronson.com/blog/ is the blog but http://www.scottaaronson.com/Blog/ is a 404 error. This is really a minor issue, but I just want to ask if this is on purpose.

44. google's quantum computer by Goshin - TribalWar Forums Says:


45. Henning Dekant Says:

Scott #30, you seriously think you can embarrass somebody who writes for the *Daily Mail* ?

There’s no multiverse in existence where this is gonna happen.

Personally, I am surprised they didn’t claim that Geordie stole the technology from a UFO that crash landed in the Yukon, where Geordie came upon it as part of his Olympic wrestling superhero survival training.

When I sat down with him for an interview I was hard pressed to inquire about this rumour 🙂

46. anon Says:

check Vadim’s post #39 – it was almost certainly not a democratic vote. On almost all sites like yours posters WANT per-post voting – it would be unusual to have such a large NO vote.

47. Douglas Knight Says:

Razvan, no web page is intentionally case-sensitive, except Wikipedia.

48. Vitruvius Says:

That’s not correct, Douglas. The case sensitivity of the path part of the URL is up to the server connected to the given port on the given host using the given scheme (see URL). So, for example, if you go to this Blogspot page of mine (where I happen to plug Scott’s “Quantum Computing & the Limits of the Efficiently Computable” Buhl Lecture), and then change the case of one of the letters in the path part to upper case, you’ll get a 404 error from Blogspot, which is not Wikipedia. Ergo, Razvan and Douglas, the software serving http://www.scottaaronson.com:80 is completely justified in treating the case sensitivity of whatever comes after that eponymic part of its own URL however it wants to.

49. anonymous Says:

So… what did you think of Eliezer’s talk?

50. APS Says:

@Henning Dekant: I think it would be more likely for the DM to claim that asylum seekers on benefits stole the technology from him, but he can’t get it back because of the Human Rights Act. Oh, and that at least one person involved in the story Hates Britain(tm).

51. Douglas Knight Says:

Vitruvius, Razvan asked about intention and I answered about intention. Most case-sensitivity in URLs is inherited from the file system. The “p” in this post’s URL does not refer to the file system and is not case-sensitive.

It is non-trivial to turn off case sensitivity, but that work has been done in mod_speling [sic], so if it is already installed Scott can simply add the line “CheckSpelling on” to .htaccess. But I don’t think it is available on BlueHost.
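If mod_speling were available, Douglas’s suggestion would presumably amount to an .htaccess fragment along these lines (untested sketch; the directives are mod_speling’s, but whether the host actually loads the module is exactly the open question):

```apache
# Hypothetical .htaccess addition, guarded so it is a no-op if the
# module is absent.  CheckSpelling also corrects one-character typos;
# CheckCaseOnly (Apache >= 2.2) restricts the fix-up to case alone.
<IfModule mod_speling.c>
    CheckSpelling on
    # CheckCaseOnly on
</IfModule>
```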

52. Scott Says:

anonymous #49:

So… what did you think of Eliezer’s talk?

I thought it was basically excellent (and I got very positive comments about it from other people as well). We had a great turnout—over 200 people—and the audience remained impressively engaged throughout, even though the talk turned out to be quite technical. The subject matter was interestingly and refreshingly different from our usual fare, which was exactly what I’d hoped for.

The main part of the talk dealt with the problem of how to obtain cooperation in the Prisoner’s Dilemma, in a situation where the two “prisoners” are both robots and can both examine the other’s source code. Here we deal with the apparent infinite regress, of code examining code that examines itself, by simply appealing to the Recursion Theorem.

Importantly, the bots can demand (or search themselves for) mathematical proofs that the other bot will cooperate or defect, before deciding what to do. And thus, for example, you can consider one bot that cooperates if and only if there exists a proof, in Peano Arithmetic, that the other bot will cooperate with it. And you can consider another bot that cooperates if and only if there’s no proof in PA that the other bot will defect from it. And there are countless more complicated possibilities, at least 10 of which Eliezer actually explored in the talk (“now, this one cooperates iff there’s no PA proof that if it defects, then there exists a PA proof that if the other cooperates iff it’s provable in PA that…”), giving the talk a charmingly quirky feel.

Then you can place any pair of these bots against each other (or any given bot against an exact copy of itself), and work out what happens. While that might sound complicated, it turns out that it can be formalized in a weak fragment of modal logic, one that’s decidable in polynomial time using Kripke frames. Accordingly, Eliezer’s collaborators at MIRI wrote a Haskell program (which Eliezer demonstrated during the talk) that tells you exactly what happens when any two such bots are pitted against each other, and you can use the program to run tournaments—like a modal-logic-enhanced version of Robert Axelrod’s famous Prisoner’s Dilemma tournaments from the 1970s. (Eliezer called such competitions “Modal Kombat.” 🙂 )
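For readers who want to poke at the idea, here is a minimal Python sketch of the setup described above, with one big caveat: where the real bots search for proofs in Peano Arithmetic (and MIRI’s Haskell program evaluates them exactly via modal logic), this toy substitutes fuel-bounded mutual simulation, and the optimistic base case merely stands in for the Löbian argument. All names here (fair_bot, defect_bot, play) are invented for illustration:

```python
# Toy, fuel-bounded rendition of the "modal agents" idea.  The real
# FairBot cooperates iff there is a PA proof that its opponent
# cooperates with it; here bounded simulation stands in for proof
# search, so the apparent infinite regress bottoms out at fuel == 0.

C, D = "cooperate", "defect"

def defect_bot(opponent, fuel):
    # Always defects, regardless of the opponent.
    return D

def cooperate_bot(opponent, fuel):
    # Always cooperates, regardless of the opponent.
    return C

def fair_bot(opponent, fuel):
    # Cooperate iff (bounded) simulation says the opponent cooperates
    # with us.  The optimistic base case at fuel == 0 plays the role of
    # the Löbian argument that lets the real FairBot cooperate with
    # an exact copy of itself.
    if fuel == 0:
        return C
    return C if opponent(fair_bot, fuel - 1) == C else D

def play(bot_a, bot_b, fuel=10):
    """Pit two bots against each other and return their joint actions."""
    return bot_a(bot_b, fuel), bot_b(bot_a, fuel)
```

With this approximation, fair_bot cooperates with itself and with cooperate_bot, but defects against defect_bot, matching the headline behavior of FairBot in the “Robust Cooperation” paper linked in comment #3.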

Eliezer walked through various examples showing how counterintuitive the results of such Modal Kombat can be, and how closely bound up the results are with the Incompleteness Theorem and Löb’s Theorem. But to be honest, if there were any broad lessons to be learned from the actual performing of Modal Kombat, then I can’t remember what they were—I mostly just remember how much I liked the setup.

A second part of the talk dealt with the problem of how a program can generate a second program that the first program can “prove won’t do anything bad.” This can be done, provided (roughly speaking) that the first program uses a stronger system of mathematical reasoning than the second program: in particular, a system that contains an axiom schema that lets it prove the soundness of the reasoning done by the second program. But that seems to suggest that programs can only be confident of the safety of programs that are “weaker than themselves.” So the problem arises of how program A can be confident of the safety of program B, if A and B are equal in mathematical strength. And here, it seems quite obvious that you’re going to run into Gödelian difficulties! Eliezer discussed various “hacks” motivated by the goal of bypassing Gödel’s Theorem in this situation, but I gathered from the talk that he himself isn’t satisfied with those hacks, and considers the issue unresolved.
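For reference, the theorem lurking behind this self-trust difficulty is Löb’s theorem: if a theory T extending Peano Arithmetic proves that provability of φ implies φ, then T already proves φ outright. In the provability-logic notation relevant to the talk:

```latex
\vdash \Box(\Box \varphi \rightarrow \varphi)
   \;\rightarrow\; \Box \varphi
```

So a system that adopts “whatever I can prove is true” ($\Box \varphi \rightarrow \varphi$) as a schema thereby proves every $\varphi$, which is why program A seems forced to reason in a strictly stronger system than the program B it certifies.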

One of the most striking aspects of the talk was that Eliezer jumped pretty much immediately into the technical material (which, however, was much easier to follow than most technical talks, due to the simplicity and naturalness of the questions, as well as Eliezer’s highly-effective humor).

There was no explicit motivation: he never once explained why he cared about these problems in the first place. Of course, as a sometime reader of Less Wrong, I know why he cares: because in his mind, these are the sorts of problems that will need to be solved in order to build a superhuman AI that pursues what humans would recognize as moral values, rather than (say) converting the Earth into paperclips. Indeed, within his worldview, the fate of the entire universe might rest on the solutions to (what look to others like) certain fun and interesting logic puzzles! But he never explained that.

Maybe Eliezer simply assumed that the people coming to his talk would know that that was his motivation. Maybe he was genuinely more excited about the technical aspects, or decided that it would be better to stress them for this particular talk. In any case, I would’ve found an attempt to connect the technical results to his larger goals to be interesting.

As the host, I felt bad about two technical snafus that distracted from the talk. Firstly, I didn’t think we needed to order an HDMI to VGA converter in advance: I reasoned that, if the room didn’t support HDMI, then he could just give his talk using my or someone else’s laptop. What I didn’t know was that the talk would involve a live Haskell demonstration, which really did require special software installed on his laptop. So, he kept bringing this issue up over and over during the talk, until finally someone brought the HDMI/VGA converter and cable that he needed.

Secondly, there was a minor failure to manage time expectations. The talk was called for 4-5:30, which normally means an hour-long talk, followed by Q&A for 30 minutes or until everyone gets tired (whichever comes sooner). Now, what Eliezer did was to speak for 55 minutes, reach a natural endpoint, and then invite questions. However, he then announced (which he hadn’t before) that everything up till then was just “Part I” of the talk, and that he’d now use his remaining 30 minutes for “Part II”! In retrospect, he should’ve warned people at the beginning what the structure would be, and I should have warned him to warn them.

Fortunately, I think these were minor blips in an otherwise very successful talk.

For those who are interested, a YouTube video of the talk should be available within a week or two.

53. John Sidles Says:

Scott reports  “There are countless more complicated possibilities [associated to the “AIs-in-Prison” class of problems], at least 10 of which Eliezer actually explored in the talk (“now, this one cooperates iff there’s no PA proof that if it defects, then there exists a PA proof that if the other cooperates iff it’s provable in PA that…”), giving the talk a charmingly quirky feel.”

In regard to considering innumerable special cases that lead (gratifyingly) to robust conclusions, algebraic geometers in Yudkowsky’s audience will have appreciated the evident parallels to the problem of resolution of singularities. As János Kollár has expressed it:

[Since Hironaka’s proof of 1964] resolution of singularities emerged as a very unusual subject whose main object has been a deeper understanding of the proof, rather than the search for new theorems. … Two seemingly contradictory aspects make it very interesting to study and develop Hironaka’s approach. First, the method is very robust, in that many variants of the proof work. One can even change basic definitions and be rather confident that the other parts can be modified to fit. Second, the complexity of the proof is very sensitive to details. Small changes in definitions and presentation may result in major simplifications.

These various computation-centric disciplines all contain problem classes that in general are infeasible to solve, yet specific instances of which commonly are easy-to-solve (gratifyingly). The opposite phenomenon is observed too (exasperatingly). It would be nice to understand better why this is such a universal computational experience.

Also Shtetl Optimized’s new preview is super. Thank you Scott!

54. Rose the Hat Says:

Based on the quality of Yudkowsky’s talk, would you grant him tenure at MIT (if it was yours to grant)? He might not have a doctorate but he seems far more talented than most with the degree.

55. Scott Says:

Rose the Hat #54: I wouldn’t want to make a faculty hiring (let alone tenure) decision for anyone based on a single talk (which wasn’t even a job talk), but would want to look at their whole career and body of work.

Yes, Eliezer is much more talented than most people with PhDs. That’s not so high of a bar! 🙂 Nor am I sure exactly what it proves. He could obviously have gotten a PhD and a tenured position if he wanted; he simply chose to follow a different path in life. Vive la différence.

In future comments, let’s try to stick to the ideas, which I think are more interesting.

56. Peter Shor Says:

“OK, in the first significant use of comment voting, the readers have voted overwhelmingly, by 41 – 13, that they want the comment voting to disappear. ”

This reminds me of Claude Shannon’s box.

57. Raoul Ohio Says:

Re #54 and #55, (not sticking to ideas for a moment), check out Garrett Birkhoff, who did not feel like working on a Ph.D. himself. In addition to writing some excellent textbooks, his Ph.D. students from Harvard include some very prominent mathematicians.

The point here is that a Ph.D. is basically a union card. All it shows is that you are halfway smart (at least for science and tech Ph.D.’s) and will jump through a bunch of hoops on command.

In my case, I stumbled on the last hoop; staying awake while the speaker yammers on at the graduation ceremony.

58. Sid Says:

How did Eliezer define “anything bad”? As far as I can tell, testing the “soundness of reasoning” is a different problem from encapsulating values. So you want to prove that the second program won’t reach goal states that program one does not approve of.

Further, did he address the complexity theoretic question whether the certification procedure can be completed in a reasonable time?

59. Scott Says:

Sid #58:

How did Eliezer define “anything bad”?

To be honest, he went too quickly through that part of the talk. I should probably let him (or someone else more familiar with LessWrong-ology) elaborate.

Further, did he address the complexity theoretic question whether the certification procedure can be completed in a reasonable time?

Not really. I’m guessing he’d respond that, if we don’t even understand yet how to do the certification given unlimited time, then we should certainly address that problem before asking about polynomial-time versions.

60. John Sidles Says:

Scott refers (#52) to “a superhuman AI that pursues (say) converting the Earth into paperclips”

Both common sense and recent advances in cognitive science assure us that any such superhuman AI could articulate a demented (yet superficially logical) narrative to explain why the Earth-to-paperclip transformation is desirable.

The historical context and societal implications of this all-too-human — and possibly universal? — cognitive mechanism are charmingly reviewed in Thomas Broman’s “The Habermasian public sphere and ‘Science In The Enlightenment'” (1998), which supplies multiple maxims that echo Yudkowskian themes:

“Science is a form of discourse that structures and regulates much of what anyone else can claim to know.”

“Wherever one sees truth being manufactured, one may be sure that power will be found at no great distance.”

“In principle, they [the Republic of Letters] excluded no one, even if in practice they excluded nearly everyone.”

The Yudkowsky/Broman Thesis  To the degree that multiple long-cherished STEM objectives — including Shtetl Optimized favorites like “demonstrate scalable quantum computation” and “experimentally affirm that quantum state-space is flat” and “rigorously separate P from NP” — have in recent decades been pushed farther-and-farther into the future, and moreover these same objectives are increasingly appreciated solely within specialized cognitive frames that are further-and-further removed from the frames of civic discourse and economically viable STEM enterprises, then a converse Yudkowskian/Bromanian analysis assures us that “power will remove itself to greater-and-greater distance.”

Is this a structural reason why academic STEM funding is declining? Is it because STEM academia is cognitively “converting the 21st century world to paperclips?” … slowly as contrasted with rapidly? Do these various dissonant implications account (subliminally) for the perceptions that Scott disarmingly describes as “the charmingly quirky feel” of Eliezer Yudkowsky’s talk?

Appreciation and thanks are extended to Eliezer and Scott for illuminating these transdisciplinary implications!

@article{Broman1998,
  author  = {Broman, Thomas},
  title   = {The Habermasian public sphere and `Science In The Enlightenment'},
  journal = {History of Science},
  volume  = {36},
  year    = {1998},
  pages   = {123--150}
}

61. Scott Says:

Everyone: I forgot to mention my own tiny contribution to Eliezer’s talk—which, being the modest fellow that I am, I wish to record for posterity.

At one point, Eliezer defined a variant of the Prisoner’s Dilemma with three choices: Cooperate, Defect, and “Nuke.” Here Nuke is like a super-duper-defection that causes both players to lose (say) 10,000 utilons. He then considered a bot that he called “ExtortBot,” which defects if its opponent cooperates, and nukes if its opponent defects.

I said, “that sounds a lot like Congress.”
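A minimal Python sketch of the three-choice game as summarized above. Only the 10,000-utilon Nuke loss comes from the summary; the ordinary Cooperate/Defect payoffs are filled in with standard Prisoner’s Dilemma numbers, and ExtortBot’s behavior against a fellow nuker was left unspecified, so both are assumptions here.

```python
COOPERATE, DEFECT, NUKE = "C", "D", "N"

def payoff(me, them):
    """My payoff for one round. If either player nukes, both lose 10,000
    utilons (from the summary); the other payoffs are standard PD numbers
    (illustrative, not from the talk)."""
    if NUKE in (me, them):
        return -10_000
    table = {("C", "C"): 3, ("C", "D"): 0, ("D", "C"): 5, ("D", "D"): 1}
    return table[(me, them)]

def extort_bot(opponent_move):
    """Defect against cooperation, nuke against defection. (Its response to
    a fellow nuker is unspecified in the summary; here it also nukes.)"""
    return DEFECT if opponent_move == COOPERATE else NUKE

# Facing ExtortBot, cooperating costs you the sucker payoff; defecting
# costs you vastly more, which is the whole point of the extortion.
assert payoff(COOPERATE, extort_bot(COOPERATE)) == 0
assert payoff(DEFECT, extort_bot(DEFECT)) == -10_000
```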

62. ramsey Says:

Now Scott, it’s a bit much to ask that we restrict our comments to the technical content of Eliezer’s talk, rather than remarking on the rather unusual fact of a self-educated high school dropout giving an invited academic talk at MIT!

Personally, I take this as evidence that academia functions reasonably well, focusing on ideas rather than the person. This is, of course, thanks mainly to the open-mindedness and good taste of individual academics (in this case, you)!

63. Scott Says:

ramsey #62: Thanks … but maybe I’m just biased because I, too, was a “high-school dropout”! 🙂 (On the other hand, after dropping out of high school and getting a G.E.D., I went to Cornell, which was generous enough to accept me with that strange record. And I actually liked college, enough to stay there till this day…)

64. Scott Says:

Update: Eliezer informs me that he’s not a high-school dropout; rather, he never went to high school in the first place.

65. Michael McGovern Says:

This might cover some of the topics in Yudkowsky’s talk:

Abstract: Classical game-theoretic agents defect in the Prisoner’s Dilemma even though mutual cooperation would yield higher utility for both agents. Moshe Tennenholtz showed that if each program is allowed to pass its playing strategy to all other players, some programs can then cooperate on the one-shot prisoner’s dilemma. In this paper, provability logic is used to enable a more flexible and secure form of mutual cooperation.
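The Tennenholtz result mentioned in the abstract can be illustrated with the simplest “program equilibrium” trick: a bot that, shown its opponent’s source code, cooperates only against an exact copy of itself. This toy sketch models programs as source strings; the paper’s provability-logic approach is strictly more flexible than this brittle source-comparison.

```python
# Toy model: a "program" is a source string plus a behavior that maps the
# opponent's source string to a move. CliqueBot cooperates exactly when the
# opponent's source matches its own.
CLIQUE_SRC = "cooperate iff opponent source == this source"

def clique_bot(opponent_src):
    return "Cooperate" if opponent_src == CLIQUE_SRC else "Defect"

# Two copies achieve mutual cooperation in the one-shot game, yet the bot
# cannot be exploited: anything that isn't an exact copy gets defection.
assert clique_bot(CLIQUE_SRC) == "Cooperate"
assert clique_bot("always defect") == "Defect"
```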

66. wolfgang Says:

Scott,

did you have a chance to discuss eugenics with Eliezer?
e.g. along the lines of this:
lesswrong.com/lw/f65/constructing_fictional_eugenics_lw_edition/

67. Martin Says:

Scott,

I hope I am not too rude in questioning your visitor but I’m interested in your opinion on the whole Less Wrong community.

I am certainly interested in rationality but I don’t really have the time to read the “sequences” and I’m very skeptical of the results that Yudkowsky et al. have come to. My caricatured understanding of the Less Wrong community is that they have determined by rational thought that the most important existential threat to humanity is the possibility of unfriendly AI, i.e. computers that will destroy us all, probably because they just don’t care about bags of meat.

This is certainly a surprising conclusion and I wonder if you believe it?

To see some of the wackier sides of this thinking, you can look at http://rationalwiki.org/wiki/Roko%27s_basilisk which apparently is really something that Yudkowsky thinks needs to be hidden from his followers because of the mental harm it could do them.

Would you suggest people take Less Wrong seriously, or at least invest the time to read the reams of material that it seems you need to read to be able to debate them?

68. Scott Says:

wolfgang #66: No, we didn’t discuss eugenics.

69. Scott Says:

Martin #67: Dude, check out the archives of this blog! You’ll find that, not only have I been intensely skeptical of many LW ideas, but I’ve written extensively about my skepticism here, and even personally debated Eliezer by video-chat (though I was heavily distracted by technical problems with the recording, and was unhappy with my performance).

Furthermore, my recent 85-page essay The Ghost in the Quantum Turing Machine was, at least partly, my personal attempt to articulate a principled alternative to “LessWrongism”: in particular, a way the physical world could be, such that minds like ours might not be copyable from one substrate to another as easily as digital computer programs.

So when I invited Eliezer to MIT, one of the basic goals was to give us the chance to hash out some of our many disagreements in person! And that’s what we did. Strangely, though, our conversations barely touched on Friendly/Unfriendly AI or the Singularity. Instead, they mostly dealt with general rules of rational behavior, Bayesianism, cognitive biases, the definiteness of mathematical questions like the Continuum Hypothesis, and (especially) quantum mechanics and the Many-Worlds Interpretation.

I think Eliezer’s view is that, if he could only convince me of certain “meta-level” claims about how to think rationally, how to judge competing scientific explanations, etc., then his views about friendly AI, cryonics, and all sorts of other “domain-level” issues would follow as corollaries. Or rather: he thinks that, if I won’t even agree with him about the total slam-dunk obviousness of both Bayesianism and the Many-Worlds Interpretation, then there’s not much point even discussing more “advanced” matters!

For my part, I was happy to debate Eliezer on his chosen turf. But while doing so helped me sharpen my thoughts, to make a long story short I remain skeptical even about many of his “meta-level rules,” let alone the claimed results of applying those rules to issues like AI and cryonics!

On the other hand, LessWrongism does have a kind of internal cohesion. It’s often clear why, if you accepted such-and-such a premise (for example, that it makes sense to consider current AI as “having progressed from zero to lizard-level in a mere half-century”), then such-and-such a conclusion (for example, that we should now expect AI to progress quickly through dog-level, monkey-level, human-level, and then beyond) would be perfectly sensible. That’s one of many reasons why, while I’m not a regular LW reader, I do find it fun and interesting to check in from time to time.

Anyway, as I said before, in his actual talk Eliezer surprised me by skipping the “ideology” entirely, and focusing solely on his and collaborators’ interesting recent results about game theory with recursively self-reflecting agents. You don’t have to agree with the ideology to appreciate the results—and indeed, I know several colleagues who did appreciate the results, despite neither knowing nor caring about the ideology.

Now that you mention it, I was planning to ask Eliezer for his take on the “Roko’s basilisk” affair, but forgot to! It did sound pretty strange when I read about it.

70. John Sidles Says:

Martin #67, in regard to the link you supplied, the best single comment (as it seems to me) in regard to “The Roko Affair” was this one:

With sufficient time, effort, knowledge, and stupidity it is possible to hurt people. Don’t.
— Eliezer Yudkowsky

Eliezer’s common-sense maxim echoes still-viable themes from the dawn of the Enlightenment

I will prescribe according to my ability and my judgment and never do harm to anyone.
— the Hippocratic Oath

A truth that’s told with bad intent beats all the lies you can invent …
— William Blake/Auguries of Innocence

In modern times these themes are echoed in the climate-change debate. All too commonly, climate-change skeptics focus their criticisms exclusively upon the weakest science, and conversely, climate scientists focus their responses exclusively upon the weakest skepticism.

The resulting pact of “mutual assured infelicity” affords skeptics and scientists alike ample amusing opportunities for sardonic expression … but no very effective stimulus for cognitive advancement in which skeptics and scientists share alike.

That is why, here on Shtetl Optimized, the commentaries that matter in the long run, in regard to Yudkowsky-style rationalism and DWAVE-style computation (and any other topic too) are the commentaries that respond thoughtfully and creatively to the very real challenges that Eliezer’s work (and DWAVE’s devices) are posing for mainstream CS (and QIT).

Summary and Conclusion  Plenty of good ideas can be found in both Eliezer Yudkowsky’s MIT talk *and* the Google/DWAVE quantum computing video.

71. Martin Says:

Thanks for the links Scott! I’m happy to see you haven’t become a committed transhumanist/singulatarian etc.

72. Raoul Ohio Says:

Martin #67: Excellent pointer! Who’da thunk a group considering itself totally rational would recreate some of the worst of organized religion?

73. wolfgang Says:

Some more links to the funny and not so funny aspects of the ‘Cult of Bayesianism’ are on this page:
plover.net/~bonds/cultofbayes.html

74. Scott Says:

wolfgang #73: As many serious intellectual disagreements as I have with LessWrongism, I cringed at the litany of ad-hominem (and not even funny…) slurs on that page. It’s too easy to say:

“Hahaha, you’re just making these arguments because you’re a nerdy, socially-awkward white male who’s overly impressed with his own intelligence, and who probably read Ayn Rand and had trouble getting laid as a teenager!”

That’s a pretty wide net, one that probably sweeps up a significant fraction of the world’s great scientists. It seems more appropriate to a YouTube comments section—or to junior high school, where at least the bullies skip the pretense of argument and get straight to the punching.

A few more specific replies:

1. Yes, Bayes’ formula is a mathematical triviality, and yes, “Bayesianism” is an ideology that many people take too far. But one reason they’re tempted to take it too far is that Bayesian methods really have been hugely successful over the past few decades in machine learning and other fields (as one example, they were at the heart of Nate Silver’s successful election predictions)—a fact that the essay never grapples with.

2. Yes, it can be amusing to poke fun at LW’s cultlike aspects; I’ve sometimes done so myself. But one has to admit that it’s an unusual cult that regularly hosts, on its own website, detailed arguments about everything wrong with the cult and its leader.

3. Say whatever else about LWers, it seems completely unfair to lump them in with the Randroids and other philosophical advocates of selfishness. For one thing, Eliezer wrote an excellent critique of Ayn Rand (” ‘Study science, not just me!’ is probably the most important piece of advice Ayn Rand should’ve given her followers and didn’t. There’s no one human being who ever lived, whose shoulders were broad enough to bear all the weight of a true science with many contributors”). For another, many LWers have strong humanitarian impulses—and for example, were involved with GiveWell, the site that tries to measure the effectiveness of charities and ends up recommending malaria nets as the best value. Yes, I do have political disagreements with Eliezer, and I consider engagement with current political reality to be much more important than he does. But if your beef is with the Randroids, Social Darwinists, etc., then just criticize them directly, rather than quixotically trying to make Eliezer their stand-in.
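To make point 1 concrete: Bayes’ formula really is a one-line identity, P(H|E) = P(E|H)·P(H) / P(E). A minimal sketch with stock textbook diagnostic-test numbers (a 1% base rate, 90% sensitivity, 9% false-positive rate; the figures are illustrative, not from the post):

```python
from fractions import Fraction

def posterior(prior, like_h, like_not_h):
    """Bayes' rule: P(H|E) = P(E|H) P(H) / P(E), where the evidence term
    P(E) is expanded by the law of total probability."""
    evidence = like_h * prior + like_not_h * (1 - prior)
    return like_h * prior / evidence

# 1% base rate, test fires 90% of the time when H holds, 9% otherwise.
p = posterior(Fraction(1, 100), Fraction(90, 100), Fraction(9, 100))
assert p == Fraction(10, 109)  # ~9.2%: still under 10% despite a "positive"
```

The surprising smallness of the posterior is the classic base-rate point; the formula itself is trivial, as the essay says — the interesting part is how systematically people misapply it without it.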

75. wolfgang Says:

>> they were at the heart of Nate Silver’s successful election predictions

not really, see e.g.:
normaldeviate.wordpress.com/2012/12/04/nate-silver-is-a-frequentist-review-of-the-signal-and-the-noise/

76. wolfgang Says:

yes, you are right, LW is *not* about selfishness, but it is still pretty disgusting in many cases.
Contemplating a tax on families with ‘negative-expectation’ kids (as he does in his eugenics text) and many other ‘thought experiments’, e.g. about torturing people for the benefit of others, is not only stupid but pretty close to psychopathic imho.

77. John Sidles Says:

Modern-day naturalists and primatologists — like Jared Diamond for example — are all-too-familiar with a passionate declaration by Arthur Wichmann that concludes his magisterial New Guinea fieldwork survey of 1905 Entdeckungsgeschichte von Neu-Guinea, 1828-1902

(p. 821) Verderblicher wirkte aber und wirkt es noch, dass die Mehrzahl der Forschungsreisenden, allen üblen Erfahrungen zum Trotz und bis in die neueste Zeit hinein, sich nicht dazu aufzuschwingen vermochte, sich über frühere Arbeiten zu unterrichten, und daher in unzureichender Weise vorbereitet hinauszog. Auch in Zukunft wird es nicht an Leuten fehlen, die, sich selbst schändend und unbekümmert darum, dass das Schicksal sie zu guter Letzt doch ereilt, durch fingirte Reisebeschreibungen die rasch sich verflüchtigende Gunst des grossen Haufens zu erringen suchen. An allen diesen Missständen wird auch dieses Werk kaum etwas zu ändern vermögen. Nichts gelernt und alles vergessen!

“Successive explorers commit the same stupidities again and again: unwarranted pride in overstated accomplishments, refusal to acknowledge disastrous oversights, ignoring the accomplishments of previous explorers, consequent repetition of previous errors, hence a long history of unnecessary sufferings and deaths. Future explorers will continue to repeat the same errors. Nothing has been learned, and everything forgotten!”

Nowadays we need not be as pessimistic as Wichmann, and yet multiple comments on this thread illustrate the continuing relevance of Wichmann’s ahistorical lament “Nichts gelernt und alles vergessen!”

78. ramsey Says:

wolfgang —

How do you propose to come to a greater understanding of morality without considering moral edge cases where human intuitions break down?

This approach is quite common in the philosophy literature.

79. John Sidles Says:

PS: The various points of Scott’s comment #74 are uniformly excellent (as they seem to me) and very much in the spirit of Wichmann.

80. wolfgang Says:

@ramsey

First of all, trying to understand morality as a derivation from Bayes’ theorem would be pretty stupid…

More importantly, if you consider ‘moral edge cases’ then you should clearly state your purpose – otherwise the morality/language of your LW forum may become similar to a neo-nazi forum (they also just want to optimize society, using certain principles as starting point – no?)

81. Scott Says:

wolfgang #75: Well, Silver calls himself a Bayesian, and I’m inclined to accept his explicit self-identification over someone else claiming him for the frequentists … if not, then why not excuse the LWers on the grounds that they’re not “really” Bayesians either?

82. wolfgang Says:

@Scott

>> why not excuse the LWers on the grounds that they’re not “really” Bayesians either

The problem I have with LW is not whether they apply Bayes’ theorem correctly; and btw, I also don’t question that the seminar talk Eliezer gave was really interesting.

My problem is what LW had to say about eugenics, torture etc. and the fact that if they actually mean what they wrote about such issues then they are pretty close to a psychopathic mindset.

ps:
>> Silver calls himself a Bayesian
Yes and D-wave calls its device a quantum computer.

83. Scott Says:

wolfgang #80:

More importantly, if you consider ‘moral edge cases’ then you should clearly state your purpose – otherwise the morality/language of your LW forum may become similar to a neo-nazi forum (they also just want to optimize society, using certain principles as starting point – no?)

Sorry, but I completely disagree. Many people seem to have drawn the lesson from Nazism that certain ideas (for example, eugenics) are so grievously immoral that they shouldn’t even be openly debated or discussed. While that reaction is understandable, I personally draw the opposite conclusion: I think just about everything should be openly discussed! The Nazis carried out the Holocaust in secret, heavily cloaked in euphemism, because as much antisemitism as there really was in Germany, they correctly understood that their campaign of mass murder would not have survived open discussion.

84. wolfgang Says:

>> everything should be openly discussed
agreed, which is why I posted the above links so that people can read what LW has to say about eugenics and similar topics other than the academic prisoner’s dilemma.

85. wolfgang Says:

Scott,

I hope we do not disagree that a discussion of eugenics should begin with a statement of what exactly the purpose of this discussion is.

If you agree with Eliezer that a discussion of eugenics should begin with the study of animal breeding then we better end this exchange.

86. wolfgang Says:

>> The Nazis carried out the Holocaust in secret

this is not really the topic of this comment thread so I don’t know if we want to go there, but the question of what the Germans knew about the ‘final solution’ is quite delicate.

I would make the point that Adolf was very clear early on about his plans in his infamous autobiography; the tragedy was that people did not take him seriously.

The lesson I learn from this is that we should take people at their word when they write about eugenics, torture, etc.

87. Scott Says:

wolfgang: I just read the post of Eliezer that you linked to (I hadn’t seen it before), and it said explicitly at the beginning that the “purpose” was to help someone writing a science-fiction story.

Personally, my view is that we might have excellent reasons, as a society, for refusing to implement any eugenic policies whatsoever (even the so-called “soft” ones based on economic incentives). But we shouldn’t hitch that moral stance to the false empirical claim that intelligence and other qualities have zero heritability, and therefore that no such scheme could work even in principle. If we do that, then we simply make our moral stance weaker: once people realize (as they will) that the empirical part is wrong, they’ll then question the moral part as well. This case was made extremely eloquently in Steven Pinker’s The Blank Slate.

From my perspective, the happiest plausible outcome is that this entire issue will become moot in a century or two, once genetic engineering lets us give every newborn baby the intelligence of a Feynman or von Neumann. If that ever happens, then for me the relevant question won’t be: “but is doing this morally acceptable?” Rather it will be: “is doing this morally imperative?” 🙂

Dear Scott,
I read through at least the first 3-4 sections of your free will essay and all of Quantum Computing since Democritus some time ago. I understand that your major objection to lesswrong Bayesianism rests on Knightian uncertainty. I am much more interested, however, in your objections to the lesswrong philosophy centered on anthropics and the question of personal identity.

I understand that “lesswrong-ology” has concerned itself in large part with questions of how to reason in situations of multiple copies of the self. They have developed multiple “decision theories” to handle the question, and have attempted to “dissolve the question” with essays like:
http://lesswrong.com/lw/32o/if_a_tree_falls_on_sleeping_beauty/
I would be eternally grateful to see someone not steeped in lesswrong-ology engage with these ideas. I find it disgustingly frustrating that there is so little engagement between lesswrong and traditional philosophy, and I am sure it leads to me wasting a lot of time.

After speaking with Eliezer, is there any way that you could explain to someone like me your objections to their *solutions* to anthropics problems/personal identity problems/etc.? I have seen you and others complain often about the problems, and about how it is a strike against the MWI that it forces one to confront them, but until now I have not seen anyone bother to learn and complain about lesswrong’s proposed solutions.

89. John Sidles Says:

wolfgang opines  “The lesson I learn from this is that we should take people at their word when they write about eugenics, torture, etc.”

The lives of Arthur Wichmann’s (see #77) two children — both gifted, and both home-schooled — impart further lessons:

•  Clara Wichmann was home-schooled, and became a prominent feminist, anarcho-syndicalist, and lawyer. Clara died young (at age 37, in childbirth), and her surviving daughter fought with the Dutch resistance in WWII. The Clara Wichmann Institute is named after her.

•  Erich Wichmann was home-schooled, and he became a prominent artist, writer, and fascist. Erich died young (at age 39, of pneumonia), and his literary and artistic reputation did not survive the defeat of European fascism in WWII.

Clara and Erich both were gifted youths who did not live long enough to witness the sweet-and-bitter fruits of their youthful ideological passions. Alas, their deaths would have been easily preventable by modern medical advances.

A main lesson-learned (for me) is that we humans “grow too soon old and too late smart.” As continued medical advances slowly increase human lifetimes — increasing our shared stake in a secure, peaceful, prosperous future and a healthy planetary ecology — perhaps we will all of us slowly become smarter.

90. Raoul Ohio Says:

John, it is true. In fact, I am feeling smarter right now; I thought twice and did not interject any half-baked remarks into Wolfgang’s and Scott’s volley above.

91. jonas Says:

That comparison is so unfair! Typical home PCs have 128 bits, and you can even buy home PCs with 256 bits off the shelf these days. The rest of the difference in bits from D-wave is more than made up by multi-core technology.

92. Joshua Zelinsky Says:

So, I’m not sure that anyone would ever actually take that Plover essay seriously. But since apparently at least one person has, I think it might be worth linking to the discussion of it on Less Wrong: http://lesswrong.com/r/all/lw/ib0/open_thread_august_1218_2013/9kpf (Yes, one of the main comments there is my own.) Frankly, I’m someone who disagrees pretty strongly with a lot of what is normal for LW and I found that essay to be at best an interesting example of motivated cognition.

93. Joshua Zelinsky Says:

I should add that I don’t think anyone intended the discussion there as a full-scale analysis or response to Plover’s essay, but I can write one if Wolfgang or others are actually interested.

94. Raoul Ohio Says:

Joshua,

I agree with your assessment that many of Plover’s raps on LW are sloppy low blows. But it is not clear that they all miss the target. The last item you address is that of “wacky ideas being promoted by billionaire friends considered harmful”. Not sure I agree with your “so what?”.

The big problem that LW has is that the kind of rationality that it promotes usually leads to holding beliefs that are perfectly reasonable, but that lack credibility.

The credibility will only come when (or if) those beliefs are vindicated by events in the real world. But even if the LW crowd are right, that will take 30+ years, and we will all be sitting in a retirement home or pushing up daisies by that time.

Right now, the set of ideas that LW/MIRI/CFAR talk about are “crazy” and will be for the foreseeable future.

This is very sad, but I have learned to accept it and just get on with my life.

Or, to put it another way, once you learn to think much more clearly and rationally than the vast majority of other humans, you end up being a victim of your own success; very few people believe you, most people think you are crazy, and being correct/having good arguments doesn’t help you.

97. wolfgang Says:

>> once genetic engineering lets us give every newborn baby the intelligence of a Feynman or von Neumann.

I for one happen to think that we should use genetic engineering to improve dancing skills rather than IQ if the goal is to improve overall happiness in human beings 😎

98. Joshua Zelinsky Says:

Raoul, that seems to be, if anything, one of the weakest of the arguments. That a major philanthropist who also supports causes one doesn’t agree with is funding something is somehow a problem with it? To use the obvious sorts of examples: the Koch brothers, as well as Richard Branson and George Soros, all support endeavors not connected to their politics. Are those endeavors now questionable due to that funding?

99. Yongli Chen Says:

Hi Prof. Aaronson,
I am a student at UIUC and attended your talk today. I found it very interesting! Do you have any other talks available for us to watch online?

100. Scott Says:

Yongli #99: Sure! For starters, you could try my TEDx talk on Feynman and quantum computing, my NIPS talk on quantum information and the brain, my Q+ seminar on quantum money, or my talk at Penn on the nature of mathematical proof.

101. Dave Says:

Hi Scott. Did you go to the Christiano talk? If so, what were your thoughts on that?

102. Scott Says:

Dave #101: Yes, Paul’s talk was really nice too. It was a technical talk, discussing his recent result about how to evade the Liar Paradox (and more generally, Tarski’s indefinability of truth) by
(1) passing to probabilistic logic and then
(2) applying the Kakutani fixed-point theorem to get a “reflective equilibrium.”

The idea is a very natural one; something similar is done in game theory (where you pass from pure to mixed strategies to ensure the existence of a Nash equilibrium), and even in the study of closed timelike curves (where, again, Deutsch proposed passing from pure to mixed states to ensure the existence of a causally-consistent evolution). However, one aspect that I hadn’t appreciated at all before Paul’s talk was their result’s reliance on nonstandard analysis (i.e., formal infinitesimals). That’s something that I still don’t really understand, and would like to understand better.
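The pure-to-mixed move can be seen in the smallest possible setting. Matching pennies (an illustration chosen here, not an example from Paul’s talk) has no pure-strategy Nash equilibrium, but randomizing 50/50 restores one — the same existence-by-randomization phenomenon that motivates passing to probabilistic logic:

```python
import itertools

def row_payoff(r, c):
    """Zero-sum matching pennies: row wins (+1) on a match,
    column wins (row gets -1) on a mismatch."""
    return 1 if r == c else -1

moves = ["Heads", "Tails"]

# No pure profile is stable: at every cell, some player gains by switching.
for r, c in itertools.product(moves, repeat=2):
    row_improves = any(row_payoff(r2, c) > row_payoff(r, c) for r2 in moves)
    col_improves = any(-row_payoff(r, c2) > -row_payoff(r, c) for c2 in moves)
    assert row_improves or col_improves

# Against a 50/50 opponent, every pure strategy gives the same expected
# payoff, so no profitable deviation exists: (1/2, 1/2) is an equilibrium.
expected = {r: sum(0.5 * row_payoff(r, c) for c in moves) for r in moves}
assert expected["Heads"] == expected["Tails"] == 0.0
```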

103. Eliezer Yudkowsky Says:

I could wish people had stronger priors on “Some parts of the Internet hate LW sufficiently, while having sufficiently weak norms of reasonable discussion, that they will just *outright lie*.” Nobody associated with MIRI has *ever* advocated any policy, or any belief implying a policy, for which it would make sense to claim that you have to donate to MIRI on pain of penalization by future AIs.

From my perspective, l’affaire basilisk consists of (1) my having no grasp of the Streisand effect, without which no issue would have existed (I have since updated); (2) some Internet trolls innocently or maliciously failing to grasp what alleged ‘bad thoughts’ can do to people with obsessive-compulsive disorder or near-kin mental conditions to it; (3) the almost entire absence of understanding of the decision theory allegedly involved, propagated further since nobody who knew any technical details of the sort covered in the MIT talk was actually participating in the conversation; and (4) continued gross misrepresentation of the historical events and who said what and why, by the same sort of Internet trolls that wrote the linked Plover article or that can’t comprehend the difference between a clearly labeled fictional hypothetical and an actual policy proposal.

Some people believe so strongly that Bayesianism ought to devolve into a religion that they will just *make up* religious-sounding beliefs and attribute them. Sometimes there is some distant basis in something somebody actually said once, but you can rarely guess what that was by looking at the accounts told afterward.

And of course, it need not be said that not all those who call themselves Bayesian, or rationalist, or “LessWrongian” will speak as a group, especially when some of us are explicitly anticonformist. You might as well attribute opinions to Reddit because of what one redditor said.

104. Eliezer Yudkowsky Says:

I will also comment that I think that LW is collectively, and I am individually, in a state of confessed confusion about anthropic probabilities (trying to reason about hypotheses where different background states of the universe imply different numbers of people or different numbers of indistinguishable copies of you). We talk about it a lot because we don’t understand it, and yet something like it seems to necessarily form a background assumption to any kind of probability theory that we do.

As Bostrom (and probably numerous others (who are nonetheless distinguishable from Bostrom)) points out, in a large-enough universe, for example a spatially infinite one, any experience is realized by some Boltzmann brain – an improbably randomly assembled brain. Whenever we say a hypothesis is confirmed by experience, what we mean is not, “This hypothesis predicts my experience will exist”, but “This hypothesis predicts my experience happens more often, compared to the alternative hypothesis’s prediction” or “This experience is assigned greater ‘measure'” (whatever the hell measure is).

At the very basis of all our epistemology is a point where we go from a sensory experience, to talking about the confirmation or disconfirmation of a belief. But there are no good naturalistic (non-Cartesian, not having fundamental entities for mental stuff) reductionist accounts of how we go from different numbers of brains to a weight of experience. (Suppose you had a brain encoded in a big, flat sheet containing electrical circuits. If we slowly pried apart the sheet along its thickness into two sheets, at what point, if any, would there be two people and twice as much experience? Even the language we’re using here is non-naturalistic!)

So of course we’re fascinated by the problem, but I’ve yet to hear of any really good solutions.

105. Eliezer Yudkowsky Says:

This is also why I don’t regard the Born probabilities as a great problem for many-worlds in particular; *all* our accounts of epistemology contain a similar problem!

106. Kuru Says:

Eliezer #103,

Do you hold philosophers, say Nietzsche or Marx, responsible for how their philosophy can be used by some people “with obsessive-compulsive disorder or near-kin mental conditions”?

What distinction do you see between “fictional hypothetical that we should not take seriously” and, say, your own “fictional hypothetical but we should be prepared” writings about singularity?

107. wolfgang Says:

>> why I don’t regard the Born probabilities as a great problem for many-worlds in particular

Well, without the Born probabilities you don’t really have anything, except a feel-good story imho.

I guess one can use MWI in general discussions and calculate the Born probabilities whenever necessary, as e.g. a Copenhagener would.
But I fail to see what this achieves …

108. John Sidles Says:

wolfgang says (#107) “Without the Born probabilities you don’t really have anything, except a feel-good story imho.”

In the 20th century, generalizing global Euclidean geometry to local differential geometry (and its algebraic descendants) transformationally advanced our mathematical understanding of dynamical processes. In particular, the local notions of pullback and pushforward, blowup and blowdown proved to be seminal.

Similarly in the 21st century, generalizing global Born measurement probabilities to local differential Lindbladian processes (and their algebraic descendants) is transformationally advancing our mathematical understanding of measurement processes. Again the local notions of pullback and pushforward, blowup and blowdown are proving to be seminal.

Global-to-local reductions like that presented in Carlton Caves’ on-line notes “Completely positive maps, positive maps, and the Lindblad form” (Google finds Caves’ notes) provide a canonical path for translating 20th century works like Nielsen and Chuang’s Quantum Computation and Quantum Information entirely into modern geometric language of authors like Arnold, Landsberg, and even Grothendieck:

It is a noncontroversial statement that thermodynamics is a theory in close contact with measurement. Yet the specific implications of this statement have changed beyond recognition since the formative years of the classical theory.
— Laszlo Tisza

As with dynamics, so with thermodynamics and information theory. With the added element of reflection, these same emerging mathematical themes are central even to Less Wrong‘s AI-centric worldview.

When we compute quantum mechanical simulations, are we pulling back and blowing down Nature’s nonsingular Hilbert space? Or is Nature’s state-space varietal, such that Hilbert space is our pushed-forward blowup of it? No one knows.

Appreciation of these rich mathematical themes and tough physics questions accretes slowly. A celebrated passage by Grothendieck expresses it thusly:

A different image came to me a few weeks ago. The unknown thing to be known appeared to me as some stretch of earth or hard marl, resisting penetration. … the sea advances insensibly in silence, nothing seems to happen, nothing moves, the water is so far off you hardly hear it … yet it finally surrounds the resistant substance. […] the usual constructions suggested by geometric intuition can be transcribed, in essentially just one reasonable way, into this language.

So viewed, the process of constructing geometric descriptions/simulations of dynamical systems becomes a lengthy ascent, by many small paces, of a demandingly slippery mathematical slope (as Mount Rainier), as contrasted with short climbs by crux moves up sheer cliffs (as El Capitan). Perhaps this explains the common experience that it is comparably difficult to write an algebraic geometry textbook as to read one.

Conclusion  What Eliezer Yudkowsky calls the “state of confessed confusion about anthropic [quantum] probabilities” substantially originates in a 20th century quantum literature whose translation into modern mathematical language is well begun, yet far from completed. As so often in the history of science, mathematical understanding, physical understanding, and physiological understanding are advancing together.

109. Sandro Says:

wolfgang: “Contemplating a tax on families with ‘negative-expectation’ kids (as he does in his eugenics text) and many other ‘thought experiments’, e.g. about torturing people for the benefit of others, is not only stupid but pretty close to psychopathic imho.”

What a ridiculous sentiment. I’ll just leave this here for you:

“It is the mark of an educated mind to be able to entertain a thought without accepting it.” ~ Aristotle

If his arguments are so stupid, then post your counters on LW and it should be rather trivial to convince everyone. And if it’s not trivial, then clearly they’re not stupid, and exactly *why* they’re not stupid and trivially countered raises important questions. Exploring these questions is not psychopathic any more than contemplating the trolley problem.

110. wolfgang Says:

@Sandro

If you don’t understand why talk about “negative expectation kids” is stupid, then either you never had kids or you have no heart – in both cases I cannot help you.

111. Kuru Says:

@Sandro

“If his arguments are so stupid, then post your counters on LW and it should be rather trivial to convince everyone.”

So, you don’t believe Eliezer when he says this was just a “clearly labeled fictional hypothetical”?

112. John Sidles Says:

Hard-won advice from a Nobel-winning STEM professional:

Let the reader who expects this book to be a political exposé slam its covers shut right now.

If only it were all so simple! If only there were evil people somewhere insidiously committing evil deeds, and it were necessary only to separate them from the rest of us and destroy them.

But the line dividing good and evil cuts through the heart of every human being. And who is willing to destroy a piece of his own heart?

It is well to be wary of ideologies, communities, and individuals that associate no concrete checks-and-balances to Solzhenitsyn’s heart-line.

And it is well to be wary too, of a recurring theme in literature and history and science and even mathematics, that one’s own brain all-too-inventively ascribes the purest reasons even to the darkest motives of one’s own heart.

Younger people (like young Solzhenitsyn) commonly are largely or even wholly unconscious of this line. Older people generally are conscious of it, but often-times cease to respect it; this cynicism arises as the cumulative product of fatigue, illness, disappointments, and personal tragedies (not to mention the temptations of self-aggrandizement).

As for willful ignorance, it afflicts people of all ages.

113. Sandro Says:

@Kuru, what I believe is irrelevant. Wolfgang clearly believes it wasn’t a thought experiment.

@wolfgang, what a convincing argument. I bow to your superior logic.

Just one among many reasons a society might discourage negative-expectation offspring is that social security and various other safety nets simply don’t work if the ratio of productive to non-productive members tips too far. The feasibility of helping *anyone* rests on the premise that we’re not helping *nearly everyone*.

If our civilization were to experience a sufficient calamity such that this is no longer the case, your “counterargument” would be seen for the convenient fiction it really is.

114. Raoul Ohio Says:

Sandro:

if (preferQuestionsThatCannotBeAnswered()) {
    studyPhilosophy();
} else {
    studyScience();
}

115. Kuru Says:

@Sandro,

“what I believe is irrelevant.”

I’m afraid you’re right. 🙂

Sorry for the bad joke, this was just too easy. ^^ Of course your beliefs are relevant, at least because your question injected your own belief that LWians are “superior logicists” who can criticize themselves, something some may find not entirely true.

More specifically, Wolfgang believes LWians tend to form a religion, so in his system of belief your question sounds like “If you don’t think Jesus is the Holy Son of God, then post your counters on christian.net and it should be rather trivial to convince everyone.”

116. Alexander Vlasov Says:

Looking at the abstract of the talk, I sense some analogies with the idea of levels of reflection in Lefebvre’s Reflexive Theory (cf. http://en.wikipedia.org/wiki/Vladimir_Lefebvre). Is it really so?

117. Sandro Says:

@Kuru, “at least because your question injected your own belief that LWians are “superior logicists” that can criticize themselves, something some may find not entirely true.”

I neither said nor implied that. I merely said wolfgang’s objections are unjustified.

@Kuru, “More specifically, Wolfgang believes LWians tend to form a religion, so in his system of belief your question”

Belief systems require justification. Wolfgang provided his alleged justification and it was found wanting. My belief system is irrelevant to this point.

Finally, on your analogy of LW to Christian.net: one is explicitly established with the intent of encouraging reasoned debate, the other is not. Did wolfgang claim that Eliezer discounted any sort of reasoned counterargument? Nope, he didn’t even present such a counterargument. Your analogy is thus inapplicable.

@Raoul, you’re only partly right about philosophy consisting of questions without answers. Some of it is, but much of it isn’t. After all, science itself started as natural philosophy and I doubt very much that you consider science as providing no answers.

Philosophy is very important for properly defining a domain of discourse, and once the domain is established, philosophy becomes some sort of science or some sort of mathematics.

Eliezer,
Thank you for clarifying that both you and lesswrong retain professed confusions about anthropic reasoning. While I got a substantial amount out of browsing general posts on lesswrong, I never engaged the large morass of decision theory posts enough to understand what they claim to solve.
I am still interested in understanding what the essence of the debate between you and Prof. Aaronson is regarding the MWI. My guess is that you would agree that the picture laid out in the free will essay is coherent, but essentially very unlikely. You might say that Aaronson’s picture is motivated:
1) By a desire to avoid anthropic conundrums, which you believe need to be addressed even in situations of entirely classical physics. Besides, I would guess you would say, the desire to avoid a conundrum is not necessarily a good reason to believe a more complicated theory that defines away the problem.
2) By dint of his personality and profession, which are obsessed with constructing empirical tests of the boundaries of quantum mechanics. Perhaps, you might say, someone whose entire job is to test the boundaries of quantum mechanics would be more motivated to place weight on situations in which quantum mechanics could not be naively extrapolated.

119. Gabriel Nivasch Says:

Dear Scott,
Google might very well solve global warming thanks to the D-Wave machine. It’s fifty-fifty. I don’t understand why you get so worked up about it.

120. wolfgang Says:

@Sandro
>> analogy of LW to Christian.net

On LW the word “Bayesian” is used quite often, however I have seen very little actual use of Bayesian statistics.
So let me ask you this: How would you use a Bayesian network (or any other form of Bayesian updating) to settle question(s) in eugenics or any other similar topic?

A link to such a study would do, as I am sure it is straightforward for a LW Bayesian to calculate this and must have been done already.

Just to be sure, I am not asking for the pseudo-intellectual back and forth I have seen on LW discussion boards, but real Bayesian statistics.
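To be concrete about what “real Bayesian statistics” means here: updating an explicit prior with explicit likelihoods via Bayes’ theorem. A minimal sketch, with purely invented numbers (no claim about any actual question):

```python
# Illustrative Bayesian update: prior belief in a hypothesis H,
# revised after observing a piece of evidence E.
prior = 0.5                  # P(H), chosen arbitrarily for illustration
p_e_given_h = 0.8            # P(E | H), invented likelihood
p_e_given_not_h = 0.2        # P(E | not H), invented likelihood

# Total probability of the evidence: P(E) = P(E|H)P(H) + P(E|~H)P(~H)
p_e = p_e_given_h * prior + p_e_given_not_h * (1 - prior)

# Bayes' theorem: P(H | E) = P(E|H) P(H) / P(E)
posterior = p_e_given_h * prior / p_e

print(posterior)  # 0.8
```

The point of the exercise is that every number feeding the update must come from somewhere defensible, which is exactly what is missing from merely invoking the word “Bayesian”.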

121. John Sidles Says:

Among the most subtle and influential meditations by a STEM-trained professional upon the topic of cognitive priors — Bayesian and otherwise — is Aleksandr Solzhenitsyn’s diptych We Never Make Mistakes (1963). Solzhenitsyn’s work reminds us that conscious weighing of priors greatly influences human cognition even when numerical probabilities are not assigned to priors.

Further examples of non-numerical cognitive priors come readily to mind. For example, Arthur Jaffe and Frank Quinn’s much-cited article “‘Theoretical mathematics’: toward a cultural synthesis of mathematics and theoretical physics” (1993, arXiv:math/9307227) argues (in effect) that the defining objective of mathematics is the construction of assertions whose probability of falsehood (Bayesian or otherwise) is as near to zero as feasible.

In contrast, Bill Thurston’s much-cited reply “On proof and progress in mathematics” (1994, arXiv:math/9404236) argues for broader, less-quantifiable roles for priors in mathematical cognition. Of these two articles, Thurston’s has been the more cited and (arguably) the more influential.

In subsequent decades neurophysiologists have not been idle, and in particular a PUBMED search for review articles with keyword reconsolidation shows the strengthening evidence that human cognition is a continuing process of orchestrating emotions and organizing memories, in which deductive chains of reasoning from observations and axioms are synthesized last (not first!), and numerical values are never assigned to priors at all.

Conclusion  The conscious contemplation of broad-ranging priors itself serves to broaden human cognition; the assignment of accurate numerical probabilities (Bayesian or otherwise) to those priors is an optional step that for tough problems (both mathematical and otherwise) is generically infeasible.

122. wolfgang Says:

@John

I am not sure what “the theory of foliations and geometrization of 3-manifolds” has to do with any of this,
but perhaps Sandro can shed some further light on it …

123. John Sidles Says:

wolfgang (#122) is puzzled “I am not sure what ‘the theory of foliations and geometrization of 3-manifolds’ has to do with any of this.”

Bill Thurston clarifies this point with the following Socratic elenchus (from p. 2 of arXiv:math/9404236):

Q  How do mathematicians advance human understanding of mathematics?

A  This question brings to the fore something that is fundamental and pervasive: that what we are doing is finding ways for people to understand and think about mathematics. […] If what we are doing is constructing better ways of thinking, then psychological and social dimensions are essential to a good model for mathematical progress.
————-
Q  How do people understand mathematics?

A  This is a very hard question. Understanding is an individual and internal matter that is hard to be fully aware of, hard to understand and often hard to communicate.
————-
Q  How is mathematical understanding communicated?

A  The transfer of understanding from one person to another is not automatic. It is hard and tricky. Therefore, to analyze human understanding of mathematics, it is important to consider who understands what, and when.
————-
Q  What is a proof?

A  … [To learn Thurston’s answers to this and many further questions, Shtetl Optimized folks will have to read his essay for themselves … as Terry Tao’s weblog recommends!]

The congruencies between Bill Thurston’s mathematical elenchus-answers and [historian of science] Thomas Broman’s much-cited historical essay The Habermasian public sphere and ‘Science in the Enlightenment’ are striking and (as it seems to me) meaningful.

Aren’t Bill Thurston and Thomas Broman — and Michel Foucault too — all saying pretty much the same thing as the Less Wrong folks, in regard to the merits of exhibiting and consciously questioning cognitive priors?

Conclusion (in words of one syllable)  Good math guys like Bill T, and the Less Wrong folks, and the French egg-heads too, all have come to the same broad mind-truths… and so, should we hold to them too?

——–
PS  Thurston-style foliation theory has *everything* to do with dynamical level-sets, which have *everything* to do with thermodynamics, which has *everything* to do with information theory, computation, and cognition, which have *everything* to do with … well … everything!

124. Sandro Says:

@wolfgang, “So let me ask you this: How would you use a Bayesian network (or any other form of Bayesian updating) to settle question(s) in eugenics or any other similar topic?”

Far too vague a question to provide any sort of meaningful answer, mainly because you haven’t specified what sort of questions you’re considering. Certainly prescriptions on eugenics can’t be answered until someone establishes moral realism, so you must be thinking of some sort of descriptive question. But descriptive questions are the domain of formal and scientific analysis, and Bayesian reasoning may be (and is) a tool that can be employed to answer such questions if a meaningful degree of uncertainty is involved (for instance, in data analysis). I’m not sure you’ll find anything surprising here, so I’m still left wondering what sort of questions makes you so skeptical.

In any case, you seem to be implying that the lack of *formal* rigour somehow means LW content is as fallacious as that found in any religion. This is clearly false.

Informal arguments from simple axiomatic bases are certainly subject to more ambiguity and more errors than formal arguments, but religious arguments often defy both minimal axiomatization and logic. This doesn’t seem to be the case in any of the LW content I’ve read, and the few detractors I’ve read either intentionally misrepresent the content they object to, or they simply misunderstand the context. In other words, I have seen no evidence of the fallacies common to religious arguments in the LW content, but I have seen much evidence of it in every LW detractor I’ve encountered.

So what’s my inevitable Bayesian conclusion? You see, formal rigour isn’t always necessary to obtain a justifiable belief.

125. wolfgang Says:

@Sandro

>> Far too vague a question to provide any sort of meaningful answer

Not vague at all and you answered it: You actually don’t use Bayesian statistics.

You just use the word “Bayesian” – as in “my inevitable Bayesian conclusion”.

In the non-LW world we call this misleading, just like the “science” in “Scientology”.

126. Nick Mann Says:

Trivia question for philosophers. Answer available upon request. “Wittgenstein made one recorded comment regarding QM. It’s published and checkable. Do you know what it was/is?” (Hint: It’s kind of Zeilingerian.)

This just in:

“Lead author Marc Warner from the London Centre for Nanotechnology, said: ‘In theory, a quantum computer can easily solve problems that a normal, classical, computer would not be able to answer in the lifetime of the universe. We just don’t know how to build one yet.'”

If Mr Warner means NP-complete problems, he hasn’t been paying attention to the masthead of this blog.

http://phys.org/news/2013-10-material-quantum-blue.html

127. Me Says:

@Nick Mann
He probably means integer factoring

@Scott
Is there any chance that you could comment on Timothy Gowers’s polymath attack on the P vs NP problem? I know that attempted proofs of P ≠ NP aren’t your favourite topic, but if a Fields medalist can’t get people interested, nobody can.

128. Scott Says:

Me #127: I agree, there are enough statements made about quantum computing that are wrong under any interpretation, that I don’t see much point in attacking the statements that when reasonably interpreted are (believed to be) correct!

Meanwhile, could you please give me a link to Gowers’s “polymath attack on the P vs NP problem”? I just looked at his blog, but the only recent post I saw related to P vs NP was a very nice exposition of the Razborov-Rudich barrier. I think Gowers himself would be the last to call that an “attack” on the problem: rather, it’s an explanation of one of the main things we know about how not to attack it!

129. Me Says:

It’s the newest post on his blog.
The inconspicuous title ‘What I did in my summer holidays’ hides the big news a little bit. Maybe that’s the reason it hasn’t gone viral yet.

130. Bill Kaminsky Says:

Just to clarify @Me #129 (and @Scott #128)

It’s the newest post on his [Gowers’s] blog.
The inconspicuous title `What I did in my summer holidays’ hides the big news a little bit…

The blog link Me @ #129 is alluding to is:

http://gowers.wordpress.com/2013/10/24/what-i-did-in-my-summer-holidays/#more-5141

About 90% of the way down the post, after lots of indeed-necessary exposition, Gowers gets around to the nitty-gritty of explaining why he hopes (uber-cautiously, of course!) that his new approach doesn’t naturalize and thus run afoul of the Razborov-Rudich barrier.

And then at the very end of the above blog post, he links to his still-developing notebooks on his work:

http://gowers.tiddlyspace.com/#%5B%5BA%20sitemap%20for%20the%20P%20versus%20NP%20notebook%5D%5D

Fascinating stuff.

131. Scott Says:

Me #129 and Bill #130: OK, thanks! I’d like to have the time to study Gowers’s notebooks before expressing an opinion—and right now I don’t have the time.

What I can say is that reading Gowers’s previous essays and blog posts (including about theoretical computer science) has been so rewarding, that he’s one of the few people for whom I’d look forward to making the time to study assorted scribblings about P vs. NP that, by the author’s own admission, don’t yet lead anywhere concrete! 🙂

132. John Sidles Says:

Please let me echo Scott’s endorsement (#131) of Tim Gowers’s wonderfully creative new mathematical enterprise What I did in my summer holidays, which (as a historian might describe it) seeks to extend Bill Thurston’s mathematical elenchus (as summarized in #123) to the Habermasian public sphere of the 21st century internet.

Gowers’s starting editorial policy is particularly smile-provoking:

“If maintaining the partial proof-discovery tree becomes too much work to do on my own, then I will consider giving editing rights to one or more “core” participants. But to start with I will be the sole moderator.”

This reasonable and necessary editorial policy calls to mind Thomas Broman’s oft-quoted observation

“In principle, they [the earliest scientific journals] excluded no one, even if in practice they excluded nearly everyone.”

If it should happen that the various PolyMath enterprises prosper as their early adopters hope and intend, then (as it seems to me) the entire PolyMath community (and Tim Gowers in particular) will deserve all of our appreciation and thanks.

Conclusion  The 21st century STEM community is actively reinventing itself by recapitulating its own historical evolution, with “the Polymathematical way of doing research”, as Tim Gowers calls it, in the 21st century role of the eighteenth century’s Republic of Letters.

133. Rahul Says:

Scott:

http://phys.org/news/2013-10-quantum-reality-complex-previously-thought.html

134. Kuru Says:

@ Rahul

http://physics.illinois.edu/people/kwiat/interaction-free-measurements.asp

135. John Sidles Says:

Rahul and Kuru, you might both be interested in Michael Landry’s “Realizing Squeezing: An interview with Carlton Caves” (2013; Google finds it). Caves vividly describes how the quantum squeezing of “nothing” (at infinitesimal energy-cost) serves to macroscopically alter the dynamical interactions of kilowatt-to-megawatt optical wavefronts.

Quantum effects that are remarkable when observed with individual photons become (as it seems to me) truly *incredible* when these same effects are scaled to Avogadro’s Number of photons, and Carlton Caves is deserving of our appreciation and thanks for being among the first to foresee both the technical feasibility and the practical applications of this macroscopic quantum scaling.

136. John Sidles Says:

As a follow-on to #135, Advanced LIGO will circulate (about) 1 MW of photons, of wavelength 1064 nm, in interferometers arms of total length 8 km. A simple calculation finds that at any given instant about $1.4\times10^{20}$ = 0.24 millimoles of photons will be present in a squeezed circulating optical mode.

To the best of my knowledge, by any reasonable measure, these squeezed LIGO light-beams will constitute the most macroscopic non-classical dynamical state ever created, and a wonderful example of scalable engineering applications of fundamental quantum research.
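The “simple calculation” can be sketched explicitly; all inputs (1 MW circulating power, 1064 nm wavelength, 8 km total arm length) are taken from the figures above:

```python
# Count the photons in flight in a 1 MW beam stored over an 8 km path
# at 1064 nm wavelength.
h = 6.626e-34          # Planck constant, J*s
c = 2.998e8            # speed of light, m/s
N_A = 6.022e23         # Avogadro's number, 1/mol

wavelength = 1064e-9   # m
power = 1e6            # W
path_length = 8e3      # m

photon_energy = h * c / wavelength        # ~1.87e-19 J per photon
stored_energy = power * path_length / c   # energy in flight, ~27 J
n_photons = stored_energy / photon_energy # ~1.4e20 photons
millimoles = 1e3 * n_photons / N_A        # ~0.24 mmol

print(f"{n_photons:.2e} photons = {millimoles:.2f} millimoles")
```

This reproduces the quoted figures of roughly 1.4 × 10^20 photons, i.e. about 0.24 millimoles.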

137. Rahul Says:

@John Sidles:

138. Sandro Says:

@wolfgang, “You just use the word “Bayesian” – as in “my inevitable Bayesian conclusion”. What the heck is “Bayesian” about your conclusion?”

I’m weighing the probabilities of you being full of it against the probabilities of LW being full of it. Once again, not a formally rigourous argument as I’m not quantifying it precisely, but the evidence is so overwhelming that I don’t need to.

139. Dave Says:

Scott #102: Thanks for the recap on Paul’s talk!

The part I find interesting (if I am understanding it correctly) is that the undecidability comes from wanting arbitrary precision. This is somewhat reminiscent of the potential separation of BPP and P… Think they might be related?

140. wolfpack Says:

What would Einstein do if he found out God played Yahtzee?

141. Douglas Knight Says:

So, there’s this question about whether there is an interactive protocol for boson sampling, so that the sampler can convince you that it’s really sampling from the distribution it’s claiming to sample from. No one knows such a protocol, but it seems quite plausible that one exists.

Here’s a similar question. Chemistry (e.g., spectral lines) is supposed to be derivable from known physical theories, but we don’t know how to carry out the computation of the physical theory on classical computers. But is there an interactive protocol that we could use to verify that chemistry is what we think it is?

I’m sure that people have done lots of consistency checks on theories of chemistry, but the point of view of quantum computing and interactive proofs can probably suggest new consistency checks.

142. John Sidles Says:

Douglas Knight remarks: “Chemistry (eg, spectral lines) is supposed to be possible to be derived from known physical theories.”

As a technical point, spectral lines are associated to periodic orbits of Hamiltonian flows. The theory of such orbits is well-developed for quantum flows on (flat) Hilbert state-spaces; less well-developed for general Hamiltonian flows.

As large-scale quantum simulations migrate onto algebraic state-spaces of increasingly geometric sophistication, these once-abstract mathematical questions are gaining in physical and even engineering consequence. In particular, it can happen (and increasingly does happen) that chemical observations are in reasonable accord with computational simulations, and yet we don’t understand either of them.

Conclusion  Allez en avant, et la foi vous viendra (Set forth and faith will come to you).

143. Monkeyluv Says:

OK, Victoria Woollaston’s article was pretty bad (like many science articles in the Daily Mail). But Google’s own video hypes the technology as well, with its talk of the multiverse, etc. Both deserve scorn.