Great Ideas In Theoretical Computer Science Lecture 1
For those who’ve stopped following the Democritus series, there’s another train leaving the station today: the first lecture of my new undergraduate course (“Great Ideas In Theoretical Computer Science”) is now available.
After whetting the appetite by describing how theoretical computer science has finally let humankind achieve one of its noblest aspirations (playing roulette over the Internet), the lecture then goes back to antiquity, and covers the hottest results in the computing world circa 300BC: Euclid’s GCD algorithm and the “straightedge-and-compass” model of computation.
(One word of warning: the lecture notes were written by my teaching assistant, Yinmeng Zhang, and do not faithfully reflect what I said in class. They’re much better than what I said in class. She even had the gall to improve my jokes.)
To preempt the inevitable questions, two remarks about the course’s title:
- I considered calling it just “Great Ideas In Computer Science,” but then I realized it would have to be a week longer. (Har har, just kidding! Cue cymbals.)
- Any relation to the famous CMU course designed by Steven Rudich (“Great Theoretical Ideas In Computer Science”) is obviously a coincidence. The philosophies behind the two courses are every bit as different as their names.
Comment #1 February 13th, 2008 at 8:51 pm
The philosophies behind the two courses are every bit as different as their names.
i.e. Not very, just two of the words swapped?
Sorry, couldn’t resist. Hello, I’m Peter. IANACS, or anything like that, although I have a pure maths degree growing mouldy in a drawer (figuratively speaking).
Comment #2 February 13th, 2008 at 9:23 pm
Hi Scott, just discovered your blog. I’m an EECS senior at UC Berkeley and I love your stuff… Your first lecture slides were a sweet teaser too, please make sure to keep posting your slides!
Comment #3 February 13th, 2008 at 10:49 pm
Small, silly, technical typo: on page 6, you claim that (B mod A) is less than B/2. But this is not true, in general (consider the case when B is less than A).
Comment #4 February 13th, 2008 at 11:08 pm
Jon: Thanks! Fixed.
Comment #5 February 13th, 2008 at 11:25 pm
Small, silly, non-technical typo: In the second sentence of section 2, you have “Edsgar” instead of “Edsger” for Dijkstra’s first name.
Comment #6 February 13th, 2008 at 11:47 pm
Thanks! Fixed.
Comment #7 February 13th, 2008 at 11:51 pm
Off topic but related to a recurring subject on this blog: Harvard is apparently mandating that its faculty only publish in open-access journals. This sounds great, though how they’re going to enforce it, I have no idea.
http://chronicle.com/news/article/3943/harvard-faculty-adopts-open-access-requirement
Comment #8 February 14th, 2008 at 12:02 am
Small, silly, technical and non-technical typos united at last on page 4: “In principle you could construct a regular 65535-sided poygon,” should read “65537-sided polygon”.
Comment #9 February 14th, 2008 at 12:44 am
Without wanting to get into the absurd (though I will be 🙂), does data have time potential? A string sw, where s is the input to a problem P and w is the solution to P on s, has some time potential close to the lower time bound of P, because sw can reliably be used by a machine to speed up the solving of a given instance of another problem P’ that P reduces to.
Given a string of n bits what’s the most time potential that it can have? I think that’s an interesting question.
Perhaps one day in the future we won’t be mining for energy but for time. I know it’s a little absurd, but allow me to continue. Possibly soon after the Big Bang a naturally occurring process began that to this day has been factoring numbers all over the place and leaving them behind 🙂. Perhaps it’s not an efficient process, but it has had a lot of time to get on! Would material with lots of time be easily recognizable?
Is there conservation of time? Can I always put some amount of time t into forming some string sw such that it can be used to solve P faster than f(n) − t, where f(n) is the lower time bound for P? No, that would imply f(n) is not the lower bound for P. With the best efficiency, you can get out of a string sw only as much time as is necessary to build it from scratch without copying (I’m supposing).
What’s going on? What is the relationship between energy and [computational] time?
Comment #10 February 14th, 2008 at 12:54 am
Christopher: 65535 also works; see here.
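[Editorial aside, not part of the original exchange: why both numbers work. By the Gauss–Wantzel theorem, a regular n-gon is straightedge-and-compass constructible iff n is a power of 2 times a product of distinct Fermat primes; 65535 = 3 · 5 · 17 · 257, and 65537 is itself a Fermat prime. A quick sketch of the check in Python:]

```python
# Sketch: Gauss-Wantzel says a regular n-gon is constructible iff
# n = 2^k * (product of DISTINCT Fermat primes). Only five Fermat
# primes are known; we assume, as conjectured, that there are no others.
KNOWN_FERMAT_PRIMES = [3, 5, 17, 257, 65537]

def constructible(n):
    """Return True iff the regular n-gon is constructible (n >= 3)."""
    for p in KNOWN_FERMAT_PRIMES:
        if n % p == 0:
            n //= p
            if n % p == 0:      # a repeated Fermat prime factor kills it
                return False
    while n % 2 == 0:           # powers of 2 are free
        n //= 2
    return n == 1               # any leftover odd factor kills it

# 65535 = 3 * 5 * 17 * 257, and 65537 is a Fermat prime,
# so both polygons are constructible.
```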
Comment #11 February 14th, 2008 at 12:59 am
I’m going to harvest time out of nature and then sell it in zero-knowledge interchanges, so that the customers will have to keep coming back. 😀
Comment #12 February 14th, 2008 at 1:02 am
If a quine doesn’t have to compile, then I could just write
syntax error
and that would qualify.
Comment #13 February 14th, 2008 at 5:20 am
Scott says:
“I considered calling it just Great Ideas In Computer Science, but then I realized it would have to be a week longer.”
Not that I want to start a flame war, but is this course actually going to be about Great Ideas in Complexity Theory?
(All right, I see in the “straight-edge and compass” section that you also plan to cover some Computability Theory.)
Anyway, it is a very good thing that undergraduates get a chance to discover the well-kept secret that there are actually Great Ideas in computer science, like in other sciences.
Comment #14 February 14th, 2008 at 10:35 am
I liked it a lot, keep posting about it!
Comment #15 February 14th, 2008 at 10:48 am
Anonymous: It’ll have computability, complexity, logic, cryptography, even a finite automaton here or there … the main reason there won’t be more “Theory B” material is just my own lack of knowledge.
Comment #16 February 14th, 2008 at 10:59 am
Gauss’ grave does not have a 17-gon on it. I believe that the story is that Gauss asked for one, but the carver refused, saying it would look identical to a circle.
http://www.uni-sw.gwdg.de/~panders/Images/Goettingen/rimg0240.jpg
Comment #17 February 14th, 2008 at 11:02 am
Scott: a little-known fact is that the analysis of Euclid’s algorithm that you give is due to Pierre-Joseph-Etienne Finck, in an algebra textbook published in 1841. This is essentially the first analysis of an algorithm ever published. See my article in Historia Mathematica for more details.
Comment #18 February 14th, 2008 at 11:16 am
Sam: Thanks; posted a correction! I would not have guessed that Gauss’s tombstone has no 17-gon, but does have what looks like a star of David. 🙂
Comment #19 February 14th, 2008 at 11:18 am
Jeff: Thanks for the info!
Comment #20 February 14th, 2008 at 11:25 am
Could you also post the homework? Right now it is password protected.
Comment #21 February 14th, 2008 at 11:55 am
Sam: Unfortunately, the software for creating course websites is—how can I say this diplomatically?—in need of improvement. It doesn’t even give the option to make homeworks publicly accessible (I believe most of the students can’t even download it right now, though they have paper copies). After the students have turned the HW in, I’ll post it here if necessary.
Comment #22 February 14th, 2008 at 2:17 pm
Small, silly, technical typo: on page 6, you claim that (B mod A) is less than B/2. But this is not true, in general (consider the case when B is less than A).
[I’m posting this again because it looks like the less than sign is causing problems in my original comment, and because I just looked and this wasn’t fixed yet.]
Comment #23 February 14th, 2008 at 2:29 pm
Jon: No, B is the larger number by assumption. Sorry if that wasn’t clear.
(What I did the last time was to change 1/2B to B/2.)
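[Editorial sketch, not the notes’ exact pseudocode: with B ≥ A > 0, either A ≤ B/2, in which case B mod A < A ≤ B/2, or A > B/2, in which case B mod A = B − A < B/2. Either way the larger argument at least halves per iteration, which is the O(log B) bound under discussion:]

```python
def gcd(a, b):
    """Euclid's algorithm; assumes b >= a >= 0, as in the notes."""
    while a != 0:
        # b mod a < b/2 whenever b >= a > 0, so the larger argument
        # at least halves each time around the loop.
        a, b = b % a, a
    return b

print(gcd(105, 252))  # prints 21
```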
Comment #24 February 14th, 2008 at 2:40 pm
Scott says: “It’ll have computability, complexity, logic, cryptography, even a finite automaton here or there … the main reason there won’t be more “Theory B” material is just my own lack of knowledge.”
What about algorithms? I know you did Euclid’s GCD algorithm in the first class, but will you do other nice algorithms? Or are “algorithms” simply a subset of “complexity”?
Comment #25 February 14th, 2008 at 3:06 pm
CC: MIT CS majors already have two required algorithms courses, 6.006 and 6.046 (but no required computability & complexity course). Otherwise I’d say more about algorithms in my course.
Comment #26 February 14th, 2008 at 4:52 pm
I read at one point that the winning entry in a “shortest self-replicating C program” contest was, in fact, an empty file. Some C compilers would compile that to a program that did nothing and hence output another empty file.
Comment #27 February 14th, 2008 at 5:16 pm
great.
Comment #28 February 14th, 2008 at 5:26 pm
Matt: That’s correct; it was a winning IOCCC entry one year (wish I had the link). However, there’s some dispute about whether the empty string is a valid C program (and even if so, whether doing nothing is the same as printing the empty string…).
Comment #29 February 14th, 2008 at 6:22 pm
Scott says: “That’s correct; it was a winning IOCCC entry one year (wish I had the link)”
Here it is!
Description:
http://www.ioccc.org/1994/smr.hint
Source 🙂
http://www.ioccc.org/1994/smr.c
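[Editorial aside, for contrast with the empty-file trick: a classic non-empty quine, in Python rather than C (one of many standard variants, not from the notes or the contest):]

```python
# A self-printing program: s is a template containing a placeholder
# for its own repr; printing s % s reproduces the full two-line source.
s = 's = %r\nprint(s %% s)'
print(s % s)
```

Run it and it prints exactly its own source, which is what stricter contest rules demand of a quine and what the empty file arguably dodges.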
Comment #30 February 14th, 2008 at 11:16 pm
Scott, looks like a great course. I enjoyed reading the first lecture and am looking forward to the rest.
Two small corrections:
— first line of section 3: ‘can you seen’
— it is indeed not clear (not written anywhere) that you assume B >= A. I think you should write it in the algorithm description box. And, in that case, you don’t need the second check ‘if B=0 return A’.
Comment #31 February 14th, 2008 at 11:39 pm
TGM: Thanks! Fixed.
Comment #32 February 14th, 2008 at 11:39 pm
what? no video? hrmmph.
Comment #33 February 14th, 2008 at 11:50 pm
We’re making audio recordings, but only to help with the scribe notes. Trust me: like Al Gore, I really, really sound better in writing.
Comment #34 February 15th, 2008 at 1:53 am
Regarding homework: I was able to download it from the site, by clicking on ‘Anonymous access’.
Comment #35 February 15th, 2008 at 3:11 am
Yeah, I can access the homework with the visitor login / anonymous access button also.
Thanks for all the knowledge, Scott!
Comment #36 February 15th, 2008 at 11:48 am
Your first homework exercise reminds me of this cute puzzle by my friend Aaron Dynkin. The title reads “All for one and one for all”, can you translate the other lines? Solution here.
Comment #37 February 15th, 2008 at 2:40 pm
I too can download the HW.
Sorry, I was so outraged by the elitism that I didn’t see the egalitarianism.
Comment #38 February 16th, 2008 at 12:25 am
I take issue with your description of computability theory/recursion theory.
From the notes: “Note when the rules are applied arbitrarily many times, we actually understand pretty well by now what is and isn’t possible: that’s a subject called computability theory”
This is an extremely deficient description of the subject. The study of what is and isn’t recursive, and (more generally) what is and isn’t recursively approximable (Delta_2), is only a small part of computability theory/recursion theory. Delta_2 sets are only the beginning of a hierarchy of sets (the arithmetical and hyperarithmetical sets) which are extensively studied by recursion theorists. And of course, recursion theorists study many other things, for instance, randomness.
A better description might be “the study of definability in mathematics”, though this is a bit too general (for instance, it includes descriptive set theory, while recursion theorists (usually) only study effective descriptive set theory).
I think part of the problem is that the name computability theory is misleading. The subject was historically called recursion theory; sets were recursive, not computable, etc. This changed around 1996, when Soare published his paper “Computability and Recursion”, suggesting a terminology change from recursive to computable. Many in the field have now adopted this change in terminology. Some even engage in the bizarre practice of changing the word recursive to computable when citing old papers.

Soare’s rationale for the change in terminology is twofold: first, the founders of the subject preferred the term computable to recursive (who cares, after 60 years of using recursive terminology?). And secondly, people unfamiliar with the field don’t understand recursive to mean computable (how much of a problem is this?). To me, the name computability theory suggests only the study of Delta_2 sets (as you’ve written above). It also seems to suggest that complexity theory is a part of computability theory (as Soare has suggested it should be). Frankly, I don’t know of much relation between the two subjects.
Comment #39 February 16th, 2008 at 2:25 am
That was great. Thanks!
Comment #40 February 20th, 2008 at 9:26 am
“Then if the casino announces red, the player could send the numbers AB and C…”
Should that not be A and BC? (and AB and C for black)
Comment #41 February 23rd, 2008 at 5:29 am
Sounds like that lecture would make a good podcast. (I never listen to the radio any more since I found my blackberry could hold podcasts.)
Comment #42 February 27th, 2008 at 7:12 am
When will lecture 2 post?