Great Ideas In Theoretical Computer Science Lecture 1

For those who’ve stopped following the Democritus series, there’s another train leaving the station today: the first lecture of my new undergraduate course (“Great Ideas In Theoretical Computer Science”) is now available.

After whetting your appetite by describing how theoretical computer science has finally let humankind achieve one of its noblest aspirations (playing roulette over the Internet), the lecture goes back to antiquity, covering the hottest results in the computing world circa 300BC: Euclid’s GCD algorithm and the “straightedge-and-compass” model of computation.
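(For readers who want the algorithm itself: here is a standard few-line rendering of Euclid’s GCD — variable names and the sample input are mine, not taken from the lecture notes.)

```python
def gcd(a, b):
    """Euclid's algorithm: greatest common divisor of nonnegative a, b."""
    while b != 0:
        a, b = b, a % b  # replace (a, b) by (b, a mod b); the remainder shrinks
    return a

print(gcd(1071, 462))  # prints 21
```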

(One word of warning: the lecture notes were written by my teaching assistant, Yinmeng Zhang, and do not faithfully reflect what I said in class. They’re much better than what I said in class. She even had the gall to improve my jokes.)

To preempt the inevitable questions, two remarks about the course’s title:

  1. I considered calling it just “Great Ideas In Computer Science,” but then I realized it would have to be a week longer. (Har har, just kidding! Cue cymbals.)
  2. Any relation to the famous CMU course designed by Steven Rudich (“Great Theoretical Ideas In Computer Science”) is obviously a coincidence. The philosophies behind the two courses are every bit as different as their names.

42 Responses to “Great Ideas In Theoretical Computer Science Lecture 1”

  1. Peter Hollo Says:

    The philosophies behind the two courses are every bit as different as their names.

    i.e. Not very, just two of the words swapped?
    Sorry, couldn’t resist. Hello, I’m Peter. IANACS, or anything like that, although I have a pure maths degree growing mouldy in a drawer (figuratively speaking).

  2. Berkeley Steve Says:

    Hi Scott, just discovered your blog. I’m an EECS senior at UC Berkeley and I love your stuff… Your first lecture slides were a sweet teaser too; please make sure to keep posting your slides!

  3. Jon Says:

    Small, silly, technical typo: on page 6, you claim that (B mod A) is less than B/2. But this is not true, in general (consider the case when B is less than A).

  4. Scott Says:

    Jon: Thanks! Fixed.

  5. Nitish Says:

    Small, silly, non-technical typo: In the second sentence of section 2, you have “Edsgar” instead of “Edsger” for Dijkstra’s first name.

  6. Scott Says:

    Thanks! Fixed.

  7. asdf Says:

    Off topic but related to a recurring subject on this blog: Harvard is apparently mandating that its faculty only publish in open-access journals. This sounds great, though how they’re going to enforce it, I have no idea.

  8. ChristopherH Says:

    Small, silly, technical and non-technical typos united at last on page 4: “In principle you could construct a regular 65535-sided poygon,” should read “65537-sided polygon”.

  9. Job Says:

    Without wanting to get into the absurd (though I will be :) ), does data have time potential? A string sw, where s is the input to a problem P and w is the solution to P on s, has some time potential close to the lower time bound of P, because sw can be reliably used by a machine to speed up the solving of a given instance of another problem P’ that P reduces to.

    Given a string of n bits what’s the most time potential that it can have? I think that’s an interesting question.

    Perhaps one day in the future we won’t be mining for energy but for time. I know it’s a little absurd, but allow me to continue. Possibly soon after the Big Bang a naturally occurring process began that to this day has been factoring numbers all over the place and leaving them behind :) . Perhaps it’s not an efficient process, but it has had a lot of time to get on! Would material with lots of time be easily recognizable?

    Is there preservation of time? Can I always put some amount of time t into forming some string sw such that it can be used to solve P faster than f(n) – t, where f(n) is the lower time bound for P? No, that would imply f(n) is not the lower bound for P. With the best efficiency, you can get out of a string sw only as much time as is necessary to build it from scratch without copying (I’m supposing).

    What’s going on? What is the relationship between energy and [computational] time?

  10. Scott Says:

    Christopher: 65535 also works; see here.

  11. Job Says:

    I’m going to harvest time out of nature and then sell it in zero-knowledge interchanges, so that the customers will have to keep coming back. :D

  12. ian Says:

    if a quine doesn’t have to compile then I could just write

    syntax error

    and that would qualify.

  13. Anonymous Commenter Says:

    Scott says:

    “I considered calling it just Great Ideas In Computer Science, but then I realized it would have to be a week longer.”

    Not that I want to start a flame war, but is this course actually going to be about Great Ideas in Complexity Theory?
    (All right, I see in the “straight-edge and compass” section that you also plan to cover some Computability Theory.)
    Anyway, it is a very good thing that undergraduates get a chance to discover the well-kept secret that there are actually Great Ideas in computer science, like in other sciences.

  14. Osias Says:

    I liked it a lot, keep posting about it!

  15. Scott Says:

    Anonymous: It’ll have computability, complexity, logic, cryptography, even a finite automaton here or there … the main reason there won’t be more “Theory B” material is just my own lack of knowledge.

  16. sam_nead Says:

    Gauss’s grave does not have a 17-gon on it. I believe that the story is that Gauss asked for one, but the carver refused, saying it would look identical to a circle.

  17. Jeffrey Shallit Says:

    Scott: a little-known fact is that the analysis of Euclid’s algorithm that you give is due to Pierre-Joseph-Etienne Finck, in an algebra textbook published in 1841. This is essentially the first analysis of an algorithm ever published. See my article in Historia Mathematica for more details.

  18. Scott Says:

    Sam: Thanks; posted a correction! I would not have guessed that Gauss’s tombstone has no 17-gon, but does have what looks like a star of David. :-)

  19. Scott Says:

    Jeff: Thanks for the info!

  20. sam_nead Says:

    Could you also post the homework? Right now it is password protected.

  21. Scott Says:

    Sam: Unfortunately, the software for creating course websites is—how can I say this diplomatically?—in need of improvement. It doesn’t even give the option to make homeworks publicly accessible (I believe most of the students can’t even download it right now, though they have paper copies). After the students have turned the HW in, I’ll post it here if necessary.

  22. Jon Says:

    Small, silly, technical typo: on page 6, you claim that (B mod A) is less than B/2. But this is not true, in general (consider the case when B is less than A).

    [I'm posting this again because it looks like the less than sign is causing problems in my original comment, and because I just looked and this wasn't fixed yet.]

  23. Scott Says:

    Jon: No, B is the larger number by assumption. Sorry if that wasn’t clear.

    (What I did the last time was to change 1/2B to B/2.)
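    A brute-force check of the exchange above (my sketch, not part of the thread; the ranges are arbitrary): with B ≥ A the claimed bound holds, and Jon’s worry is real when the assumption is dropped.

```python
# Claim from the notes: if B >= A >= 1, then (B mod A) < B/2.
# Why: if A <= B/2, the remainder is < A <= B/2;
# if A > B/2, the remainder is B - A < B/2.
for B in range(1, 301):
    for A in range(1, B + 1):
        assert B % A < B / 2

# Without the assumption B >= A, the bound fails:
assert not (3 % 5 < 3 / 2)  # 3 mod 5 = 3, which is not < 1.5
```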

  24. CC Says:

    Scott says: “It’ll have computability, complexity, logic, cryptography, even a finite automaton here or there … the main reason there won’t be more “Theory B” material is just my own lack of knowledge.”

    What about algorithms? I know you did Euclid’s GCD algorithm in the first class, but will you do other nice algorithms? Or is “algorithms” simply a subset of “complexity”?

  25. Scott Says:

    CC: MIT CS majors already have two required algorithms courses, 6.006 and 6.046 (but no required computability & complexity course). Otherwise I’d say more about algorithms in my course.

  26. matt Says:

    I read at one point that the winning entry in a “shortest self-replicating C program” contest was, in fact, an empty file. Some C compilers would compile that to a program that did nothing and hence output another empty file.

  27. Michael Says:


  28. Scott Says:

    Matt: That’s correct; it was a winning IOCCC entry one year (wish I had the link). However, there’s some dispute about whether the empty string is a valid C program (and even if so, whether doing nothing is the same as printing the empty string…).
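    For contrast with the empty file, a non-empty quine need not be long either; the classic short Python quine (my illustration, not from the thread) fits in two lines:

```python
s = 's = %r\nprint(s %% s)'
print(s % s)
```

    Running it prints exactly those two lines: %r inserts the string’s own repr, and %% collapses to a single %.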

  29. Mariano M. Chouza Says:

    Scott says: “That’s correct; it was a winning IOCCC entry one year (wish I had the link)”

    Here it is!


    Source :-)

  30. TGM Says:

    Scott, looks like a great course; I enjoyed reading the first lecture and am looking forward to the rest.
    Two small corrections:
    – first line of section 3: ‘can you seen’
    – it is indeed not clear (not written anywhere) that you assume B >= A. I think you should write it in the algorithm description box. And, in that case, you don’t need the second check ‘if B=0 return A’.

  31. Scott Says:

    TGM: Thanks! Fixed.

  32. Ben Says:

    what? no video? hrmmph.

  33. Scott Says:

    We’re making audio recordings, but only to help with the scribe notes. Trust me: like Al Gore, I really, really sound better in writing.

  34. TGM Says:

    Regarding homework: I was able to download it from the site, by clicking on ‘Anonymous access’.

  35. cody Says:

    yeah, i can access the homework with the visitor login/anonymous access button also.

    thanks for all the knowledge Scott!

  36. David Speyer Says:

    Your first homework exercise reminds me of this cute puzzle by my friend Aaron Dynkin. The title reads “All for one and one for all”, can you translate the other lines? Solution here.

  37. sam_nead Says:

    I too can download the HW.

    Sorry, I was so outraged by the elitism that I didn’t see the egalitarianism.

  38. Andrew Says:

    I take issue with your description of computability theory/recursion theory.

    From the notes: “Note when the rules are applied arbitrarily many times, we actually understand pretty well by now what is and isn’t possible: that’s a subject called computability theory”

    This is an extremely deficient description of the subject. The study of what is and isn’t recursive, and (more generally) what is and isn’t recursively approximable (Delta_2), is only a small part of computability theory/recursion theory. Delta_2 sets are only the beginning of a hierarchy of sets (the arithmetical and hyperarithmetical sets) which are extensively studied by recursion theorists. And of course, recursion theorists study many other things, for instance, randomness.

    A better description might be “the study of definability in mathematics”, though this is a bit too general (for instance, it includes descriptive set theory, while recursion theorists (usually) only study effective descriptive set theory).

    I think part of the problem is that the name computability theory is misleading. The subject was historically called recursion theory, sets were recursive, not computable, etc. This changed around 1996, when Soare published his paper “Computability and Recursion”, suggesting a terminology change from recursive to computable. Many in the field have now adopted this change in terminology. Some even engage in the bizarre practice of changing the word recursive to computable when citing old papers. Soare’s rationale for the change in terminology is twofold: first, the founders of the subject preferred the term computable to recursive (who cares after 60 years of using recursive terminology). And secondly, people unfamiliar with the field don’t understand recursive to mean computable (how much of a problem is this?). To me, the name computability theory suggests only the study of Delta_2 sets (as you’ve written above). It also seems to suggest that complexity theory is a part of computability theory (as Soare has suggested it should be). Frankly, I don’t know of much relation between the two subjects.

  39. SS Says:

    That was great. Thanks!

  40. Jakob Says:

    “Then if the casino announces red, the player could send the numbers AB and C…”
    Should that not be A and BC ? (and AB and C for black)

  41. Jon Tyson Says:

    Sounds like that lecture would make a good podcast. (I never listen to the radio any more since I found my blackberry could hold podcasts.)

  42. Anonymous Says:

    When will lecture 2 post?