## Chicken Soup for the Complexity Soul

From the comments on my last post:

scott would you be so kind, if you have some spare time, to post a list of textbooks that you’ve read in math and cs?

Now this is the kind of blog topic I like: zero expenditure of emotional energy required; lends itself to snarky one-liners. So here’s a list of math and CS books that I’ve read and you should too. Pitifully incomplete, but enough to get you started.

- Computational Complexity, by Christos Papadimitriou

The Iliad of complexity. An epic poem to read and reread until you can quote it by section number, until the pages fall out of the spine. Christos is not a textbook writer but a bard — and the only problem with his tale is that it ends around the late 1980’s. Christos, if you’re reading this: we want an Odyssey!

- Gems of Theoretical Computer Science, by Uwe Schöning and Randall Pruim

The proofs are terse, but I love the division into short, digestible “gems.” Keep this one on your night table or by the toilet. (But not until you’ve mastered Papadimitriou and are ready to wield BP operators like a Fortnow.)

- Computational Complexity: A Modern Approach, by Sanjeev Arora and Boaz Barak

The newest entrant in the arena. Best of all is that the current draft is free and online. I love Sanjeev’s use of “lowerbound” as one word.

- Lecture notes by Luca Trevisan, Umesh Vazirani, John Preskill, …

I don’t know if these count as textbooks, but read them anyway.

- Quantum Computation and Quantum Information, by “Mike & Ike” (Michael Nielsen and Isaac Chuang)

The best quantum computing book. I open it to the section on fidelity and trace distance pretty much every time I write a paper. (I’ve heard the other sections are excellent too.)

- Randomized Algorithms, by Rajeev Motwani and Prabhakar Raghavan

Chernoff bounds, random walks, second eigenvalues, PCP’s … a 1-1/e fraction of what you need to know about randomness.

- Computers and Intractability: A Guide to the Theory of NP-Completeness, by Michael Garey and David Johnson

Changed my life when I read it at fifteen. Since then it’s sat on my shelf, but I’d take it down if I actually had to prove something NP-complete.

- An Introduction to Computational Learning Theory, by Michael Kearns and Umesh Vazirani

Since this book was co-authored by my advisor, I’ll refrain from saying anything about it except that it’s excellent. (And short.)

- Artificial Intelligence: A Modern Approach, by Stuart Russell and Peter Norvig

Almost (but not quite) made me go into AI. My favorite chapter is the last one, which carefully demolishes the arguments of John Searle and the other confuseniks.

- Complexity and Real Computation, by Lenore Blum, Felix Cucker, Michael Shub, and Steve Smale

Decidability of the Mandelbrot set? P versus NP over the complex numbers? I may be a Boolean chauvinist, but I knows an elegant theory when I sees one.

- The Art of Computer Programming, by Donald Knuth

Three-volume set looks mighty impressive on a shelf, but beware of MMIX and constant factors.

- Set Theory and the Continuum Hypothesis, by Paul Cohen

The book that gave me the illusion of understanding logic. Short and hard. Prerequisites: None.

- Topology: A First Course, by James Munkres

What every math book should be.

- The Book of Numbers, by John Conway and Richard Guy

Since this is a popular book, obviously I couldn’t have learned anything new from it, but it was nice to “refresh my memory” about octonions, Heegner numbers, and why e^{π√163} is within 0.00000000000075 of an integer.
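(If you want to see the near-integer for yourself, ordinary floats won’t cut it — the number is around 2.6×10^17, so a 10^-13 gap is invisible at double precision. Here’s a quick sanity check sketched with Python’s standard `decimal` module; the hardcoded digits of π are mine, not from the book, and you’d paste in more of them for higher precision:)

```python
from decimal import Decimal, getcontext

# Work to ~60 significant digits.
getcontext().prec = 60
PI = Decimal("3.14159265358979323846264338327950288419716939937510582097494")

x = PI * Decimal(163).sqrt()

# exp(x) by its Taylor series; every term is positive, so no cancellation.
term, total, n = Decimal(1), Decimal(1), 0
while term > Decimal("1e-25"):
    n += 1
    term *= x / n
    total += term

nearest = total.to_integral_value()
print(nearest)           # 262537412640768744, which is 640320^3 + 744
print(nearest - total)   # roughly 7.5e-13, matching the claim above
```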

- The Road to Reality: A Complete Guide to the Laws of the Universe, by Roger Penrose

Preface: “Even if you hated fractions in elementary school, have no fear! I’ve tried to make this book accessible to you as well.”

Chapter 2: “Consider a Lie algebra of sheaves over the holomorphic fibre bundle PZL(Z^{n},5)…” (Not really, but close.)

I struggled through Penrose’s masterpiece, but by the end, I felt like I’d come as close as I ever had (and possibly ever will) to understanding “post-1920’s” particle physics and the math underlying it. If you’re willing to invest the effort, you’ll find The Road to Reality so excellent that it “cancels out” Shadows of the Mind, like an electron annihilating its positronic companion.

Comment #1 April 13th, 2006 at 12:13 pm

Interesting that you are very interested in AI and Quantum Computers and Complexity Theory, but you don’t see any connection between ALL 3. Or do you? I do; I see a very strong nexus in Bayesian Networks (probabilistic graphs).

Comment #2 April 13th, 2006 at 12:27 pm

When I applied to grad school six years ago, my application was all about how I wanted to forge a connection between quantum complexity and AI. 🙂

But I quickly realized there wasn’t *that* much to say, beyond trivial applications of Grover’s algorithm, graphical models with amplitudes instead of probabilities, etc.

Having said that, I’m writing a paper right now about the learnability of quantum states, which uses sample complexity bounds from classical learning theory.

I’ve also been tossing around an open problem for years, of whether there exists an efficient quantum algorithm to learn small-depth neural networks. See here for more.

Comment #3 April 13th, 2006 at 1:06 pm

Why is the book on Bounded Arithmetic by Krajicek not here? See also the new book (draft) by Prof. Cook.

Comment #4 April 13th, 2006 at 1:18 pm

They’re not here because I haven’t read them (maybe I *should* have, but I don’t work on proof complexity). They might be excellent. In general, feel free to suggest your own favorite math/CS books.

Comment #5 April 13th, 2006 at 1:39 pm

“But I quickly realized there wasn’t that much to say, beyond trivial applications of Grover’s algorithm, graphical models with amplitudes instead of probabilities, etc.”

Trivial is what trivial does. “Graphical models with amplitudes instead of probabilities” is a fair definition of a quantum circuit (and therefore a quantum computer). That clause implies that Complexity Theory has only trivial things to say about quantum computing. I think your original instinct was correct. Luke, follow the force.

Comment #6 April 13th, 2006 at 2:59 pm

*“graphical models with amplitudes instead of probabilities” is a fair definition of a quantum circuit (and therefore a quantum computer).*

That’s wrong if the graphical model can have undirected edges, but let’s ignore that.

Suppose we accept that quantum complexity theorists have “really” been studying graphical models all along — i.e. that every result about a quantum circuit can be restated as a result about a quantum graphical model. Who cares? The question is not “Can we use this formalism?”, but rather “Does this formalism lead to nontrivial or surprising new results?” I’m not prejudging the answer to that question; I’m just saying it’s what I *want* an answer to.

In other words: show me the meat, and I’ll start paying attention.

Comment #7 April 13th, 2006 at 10:49 pm

Well, asking me to defend the importance of Bayesian Nets in quantum computation is like asking you to defend the importance of proving NP ≠ P in complexity theory. I could go on for hours. Here is just the beginning:

(1) Why graphs? Basically, in quantum computation, one multiplies a vast number of very large, sparse matrices. Graphs encode visually, and very efficiently, which terms of the sparse matrices vanish.

(2) Why Bayesian Nets? We want graphs that refer to probabilities and enforce causality. It’s also very convenient if you can use the SAME graph to discuss both the quantum and classical-probability versions of a problem, which is possible with B nets (e.g., look at a discussion of Bell’s inequality from the point of view of Bayesian nets), but not possible with the standard quantum circuit diagrams. (Don’t get me wrong. I am not advocating replacing the standard quantum circuit diagrams by B nets. I see both types of diagrams as complementary.) Why is it convenient to discuss both classical-probability and quantum from the same graph? I find that understanding both the classical-probability and quantum versions of a problem at the same time is very enlightening; in particular, it highlights those aspects of the problem that are exclusively quantum.

(3) How are Bayesian Nets connected to AI? I defer to the very rich literature on the subject.

(4) How are Bayesian Nets connected to Complexity Theory? Again, I defer to the rich literature on the use of directed acyclic graphs in programming algorithms, linear programming, combinatorial optimization, and complexity theory.

(5) Is there an example where B nets resulted in the discovery of something in Quantum Information Theory that wasn’t already known? “Does this formalism lead to nontrivial or surprising new results?” Here is one example. The measure of quantum entanglement called squashed entanglement was discovered by looking at B nets. The discoverer noticed that CMI (conditional mutual information) vanished for a particular B net in the classical-probability case, but not in the quantum case. He then proved that squashed entanglement (which is defined for any density matrix) reduces for pure-state density matrices to the Bennett et al. “entropy of entanglement”. He also showed that the non-negativity of squashed entanglement is guaranteed by the strong subadditivity entropy inequality. (By the way, the discoverer does not call it squashed entanglement, a name that he finds kind of dumb; he calls it CMI entanglement. The name “squashed entanglement” was given to the entanglement measure by others who used it much later, after the discoverer told them about it and his papers on it.)
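For what it’s worth, the “same graph, two semantics” point in (2) is easy to illustrate with a toy example (this sketch is my own illustration, not from the comment): take a two-node graph, run two steps of a walk on it, once with edges carrying probabilities and once with edges carrying amplitudes. Interference — the exclusively quantum aspect — shows up immediately:

```python
import numpy as np

# Classical semantics: edges carry probabilities (a stochastic matrix).
# Two steps of a fair coin-flip walk starting at node 0 stay uniform.
P = np.array([[0.5, 0.5],
              [0.5, 0.5]])
p = P @ P @ np.array([1.0, 0.0])
print(p)                  # [0.5 0.5]

# Quantum semantics: the SAME graph, but edges carry amplitudes
# (here, a Hadamard). After two steps the amplitudes for node 1
# cancel, and the walker returns to node 0 with certainty.
H = np.array([[1.0, 1.0],
              [1.0, -1.0]]) / np.sqrt(2)
psi = H @ H @ np.array([1.0, 0.0])
print(np.abs(psi) ** 2)   # approximately [1, 0]
```

No classical stochastic matrix on this graph can send a uniform-looking intermediate state back to a deterministic one; the cancellation is what the amplitude semantics adds.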

Comment #8 April 14th, 2006 at 12:00 am

I’m surprised that you did not list Sipser’s *Introduction to the Theory of Computation*. Was that on purpose, or did you never read it? This must be one of the clearest books on theoretical computer science. When teaching a course on its topics, you could practically say to the students: “Read this book. Let me know if you have any questions.”

Comment #9 April 14th, 2006 at 12:13 am

Wim: I’ve only skimmed Sipser’s book, but it did look extremely good — I especially liked the “proof sketch” preceding each proof. (This is the peril of listing the books that I myself learned from, rather than attempting an “objective” list.) Thanks for bringing it up.

Comment #10 April 14th, 2006 at 12:48 am

I guess Scott also often refers to some other books like *The Probabilistic Method* (Alon and Spencer), *Communication Complexity* (Kushilevitz and Nisan), and some tool books for information theory, matrix analysis, graph theory, combinatorics, cryptography…

Scott, this looks like a very good topic. Maybe you and your readers can break it into topics, and in each topic, list books and give reviews. Like your zoo, this may lead to a very useful resource for many researchers, from beginners to experts.

For complexity, Oded Goldreich has also posted his new textbook on the web. New stuff like undirected connectivity in L and reviews of the new proof of the PCP theorem. Sanjeev Arora’s may contain more topics, though the current version seems far from complete (in terms of typos, references, …).

Comment #11 April 14th, 2006 at 12:53 am

tao, mossel, elkies.

Comment #12 April 14th, 2006 at 1:03 am

I can’t comment about completeness or mistakes in Arora’s book, but I think that the style of the book is great! I hope that he finds time to finish this project.

The one exception is the end-notes on the quantum computation section. Those are not so good. But that’s forgivable in a more general text.

Comment #13 April 14th, 2006 at 1:31 am

*Scott, this looks like a very good topic. Maybe you and your readers can break it into topics, and in each topic, list books and give reviews.*

Shengyu: You should start a website for that purpose. On this blog, I want posts that are actually useful to anyone to be the exception, not the rule. 🙂

Comment #14 April 14th, 2006 at 2:59 am

Scott, does my earlier request for your dating tips demand more emotional energy expenditure than you like (or can)? 🙂

Comment #16 April 14th, 2006 at 11:24 am

Here are some books which I just love and consider myself much better off for studying.

Kreyszig: Functional Analysis.

This book is extremely thorough and very readable. It covers the main FA theorems and focuses on Spectral Theory of operators… Plus chapter 11 is “Unbounded Linear Operators in Quantum Mechanics”.

Horn and Johnson: Matrix Analysis

All the linear algebra an applied mathematician or computer scientist will need.

Hardy and Wright: An Introduction to the Theory of Numbers

Some of the best mathematical writing there is. I always keep this book around for when I can’t sleep at night or am bored. It covers the mathematics of the game of Nim!

Nocedal and Wright: Numerical Optimization

Covers everything you could need in basic optimization theory: linear programming, nonlinear programming, CG, etc. And it’s very readable.

Of course, there is my favorite paper by a fellow poster to Scott’s blog:

An Introduction to the Conjugate Gradient Method Without the Agonizing Pain

http://www.cs.cmu.edu/~quake-papers/painless-conjugate-gradient.pdf

There are more which I love from analysis, linear algebra, optimization, and complex variables; however, the books above were indispensable to my education. If you are looking for other recommendations, let me know and I will see if I can help.

Comment #17 April 15th, 2006 at 12:41 am

Marc:

If there’s a linear algebra book you love, it’s your duty to share it with the world.

Also, I find it hard to reconcile that statement with the statement that it wasn’t indispensable to your education. My best theory is that you loved the book, but learned linear algebra elsewhere.

Comment #18 April 15th, 2006 at 7:47 am

HAHAHA.

I: There are a lot of books I enjoy reading but did not actually learn the material from. An example of this is *Linear Algebra Done Right* by Axler.

Here spectral theory is taught as it should be, without that pesky determinant. Proofs are more straightforward and more theory-based than computation-based.

I also refer to Lax’s Linear Algebra for slick proofs of more advanced linear concepts such as matrix calculus, Duality, etc.

On a side note, has anyone else observed the interesting evolution in mathematical education of the determinant?

In high school, the determinant is easy: given a 2×2 or 3×3 matrix, we follow the formula.

In undergrad, it becomes slightly more confusing, with different algorithms for n×n matrices. Also, it is mentioned that it can be used to find volumes of parallelepipeds. Finally, in grad school it becomes the “unique skew-symmetric multilinear form,” which we can figure out gives us the above and corresponds to volume…
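The stages do all compute the same number, which is easy to check. Here’s a toy sketch (my own illustration) of the first two stages — the high-school 2×2 formula and the undergrad cofactor expansion — compared against a library routine:

```python
import numpy as np

# High school: the 2x2 formula, ad - bc.
def det2(m):
    return m[0][0] * m[1][1] - m[0][1] * m[1][0]

# Undergrad: Laplace (cofactor) expansion along the first row,
# recursing on (n-1)x(n-1) minors. Exponential time, but correct.
def det_cofactor(m):
    n = len(m)
    if n == 1:
        return m[0][0]
    total = 0
    for j in range(n):
        minor = [row[:j] + row[j + 1:] for row in m[1:]]
        total += (-1) ** j * m[0][j] * det_cofactor(minor)
    return total

A = [[1, 2], [3, 4]]
print(det2(A), det_cofactor(A))        # -2 -2
B = [[2, 0, 1], [1, 3, 0], [0, 1, 4]]
print(det_cofactor(B), round(np.linalg.det(B)))  # 25 25
```

The grad-school characterization — the unique alternating multilinear form taking the identity to 1 — is what guarantees all of these agree.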

Comment #19 April 15th, 2006 at 7:48 am

A question about your algebra of books: Does *Mathematica* cancel out *A New Kind of Science*?

Comment #20 April 15th, 2006 at 12:48 pm

*I love Sanjeev’s use of “lowerbound” as one word.*

There was some amusement at MIT recently about this topic. I was writing a paper entitled “Higher Lower Bounds…”, and it took a non-complexity theorist to notice. After laughing about it for a while (and considering I hadn’t slept in 36 hours), the paper ended up with the title “Higher Lower Bounds for Near-Neighbors and Further Problems”.

–mip

Comment #21 April 15th, 2006 at 1:28 pm

*A question about your algebra of books: Does Mathematica cancel out A New Kind of Science?*

Now that I think about it, my algebra is non-commutative. *First* you sin, *then* you repent. It doesn’t work the other way around. The “virtuous” work might be what provides an *audience* for the “indulgent” one in the first place, but this is not the medieval Catholic Church: no buying of intellectual indulgences in advance.

Comment #22 April 15th, 2006 at 11:21 pm

marc:

thanks!

I’d been hearing the name of Axler’s book for years, but I hadn’t met anyone who’d read it. And I hadn’t heard of Lax’s book.

Comment #23 April 15th, 2006 at 11:38 pm

Am I the only one who just doesn’t like Papadimitriou’s book? Yes, for a long time it was the only book out there, but I have never found it useful for learning the material, nor for re-learning it.

As for randomized algorithms, the Motwani-Raghavan book is ok, but the new book by Mitzenmacher-Upfal is fabulous!

Comment #24 April 16th, 2006 at 9:05 am

So what kind of fiction do you read?

Comment #25 April 16th, 2006 at 3:49 pm

To echo marc, I took linear algebra from Axler using his book (*Linear Algebra Done Right*). It ruled.

Comment #26 April 16th, 2006 at 5:10 pm

*So what kind of fiction do you read?*

Most recently *Portnoy’s Complaint*. Over the past year I’ve also enjoyed several of Rebecca Goldstein’s academic novels.

But the sad truth is I’ve been reading very little fiction lately. When I do read for pleasure, I tend toward histories, biographies, and Dave Barry.

I went through a month-long phase back at Berkeley when I couldn’t do any research; all I could do was read one Shakespeare play after the next. Thankfully I recovered from that.

My all-time favorite is Mark Twain.

Comment #27 April 16th, 2006 at 6:34 pm

Shame on you, Scott. Reading math is not reading for pleasure?!

Comment #28 April 16th, 2006 at 8:05 pm

My favorite math book of all time is *Elements of Information Theory* by Cover and Thomas. It is so clearly written that even a non-information theorist could learn a lot from it in the way of mathematical exposition.

Another favorite is Gilbert Strang’s *Introduction to Applied Mathematics*. Great bedtime reading.

Comment #29 April 16th, 2006 at 10:19 pm

*Shame on you, Scott. Reading math is not reading for pleasure?!*

Shame on you, cheshire cat. It is?!

Comment #30 April 17th, 2006 at 8:32 pm

Scott, did you really get out of Berkeley without reading Hennessy & Patterson, or did you deliberately exclude non-theory books?

How ’bout *Feynman Lectures on Computation* (I had the privilege of taking the course that the book is based on), and the follow-on *Feynman and Computation*? Both are just pleasurable reads. Material from Mead, Hillis, Sussman, Bennett, Benioff…

Comment #31 April 18th, 2006 at 5:54 am

*Scott, did you really get out of Berkeley without reading Hennessy & Patterson*

Well, if I remember correctly: NO NOT THE PIPELINES NOT THE L1 CACHE IT’S ALL COMING BACK GO AWAY CONSTANT FACTORS DIE DIE DIE

(Sorry, PTSD)

Comment #32 April 19th, 2006 at 2:04 pm

PTSD:

Experiences likely to induce the condition include:

[…]

* Bad drug experiences (LSD, Magic Mushrooms)

Explains some of the stories on this blog 🙂