A hacker’s hacker

Before someone else is “absolutely stunned” by my silence, let me hereby note with great sadness that Dennis Ritchie, principal designer of C, co-designer of Unix, Turing Award winner, and one of the world’s legendary computer scientists, has passed away at 70.  (See here for the NYT obituary, here for a more detailed ZDNet obituary, and here for Lance Fortnow’s encomium.)  I didn’t know Ritchie, but my father did, as a science and technology writer in the 70s and 80s.  And I often saw Ritchie in the halls when I spent several summers working at Bell Labs in college.  Mostly, though, I know Ritchie through the beautiful language he created.  It’s a testament to C’s elegance and simplicity that I, though extremely far from a hacker, find it almost as easy to express my thoughts in C as I do in my mother tongue, MS-DOS QBASIC.

Update (Oct. 26): AI pioneer and LISP inventor John McCarthy has passed away as well.  It’s been a tough month for computing revolutionaries.

42 Responses to “A hacker’s hacker”

  1. Christopher Nubbles Says:

    Dennis Ritchie was a great man. My grief at his death is tempered only by the immortality of his achievements and the fact that, having lived three score and ten, he enjoyed all the life a man can reasonably expect to. I plan to spend a few hours programming in C today to speed his passage to the great beyond. I hope others join me!

  2. A.anon Says:

    Ritchie was a great man, and C is a great language. No doubts about that.

    But Scott, it’s also high time you learnt Python. I bet you’ll be willing to trade in your mother tongue for your third language! 🙂

  3. Scott Says:

    A.anon: On the to-do list! 🙂

  4. JKU Says:

    Dennis Ritchie contributed enormously to the field of computer science. However, I have to disagree with the adjective “great” applied to C. It’s an important language, but hardly great. It’s more like the plane the Wright brothers flew at Kitty Hawk: it paved the way for a lot of progress but was at best a crude first design.

  5. Raoul Ohio Says:

    I am not buying the JKU analysis. There were a couple of decades of languages before C. Partly by chance, mostly by elegance, C was and is by far the most influential language ever. Nearly 40 years on, most programming is done in “the C family” {C, C++, Java, C#, D, …}.

  6. Micki St. James Says:

    Christopher, let us raise our reasonable expectations. My own father has lived four score and five already and is in excellent health. Always set the bar high! Rage, rage against the dying of the light.

  7. in love with dennis Says:

    Taken from
    http://genealogy.math.ndsu.nodak.edu/id.php?id=155881 :

    Dennis M. Ritchie
    Ph.D. Harvard University 1968
    Dissertation: Program Structure and Computational Complexity

    C is an immensely beautiful language. I think people enjoy criticizing it due to current programming practices; however, C was designed with a clear, consistent set of goals, and executed them beautifully. In fact, I know of hardly any languages which are as pure and clean, and certainly none with wide practical adoption. On the other hand, while modern languages have additional conveniences, I typically find them to have only been designed for some small number of uses and paradigms; other than that, they appear a cesspool of kitchen sinks and passive-aggressive attempts to model things outside their initial comfort zone.

    I wonder what it was like to be the logician that picked the lingua franca for low-level computer design for 4 decades? It must have been amazing. I think dennis ritchie was a deep, deep thinker.

  8. Raoul Ohio Says:

    Additional R.O. comments on the C language.

    1. I suggest thinking of C and C++ as two sides of one language. C is “almost” a subset of C++. When you need OOP, you obviously use C++. For some tasks, C is better than C++. For example, for traditional parallel programming tasks, C is simpler. Also, for scientific/engineering output, printf is a lot easier than the C++ insertion operator.

    2. Depending on what you are doing, other members of the C family (Java, C#, D, …) are appropriate. Each has strengths. C and C++ are actually gaining mind share lately because they are always faster than a language with garbage collection. People used to say “who cares, Moore’s law to the rescue”, and “GC is faster for the programmer”. But, how much power is on your cell phone, iTaco, or whatever gizmo is “all the rage” next? The best solution is available in MS VC++ and C#, where you only use GC when you need it, with gcnew. I think something like this is also included in C++11.

    3. One place where Java is better than C++ is in two of the OOP big three: inheritance and polymorphism. In C++, these are both implemented by inheritance. In Java, inheritance is done with inheritance, and polymorphism is done with interfaces. This is much better. Interfaces are a good idea, elegantly used in Java.

    4. In addition to heavy promotion, Java took off because it has a huge library of stuff to use. MS came along with a much lamer library called .net. But, in typical MS fashion, .net got stronger every year while Sun sat on its hands, and .net is now much better than the Java library. Now that Oracle owns Java, the race is back on. Because Oracle has a reputation for screwing everyone, the Java community was in a tizzy about the sale, but so far (knock on wood), things are going great. Larry Ellison might turn out to not be so bad after all, but he sure enjoys playing the role.

    5. There is a blunder in the design of C. This does NOT include the design of the switch statement, which is exactly as it should be. Anyone too dumb to include a break statement should not be programming.

    5.1. Using integers for booleans is a major bummer. Back in the day of macho C programmers writing so that no one could figure out what the code did, this was cool. But it introduces all sorts of hard to spot errors.

    5.2. In retrospect, C is so central that it might have been able to correct the historic programming language blunder of botching the definition of the mod operator for negative integers. Too late now. Decades ago, in a “Turbo Pascal” newsletter, I read an article on why this fiasco started. As I recall, it actually made sense in assembly programming.

    For those not hip to the issue, if you write

    n % m,

    where n is negative and m is positive, what you actually get is:

    -( (-n) % m ).

    I just tried this out for the 1000’th time, because it is so hard to believe that I keep thinking I might have dreamed it.
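
    For anyone who wants to check it themselves, here is a minimal test program (C99 and later actually pin down this truncating behavior; in C89 the sign of the result for negative operands was implementation-defined):

    #include <stdio.h>

    int main(void)
    {
        int n = -7, m = 3;
        /* Division truncates toward zero, so n % m takes the sign of n:
           (-7) % 3 == -1, i.e. -((-n) % m), rather than the
           mathematician's mod, which would give 2. */
        printf("%d %% %d = %d\n", n, m, n % m);
        return 0;
    }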

  9. asdf Says:

    Raoul,

    1) C++ implements polymorphism by templates and its polymorphism is much better than Java’s because of that.

    2) C’s mod operator doesn’t misbehave because C doesn’t have a mod operator. C’s % is a remainder operator (called “trunc-mod” in lisp), not mod, and its behavior is the traditional one that assembly language programmers expect from most machines. Python’s mod operator (“floor-mod”) is better for many purposes, I’ll admit, and it would be nice to have both like Lisp does.
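
    If the floor-mod behavior is what you want, it is easy enough to roll your own; a small sketch (the name floormod is just illustrative, and it assumes m > 0 and C99's truncating %):

    /* Floor-mod for positive m: the result is always in [0, m). */
    int floormod(int n, int m)
    {
        int r = n % m;              /* trunc-mod: takes the sign of n */
        return (r < 0) ? r + m : r;
    }

    With that, floormod(-7, 3) gives 2 where (-7) % 3 gives -1.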

  10. Ritchie: RIP « Entertaining Research Says:

    […] thanks to a friend (who is a colleague now) who insisted that I learn C from only that book. From Scott, I learn that Ritchie passed away. C being my (computer) mother tongue, I agree wholeheartedly with Scott’s assessment: know […]

  11. Greg Kuperberg Says:

    C is like a very old jacket. When it was new, it was a beautiful jacket and it was supremely useful. Now it is an old and worn jacket, and no longer all that beautiful, but you’re still comfortable with it and it’s still sometimes useful. At one point it was redone by a different tailor (Bjarne Stroustrup), which was not as beautiful as the original but a serviceable improvement. It would be a mistake to retire the jacket even though you have had it for a very long time.

  12. Yatima Says:

    “C is like a very old jacket.”

    This.

    Anyone who waxes lyrical in 2011 about C (and its mutant child C++) is whimsical or is developing car ECUs.

    It has its nostalgia value though.

    To Ritchie!!!

    [And let’s not forget even older crud: http://en.wikipedia.org/wiki/B_%28programming_language%29 ]

  13. SB Says:

    Scott, you are so awesome! Thanks for writing this blog. 🙂

  14. Raoul Ohio Says:

    asdf,

    Thanks for the clarification on % vs. mod. One might be tempted to argue that of the very few functions that get elevated to “operator status”, mod makes a lot more sense than remainder. But, you can’t turn back the clock.

    On the other issue, I guess polymorphism (in OOP languages) is a polymorphic concept. I was discussing the “extends (by inheritance)” vs. “implements (by interfaces)” issue. Although a big C++ fan, I acknowledge that Java gets that one right.

    The other issue is “templates and generics”. I think C++ templates are much more powerful than anything else I am aware of. In particular, C++’s “Template Metaprogramming” appears to be an entirely new programming dimension, although I admit that trying to browse through Andrei Alexandrescu’s book made me dizzier than contemplating the Kolmogorov complexity of the Ackermann function.

    Java generics are superficially similar to C++ templates, but totally different under the covers. They might evolve into something that is easier to use than templates. A bummer is the fact that Java generics depend on all classes descending from the superclass Object, so they don’t work for primitive data types, which are used a lot in scientific and engineering work. I would suggest that this case be handled by automatically treating Java generics like C++ templates when used with primitive data types. I think the compiler writers could make that happen without much work.

  15. Lou Scheffer Says:

    C is not an ‘immensely beautiful programming language’, it’s a piece of **** that set the programming world back by about 20 years.

    Indeed, it is spare and elegant, and allows portable code that is close to the machine. If its role was as a somewhat human-readable intermediate language, it would have been great.

    However, it was meant to be used by humans, and here it fails miserably. “C” ignores all sorts of human interface issues that were already well known when it was written. In any real field of engineering, such carelessness would result in lawsuits and professional disbarment, not accolades. Imagine you built a house, with plugs all the same, but some are 12V DC, some 120 VAC, some 240 VAC, and anything plugged into the wrong plug starts a fire. Imagine a chemical plant with identical taps for drinking water, sulphuric acid, and poison gas. Imagine a car where the accelerator and brake pedal feel exactly the same to your foot, so you might stomp on one thinking it’s the other, then wonder why the car accelerates wildly. ‘C’ contains all these accidents waiting to happen, and many more.

    The really galling thing is that Ritchie did not leave these checks out as a rational decision in the interest of efficiency. ‘C’ contains almost everything needed to prevent these disasters, and could easily have added the rest (as C++ did) *without any loss of efficiency or expressive power*. If you really want to take the numerical value of a character and use it as an address, there’s a cast that will do that explicitly. In any rational world, such unlikely operations would need to be called out, and not accepted as is.

    However, these common sense (in any other field) measures were not used even though the problems were well known at the time. A very fundamental part of engineering is understanding the errors a user might make, and reducing the impact of these errors to the extent practical. ‘C’ ignores this completely, and is a case study in professional negligence.

  16. Raoul Ohio Says:

    I disagree with Lou Scheffer about the idea that C could be much safer *without any loss of efficiency or expressive power*.

    The major unsafe aspect of C (particularly for malware) is that “out of bounds” elements in arrays can be accessed. It is easy to see that a safe version (like in Java) of a basic operation on an array of a primitive data type x, such as:

    double a = x[i];

    would take three times as long: check for i too small, check for i too large, copy the value in x[i]. Also, the safe version requires deciding what to do in the event of a bad index, never an easy decision.
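
    To make the trade-off concrete, here is a minimal sketch of what such a checked read might look like (the name checked_get and the abort-on-error policy are just illustrative choices, not anything standard):

    #include <stdio.h>
    #include <stdlib.h>

    /* Bounds-checked read: two comparisons plus the copy, and a policy
       decision (here: print and abort) when the index is bad. */
    double checked_get(const double *x, long len, long i)
    {
        if (i < 0 || i >= len) {
            fprintf(stderr, "index %ld out of range [0, %ld)\n", i, len);
            abort();
        }
        return x[i];
    }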

    In 1973, many or most programs were scientific or engineering applications, with mostly FP array operations. Furthermore, computers were perhaps 5000 times slower than today. Thus a programming language that was three times slower than C would not be a big seller. That calculation is different today.

    Another aspect is that C was written for co-workers at Bell Labs, back in a day when programming languages cost a lot of money. Ritchie should not be held responsible for misadventures by anyone who got a free copy. The other big language of that day was Pascal, which was particularly safe and beginner friendly. How is it doing today?

  17. Vadim Says:

    Lou,

    Your long post is short on examples. Other than a little more implicit casting than you’d prefer, which features of C are analogous to some of the engineering disasters waiting to happen that you mentioned?

    My understanding is similar to Raoul’s, that the design decisions made were either in the name of performance (i.e., lack of bounds checking), or ease of compiler design (i.e., all local variables have to be declared at the beginning of a code block; modern C compilers don’t require this). I can’t really think of anything in C that makes me scratch my head and say, “There’s really no reason for this.”

    Now, I’m young enough that the C I learned was ANSI C; I’ve only seen bits and pieces of source code written in K&R C. Maybe it had examples of bad design that I’m not aware of, but I’d say that ANSI C is a solid language, and I’m not surprised that it was widely adopted over other languages available at the time.

  18. Lou Scheffer Says:

    Bounds checking is indeed a rational decision that can be defended either way. Also, ANSI C, and C++, fixed many of the problems precisely because they were almost never what the user intended and caused horrible bugs.

    But here (from memory) are a few of the real head-scratchers:

    int b = 12;
    if (b = 0)
    printf('B was 0\n'),
    printf('B is %d\n', b);

    Before going on, try to make this programming 101 code work, copying and pasting from the above example. Use a classic C compiler if you have one; otherwise use a modern one and redirect the warnings into a file to get a feel for using K&R C. (In csh, this is 'cc test.c >& cc.log'.)

    This illustrates at least three problems. Humans, at least the ones I know, tend to think of variables as booleans (true or false), integers, or addresses. C converts between these with complete abandon, which is almost never what you want. (And in those cases, a cast would make it clear to both the human and the compiler, at no runtime cost.)

    First, 'if (b = 0)' almost surely is not what you mean. It sets b to 0 and never executes the if. If you really wanted to do an assignment and test in the same statement, a reasonable alternative would be if ((bool)(b = 0)), but that would be too obvious.

    Now, why was the printf('B is %d') not executed? It's because you can combine statements with a ',', so it's really inside the if, though it does not look like it. Since C already has blocks, '{;}', why allow comma-separated statements?

    If you fix these, the program will seg fault. 'Hello, world' defines an int (not a string) and sets the bytes to 'H', 'e', 'l', 'l', ignoring the rest. Why C lets you define a character constant that is longer than the containing type is not clear; I can't see *any* use for this.

    After making this useless int, the code passes it to printf(), which treats it like a string and blows up (if you are lucky; it's not well defined exactly what it will do...). That's because C just plows on even when it knows what types are expected and what is passed. In particular, even if the functions are defined in the same file, and C knows *exactly* how many arguments of what type, it enforces nothing. This is a criminal oversight: unless I've explicitly allowed for variable argument lists, it's *NEVER* what I want, and *ALWAYS* an error, and silently exploding (sometimes) when the code is finally executed is as bad as it gets.

    Here’s another that took a while to figure out:
    int (*fptr)(int a); /* define a ptr to a function */
    fptr(10); /* call the function */

    In ANSI C, this works as intended. But in original C, this will push 10 on the stack, find the address of the function using fptr, then do absolutely nothing. Obviously, if you actually intended to *call* the function, instead of merely computing its address, you would have written
    (*fptr)(10)
    (This one was fixed in ANSI C).

    Suppose you want to see if a pointer is on a 4 byte boundary. You might be tempted to 'and' the pointer with 3 and see if the result is 0:

    if (p & 0x3 == 0)

    but you would be sorely disappointed. This code will compare 3 to 0, which is clearly false, then 'and' the pointer p with the resulting 0, then never execute the if. I once worked with an OS that randomly crashed over a several month period due to a bug like this. This illustrates two head-scratchers: why allow an 'and' of a pointer and a boolean, and the bizarre and complex precedence scheme. (The printed precedence chart taped to many programmers' terminals indicates this is perhaps not obvious...)
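
    For contrast, here is roughly what the intended versions of those fragments look like in modern C (a sketch, assuming a C99 compiler; the cast through uintptr_t is there because standard C will not let you mask a pointer directly):

    #include <stdio.h>
    #include <stdint.h>

    int main(void)
    {
        int b = 12;
        if (b == 0)                   /* comparison, not assignment */
            printf("B was 0\n");      /* double quotes: a string, not a char constant */
        printf("B is %d\n", b);       /* its own statement, not glued on with a comma */

        char buf[8];
        char *p = buf;
        if (((uintptr_t)p & 0x3) == 0)    /* parentheses, since == binds tighter than & */
            printf("p is on a 4 byte boundary\n");
        return 0;
    }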

    I could go on and on like this. I would not be at all surprised if over a career of programming, I've wasted at least a month debugging stuff that should never have been allowed in the language in the first place. A month of programmer time might be worth $10K after overhead, etc. Given the popularity of C, and the number of programmers (here about 350,000 in the USA http://www.daniweb.com/hardware-and-software/microsoft-windows/windows-software/news/219885 ), I'm sure the original design of C has wasted billions of dollars...

  19. Anon Says:

    I can’t believe people are defending C as a usable modern language. I don’t want to detract anything from Ritchie, I have nothing but respect for him, and C was a great language for its day I’m sure, but I have to agree with everything Lou is saying. Disclaimer: I might speak in absolutes but absolutes are never correct. Linux shows that C programs can still be successfully developed.

    It is amazingly unsafe and virtually impossible to develop C code in an enterprise environment (100k+ LOC, many developers, etc). It still has a place as a bare-metal programming language, especially for small projects (i.e. single person). Its biggest advantage is that compiler development has focused on C and C++, so it is quite fast despite how hard the language is to optimize (pointer aliasing being the biggest pain for compilers, lack of pure-functional constructs being another). (Fortran is easier to optimize and often beats C in performance, but Fortran has its own problems.) But decades of practical programming have taught us that C is full of problems, even from the perspective of a high-performance systems programming language.

    The biggest flaw in C, IMHO, is the preprocessor. Experience has shown how unsafe macros are. If you read any C++ textbook, one thing it will constantly warn you against is using macros (various C++ constructs reduce the need for macros) as time has proven them to be very difficult to utilize without causing pain.
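
    The textbook illustration of the trap, for anyone who has not been bitten by it yet (SQUARE here is just the standard classroom example, not something from a real code base):

    #include <stdio.h>

    #define SQUARE(x) x * x          /* the classic mistake: no parentheses */

    int main(void)
    {
        /* Expands to 1 + 2 * 1 + 2, which is 5, not (1 + 2) * (1 + 2) == 9. */
        printf("%d\n", SQUARE(1 + 2));
        return 0;
    }

    Inline functions, or at least fully parenthesized macro bodies, avoid this particular class of bug.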

    The second biggest flaw is pointers. C not only allows, but encourages (necessitates, even) pointer arithmetic. Code that does non-trivial pointer arithmetic is bug-prone, and it is the worst kind of bug (corruption, crash, or security hole). In a big project, it is very difficult to write bug-free pointer manipulation code 100% of the time. Moreover, it’s very hard to write pointer-based code that doesn’t have various implicit assumptions, which is a liability when the code is reused. C++ improves on this with references and the STL, hiding pointer arithmetic behind safer constructs without loss of performance or flexibility. D does even better by supporting safe and unsafe arrays inside the language itself (rather than through a library like the C++ STL).

    Pointers also make things really difficult for compilers, as pointer aliasing concerns prevent many optimizations from happening; compilers had to add pragmas or keywords to the language to allow the user to tell the compiler aliasing won’t happen.

    Printf is well-known to be a dangerous function. The string libraries in C are also dangerous and buffer overruns have caused countless dollars of damage. Both of these have substitutes in any other modern language.
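
    To make that concrete, the usual mitigation within C itself is to prefer the length-bounded calls; a minimal sketch (assuming a C99 library, which is where snprintf was standardized):

    #include <stdio.h>

    int main(void)
    {
        char buf[8];
        const char *input = "a string longer than the buffer";

        /* strcpy(buf, input) would overrun buf; snprintf truncates and
           always null-terminates within the given size instead. */
        snprintf(buf, sizeof buf, "%s", input);
        printf("%s\n", buf);    /* prints "a strin" */
        return 0;
    }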

    The lack of exceptions also makes the language dangerous, as checking return values for errors has proven to be both cumbersome and error-prone. (Of course exceptions weren’t around back when C was invented, but that doesn’t make the language any more safe.)

    Even the syntax is really bad in some cases. A well-known complaint is that C syntax is context-dependent: is "x * y" a multiplication or a declaration of y as a pointer to type x? (http://eli.thegreenplace.net/2007/11/24/the-context-sensitivity-of-cs-grammar/). Variable declarations are not very human-readable, e.g. "void *(*foo)(int *)" is not even a very complicated type of object but it requires some thought to decipher.
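
    The usual workaround for the unreadable declarations is a typedef; a sketch (alloc_fn is just an illustrative name):

    /* "void *(*foo)(int *)": foo is a pointer to a function taking an
       int * and returning a void *.  A typedef says the same thing readably: */
    typedef void *alloc_fn(int *);   /* alloc_fn names the function type */
    alloc_fn *foo;                   /* foo is a pointer to such a function */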

    There are an endless number of other pitfalls in C, despite how small the language is. The flaws are universally acknowledged: it is evident in how they have been plastered over in C++ or corrected in languages like Java and D. (Incidentally, I really hope that D replaces C++ as “the” systems programming language.) C is an “old jacket”.

  20. Raoul Ohio Says:

    RO returns for another inning of “C: criminal oversight?”!

    Everybody knows C is dangerous if misused; how could it be otherwise with unfettered access to pointers? If you are going to write something like

    free(0); (or maybe free((void *) 0);, or delete 0)

    you are likely to be looking for the power switch next.

    It is fair to say that C is not for programmers who don’t know the difference between a char and a C-string (a null-terminated array of char). How is that more criminal than the auto industry selling fast cars that a drunk can slam into an oak tree with?

    I have programmed in many languages by following the common sense plan:

    1. Start small (say a “Hello World”, or, “Yo! Dude!” program).
    2. Add one small and simple thing at a time.
    3. Make sure it works.
    4. Continue.

    Thus it would never cross my mind to write something like

    if (p & 0x3 == 0)

    and, if I for some reason did, I would for sure check the operator precedence table to see if I had it right.

    Like most everyone else, I have occasionally flailed with

    if (x = 3){}

    but that is a well known misfeature, resulting from the “major bummer blunder” that I mentioned in inning 1 (point 5.1), about using integers for booleans. I suspect C programming was a macho calling back in the 1970’s, because, when you look at old code, you see stuff like:

    int factorial (int n) {
    int rv = 1;
    while (n) rv *= n--;
    return rv;
    }

    and a bizarre way of copying C-strings that I cannot recall.
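
    Something like the K&R copy idiom, perhaps, where the entire loop collapses into the test (shown here only as a guess at what that half-remembered code looked like):

    /* Copy the C-string s into d, terminator and all; the assignment's
       value doubles as the loop condition. */
    void str_copy(char *d, const char *s)
    {
        while ((*d++ = *s++))
            ;
    }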

    Summary:
    1. C is inherently dangerous, but OK if used with modern standards of careful programming. It is fast.
    2. C is the major inflection point in the history of programming languages. Many languages since are more or less based on C.
    3. C is the universal language available on nearly every computer. This makes it easy to experiment with new languages: just build a translator into C.

  21. Lou Scheffer Says:

    I have no trouble with the power of ‘C’, nor the inherent danger that comes with the power.

    But even comic books stress that with great power comes great responsibility, and this is where C fails. Even brilliant programmers make mistakes, and C could have just as easily been equally powerful but much safer. Ignoring the impact of potential user mistakes is an engineering error of the most arrogant kind.

    In purely practical terms, an error that manifests at run-time, at a customer site, costs thousands of times as much as an error caught at compile time. On this engineering tradeoff alone, C is not a very good language.

  22. Vadim P. Says:

    Lou,

    Thanks, your post definitely reminded me of a few “features” of C that I haven’t thought about in a while. And you’re right, in hindsight some of these seem like awful design decisions. But take something like the weak type system that allows assignments inside if statements. The benefits of strong typing in reducing run-time errors were already known when C was created, but the case could also be made that weaker type rules would lead to additional programmer efficiencies and more compact source code, as long as programmers were careful not to abuse their compiler’s forgiving nature. Without the benefit of a language like C out there in the wild, it wasn’t possible to know how the tradeoff would really play out. I don’t think it’s that Dennis Ritchie didn’t anticipate that his design *could* sometimes create problems, but he had no way of knowing that the problems it created would significantly outweigh the problems it solved (especially as computing power grew and things like source code compactness stopped mattering).

    Some of the other problems don’t really seem like problems to me. The comma operator, for example, can be useful inside a for loop where you want to have multiple counters. Why is it allowed outside that context? Probably because the prevailing thinking at the time was that giving the programmer flexibility is a good thing. Sure, there were notions even then that too much power was dangerous, but these were freewheeling times when people wrote programs in assembly. You’d never have sold a programmer on Java back then.

    I guess I’m backtracking from my previous post. Yes, C had features that turned out to be weaknesses instead of strengths, but C is what allowed us to realize that those features were weaknesses. And I’m not just saying that to be nice to Dennis Ritchie.

  23. Raoul Ohio Says:

    Why debate C when you can check out “Quantum Levitation”?

    http://www.youtube.com/watch?v=Ws6AAhTw7RA&feature=player_embedded#!

    If you don’t say “Wow!”, I will program in PL/1 for a month.

  24. david Says:

    Lou, your criticism of C is totally unfounded. Overall C is a very nice and compact language, suitable for most tasks.
    And its lack of compulsory paraphernalia makes the code quite a bit faster than code compiled from other languages.
    Sure it has some deficiencies, such as the inconvenience of the string manipulation functions, but most of them are fixed in C++.

    To start with, being able to freely switch between bools, integers and characters is an *advantage*, not a disadvantage. It does NOT cause any bugs and makes the code EASIER to read, not harder as you seem to imply. I hate it when you are forced to make explicit casts between them in modern versions of C++ and other languages, it’s just so much easier to think that a character IS an integer too. And it is not rare at all, it is constantly needed (for example if you have an array indexed by characters).

    The comma operator allows you to define an expression composed of several statements, whose value is that of the last statement. It is only marginally useful (for example, you can write stuff like while (scanf("%i", &n), n != 0)) but it does have its purpose. The fact that you don’t know what it is doesn’t invalidate it.

    Then you go on to make downright false statements, such as “In particular, even if the functions are defined in the same file, and C knows *exactly* how many arguments of what type, it enforces nothing.” I don’t know what makes you think this but it’s simply wrong. Perhaps your confusion stems from the printf example, but note that whether the string that is the first parameter to the printf function contains a %s or %d cannot be known by the compiler unless the string is a constant. It is NOT really the task of the compiler to check that you pass an integer parameter if you wrote %d, and it cannot be done in general. This is just due to the syntax of the printf function (which in most cases is much more convenient than, say, that of cout): it is the code of the function itself that examines the string and decides how to interpret the parameters. Anyway compilers issue a warning on this, so what’s your point?
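
    For what it’s worth, when the format string is a constant, modern compilers do check it; a tiny example (assuming gcc or clang with warnings enabled, e.g. -Wall):

    #include <stdio.h>

    int main(void)
    {
        printf("%s\n", 42);   /* %s expects a char *, but 42 is an int:
                                 because the format string is a constant,
                                 the compiler can flag at compile time the
                                 very call that would blow up at run time */
        return 0;
    }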

    Finally, the if example just shows that you don’t know the operator precedence rules. They are not as intuitive as they could be? Probably, if I were to define the language I would definitely make different choices on this. But you are supposed to know the very basics of the language you use.

  25. Raoul Ohio Says:

    David,

    I agree with about 90% of your assessment, except for your view that confusing integers and booleans is a good thing.

    The cool tricks enabled by this misfeature typically save one line of code.

    On the other hand, the errors that rookies always make, and experienced users occasionally make, are a major bummer. The classic examples are

    int n = 0;
    if (n = 0) {
    printf("blue");
    } else {
    printf("green"); /* green is printed */
    }

    and

    int n = 10;
    if ( 0 <= n <= 1) {
    printf("red"); /* red is printed */
    } else {
    printf("white");
    }

    Confusing code is not a good thing. Neither of these compile in Java. They do in C++, because, as much as possible, a valid C program is supposed to remain a valid C++ program with the same meaning.
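
    For completeness, the intended versions of both examples (the fixes are mechanical once you spot them):

    #include <stdio.h>

    int main(void)
    {
        int n = 0;
        if (n == 0) {                 /* comparison, not assignment */
            printf("blue");           /* blue is printed, as intended */
        } else {
            printf("green");
        }

        n = 10;
        if (0 <= n && n <= 1) {       /* chained comparisons need an explicit && */
            printf("red");
        } else {
            printf("white");          /* white is printed, as intended */
        }
        return 0;
    }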

  26. Lou Scheffer Says:

    David says:

    ‘Then you go on to make downright false statements, such as “In particular, even if the functions are defined in the same file, and C knows *exactly* how many arguments of what type, it enforces nothing.” I don’t know what makes you think this but it’s simply wrong.’

    No, this just shows exactly how horrible the original C language was – so bad you can’t even imagine that a language would let you do that, and I don’t blame you one bit. I remember being completely thunderstruck after chasing a bug for a few hours, finding a function of 3 arguments that was being passed two, when the function was defined just a few lines earlier *in the same file* – I could not believe that any language intended for serious professional use would allow this.

    To show this was not just me mis-remembering, on this page for LINT http://www.thinkage.ca/english/gcos/expl/lint/manu/manu.html#FunctionDeclarations

    “In earlier versions of C it was valid to pass more arguments than the function definition specifies or to pass fewer arguments.”

    Of course they fixed this in ANSI C.

    And speaking of operator priority, you say “But you are supposed to know the very basics of the language you use.” This is very true, but I’ll bet less than one in a thousand programmers *knows* the precedence rules of C (they know where to look them up, but that’s very different). I’ll be quite impressed if, in all honesty and without looking, you can state at least one operator from each of the 15 levels of C precedence, in order. If you can’t, then you yourself don’t know the very basics of the language you are defending.

  27. Greg Kuperberg Says:

    I really don’t see the point of eulogizing Dennis Ritchie by bitching about drawbacks of his computer language. The man accomplished a vast amount, and C is a monumentally important computer language for all of its flaws. Ritchie was also not remotely a purist; he did not think that there is One True Way to do anything in computer software. He actually said that C has “the power of assembly language and the convenience of … assembly language”. From the sound of it, he was not only a great innovator, but also a great human being who did not take himself too seriously.

    After 40 years, many of these problems with C can be addressed anyway in various ways, with extra compiler features. What is the point of complaining about it now — it’s just so out of date and out of place. In fact, since I think that cavils over shortcomings of C are a bit tasteless in context, I would not mind if the thread were temporarily or permanently closed.

  28. Vadim P. Says:

    Greg,

    In any other context I’d agree that eulogizing someone by bickering about their accomplishments is the wrong way to go about honoring their life. But nerds eulogize the way we do everything else, by arguing about technical details. I think Mr. Ritchie would understand.

  29. Mike Says:

    Completely off-topic: Scott, when is your infamous video from Setting Time Aright going to be posted?

  30. Joel Rice Says:

    Jeepers – I thought the Advice was to use LINT – anyone remember LINT ? Programmers who did not use it got into trouble. I remember quite a few who felt that they didn’t need no stinking lint, and refused to use it. They didn’t like it so much when I kept finding silly things like forgetting the pointer in fprintf or thinking that falling off the end of a function always returns 0.
    So, a hearty but late Thank You Very Much to Dennis, and to Scott for posting.

  31. david Says:

    Lou said: “I’ll be quite impressed, if in all honesty and without looking, you can state at least one operator from the 15 levels of C precedence, in order. If you can’t, then you yourself don’t know the very basics of the language you are defending.”

    That’s just nonsense. Knowing all that is not “the very basics” because it’s seldom necessary. I know the relative precedence of the operators I use often. In your example you should just know that == takes precedence over &, which you should know anyway after the first time you tried to use bitwise AND operators. If I don’t know or am not sure about the precedence, I look it up or use parentheses (which is not a bad idea in your example anyhow). How is this different from any other language?

  32. Joel Rice Says:

    Good Grief – just on TechnologyReview and saw that John McCarthy (LISP) just passed away.

  33. Vadim P. Says:

    Not to hijack this post further (is that even possible?) but Turing Award winner John McCarthy, inventor of LISP and AI pioneer, also just passed away. Rough month for computer science.

  34. Greg Kuperberg Says:

    Vadim – Maybe you are right. Ritchie saw plenty of this belly-aching when he was alive, and it sounds like he didn’t even really mind. Do you want to know my personal beef with C? No array bounds checking. What a !@#$-ing nightmare. And more seriously, the source of countless software security holes. Oh well!

    Even so, at some early moment in my programming experience, I switched from Pascal to C as fast as I could. As the joke goes, if C is a write-only language, Pascal is a read-only language. (Actually the first language that I learned was BASIC, while the last one was Python.)

  35. Raoul Ohio Says:

    GK,

    One nice thing about Pascal is that it is easy to understand. One could argue that it still is an excellent choice for a first language for students to learn, if only to get in the habit of continuously learning new languages.

    A couple decades ago, I developed a Borland Pascal program that drew fractals based on the domain of attraction for functions under Newton’s method in the complex plane. I would love to be able to still run it for fun. Unfortunately, I included some assembly code to figure out what (1990 era) graphics card was installed (to enable a menu of graphic mode choices). I assume that is why it crashes any computer when you try to run it in a DOS box. Oops!

    Another BP program implements and compares every numerical method for differential equations I could find, plus a couple dozen of my own. I have considered trying a “Pascal to C” translator on this code, but what are the chances that would work? I dropped off a half dozen old computers at a recycling center last week, so I hope I still have that program!

  36. Scott Says:

    Greg #27: I can’t help but think Ritchie would be amused to see a post about his passing turning into yet another programming language flamewar.

  37. Scott Says:

    Mike #29:

    Completely off-topic: Scott, when is your infamous video from Setting Time Aright going to be posted?

    I decided that I didn’t want to make it publicly available—not for the reasons you’d think :-), but because I completely ran out of time, so the talk didn’t end up being a good summary of this point of view. Within a month, though, I’m planning to write a paper expanding on that talk, and will blog that as soon as it’s available.

    This week, though, 98% of my waking hours are being directed towards the STOC deadline.

  38. Mike Says:

    Well, I’m disappointed, you should release it along with your paper — we can sort through it. 🙂

  39. Lou Scheffer Says:

    At the risk of going even further off topic, the reason the deficiencies in C strike such a chord in me is that they are emblematic of problems of the field in general.

    *Ignoring human interface issues. Ultimately these tools are to be used by humans, and there is a huge body of knowledge on the types of mistakes people make. Other fields take this seriously, but in CS this is largely ignored. When programmers confuse bools, ints, and floats, language developers claim it’s a feature. When pilots confused the knobs for landing gear and flaps, the response was to require that the knobs feel different. ( http://ecfr.gpoaccess.gov/cgi/t/text/text-idx?c=ecfr&rgn=div5&view=text&node=14:1.0.1.3.11&idno=14#14:1.0.1.3.11.4.177.48 )

    *Not fixing known problems. Other fields find problems, then fix them so at the very least the *same exact problem* should not occur again. Here’s another example from aeronautics – the statement “The last confirmed civilian plane crash that was directly attributed to lightning in the U.S. was in 1967, when lightning caused a catastrophic fuel tank explosion”. How long do you think it will be until we can say “the last known security problem caused by buffer overflow was 40+ years ago”? How about never at the rate we’re improving?

    * Optimizing for objectives that make no sense, or at the very least no longer make sense. As an example, not checking array bounds in the interest of speed. Sure, a faster program is nice, but in almost all cases it is not nearly as important as an undetected bug. Even in the rare programs where speed *is* critical, probably 99% of all array references could be checked with no user visible speed degradation and a big improvement in security. On the remaining 1% it might make sense to turn checking off. But even recent developments such as C++’s STL default the other way.

    In general, compared to other engineering fields, the practical side of computer science feels very unprofessional.

  40. Raoul Ohio Says:

    Lou,

    In aircraft design and pilot standards, you can make an upgrade as soon as the need is clear.

    Contrast this with programming languages, where it is essential that they stay the same: a program written last year should work the same this year.

    For most applications, C++ is a much better choice than C. Then, if you want bound checked arrays, you can use the STL vector template class. The C++ committee puts a huge effort into carefully growing the language with extensions that do not break existing code. The new version: C++0x has just been finalized. x turns out to be 0xB, so they are calling it C++11.

  41. Vadim P. Says:

    Lou,

    I think the fact that a lot of the valid issues you raised with C *were* addressed in later languages like Java and C# tells me that the field has been taking these things seriously.

    C is 40 years old and, as Raoul mentioned, it’s hard to retrofit an old programming language because of the desire to be able to compile old programs with new compilers.

  42. Raoul Ohio Says:

    Why debate programming languages when you can play with the Collatz Conjecture (3n + 1 problem)? Here is a cool graphical representation:

    http://www.jasondavies.com/collatz-graph/

    (Thanks to GMSV for the tip)