On 03/12/14 6:47 PM, Eric Smith wrote:
  On Wed, Dec 3, 2014 at 4:09 PM, Sean Caron <scaron at umich.edu> wrote:
  Clearly you're on a religious crusade here... I just don't buy the line
 that were it not for everyone using this pesky C language, we could live
 in this mythical world where exploits don't exist...
 If you use a language in which buffer overruns can't occur silently, and
 will instead either trigger exception handling or abort the program, then
 almost all of the privilege escalation or information disclosure
 vulnerabilities caused by buffer overruns in C or C++ programs become at
 worst a denial of service. I'm not arguing that we don't need to be
 concerned with DoS vulnerabilities, only that they are far less severe.
 So if simply by programming in a different language you can
 substantially reduce the severity of an entire class of bugs, why
 wouldn't you do it? 
+1000. That's it, in a nutshell.
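To make that concrete, here's a minimal sketch of my own (using Rust as
just one example of a bounds-checked language; this is not anyone else's
code from this thread):

    fn read_byte(buf: &[u8], i: usize) -> u8 {
        // Indexing a slice is bounds-checked at runtime. An out-of-range
        // `i` panics (an orderly abort); it never reads adjacent memory.
        buf[i]
    }

    fn main() {
        let buf = vec![0u8; 4];
        // Index 7 is out of bounds for a 4-byte buffer. In C this is
        // undefined behavior -- possibly silent corruption or an info
        // leak, the raw material of exploits. Here it panics with
        // "index out of bounds: the len is 4 but the index is 7",
        // so the worst case is denial of service.
        println!("{}", read_byte(&buf, 7));
    }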
 I haven't tried to count them, but it seems like a very large number
 of tracked vulnerabilities are due to buffer overruns and related
 problems that fall into this category.
  You must be a professional programmer? 
 Yes. 
Same.
  Certainly you have strong ideas of what's
right and what's wrong in programming practice... 
 Yes.
  but I feel like you are faulting the
 language here while giving what are essentially (sorry, strong language)
 hack programmers a pass... 
 That's the argument that if only programmers were smarter or more
 disciplined, these problems wouldn't occur. That's a nice hypothesis,
 but I don't buy it, because software written by some of the world's
 smartest and most disciplined C and C++ programmers still routinely
 exhibits these problems.
 
Yes. There is NO "sufficiently smart programmer". But let's assume that
one exists. Imagine what they could do with better tools (let alone the
rest of us)! Sometimes it's not about the user; it's about the tool.
Maybe we should be having this debate with the hypothetical programmer
who doesn't make type bugs, memory safety bugs, or concurrency bugs.
Perhaps they can tell us what we're all doing wrong?
Anyone with enough experience will admit outright that they can't, for
example, manage mutable state with sufficient reliability, let alone
mutable state with concurrency. You'll meet many seasoned programmers
who have figured that out. We even have good strategies for eliminating
whole classes of these bugs (and they aren't "just use C better,
idiot"); one is sketched below. It's professionally inept to ignore them.
 Programming is *hard*, and debugging is even *harder*. If you can use
 a tool that doesn't help much, or a different tool that helps more,
 why would you want to stick with the less helpful tool?
 
But here we are, having to defend the idea that "simple mechanical checks
of program properties"* are worth having. Machines happen to be much
better at this than humans, not to mention millions of times faster; that
is why I waste hours on silly JavaScript or PHP or Blub bugs that a
compiler could catch in microseconds. It is an expensive waste of
programmer time. (Also see Gershom Bazerman's quote linked below.)
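For instance (my sketch, again in Rust; any statically checked language
makes the same point):

    // A bug a dynamic language reports at runtime (if you're lucky),
    // caught here before the program ever runs.

    fn total_cents(prices: &[u32]) -> u32 {
        prices.iter().sum()
    }

    fn main() {
        // The next line is a compile error: expected &[u32], found &str.
        // Caught in microseconds, not after hours of tracing a NaN
        // three modules away.
        // let t = total_cents("19.99");
        let t = total_cents(&[1999, 499]);
        println!("{} cents", t);
    }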
 > Why should it be the responsibility of the language to save
 programmers from themselves?
Because that is _precisely_ what languages and abstractions exist to do.
 Why should a table saw have a finger guard? 
Any sufficiently smart carpenter will never injure themselves in a career!
 In the case of the table saw, having a safety feature is even less
 important than in a programming language. With an unsafe table saw,
 I'm likely to only cause harm to myself. With an unsafe programming
 language, a programmer can cause problems for literally billions of
 people (e.g., exploits of bugs in Windows, MacOS, Linux).
 
I like writing assembler from time to time. But I don't try to write
business services in it; I'd be fired, and rightly so. Using a tool at
the wrong level of abstraction is just a shade of the same error.
--Toby
* - Benjamin Pierce's words. For more expert opinions:
http://ur1.ca/iz36o
  I'm not arguing that the language should totally disallow doing
 anything it thinks is questionable.  I'm arguing that it should by
 *default* disallow such things, and require the programmer to take
 explicit action to circumvent the normal checking when there's a
 good reason to do so.  (On the other hand, I think most programmers
 are too willing to jump to the conclusion that such circumvention is
 necessary, without spending enough time analyzing the real problem.)
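Rust's `unsafe` blocks are one existing design that works exactly this
way (a sketch of mine, not Eric's proposal):

    fn main() {
        let buf = [10u8, 20, 30, 40];

        // Default: checked access. A bad index panics instead of
        // silently misbehaving.
        let a = buf[2];

        // Explicit opt-out: skipping the bounds check requires a
        // visible, greppable `unsafe` block, so the burden of proof
        // sits in one place. An out-of-range index here would be
        // undefined behavior, just as in C.
        let b = unsafe { *buf.get_unchecked(2) };

        assert_eq!(a, b);
    }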