On Feb 9, 2010, at 3:49 PM, Josh Dersch wrote:
Ok. Show me
a processor that has an "object" data type, that
has subtypes like "member" and "method" and such. There aren't
any. Translating from such incredibly high-level constructs to
registers, stacks, and memory locations is not without a lot of
overhead. Try to envision what happens in the instruction
stream during things like virtual function lookups in C++, for
example.
Ok, I'm envisioning it. A few instructions to do a vtable lookup
and a jump to the correct virtual function. Wow. So those extra
instructions are what's making every machine in the world
(apparently) very very slow?
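For concreteness, here's a minimal C++ sketch of what that dispatch amounts
to (the class names are made up for illustration, not taken from any real
code):

#include <cstdio>

// Made-up types purely to illustrate virtual dispatch; not from any real OS.
struct Shape {
    virtual ~Shape() {}
    virtual double area() const = 0;   // resolved through the vtable at runtime
};

struct Circle : Shape {
    double r;
    explicit Circle(double radius) : r(radius) {}
    double area() const override { return 3.14159265358979 * r * r; }
};

double report(const Shape& s) {
    // Roughly: load the object's vtable pointer, load the 'area' slot,
    // then make an indirect call -- versus a direct (often inlined) call
    // for a non-virtual function.
    return s.area();
}

int main() {
    Circle c(2.0);
    std::printf("area = %f\n", report(c));
    return 0;
}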
Yes, to a large degree! You're talking about it as if it happens
ONCE, and you know it doesn't. Would you care to estimate how
many of those vtable lookups happen when someone simply clicks a
mouse in a modern OS? I don't know for certain, but I'm willing
to bet that it's thousands.
I'm not going to speculate on things I have no knowledge of. Even
if it were thousands, that would mean an overhead of tens of
thousands of instructions. On a modern CPU, in a user-interaction
scenario, that doesn't even begin to be noticeable to the user.
You're speaking from the standpoint of running one program, once,
alone on one computer. That's not how (most) computers are used.
How many processes are running, right now, on your Windows box? 146
right now on my Mac, *nearly seven hundred* on the central computer
here (a big Sun). Those "this code is only 20% slower"
inefficiencies that allowed us to get back on the golf course fifteen
minutes sooner do add up.
If it's millions of calls in a tight loop doing
some heavy
calculation then it will have an impact; but anyone calling a
virtual function in such a scenario is doing it wrong.
...which happens ALL THE TIME. I've seen (and fixed) code like
that in every programming job I've ever had. Loop strength reduction
is something that nearly all optimizing compilers do, but the fact
that compilers have optimizers doesn't give us free license to write
sloppy code. There will always be situations in which the compiler
can't reduce the strength of a loop, and those cases will slip right
by you when you're writing that code if you don't pay attention.
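To make it concrete, here's a made-up sketch of the pattern I keep running
into, with the per-iteration virtual call and the plain loop side by side:

#include <vector>
#include <cstdio>

// Hypothetical example of the pattern in question, not code from any real
// program: an indirect (virtual) call on every iteration of a hot loop.
struct Source {
    virtual ~Source() {}
    virtual double next() = 0;
};

struct Constant : Source {
    double v;
    explicit Constant(double value) : v(value) {}
    double next() override { return v; }
};

double sum_virtual(Source& s, int n) {
    double total = 0.0;
    for (int i = 0; i < n; ++i)
        total += s.next();          // vtable lookup + indirect call, n times
    return total;
}

double sum_plain(const std::vector<double>& data) {
    double total = 0.0;
    for (double v : data)
        total += v;                 // no dispatch inside the loop
    return total;
}

int main() {
    Constant c(1.5);
    std::vector<double> data(1000000, 1.5);
    std::printf("%f %f\n", sum_virtual(c, 1000000), sum_plain(data));
    return 0;
}

Whether the compiler can devirtualize the first loop depends entirely on
what it can see at the call site, which is exactly my point.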
...what
happens when someone uses an Integer instead of an int?
A whole object gets created when all one likely needed was a
memory location.
Depends greatly on the language. Don't confuse one implementation
with ALL OO programming languages. In C++ an integer maps to a
register. Same in C#. Same in objc. Java does this differently.
I'm going to go dig into a C++ implementation and look at the in-
memory composition of an integer object. I sure hope you're right.
And how about
template programming? I've never seen executables
so big as the ones in which templates were overused.
Template programming is another paradigm altogether, it's basically
C++ specific and it has very little to do with OO. (It's also an
abomination along with most of C++.)
C# has a form of templates, if memory serves. I believe they're
called "generics". Name one non-OO language that has such a
construct. I don't know of any.
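For what it's worth, here's a tiny made-up sketch of where the bloat comes
from: each distinct type argument gets its own compiled copy of a template
function.

#include <cstdio>

// Made-up template purely to illustrate instantiation; each type argument
// below produces a separate copy of the function in the object code
// (unless the optimizer inlines or folds them).
template <typename T>
T clamp_to_zero(T v) {
    return v < T(0) ? T(0) : v;
}

int main() {
    std::printf("%d %f %ld\n",
                clamp_to_zero(-3),     // instantiates clamp_to_zero<int>
                clamp_to_zero(-3.5),   // instantiates clamp_to_zero<double>
                clamp_to_zero(-3L));   // instantiates clamp_to_zero<long>
    return 0;
}

Multiply that by the heavily templated containers in a big codebase and the
object files grow quickly.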
Note well,
however, that I'm talking about more than just the
number of instructions required to accomplish a given task. Sure,
that in itself has bad side effects when you think about what it
does to the instruction cache hit rates...the principle of
locality of reference is blown out the window. But what about
memory utilization? How big, in bytes, is an Integer compared to
an int? Ok, the difference may be only a few bytes, but what
about the program (which would be "most of them") with tens of
thousands of them? (I'm typing this on a Mac, into Mail.app,
which is currently eating 1.73GB of memory)
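A rough C++ analogue of the comparison (BoxedInt here is a made-up stand-in
for a boxed integer object, not any real library type):

#include <cstdio>

// Made-up stand-in for a boxed integer: the vtable pointer alone typically
// dwarfs the payload, before counting heap allocation overhead.
struct BoxedInt {
    virtual ~BoxedInt() {}
    int value;
};

int main() {
    std::printf("sizeof(int)      = %zu bytes\n", sizeof(int));
    std::printf("sizeof(BoxedInt) = %zu bytes\n", sizeof(BoxedInt));
    // A boxed value also normally lives on the heap, so reaching it costs a
    // pointer plus allocator bookkeeping on top of the object itself.
    return 0;
}

Scatter tens of thousands of those across the heap and the cache behavior
follows.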
I believe that in Objective-C, ints are still registers, no magical
Integer objects here. Sounds like Mail.app is poorly written. I'm
running Outlook here (written in a mix of C and C++) and it's using
100MB (with an inbox size of 10GB...).
Again I'm going to try to find the in-memory representation of
those integer objects. Regardless, however, this was just an
example...I'm sure you see my point. You're suggesting that OO
programming involves no runtime overhead over procedural/imperative
languages when run on processors whose architecture is arguably
procedural/imperative.
You keep talking about how OO programming is the
reason that
software today is so inefficient but you offer no data to back it
up other than "it doesn't map to the hardware."
I'm sorry, but knowing how processors work, it's pretty obvious
to me. The data that backs it up is that lots of programs (some of
which are operating systems) that I use every day, written in OO
languages, including (perhaps especially!) OS X, are far slower
than they should be given the hardware they're running on. YOU
know how processors work too, I know you do, so I know you see my
point.
This is only a valid argument if you have an OS X written in plain
C and an OS X written in OO that you can do a real comparison
between. Anything else is speculation. OO does have its
overheads, but I disagree that they are anywhere near as bad as you
claim them to be. Speculating that OS foo is far slower than it
"should be" based on anecdotal evidence is not proof.
It may be, at least in part, speculation...but with lots of
experience to back it up. Quite simply, almost everything I've seen
written in C++ and Java (even with native compilation) is slow, and
most everything I've seen written in C, assembler, and Forth is
fast. One such example in which the functionality is similar is
groff vs. nroff. Big speed difference between the two on similar
hardware performing similar functions.
Speculating that OS foo is far slower than it "should be" is
something that I think I can get a pretty good feel for, having used
dozens of operating systems on dozens of types of computers over
dozens of years. You're suggesting that my argument is completely
illegitimate because I'm not willing to spend the next two weeks
cooking up some sort of a benchmark suite to prove to you, by the
numbers, something that I've never heard anyone else disagree with,
ever?
In an ideal
world, one in which all programmers were competent,
OO languages wouldn't be such a problem. So I guess what I really
mean is, "Bad programmers are even more detrimental to computing
when armed with OO languages".
You really think these same programmers would somehow write better
code if only they would stop using OO?
Yes, absolutely. Most OO languages give bad programmers more code-
inflating features to misunderstand and abuse. If they don't know
how to write good code in C, which is a tiny, very fast, very low-
overhead, very simple language with very few features, how can they
be expected to write good code in C++, C# or Java, which are anything
but?
Handing an idiot a loaded rifle is dangerous. Handing an idiot a
loaded rifle with a loaded grenade launcher is MORE dangerous.
-Dave
--
Dave McGuire
Port Charlotte, FL