On Feb 9, 2010, at 12:28 PM, Dave McGuire <mcguire at neurotica.com> wrote:
On Feb 9, 2010, at 2:19 PM, Josh Dersch wrote:
Ok.
Show me a processor that has an "object" data type, that has
subtypes like "member" and "method" and such. There aren't any.
Translating from such incredibly high-level constructs to
registers, stacks, and memory locations is not without a lot of
overhead. Try to envision what happens in the instruction stream
during things like virtual function lookups in C++, for example.
Ok, I'm envisioning it. A few instructions to do a vtable lookup
and a jump to the correct virtual function. Wow. So those extra
instructions are what's making every machine in the world
(apparently) very very slow?
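(For concreteness, a minimal sketch of what such a call looks like; the
Shape/Circle names are invented for illustration. The dispatch most C++
compilers generate is: load the object's vtable pointer, load the function
pointer from a fixed slot, indirect call.)

    // Hypothetical example: one virtual call, dispatched through a vtable.
    #include <cstdio>

    struct Shape {
        virtual void draw() const { std::puts("Shape"); }    // slot 0 in the vtable
        virtual ~Shape() = default;
    };

    struct Circle : Shape {
        void draw() const override { std::puts("Circle"); }  // overrides slot 0
    };

    void render(const Shape& s) {
        // Compiles to roughly: load s's vtable pointer, load the pointer in
        // the draw() slot, then an indirect call; a handful of instructions.
        s.draw();
    }

    int main() {
        Circle c;
        render(c);   // prints "Circle"
    }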
Yes, to a large degree! You're talking about it as if it happens
ONCE, and you know it doesn't. Would you care to estimate how many
of those vtable lookups happen when someone simply clicks a mouse in
a modern OS? I don't know for certain, but I'm willing to bet that
it's thousands.
I'm not going to speculate on things I have no knowledge of. Even if
it were thousands, that would mean an overhead of tens of thousands
of instructions. On a modern CPU, in a user-interaction scenario,
that doesn't even begin to be noticeable to the user. If it's millions
of calls in a tight loop doing some heavy calculation then it will
have an impact; but anyone calling a virtual function in such a
scenario is doing it wrong.
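(Back-of-envelope, assuming a few extra instructions per call: a few
thousand virtual calls is tens of thousands of instructions, and a CPU
that retires on the order of a billion instructions per second gets
through that in tens of microseconds; nowhere near human-perceptible.)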
In C#, virtual lookups are cached at runtime, so the first virtual
call to a function goes through the vtable routine; future calls are
very fast.
That's nice. Now if only C# weren't a proprietary product from one
company which happens to be obsessed with creating vendor lock-in
situations. ;)
(though actually, hmm, that IS rather nice...)
Objects in their raw form are not
"incredibly high-level." In C++,
an object is referenced via a pointer; data fields are accessed via
offsets just like in C. There is a small overhead for virtual
function dispatch. I fail to see how this overhead is somehow
responsible for performance problems in computers today.
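(For illustration, a minimal sketch of that point; the Point types below
are invented, not from anything discussed here. A non-virtual data member
is read with a fixed-offset load, the same as a C struct field.)

    // Hypothetical example: member access is a fixed-offset load from the
    // object's base address, exactly as with a C-style struct.
    #include <cstdio>

    struct PointC { int x; int y; };              // plain C-style struct

    class PointCpp {
    public:
        PointCpp(int x, int y) : x_(x), y_(y) {}
        int x() const { return x_; }              // load at offset 0
        int y() const { return y_; }              // load at offset sizeof(int)
    private:
        int x_;
        int y_;
    };

    int sum_c(const PointC *p)     { return p->x + p->y; }
    int sum_cpp(const PointCpp &p) { return p.x() + p.y(); }
    // With optimization on, sum_c and sum_cpp compile to the same two loads
    // and an add; the accessors inline away and cost nothing at runtime.

    int main() {
        PointC   a{1, 2};
        PointCpp b(3, 4);
        std::printf("%d %d\n", sum_c(&a), sum_cpp(b));
    }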
There's a bit of congruity between C's structs and C++/C#/Java/etc
objects, but...
What other overheads/inefficiencies are you
thinking of?
...what happens when someone uses an Integer instead of an int? A
whole object gets created when all one likely needed was a memory
location.
Depends greatly on the language. Don't confuse one implementation
with ALL OO programming languages. In C++ an integer maps to a
register. Same in C#. Same in Objective-C. Java does this differently.
What happens when one adds that Integer to another
Integer? Add in
the address offset calculations to find out where the actual int is
stored within the Integer object, what would those be on most
architectures...two or three instructions? So our nice
single-instruction add turns into at least five instructions.
So you're arguing that increasing the number of instructions
required to execute a simple operation by a factor of five doesn't
involve overhead? Ok, how about when it happens all the time, which
additions tend to in most programs?
See above.
In C (for example), there's no motivation at all to wrap a struct
around an int just for the sake of doing so, so it doesn't happen.
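(For illustration, a rough C++ analogue of the boxed-versus-plain
distinction being argued above; BoxedInt is an invented stand-in for
Java's Integer, not anything from this thread.)

    // Hypothetical illustration of "Integer vs. int": a heap-allocated
    // wrapper versus a plain machine word.
    #include <cstdio>
    #include <memory>

    struct BoxedInt {                              // invented stand-in for Java's Integer
        int value;
        explicit BoxedInt(int v) : value(v) {}
    };

    int main() {
        int a = 2, b = 3;
        int c = a + b;                             // one add, values live in registers

        auto ba = std::make_unique<BoxedInt>(2);   // heap allocation
        auto bb = std::make_unique<BoxedInt>(3);   // heap allocation
        auto bc = std::make_unique<BoxedInt>(      // load ba->value, load bb->value,
            ba->value + bb->value);                // add, then allocate the result
        std::printf("%d %d\n", c, bc->value);
    }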
And how about template programming? I've never seen executables so
big as the ones in which templates were overused.
Template programming is another paradigm altogether; it's basically
C++-specific and it has very little to do with OO. (It's also an
abomination, along with most of C++.)
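(A small sketch of where template code growth comes from, with invented
names: each distinct type a template is instantiated with gets its own
compiled copy, which adds up when large templated classes are stamped
out over many types.)

    // Hypothetical illustration: every distinct instantiation of a template
    // becomes a separate function in the final binary.
    #include <cstdio>

    template <typename T>
    T triple(T x) { return x + x + x; }

    int main() {
        // Three instantiations -> three separate compiled copies:
        // triple<int>, triple<long>, and triple<double>.
        // (Tiny ones like this get inlined; the bloat shows up with large
        // templated classes instantiated over many types.)
        std::printf("%d %ld %f\n", triple(1), triple(2L), triple(3.0));
    }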
(yes, here I have to give a nod to your point about
bad programmers
below!) All in the name of "saving programmer time", as if that's
such a big deal, consequences be damned.
Note well, however, that I'm talking about more than just the
number of instructions required to accomplish a given task. Sure,
that in itself has bad side effects when you think about what it
does to the instruction cache hit rates...the principle of locality
of reference is blown out the window. But what about memory
utilization? How big, in bytes, is an Integer compared to an int?
Ok, the difference may be only a few bytes, but what about the
program (which would be "most of them") with tens of thousands of
them? (I'm typing this on a Mac, into Mail.app, which is currently
eating 1.73GB of memory)
I believe that in Objective-C, ints are still registers, no magical
Integer objects here. Sounds like Mail.app is poorly written. I'm
running Outlook here (written in a mix of C and C++) and it's using
100MB (with an inbox size of 10GB...).
You keep talking about how OO programming is the
reason that
software today is so inefficient but you offer no data to back it
up other than "it doesn't map to the hardware."
I'm sorry, but knowing how processors work, it's pretty obvious to
me. The data that backs it up is the many programs (some of which
are operating systems) that I use every day, written in OO
languages, including (perhaps especially!) OS X, which are far slower
than they should be given the hardware they're running on. YOU know how
processors work too, I know you do, so I know you see my point.
This is only a valid argument if you have an OS X written in plain C
and an OS X written in OO that you can do a real comparison between.
Anything else is speculation. OO does have its overheads, but I
disagree that they are anywhere near as bad as you claim them to be.
Speculating that OS foo is far slower than it "should be" based on
anecdotal evidence is not proof.
A modern
multi-GHz Linux box running GTK is far less "responsive"-
feeling than my old SPARCstation-IPX running fvwm when just
tooling around the GUI. A little more time spent by the
programmers, ignoring the "easy way out" of heavy OO programming,
and it'd be FAR faster.
So it finally comes out: it's the *bad programmers* at fault here.
I knew it all along! Don't confuse poor programmers with
programming languages. There are always efficiency tradeoffs in
programming languages and a good programmer knows how to make the
right choices.
Yes, I have to acknowledge this; you'll get no argument from me
there. But bad programmers are the rule, not the exception. Code
written by bad programmers constitutes 90% of the code written
today. It's possible to write fast, compact C++ or Java code; we've
both seen it. But it's not the norm. KDE, OS X (and several of its
apps, Mail.app comes to mind) *are* the norm, and they're both
horribly slow for the hardware they're typically run on.
In an ideal world, one in which all programmers were competent, OO
languages wouldn't be such a problem. So I guess what I really mean
is, "Bad programmers are even more detrimental to computing when
armed with OO languages".
You really think these same programmers would somehow write better
code if only they would stop using OO?
Josh
-Dave
--
Dave McGuire
Port Charlotte, FL