On Feb 9, 2010, at 2:46 PM, Dave McGuire <mcguire at neurotica.com> wrote:
On Feb 9, 2010, at 5:22 PM, Josh Dersch wrote:
You're speaking from the standpoint of running one program, once,
alone on one computer. That's not how (most) computers are used.
How many processes are running, right now, on your Windows box?
146 right now on my Mac, *nearly seven hundred* on the central
computer here (a big Sun). Those "this code is only 20% slower"
inefficiencies that allowed us to get back on the golf course
fifteen minutes sooner do add up.
So you're clicking every button in every process running on your
machine constantly?
Of course not. But don't assume I'm the only person using this
machine. It's sitting there receiving and doing spam filtering on
about 90K emails per day, serving web pages for a small stack of web
sites, being a database server with about 20GB of data behind
it... The machine isn't sitting there idle waiting for me to click on
something. That machine doesn't even have a graphical console.
There's more to what computers do than run whatever app one
particular user happens to be clicking on at any given time.
My point is that for *many* operations (like user interactions,
which typically are gated on the response time of humans) virtual
call overhead is acceptable.
If that's the only thing, or one of a very small number of things,
running on the (very very fast) machine, sure.
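
To put something concrete behind the overhead being argued about, here
is a minimal C++ sketch (hypothetical Widget/Button names, not anyone's
actual code) of what a virtual call is: one extra indirection through
the vtable. That is a handful of nanoseconds on hardware of this era,
which is invisible behind a human click but not free when multiplied
across hundreds of busy processes.

    // Minimal sketch of the virtual-call overhead under discussion.
    // A call through a base-class pointer goes through the vtable
    // (one extra indirection) instead of being a direct, inlinable call.
    #include <cstdio>

    struct Widget {                       // hypothetical base class
        virtual void onClick() { std::puts("Widget clicked"); }
        virtual ~Widget() = default;
    };

    struct Button : Widget {              // override chosen at run time
        void onClick() override { std::puts("Button clicked"); }
    };

    int main() {
        Button b;
        b.onClick();     // static type == dynamic type: compiler can resolve and inline this
        Widget* w = &b;
        w->onClick();    // dynamic dispatch: indirect call through the object's vtable
    }
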
Depends greatly on the language. Don't confuse one implementation
with ALL OO programming languages. In C++ an integer maps to a
register. Same in C#. Same in Objective-C. Java does this
differently.
I'm going to go dig into a C++ implementation and look at the in-
memory composition of an integer object. I sure hope you're right.
C++ definitely has no concept of an integer object. (It offers no
built-in object types, not even a base Object class.)
Again, it was an example, taken from Java, because I know Java much
better than I know C++.
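
A quick way to see the point about C++ integers, as a sketch with
made-up struct names: an int is a bare machine word with no object
header, and only types that declare virtual functions pay for a
per-object vtable pointer. The sizes in the comments are typical for
common 64-bit implementations, not guaranteed by the standard.

    // A plain int carries no object overhead; adding a virtual function
    // is what introduces the hidden vtable pointer.
    #include <cstdio>

    struct PlainInt   { int value; };                                  // no hidden fields
    struct VirtualInt { virtual ~VirtualInt() = default; int value; }; // vptr + int (+ padding)

    int main() {
        std::printf("int        : %zu bytes\n", sizeof(int));        // typically 4
        std::printf("PlainInt   : %zu bytes\n", sizeof(PlainInt));   // typically 4
        std::printf("VirtualInt : %zu bytes\n", sizeof(VirtualInt)); // typically 16 on 64-bit
    }
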
C# has a form of templates, if memory serves. I believe they're
called "generics". Name one non-OO language that has such a
construct. I don't know of any.
C# generics and C++ template metaprogramming are nowhere near the
same thing. They both let you easily define reusable container
objects (and for that use, they are efficient). C++ templates
actually provide a Turing-complete language (an ugly one) that runs
at compile time. You can do clever things with it, and you can also
do horrible things with it.
C++ metaprogramming is very much a paradigm unto itself.
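
For anyone who hasn't seen it, a minimal example of that compile-time
language (the standard textbook factorial, nothing from this thread):
the recursion below is carried out entirely by the compiler, and the
generated code just contains the resulting constant.

    // Classic template metaprogramming: Factorial<N> is computed while
    // compiling; no multiplication happens at run time.
    #include <cstdio>

    template <unsigned N>
    struct Factorial {
        static const unsigned long value = N * Factorial<N - 1>::value;
    };

    template <>
    struct Factorial<0> {                 // base case stops the recursion
        static const unsigned long value = 1;
    };

    int main() {
        std::printf("10! = %lu\n", Factorial<10>::value);   // prints 3628800
    }
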
Everything I've read about generics describes them as a form of
templates. Everything. You're asserting that they're completely
different? If so, I will stand corrected, and chalk it up to a lot
of bad info on people's web sites. I don't have a Windows computer
myself, so I don't use C# and can't speak from direct experience
there.
You can indeed use C#, via the Mono project. Generics are a very
basic form of C++ templates that are good for creating generic
containers, and that's about it. C++ templates are considerably more
involved, and metaprogramming tricks are used to do all manner of
insane things at compile time. At a very basic level, the two are the
same. It's like saying a Yugo and a Maserati are equivalent because
they are both cars. (yes, you can use analogies here :)
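
For contrast with the metaprogramming example above, here is the
"generic container" level at which C# generics and C++ templates do
look alike: a hypothetical fixed-size stack, not code from either
language's library.

    // A type-parameterized container -- the kind of thing both C# generics
    // and C++ templates handle comfortably.
    #include <cstddef>

    template <typename T, std::size_t Capacity>
    class FixedStack {
        T           items[Capacity];
        std::size_t count;
    public:
        FixedStack() : count(0) {}
        bool push(const T& item) {
            if (count == Capacity) return false;   // stack is full
            items[count++] = item;
            return true;
        }
        bool pop(T& out) {
            if (count == 0) return false;          // stack is empty
            out = items[--count];
            return true;
        }
    };

    int main() {
        FixedStack<int, 8> s;   // a distinct class is instantiated per <T, Capacity>
        s.push(42);
        int x = 0;
        s.pop(x);               // x == 42
    }
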
It may be, at least in part, speculation... but with lots of
experience to back it up. Quite simply, almost everything I've
seen written in C++ and Java (even with native compilation) is
slow, and most everything I've seen written in C, assembler, and
Forth is fast.
I could argue that I've also seen the exact opposite, but I'm not
sure what that would prove.
You have? Seriously?
Yep. Again, it's a case of bad programmers doing stupid things.
One such example in which the functionality is similar is groff vs.
nroff. Big speed difference between the two on similar hardware
performing similar functions.
Speculating that OS foo is far slower than it "should be" is
something that I think I can get a pretty good feel for, having
used dozens of operating systems on dozens of types of computers
over dozens of years. You're suggesting that my argument is
completely illegitimate because I'm not willing to spend the next
two weeks cooking up some sort of a benchmark suite to prove to
you, by the numbers, something that I've never heard anyone else
disagree with, ever?
I'm suggesting that you are exaggerating the performance impact and
that you keep basing these projections on feelings.
Feelings? No, experience. Big difference. Experience with and
knowledge of the past. There is NO RATIONAL REASON why a GUI-based
OS should need more than a billion bytes of memory and a billion-
plus clock cycles per second just to boot. You and I (and many
others on this list) have plenty of examples of machines that do the
same thing with a tiny fraction of those resources.
Windows, OS X, and Linux (but only with Gnome or KDE) are fat,
bloated, slow, lumbering pigs, and it's due to sloppy programming
and misapplication of tools. That is my assertion. My proof is
that every day I use my dual 1.8GHz PPC with 4GB of RAM for the
EXACT SAME STUFF I used my 40MHz SPARC with 32MB of RAM for, and I
bump up against the performance limitations of both to essentially
the same degree.
Playing music, playing video files, telnetting, sshing, editing,
compiling, browsing the web, etc etc etc. The apps are a bit
prettier now, certainly more so than with fvwm, but I'd happily live
without that.
See, here's where I see a disconnect; you are doing the same *class*
of thing, but you're not really doing the same thing. Programs have
gotten more complex because people want more from their software.
Regardless of whether Firefox 3.5 was written in assembly or C++,
you'd never hope to run it on your IPX. I just think you are blaming
the wrong thing (or just blaming one thing) for the performance
degradation you are perceiving. OO adds some overhead that isn't
imperceptible; so do each of extensibility, abstraction, support for
"modern standards" (CSS, JavaScript, XML), UI theming, support for
"advanced" desktop metaphors, etc. Code reuse and abstraction also
bring overhead; these exist even in C, but the overhead is worth it
in terms of maintenance and usability (from both a programming and a
user perspective).
And honestly, my current desktop machine runs circles around the
machine I was using in college, doing more or less the same things
you do; I rarely hit performance issues. The same was true of my
previous desktop, which I got six years of use from.
> In an ideal world, one in which all programmers were
> competent, OO languages wouldn't be such a problem. So I guess
> what I really mean is, "Bad programmers are even more
> detrimental to computing when armed with OO languages".
You really think these same programmers would somehow write
better code if only they would stop using OO?
Yes, absolutely. Most OO languages give bad programmers more code-
inflating features to misunderstand and abuse. If they don't know
how to write good code in C, which is a tiny, very fast, very low-
overhead, very simple language with very few features, how can
they be expected to write good code in C++, C# or Java, which are
anything but?
I suppose we'll have to agree to disagree here.
I'm fine with that.
Handing an idiot a loaded rifle is dangerous. Handing an idiot a
loaded rifle with a loaded grenade launcher is MORE dangerous.
Yes. Programming languages are just like firearms.
Wow. So I'm not allowed to use analogies here?
Sorry, typing this from my phone and forgot to finish that sentence :).
And I said I wasn't going to continue dragging this off-topic...
Josh
-Dave
--
Dave McGuire
Port Charlotte, FL