Dave McGuire wrote:
On Feb 9, 2010, at 6:12 PM, Josh Dersch wrote:
You can indeed use C#, via the Mono project.
Generics are like a very
basic form of C++ templates: they're good for creating generic
containers, and that's about it. C++ templates are considerably more
involved, and metaprogramming tricks are used to do all manner of
insane things at compile time. At a very basic level, the two are
the same. It's like saying a Yugo and a Maserati are equivalent
because they are both cars. (yes, you can use analogies here :)
Ok, I understand that a bit better now. I will go read up on
generics a bit. Thanks for the clarification.
If you have a strong stomach, you should read up on C++ Template
Metaprogramming as well. "Clever" people can do arguably useful things
with it; but I find it unreadable and very difficult to debug (I'm not
that clever, I guess). It's kind of like a limited version of Lisp
macros, only done in a bizarre moon language based on odd C++ template
expansion & overload resolution rules, and with no debugging support...
You
have? Seriously?
Yep. Again, it's a case of bad programmers doing stupid things.
Well ok, but is that the rule or the exception? ;)
Hard to say. I'd estimate that most of the C++ code I've had to deal
with has had performance on par with what I'd expect from a C
implementation (it's just 20% uglier to look at), but I've also had to
deal with a good amount of ugly C and ugly C++ written by people who
were unclear on the concept of using the right algorithm for the problem.
And as an aside, the C/C++ code I've had to deal with has been, in my
experience, far more bug-prone and unstable than the C# code I've dealt
with :).
Playing
music, playing video files, telnetting, sshing, editing,
compiling, browsing the web, etc etc etc. The apps are a bit
prettier now, certainly more so than with fvwm, but I'd happily live
without that.
See, here's where I see a disconnect; you are doing the same *class*
of thing, but you're not really doing the same thing. Programs have
gotten more complex because people want more from their software.
Sure, I see where you're coming from, and I agree. But I'm actually
doing the same thing. With the exception of Firefox and Mail.app, the
stuff I run is all pretty lightweight. I wish Mail.app were a bit
lighter, in particular, because I, even being a VERY heavy email user,
barely scratch the surface of [most of] its [pointless] features.
Well, there's a plethora of e-mail clients out there; I use Thunderbird
at home, and while it's not the most elegant thing, it does a decent job
for my needs. I've never used Mail.app (except perhaps in its earlier
incarnation as a NeXTstep app :)) so I can't speak to whether it does
magic things that are worth the memory overhead you're seeing.
Regardless of whether Firefox 3.5 was written in
assembly or C++,
you'd never hope to run it on your IPX. I just think you are blaming
the wrong thing (or just blaming one thing) for the performance
degradations you are perceiving. OO adds some not-imperceptible
overhead; so does each of extensibility, abstraction, support for
"modern standards" (CSS, JavaScript, XML), UI theming, support for
"advanced" desktop metaphors, etc... Code reuse and
abstractions also bring overhead; these exist even in C, but the
overhead is worth it in terms of maintenance and usability (from both
a programming and a user perspective).
I do see where you're coming from. Perhaps I give OO too much
blame, but I stand by my accusations...it does deserve a lot of it, in
my opinion. Until very recent releases of common C++ compilers, for
example, a simple "hello world" program in C++ generated a 600KB
(yes, six hundred kilobyte) binary. I've
demonstrated that (along with its 4KB C equivalent) many times. I
was, admittedly, pleased to see that this particular brand of idiocy
has been addressed. I have no idea what was in that damn binary.
I think that C++ did wonders to malign the image of OO. C++ is just
barely OO anyway -- it barely has compile-time encapsulation and has no
real run-time encapsulation, memory management is still almost entirely
manual (which people may argue is a good thing, but in the face of C++
exceptions and other C++ features, it's a HUGE issue since it makes
memory management all the more difficult to do correctly), and
compilers have taken a long time to catch up to the point where they
generate decent code.
I think most early (and some more recent) C++ compilers & linkers did a
really terrible job with unused code removal; i.e. if you did a
"#include<iostream>" to do 'cout << "Hello,
World!";' it'd drag in all
sorts of associated I/O and support code that, despite never actually
getting called by anything in your program, would end up in the
resultant binary. I can't speak for all compilers, but the recent VS
compilers/linkers do a pretty decent job of removing unused code, as
well as folding together identical code blocks (the latter is *vital* if
you get up to metaprogramming shenanigans). All of this comes with a
fairly high compilation-time cost. (And it makes debugging optimized
builds really fun -- those 50 templated functions you built get rolled
into one function associated with one symbol name...)
- Josh
-Dave