A lot depends on what you mean by "cross-platform programming."
See below.
Dick
----- Original Message -----
From: "Sean 'Captain Napalm' Conner" <spc(a)conman.org
To: <classiccmp(a)classiccmp.org
Sent: Sunday, April 21, 2002 4:32 PM
Subject: Re: Micro$oft Biz'droid Lusers (was: OT email response format)
> It was thus said that the Great ajp166 once stated:
>> The whole mid 80s thing with workstations was a disaster in many
>> respects as everyone was trying to put more processor in a box and unix
>> was the OS of choice as it was easily ported and offered most of the
>> higher level OS functions that stuff like DOS was clueless about. The
>> problem was unix was easily ported though it didn't make for portable
>> apps, usually due to underlying hardware or even the basic processor.
>> In that respect CP/M and DOS made it easier as at least if it was
>> CP/M-80 you knew your base cpu was 8080/z80 and if it was DOS you could
>> bet on 808x. Unix back then meant MIPS, VAX, PDP-11, SUN/sparc, 68000,
>> Z8000, and a few dozen I likely missed.
Some people believe cross-platform programming means using the resources
common to all the platforms. That produces portability at the expense of
efficiency. If you're willing to tolerate that, well, OK.
> That doesn't make sense. UNIX you state as being easily ported, even
> though as a kernel it has to hit the hardware pretty hard, yet you state
> applications as not being portable at all, because of the underlying
> hardware and processor (which the application shouldn't care about). If
> anything, I would think the opposite would be true.
>
> Now, speaking as a programmer who's done cross-platform programs, I've
> come to the conclusion that writing portable software isn't difficult,
> and with enough experience it becomes quite easy, in fact. It's
> programmers who make unwarranted assumptions about their code or
> platform that make for unportable applications.
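The textbook case of such an assumption (a hypothetical fragment for
illustration, not code from anyone in this thread) is stuffing a pointer
into an int, which happens to work on a VAX or a 386 and silently
truncates on a 64-bit machine like the Alpha:

    #include <stdio.h>

    int main(void)
    {
        char  buf[16];
        char *p = buf;

        /* Unwarranted assumption: a pointer fits in an int.  True on
           many 32-bit systems, false on a 64-bit Alpha, where the top
           32 bits of the address are thrown away here. */
        int   a = (int)p;
        char *q = (char *)a;

        printf("%s\n", p == q ? "round trip survived" : "pointer truncated");
        return 0;
    }

On the 32-bit systems the round trip survives, which is exactly why the
assumption goes unnoticed until the code meets a 64-bit port.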
> Granted, on the 8-bit systems you often had to code in assembly, both
> for speed and size reasons (and because compilers for such systems
> weren't good enough), but when you get to UNIX the whole point was to
> avoid assembly in the first place [1]. Therefore, you are writing in a
> higher-level, more portable language, and then it becomes possible to
> write code that will run across platforms. Heck, I've written a program
> that has compiled across several different UNIX platforms (SGI, Linux on
> the x86, Linux on the DEC Alpha, OpenBSD, FreeBSD) without problems [2],
> and you'll notice that there is at least one 64-bit architecture listed
> there. The same code was successfully compiled (with one line of code
> changed, plus a few other lines to get the correct header files loaded)
> under Microsoft Windows. Okay, it may not have been optimum code under
> Windows, but it still ran with a minimum of changes or fuss.
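The "few other lines to get the correct header files loaded" are usually
just preprocessor guards. A minimal sketch of the trick (the actual code
isn't shown in the message, and the sockets headers here are an assumed
example):

    #ifdef _WIN32
    #  include <winsock2.h>        /* Windows supplies sockets here */
    #else
    #  include <sys/types.h>       /* traditional UNIX headers */
    #  include <sys/socket.h>
    #  include <unistd.h>
    #endif

Everything below the guards stays common to every platform.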
This is true if you've got a set of software tools that really exploits
the hardware effectively. Few do that. The result may be that you have a
bit of software that works well on one platform but not so well on the
others, and perhaps very badly on one or two.
> -spc (Whose first major program in C I ported (with real minimal
> changes) between OS/2, MS-DOS, AmigaOS and UNIX ... )
> [1] Unless speed was critical (remember, I'm talking about applications
>     under UNIX, which shouldn't hit the hardware at all), at which point
>     you find the bottleneck, rewrite that portion in assembly, and keep
>     the original C around in case you have to port (or someone has to
>     port) the code to a new chip. The rest of the application can
>     remain in C.
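One common shape for that advice (a sketch only; HAVE_ASM_SUM and
sum_asm are hypothetical names, not anything from the thread) keeps the
tuned assembly behind a build-time switch with the reference C right
beside it:

    /* checksum.c -- portable C with an optional hand-tuned fast path */
    unsigned long sum(const unsigned char *p, unsigned long n)
    {
    #ifdef HAVE_ASM_SUM
        /* hand-written assembly, selected when building for the tuned CPU */
        extern unsigned long sum_asm(const unsigned char *, unsigned long);
        return sum_asm(p, n);
    #else
        /* reference C, kept so the next port works before it is tuned */
        unsigned long s = 0;
        while (n--)
            s += *p++;
        return s;
    #endif
    }

A new port simply builds without HAVE_ASM_SUM and runs correct but
slower until somebody profiles it.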
> [2] Okay, one problem---the DEC Alpha port crashed, but it was tracked
>     down to a bug in the C library call memchr().
My neighbor just scrapped two brand-new Alpha stations because they
wouldn't run the 64-bit UNIX version that their vendor, now out of
business, built them for. After several months of effort he couldn't even
get anyone to buy the boxes for the box/PSU combo. He put them in the
dumpster, still in their plastic wrap. I wish he'd offered me the
enclosures with the PSUs ...