On Sun, Nov 18, 2012 at 08:57:18PM -0500, Mouse wrote:
>>> Are you saying Linux is more scalable and fault-tolerant than VMS?
>> [Linux] scales from tiny embedded machines (the smallest I use daily
>> has IIRC 32 MB of memory and a 50 MHz PPC CPU)
> I went through my larval phase on VMS on an 11/780, and I think it had
> something like 60M of _disk_ (well, for my first year or two on it;
> after that, I think we got two 300M washing-machine drives). If you
> think 32M of RAM is "tiny", you've already succumbed to serious
> bloat-tolerance. (Perhaps the best videogame I've ever played occupied
> a total of 24K - that's 24576 bytes - of ROM. Including the code and
> all the graphics constants.)
The first machine I ever played around with had a bit under 32 KB of RAM
and ran on a U880 (the eastern-bloc copy of the Z80). The first machine I
bought with my own money had a 120 MHz CPU, 32 MB of RAM, and a 1 GB disk,
and was a serious powerhouse at the time.
>> The art _has_ moved on from when VMS was king of scalability.
> "Moved on" to the point that thinks 32M of RAM is "tiny"?
Well, these days, for anything that is not some kind of embedded system? Yes.
Sic transit gloria mundi.
Well ... yes, there is bloat. Part of that comes from "nobody"[0] writing
assembly language anymore, using high-level and scripting languages instead.
But we are also _doing_ a lot more with the machines these days; things that
were flat-out impossible due to a serious lack of computrons 20-30 years
ago are now just normal.
I'm looking forward to you demonstrating realtime 1080p video decoding
on an 11/780 ;-)
Kind regards,
Alex.
[0] In quotes because I know that there is still asm in use, both for very
low-level kernel/driver work and for special optimizations.
--
"Opportunity is missed by most people because it is dressed in overalls and
looks like work." -- Thomas A. Edison