On Mon, 10 Jun 2013, Ian King wrote:
On 6/8/13 7:23 AM, "Mr Ian Primus" <ian_primus at yahoo.com> wrote:
Linux runs fine on older hardware. Sure, it's a bit sluggish - ssh, for
instance, is pretty processor intensive, and is slow to initiate on a
machine this old. This is to be expected. Compiles are also very slow -
building the kernel took well over a day. But, by and large, most stuff
runs OK, and the system is very usable. Definitely would not want to
run X, however...
My first Linux machine was a 386SX, and I ran X on it. I don't recall
how much working store I had, but I can guarantee you it wasn't much.
I used the FVWM (Feeble Virtual Window Manager) and I thought it was
totally cool. Granted, I wasn't trying to do much with it - I probably
had two or three terminal windows, a clock and xeyes running. But I
also had rsh to a local college and I started playing with networking
applications and utilities.
Yes, kernel builds were slow. But I had so much fun!
The modern distributions are jam-packed with lots of stuff that someone
apparently thinks "everyone" needs and, as a result, they are HUGE. On
one hand, they're trying to meet the expectations of the Windows crowd,
but IMHO they go too far and load up with lots and lots of *stuff* that
I for one never use. There was a message on this thread that reminded
me of installing the "base" and "dev" packages and calling it good,
installing other tools as I needed them. After all, how hard is it to
un-tar and make?
I think that 386SX may have been a 4MB machine. Slow, but it got there.
A 386DX-20 and 4MB would run X with FVWM and I used just that combination
for several years. With 8MB and later 16MB, Netscape worked just fine on a
386DX under X. For comparison purposes, Netscape (the 16-bit version)
also worked fine on Windows 3.1 on a 286 with 4 to 8MB of RAM. This of
course assumes you have a swap partition or paging file.
The issue of modern distributions using so many resources comes down to
several factors. At a lower level, you have userspace developers today
writing very, very inefficient code. In some cases they overuse things
like malloc() and free(), which are more CPU intensive but can
potentially use less memory; in other cases developers use far too many
static buffers, which, while using far less CPU, pre-allocate and eat
up a lot more memory. Those habits, combined with just far less
efficient code in general, are the worst offenders when it comes to CPU
and memory usage.
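
To make that tradeoff concrete, here is a minimal sketch (the function
names and the 64KB figure are made up for illustration, not taken from
any real package) of the two extremes - a scratch buffer grabbed with
malloc()/free() on every call versus a static buffer that is reserved
for the entire life of the process:

    #include <stdlib.h>
    #include <string.h>

    /* Heap version: pays for malloc()/free() on every call (more
     * CPU), but the memory is only held while the function runs. */
    void process_heap(const char *src, size_t len)
    {
        char *buf = malloc(len);
        if (buf == NULL)
            return;             /* allocation can fail */
        memcpy(buf, src, len);
        /* ... work on buf ... */
        free(buf);
    }

    /* Static version: no allocator overhead (less CPU), but the 64KB
     * is reserved from start to exit, whether it is used or not. */
    #define SCRATCH_SIZE (64 * 1024)
    static char scratch[SCRATCH_SIZE];

    void process_static(const char *src, size_t len)
    {
        if (len > SCRATCH_SIZE)
            return;             /* fixed buffer caps the input size */
        memcpy(scratch, src, len);
        /* ... work on scratch ... */
    }

On a 4MB machine, a few dozen utilities each parking a buffer like that
adds up quickly; on a machine with gigabytes of memory nobody notices,
which is how the habit spreads.
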
As for disk space, a lot of that is due to the distribution
maintainers. Software packages today are not compiled to be
space/memory/CPU efficient, but rather to work on the largest subset of
what the package maintainers /personally/ consider to be "modern" CPUs
(which, right now, means multi-core AMD and Intel CPUs). Those packages
eat up a lot more disk space (and often memory), and it is now the norm
for most distributions to build everything with debugging symbols. The
thinking is that debug symbols somehow help debug large programs that
crash; however, they really hurt the performance of most software, and
they are very, very silly for basic UNIX-like command line utilities.
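
As a rough illustration (the flags below are generic examples, not any
particular distribution's actual build recipe), the same trivial C
utility can be built the way a distribution typically builds it or the
way you would build it for a small machine; the source does not change,
only the build does:

    /* hello.c - stand-in for a basic command line utility.
     *
     * Distribution-style build (illustrative): optimized for speed,
     * debugging symbols kept in the binary, usually with an -march=
     * or -mtune= option aimed at whatever the maintainers consider a
     * modern CPU:
     *
     *     cc -O2 -g -o hello hello.c
     *
     * Space-conscious build for an old or small machine: optimize for
     * size and strip the symbols afterwards:
     *
     *     cc -Os -o hello hello.c && strip hello
     *
     * The stripped, size-optimized binary is noticeably smaller on
     * disk; how much smaller depends on the compiler and grows with
     * the size of the program.
     */
    #include <stdio.h>

    int main(void)
    {
        puts("hello");
        return 0;
    }
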
I've both developed and maintained software and packaged all sorts of
software for all sorts of distributions, including those that use the
RPM (Red Hat) and DEB (Debian) package formats (and others), so I have
some experience in seeing this stuff from all sides. I found that any
time I tried to push for more efficient code, I got pushback from other
developers and package maintainers, because "fixing" things and making
them more CPU/memory/space efficient was considered a /far/ less
important task than rushing to push out the next bleeding-edge version
of something.
...and don't even get me started on some of the Debian maintainers adding
buggy third-party patches to some of the software they distribute...
Dealing with upstream bug reports from users hitting bugs and crashes
introduced by such patches wastes a lot of developer time. Your average
package maintainer does /not/ know the software better than the
upstream developers, and those who try to be clever and "fix" things
that don't really need fixing tend to create even more work for the
upstream developers when users begin complaining.
I'm slowly coming to the opinion that a first-year C developer should
be /forced/ to develop on a CPU- and memory-constrained platform, such
as a 386 with 4-8MB of memory rather than a modern multi-core CPU with
multiple gigabytes of memory, so that they learn first hand how to
write more efficient C code. I wonder how many of the current userspace
developers and package maintainers have ever even touched a 386-based
machine, let alone something even more resource limited?
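
Short of handing everyone real hardware, one rough way to approximate
the constraint (a sketch assuming a Linux/POSIX system; the 8MB figure
and the wrapper are my own illustration, not something from this
thread) is to have a test harness cap its own address space with
setrlimit() before exercising the code:

    #include <stdio.h>
    #include <stdlib.h>
    #include <sys/resource.h>

    /* Cap this process's address space at roughly what a small 386
     * offered, so careless allocations fail here the same way they
     * would have failed there. On a modern system with large shared
     * libraries the cap may need to be somewhat higher than 8MB just
     * to let the process run at all. */
    static int limit_memory(size_t bytes)
    {
        struct rlimit rl;

        rl.rlim_cur = bytes;
        rl.rlim_max = bytes;
        return setrlimit(RLIMIT_AS, &rl);
    }

    int main(void)
    {
        if (limit_memory(8 * 1024 * 1024) != 0) {
            perror("setrlimit");
            return 1;
        }

        /* Anything beyond the cap is now refused. */
        void *p = malloc(64 * 1024 * 1024);
        if (p == NULL)
            puts("64MB allocation refused - the limit is working");
        else
            free(p);

        return 0;
    }
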