On Thu, Jun 11, 2009 at 3:20 PM, Kirn Gill <segin2005 at gmail.com> wrote:
>> *Most* "techies" now know nothing except the x86-32 PC and Windows.
>> DOS is a forgotten mystery; Windows 9x is historical and unknown.
>> PCs have always been 32-bit and the 64-bit transition scares them.
>> They have never seen or used any networking protocol other than
>> TCP/IP (and that is a mystery except to specialists). They don't
>> know how to use the command prompt and increasingly they have never
>> used floppy disks.
This is starting to sound like those endless lists of "things today's
high school graduates have always/never seen". I'm not really arguing
with the above; it's just an observation that as we all get older, the
sort of tech we grew up on, no matter when we got started or how old
we were then, eventually becomes lost to history and newer stuff comes
along. In my case, I started with 1970s microcomputers when Bill Gates
was still selling paper tape, and quickly moved on to 16-bit and 32-bit
minicomputers for a living.
> I'm a part of that generation. Windows 3.0 was released (1990-05-22) a
> mere 4 days after I was born.
>
> I guess I am lucky; I was introduced to computers when I was 4; I know
> of the exotics, those that rivaled the PC, and those that came before...
> I know of their software. I might not have used all of it, but I've tried...
That is more exposure than most of the people I work with every day...
it's a mostly Java operation here, with lots of .NET, and there are only
two of us in the building who have ever done any development or system
administration on anything older than a few years.
>> These people think it's efficient to run a copy of Windows 2003 on
>> a server (which needs a couple of gig of RAM to work well) and then
>> run multiple VMs on that containing copies of Windows 2003 or other
>> flavours of Windows...
In our case, it's multiple VMs running Linux, but for at least one
corner of the room, the rest applies.
>> They think virtualisation is new and clever
>> and it doesn't occur to them that not only is this wasteful of
>> resources, but it's a nightmare to keep all those copies patched
>> and current.
Patching 10 virtual machines is no different from patching 10 physical
boxes. I agree that there is some efficiency lost with the current
approach to servers, but since present-day OSes and applications are mostly
terrible at sharing CPUs when you start to have 4 or 8 or 16 CPUs per
box, I'd argue that you get more efficient CPU utilization by breaking
up the CPUs into several smaller chunks. I/O bandwidth is another
thing entirely (as is off-cache memory bandwidth), but if you have one
large server and you try to double the RAM and double the CPU, you
won't get close to double the amount of work done.
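
To put a rough number on that last claim, here's a back-of-the-envelope
sketch (plain Amdahl's law, with a made-up 10% serial fraction standing
in for lock and memory-bus contention; nothing here is measured from a
real box) of why doubling the CPUs in one big box gets nowhere near
double the work done:

  # Amdahl's law: speedup(n) = 1 / (serial + (1 - serial) / n)
  # The 10% serial fraction is an illustrative guess, not a measurement.
  def speedup(cpus, serial_fraction=0.10):
      return 1.0 / (serial_fraction + (1.0 - serial_fraction) / cpus)

  for n in (1, 2, 4, 8, 16):
      print("%2d CPUs -> %.2fx the throughput of one CPU" % (n, speedup(n)))
  # 8 CPUs -> ~4.71x, 16 CPUs -> ~6.40x; going from 8 to 16 buys ~1.4x, not 2x.

Carve the same box into a handful of smaller guests and each one runs at
the better-behaved end of that curve.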
"couple of gigs"? When I think of servers,
that's what I think for
"required storage". 512MB of RAM should be enough for *everybody*,
including servers.
Ok, 1GB RAM for those servers that host multi-luser make-believe games.
You underestimate how quickly badly-written applications grab RAM as
they fail to scale up gracefully.
_Should_? Yes. _Is enough_? Not hardly. Programming resources are
so expensive compared to hardware that if someone says it will take 10
hours to change the software, test it, and deploy it to make it more
memory-efficient, I can just about guarantee that the manager will
approve doubling the memory in the machine before approving the labor
to make the memory upgrade unnecessary. There's nothing new in this
over the past several decades. Only the details of how many dollars
buy how many units of memory change.
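
Just to make that arithmetic concrete (every figure below is a made-up
round number for illustration, not a quote from anywhere):

  # Hypothetical 2009-ish figures, purely for illustration.
  loaded_rate = 100      # $/hour for a developer, fully loaded (assumption)
  hours_to_fix = 10      # the estimate from the paragraph above
  labor_cost = loaded_rate * hours_to_fix      # $1000, plus schedule risk

  price_per_gb = 30      # rough guess at server DIMM pricing (assumption)
  extra_gb = 8           # e.g. take an 8GB box to 16GB
  hardware_cost = price_per_gb * extra_gb      # $240, fitted in an afternoon

  print("labor $%d vs. RAM $%d" % (labor_cost, hardware_cost))

The manager's spreadsheet wins that argument every time.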
-ethan