On 9 April 2013 00:56, MG <marcogb at xs4all.nl> wrote:
> On 9-apr-2013 1:06, Dave McGuire wrote:
>> Alphas were never hard to obtain.
>
> Okay, AlphaPCs, AXPpci and the like were offered to 'lower end' and
> consumer markets. But, like I said, for what were they actually
> useful, and why did nobody end up getting them? I mean, since they
> weren't hard to obtain, wouldn't that be worse, then?
>
>> Pick up the phone and order one, and it shows up.
>
> I regret to have to inform you that my phone isn't capable of time
> traveling yet...
>
>> I did it myself, time and time again, through the 90s.
>
> I did see them on offer in computer magazines. But, again, the
> aforementioned variety.
>
>> From tiny desktops to several-hundred-kilobuck AltaVista-class
>> machines with 8GB of RAM (in 1994!), they were all just a phone
>> call away.
>
> Also a few additional loans and mortgages... (Especially the more
> useful and interesting "AlphaStation"/"AlphaServer" systems.)
>
>> As far as software... you got UNIX and a C compiler, and the net
>> provided the rest.
>
> Have you recently tried to build Tru64 UNIX pkgsrc offerings? (For
> instance.) That convenience, although I can't retroactively check
> that, is hardly there... (Or, certainly not anymore.)
>
> Digital/Tru64 UNIX saw quite a bit of usage, especially here. Many
> companies and government agencies ran VMS and Tru64 UNIX, but it's
> sadly all dead now and gone to Windows and Linux.
>
>> Life was good. Nobody in their right mind ran Windows on Alphas
>
> Guess what those affordable Alphas were only capable of running...
> (Hint: It starts with a /W/.)
>
>> [T]he "getting work done" part of the networking world never wanted
>> to play in that dirt.
>
> You are forgetting about graphics and post-production now. Look up
> things like SOFTIMAGE|3D, mental ray and LightWave 3D, amongst other
> things. Those enjoyed Windows AXP ports; Tru64 UNIX (and VMS,
> needless to say) never did...
>
>> They were expensive, but no more so than their peers.
>
> Well, guess what happened with their peers as well?

While both of you are getting increasingly inflamed and inflammatory,
the actual point to this debate - if there is one - seems to be
getting lost in the noise.
I suggest that you both retreat and attempt to clarify your positions.
Dave McG, I don't think MG is actually trolling here. He has, ISTM, a
genuine question, which, if I understand it correctly, is "what is so
special about mainframes?" And while you are getting increasingly
agitated and shouting the odds - and hurling some abuse, too - *you
are not actually answering this question.* That, ISTM, is why MG is
continuing to bait you.
"MG" - you seem now to be comparing mainframes to DEC OpenVMS boxes,
is that correct?
If so... why? Are you asking why mainframes are still around while
DEC's OpenVMS offerings are long gone? Or are you pointing out that,
toward the end, OpenVMS boxes morphed into something not unlike
high-end PCs and asking why mainframes have not done the same?
If I can attempt to answer this...
VAXes and their kin - even big ones - were not true mainframes.
Mainframes are a different /type/ of computer.
There used to be 3 types of computer: mainframes, minicomputers and
microcomputers. At the high end, micros blended into the specialised
realm of "workstations". (Obviously these are sweeping generalisations
here.)
Minis have essentially ceased to exist. So have workstations, inasmuch
as the difference between micros and workstations was one of scale and
spec: workstations were high-power, graphical computers running a
multitasking OS, aimed at presenting a rich graphical environment for
a single user. All modern micros are essentially workstations; there
are no workstations any more.
The difference between them was this:
* micros ran off a microprocessor, a single-chip CPU, and were
essentially designed for a single, interactive user
* minis ran off CPUs built out of discrete parts - pre-microchip - and
were designed to serve a small number of interactive users on
terminals
* mainframes predate the whole notion of interactive users and aren't
really designed to serve interactive, logged-in users at all; instead,
they were designed and very heavily optimised for running batch jobs
with great efficiency and reliability.
Minis and high-end micros in the form of what are now called "servers"
have essentially merged. They're not real minis any more - most are
just big PCs, i.e., micros; a few, such as IBM's and Oracle's ranges,
have evolved out of proprietary RISC workstations, but apart from the
different CPU, they are pretty much PCs. They're micros, but with
their framebuffers and mouse and keyboard ports shrunk to vestigiality
and the emphasis on providing services over the network. They still
run microprocessors, though - all the old lines of pre-microprocessor
minis are dead. The closest thing is IBM i, AKA AS/400, but today,
that's just a different OS running on an IBM System P, i.e., a POWER
Server, i.e., an RS/6000. The 2 lines converged years ago.
Mainframes are a conceptually different type of computer. They don't
have keyboards and mice, obviously, unlike workstations; they don't
even support conventional terminals, i.e. dumb terminals running over
serial ports. Mainframe terminals were computers in their own right,
handling input & redrawing the screen locally - they batched up the
users' input and sent it over special cabling systems to the mainframe
in chunks. That is historical now, but the point is, mainframes are
not interactive computers, and that's why they've survived and that's
also why they never mutated into workstations as the VAX and Alpha
did.
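To make that concrete, here is a tiny, purely illustrative Python
sketch of my own (not based on any real 3270 software): a serial-style
dumb terminal interrupts the host on every keystroke, while a
block-mode terminal edits the screen locally and ships the finished
fields to the host in one chunk.

    # Toy illustration: per-keystroke vs block-mode terminal traffic.

    def dumb_terminal_session(host_queue, keystrokes):
        # A serial dumb terminal sends every keystroke to the host,
        # which must echo and process each one individually.
        for key in keystrokes:
            host_queue.append(("keystroke", key))

    def block_mode_session(host_queue, fields):
        # A block-mode terminal lets the user fill in a whole screen
        # locally, then transmits the completed fields in one chunk.
        host_queue.append(("screenful", dict(fields)))

    host_queue = []
    dumb_terminal_session(host_queue, "LIST FILES")
    block_mode_session(host_queue, [("command", "LIST FILES")])

    print(sum(1 for kind, _ in host_queue if kind == "keystroke"))  # 10 host interrupts
    print(sum(1 for kind, _ in host_queue if kind == "screenful"))  # 1 chunk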
What is so peculiar about them?
Various things.
* Specialist OSs, so that, for instance, unlike with PC or Unix
virtualisation, the hypervisor OS is nothing but a hypervisor, while
the guest often depends on the hypervisor to function - some guest
OSs don't even have things like networking or filesystems, because
the host provides them. Sounds weird, but it's tens to hundreds of
times more efficient than the PC model.
* Everything is offloaded. These systems have multiple processors,
sure, like a high-end server, but they have lots of different types
of processor. Some do computation, some do encryption, some manage
databases, some just do indexing, some just handle various different
types of I/O. PC servers cannot even come close to this; the PC
world's nearest equivalents are things like machines with TCP/IP
offload engines in their network cards, stacks of dozens of GPGPU
cards for number-crunching, and both NAS and SAN storage in the same
case, talking iSCSI to some devices, Fibre Channel to others, SMB to
others, NFS to others - all inside a single system, using whatever is
most appropriate for each workload. Smart, dedicated sub-computers
run each type of storage, so that the "main" processor doesn't do
/any/ of this for itself; it /only/ runs the all-important batch
workloads. (There is a toy sketch of this division of labour just
after this list.)
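As a loose software analogy - mine, not anything from the post above,
and nothing like real mainframe channel I/O - the division of labour
looks roughly like this: the "central" code only dispatches work and
consumes finished results, while dedicated worker pools stand in for
the storage and network sub-computers. A minimal Python sketch:

    # Loose analogy only: the "central processor" code never drives the
    # devices itself; dedicated pools stand in for I/O sub-computers.
    from concurrent.futures import ThreadPoolExecutor
    import time

    storage_channel = ThreadPoolExecutor(max_workers=4)  # "storage sub-computer"
    network_channel = ThreadPoolExecutor(max_workers=4)  # "network sub-computer"

    def read_record(key):
        time.sleep(0.01)              # pretend disk latency, off the main path
        return f"record-{key}"

    def send_result(record):
        time.sleep(0.01)              # pretend network latency
        return f"sent {record}"

    def batch_job(keys):
        # The central code dispatches each request to a channel pool and
        # only collects completed results, rather than touching the I/O.
        reads = [storage_channel.submit(read_record, k) for k in keys]
        return [network_channel.submit(send_result, r.result()).result()
                for r in reads]

    print(batch_job(range(3)))

The point of the analogy is only the shape: the batch logic stays on
the "main" processor; everything else is somebody else's job.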
The result is scalability and reliability that no network of x86 boxes
running VMware can even get close to touching yet. Machines which have
/no/ single point of failure - multiple processors, memory buses,
system buses, disk controllers, network controllers, any of which can
be started and stopped independently, so that bits of a machine can be
shut down and replaced or upgraded while the rest of the machine is
still running at 100% load, flat out, handling mission-critical
workloads.
Imagine a whole server room - hell, a whole datacentre - with
hundreds of independent servers - some running Windows, some Linux,
some Solaris, some NetApp filers, some dedicated SQL servers - all
collapsed into a single rack, managed as a single instance, with 100%
compatibility, and with all the components, from the processor chips
to the disk drives to the network cards to all the OSs, coming from a
single vendor, all optimised for handling big server workloads with
/better than/ 99.999% availability.
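For scale, "five nines" is a very tight budget. A quick
back-of-envelope calculation (mine, just arithmetic, not from anyone's
SLA) shows how little downtime per year each availability level
allows:

    # Downtime budget implied by an availability figure.
    # 99.999% ("five nines") leaves roughly 5.3 minutes per year.
    MINUTES_PER_YEAR = 365.25 * 24 * 60

    for label, availability in [("three nines", 0.999),
                                ("four nines", 0.9999),
                                ("five nines", 0.99999)]:
        downtime = MINUTES_PER_YEAR * (1 - availability)
        print(f"{label} ({availability:.3%}): "
              f"{downtime:.1f} minutes of downtime per year")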
That is why people still buy (or more to the point, rent) mainframes.
Because when it comes to the point when you are going to have to spec
an entire datacentre, hire a whole team of experts to integrate it
all, and spend a few million a year running it, then in some cases, it
makes good financial sense to just lease a single box from IBM which
does all of this in one fridge-sized cabinet that sits there and just
works. No integration, no management, software compatibility that goes
back whole decades before the 8086 was invented in 1978 or
whatever, all guaranteed and backed up by the most solid, high-quality
SLA that has ever existed in the IT industry.
If your workloads start small and grow, and are based around PC
software running on x86, this all sounds irrelevant. It's cheaper to
use a rack full of cheap x86 kit. If you need lots of racks, these
days, buy the time off some cloud vendor.
If you are an international company with many hundreds of millions of
customers, and everything you run is bespoke and hand-coded for you,
and you don't give a flying toss what it runs on but it *ABSOLUTELY
MUST* stay running for years on end, then actually, a mainframe makes
much more sense.
What IBM did in the last decade or so is realise that these honking
great boxes can run Linux in one of their virtualisation partitions
just as well as they can run weird proprietary IBM OSs. And if you run
Linux in that VM, then you get all the PC-type stuff that mainframes
don't do terribly well for free - TCP/IP, HTTP, all that sort of
stuff. But the scalability of a mainframe means that whereas on a very
well-specced x86 server, you can run dozens of VMs, maybe even a
hundred plus if you set it up very carefully and throw terabytes of
RAM at it, on a bog-standard low-end mainframe, you can run tens of
*thousands* of Linux instances all at once - because running lots and
lots of similar workloads side-by-side and keeping them all responsive
is what mainframes are built to do.
I am not talking about a system that is 5× or 10× more scalable. I'm
talking about something 50× or 100× more scalable. Not supporting
hundreds of users per box, but millions of users per box.
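A rough consistency check on that claim, using only the round numbers
quoted above (around a hundred VMs on a big x86 box versus tens of
thousands of Linux instances on a low-end mainframe), lands in the
same 50× to 100×-plus ballpark:

    # Back-of-envelope check using the round figures from the text above;
    # these are the post's numbers, not benchmark results.
    x86_vms_per_box = 100          # "maybe even a hundred plus"
    mainframe_instances = 10_000   # low end of "tens of thousands"

    print(f"Density ratio: roughly {mainframe_instances // x86_vms_per_box}x")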
Sure, only on certain specialised workloads, not on pure CPU-intensive
stuff - but for finance and the like, stuff for which there is code
out there that has been in production since the 1960s, a level of
maturity that is literally impossible for x86 or Unix products.
So yes, huge, relic of a bygone age, cost millions, but absolutely
perfect for certain workloads, like a financial reconciliation app
that handles billions of dollars' worth of transactions, all day,
every day, and which never ever goes down at all ever.
But if you want to serve files on a LAN, or run a thousand instances
of MariaDB, Perl and Apache running some JSON queries and rendering
PHP, no, it's a stupid, ruinously expensive way to do that.
--
Liam Proven • Profile: http://lproven.livejournal.com/profile
Email: lproven at cix.co.uk • GMail/G+/Twitter/Flickr/Facebook: lproven
MSN: lproven at hotmail.com • Skype/AIM/Yahoo/LinkedIn: liamproven
Tel: +44 20-8685-0498 • Cell: +44 7939-087884