On 9 April 2013 02:40, MG <marcogb at xs4all.nl> wrote:
> On 9-apr-2013 2:35, Liam Proven wrote:
>> I suggest that you both retreat and attempt to clarify your positions.
>
> Feel free to point out where it has been vague and inflammatory,
> because I don't at all feel addressed by such accusations.
Life's too short. It's 3AM here & I'm just doing this while waiting
for a VM to update. You know how it is.
"MG"
- you seem now to be comparing mainframes to DEC OpenVMS boxes,
is that correct?
To the degree that they're both niche platforms, and nowadays more
than they ever were before, yes. Not architecturally.
OK.
> As I already said, I have never even gotten the opportunity to use
> a mainframe, other than perhaps what one can do through emulation
> (e.g. through SimH) with various historical offerings.
Well, TBH, nor I. I suppose the only difference is that I've done a
lot of research & reading on big iron - partly for a white paper I
wrote about them for IBM years & years ago. (I am more of a tech
writer than a techie these days; you can find a booklet about
virtualization by me on Amazon, if you're curious.)
> The closest thing to it, the 'quasi-mainframe' as I dared to call
> it, was the public access AS/400 of Rechenzentrum Kreuznach. The
> only "i" I have ever seen and used (only as an unprivileged user
> at that.)
I've done a *tiny* bit of maintenance/admin work on AS/400 and System/36.
>> Are you asking why mainframes are still around while DEC's OpenVMS
>> offerings are long gone?
> Because VMS is supposedly also alive, like the mainframe. But,
> at least many VMS people are a bit more honest with themselves
> on average and show a bit more self-criticism than the average
> IBM (and especially mainframe) type I've been coming across in
> the last few years.
VMS? If it ain't broke, don't fix it.
Also, industry-leading clustering. Even now, no other OS can come
remotely close. Leading to ridiculous uptimes and so on: take
individual nodes of a cluster offline, upgrade them, rejoin, and thus
upgrade or even replace an entire server farm with 100% uptime.
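In toy pseudocode terms, the trick looks something like this (a sketch
only, with invented node names and versions - real VMScluster upgrades
are driven from DCL command procedures, not Python):

    # Toy sketch of a rolling cluster upgrade: at every moment at
    # least one node is serving, so the cluster never goes down.
    cluster = {"NODE1": "v7.3", "NODE2": "v7.3", "NODE3": "v7.3"}

    def in_service(cluster):
        """Nodes currently accepting work."""
        return [n for n, v in cluster.items() if v is not None]

    def rolling_upgrade(cluster, new_version):
        for node in list(cluster):
            cluster[node] = None           # take one node offline
            assert in_service(cluster), "cluster must stay up throughout"
            cluster[node] = new_version    # upgrade it, rejoin the cluster
        return cluster

    print(rolling_upgrade(cluster, "v8.4"))  # 100% uptime end to end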
>> Or are you pointing out that, toward the end, OpenVMS boxes morphed
>> into something not unlike high-end PCs and asking why mainframes
>> have not done the same?
> No, and I'm not sure how you arrived at that from everything said so
> far. In fact, this was never (from the beginning) VMS' strong
> point.
I quote:

"I've seen smaller form factor IBM mainframes before, like recently
in a YouTube video.

Also, whatever prevented IBM from creating more and even more
compact mainframes? Like I asked before: What the hell is it
with this disturbing 'elitism', 'mainframe royalty'? That of
all people, the most republican bunch on this planet (Americans)
are defending this goes beyond me..."
That seems to me to be essentially asking why we never got miniature,
PC-size mainframes.
There /were/ PC-size mainframes, back in the day.
http://www.ricomputermuseum.org/Home/equipment/ibm-5364-s36-pc
But in the end, IBM seems to have realised that its revenues came from
the big expensive boxes and stopped trying to make little cheap ones.
Smart move.
>> VAXes and their kin - even big ones - were not true mainframes.
> I guess not, but then, the definition of "mainframe" is not one
> I care tremendously for. I never lost sleep over whether or not
> they were considered a "mainframe".
At the end of the day, it's more a functional than a systematic
definition, I think.
>> There used to be 3 types of computer: mainframes, minicomputers and
>> microcomputers. At the high end, micros blended into the specialised
>> realm of "workstations". (Obviously these are sweeping generalisations
>> here.)
> This is, of course, a bit of an IBM-dictated 'taxonomy'. I mean,
> DEC didn't even call its systems "computers" literally at first.
> (Think of "PDP".)
>
> Would that mean a "PDP-11" isn't a computer, therefore...?
OK, I get the hint. I had no real idea of your level of historical
knowledge; you've been too busy sparring with Mr McGuire.
> Yes, I'm aware of the notion and idea of "time sharing". Very ancient
> stuff though, all in all, to be honest.
Not really, no. In essence, modern distributed virtualised datacentres
are reinventing it, just breathtakingly inelegantly and inefficiently.
> I may not have used a mainframe, as I said, but I'm not /that/
> unaware of how they function.
OK.
> Many of these characteristics, like with regard to dumb terminals,
> are also true for "i" though. At least, I can't think of an "i"
> (or, AS/400) that would be operated via a direct graphics head/frame-
> buffer interface with a 'keyboard & mouse'; can you?
Well, AS/400 was the last ever new mini, essentially, and it was
designed to fit into the existing "ecosystem" of IBM kit: peripherals,
terminals, cabling, etc. So it's a weird sort of mini that, instead of
using RS-232 or Ethernet, used IBM 3250 or 5250 (or whatever) terminals
via SDLC over Twinax or Token Ring, etc.
>> That is historical now, but the point is, mainframes are not
>> interactive computers, and that's why they've survived and that's
>> also why they never mutated into workstations as the VAX and Alpha
>> did.
> It's funny you should mention that, but they were rather poor for
> those purposes overall (eventually in the long run).
I am not sure what "they" and "those purposes" refer to here. I am not
going to guess.
The point being that after a while flirting with newfangled ideas like
interactive sessions on terminals, mainframes have retreated back to
their core strengths, as it were.
> I'm aware of some of these concepts. Say, isn't this what the FreeBSD
> "jails" are slowly, but surely, trying to mimic a bit?
In a way, yes. Conceptually different, but in some ways broadly
comparable in effect.
But imagine someone building a Linux distro that ran under VMware and
had no disk drivers, no filesystem, no display - it /only/ ran by
storing files directly in the VMware filesystem, was only accessible
via a remote session over the network, etc. You could in principle
really pare it right down by not actually having any code to talk to
keyboards, mice, displays, any I/O at all except the network, and so
on.
Some of the IBM guest OSs are like that.
Some are more stripped down still and are not OSs as such at all -
just, say, an RDBMS or a transaction monitor that runs directly on a
bare VM.
Whereas with the x86 way of doing things, you have a whole OS, gigs of
it, running VMs, inside which it software-emulates a whole PC, and on
that runs /another/ whole OS, emulating files in its emulated
filesystem on its emulated hard disk on its emulated hard disk
controller, displaying pixels on its emulated screen on its emulated
graphics card on an emulated PCI bus attached to an emulated CPU -
it's horrifically inelegant: layers and layers and layers of
duplicated code, 90% of it completely unnecessary, running imaginary
tasks on imaginary hardware, because every layer thinks it's on bare
metal.
And yes, I know about ESXi and so on. VMware are scam artists. The old
ESX Server ran a whole copy of Red Hat Linux; their precious VMkernel
was just a Linux kernel module. Now they've cut that right down and
essentially boot the kernel off a bootloader, but it's still an OS.
Compare with IBM System p, where the firmware can create and destroy
VMs. The (equivalent of a) BIOS does virtualisation... and the OS,
too, if you want it. In several forms.
The PC version is a bodge on a bodge on a bodge.
>> * Everything is offloaded. These systems have multiple processors,
>> sure, like a high-end server, but they have lots of different types of
>> processor. Some do computation, some do encryption, some manage
>> databases, some just do indexing, some just handle various different
>> types of I/O. PC servers cannot even come close to this; the PC
>> world's nearest efforts are things like machines with TCP/IP offload
>> engines in their network cards, stacks of dozens of GPGPU cards for
>> number crunching, and *in the same case as the PC* both NAS storage
>> and SAN storage, talking iSCSI to some devices, Fibrechannel to
>> others, SMB to others, NFS to others - all inside a single system,
>> using whatever is more appropriate for each workload. Smart dedicated
>> sub-computers running each type of storage, so that the "main"
>> processor doesn't do /any/ of this for itself; it /only/ runs the
>> all-important batch workloads.
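(To make the shape of that concrete: a toy model, nothing like real
channel programs, in which the "main" engine only describes work and
dedicated workers own each class of I/O. The workload names are
invented.)

    # Toy model of offload: the main CPU only enqueues work
    # descriptors; dedicated sub-processors (threads here) do the work.
    import queue, threading

    channels = {kind: queue.Queue()
                for kind in ("disk_io", "network_io", "crypto")}

    def channel_worker(kind, q):
        while True:
            job = q.get()
            # ... a real channel processor would drive the device here ...
            print(f"{kind} engine handled: {job}")
            q.task_done()

    for kind, q in channels.items():
        threading.Thread(target=channel_worker, args=(kind, q),
                         daemon=True).start()

    # The main CPU never touches the I/O itself - it just describes it:
    channels["disk_io"].put("read dataset PAYROLL.MASTER")
    channels["crypto"].put("verify MAC on message 42")
    for q in channels.values():
        q.join()   # the batch workload would carry on meanwhile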
> Like I said, it's a beautiful-sounding system; I never doubted that.
> My main gripe is the 'elitism' that IBM seems to instill and (as
> someone else also admitted) seems to artificially keep alive.
[Shrug] It's a niche and a marginal one. It will probably die in time.
For now, they are preserving it by killing off the once-thriving
compatible-hardware industry - there used to be loads of
IBM-compatible kit and software. All gone now except for the original.
Also, by making it expensive, because what it is good at is something
only a certain type of business wants.
I gather it came as a bit of a surprise to IBM when the Linux kernel
was ported onto the mainframe CPU, and more so when a couple of
companies started selling boxed distros.
IBM's response, continuing to anthropomorphise like hell, was to shrug
and say "OK, then" and permit it - after initially trying to shut it
down.
Then it came up with a special mode of its main hypervisor OS that
could only run Linux VMs - but was really cheap.
Originally, the idea was that you could run a Linux session and have
it serve out your mainframe data over those nasty cheap plasticky PC
standards ;) and it was cheaper than buying a licence for the IBM
mainframe web server, etc.
This did well, so it came up with a special edition of the whole OS
that only hosted Linux, no other OS - and again was really cheap.
Rather to its surprise, IBM mainframe sales have been doing really
well in the last decade, and Linux hosting is driving adoption. My
impression is that it was amazed but it is a flexible company and it's
rolled with it. If people want to buy zSeries to run shedloads of
Linux sessions on, hey, IBM is happy to oblige.
Bear in mind, though, that in C21, IBM probably considers shifting a
dozen units to be a major sales spike. If it's selling hundreds, I'd
be amazed.
But then, HP's entire Itanium server range has survived for a decade
or more on total sales of just single-digit thousands of units. (I've
seen some figures. May not be accurate. But I've heard numbers of
3000-odd units, *in total*. Yes really.)
(Aside: you mentioned Cell. A similar scandal, effectively hushed-up,
was how broken Cell was. The transfer rates from Cell to its local RAM
were in kB/sec. Yes really. Not meg, not gig.)
> It also --- well, to me anyway --- gives the impression that it's
> more of a money-making scheme (the 'exclusivity', so to speak)
> than a sound, future-proof treatment.
I think your bias is overwhelming you, TBH.
Yes, IBM is selling exclusivity etc., but there is more to the story than that.
> I've never seen official performance statistics, just IBM's own
> figures. So, I can't comment on how it truly behaves in this
> regard.
It's like Rolls Royce. "Power: adequate" is all they used to say. If
you need to ask, you're not in the target market.
> There is a generation, I'm even willing to bet several generations,
> that grew up with nothing other than Windows and Linux. Some young
> enough have never even experienced nor seen/heard of an IBM PC, let
> alone the term "IBM PC".
Indeed.
> Why is this important? Because IBM itself, the company, is also
> falling further into obscurity like this, along with "z".
Yup.
> Will banks continue to run "z"? I guess some will, but I also
> read about NonStop and, not surprisingly, the ever encroaching
> Linux and even Windows.
>
> Then there's also the question of the current and upcoming
> generations, freshly indoctrinated with notions of "the cloud"
> and what-not: How "cloud-ready" is "z"? And to what degree
> would they prefer "z" over some Dell or 'brandless' x86 or even
> ARM (those seem to be coming, too, now) server?
My personal take on it, at 45 after 25y in IT, is that, for the main
part, C21 IT is run by clueless idiots and we have got to a
lowest-common-denominator, cheap'n'nasty sort of plateau. It's why I
want out.
Someone or something will disrupt it in time.
> Like I wrote before, in the case of the "GAMEframe" setup by that
> Brazilian company Hoplon, they wrote they had to offload to "Cell"
> processor 'blades', because otherwise it'd 'tax the "z" too
> much'.
Bear in mind, you pay for CPU bandwidth. It's a metered commodity.
Less load = less fees.
It's not overstretching the machine, it's overstretching the bank balance.
IBM kit the boxes out with loads of CPUs and then turn most of 'em off.
You pay for 'em when you need 'em and they're remotely enabled.
This kind of stunt. Ugly, kinda stinks, but these are not highly
price-sensitive, performance-sensitive markets.
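As a back-of-envelope illustration (all figures invented; real
mainframe software is billed on things like peak MSUs, at rates
negotiated per deal):

    # Why offloading work from the metered engine saves money.
    # Hypothetical numbers, purely for illustration.
    rate_per_msu_month = 1000.0   # invented $/MSU/month software charge
    workload_msus      = 400      # total demand
    offloaded_msus     = 250      # shifted to cheap Cell/Linux blades

    billed = (workload_msus - offloaded_msus) * rate_per_msu_month
    full   = workload_msus * rate_per_msu_month
    print(f"monthly bill: ${billed:,.0f} instead of ${full:,.0f}")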
Also, people confuse performance with responsiveness, system size with
scalability, etc.
There used to be commercial Unix mail server programs that could
cheerfully host 30,000 users on a box with a few hundred meg of RAM.
Now, people run Hosted Exchange and spawn hundreds of new server
instances because the crappy broken software can only handle 1-2
hundred users per box.
This is not real scalability; it is using the cheapness of commodity
COTS hardware to hide profound scalability problems.
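Taking those figures at face value, the arithmetic is stark:

    # Boxes needed to host 30,000 mail users under each model,
    # using only the numbers quoted above.
    users            = 30_000
    users_per_unix   = 30_000   # one well-written server per box
    users_per_modern = 150      # "1-2 hundred users per box"

    print(users // users_per_unix)         # 1 box
    print(-(-users // users_per_modern))   # 200 boxes (ceiling division)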
Also see: responsiveness vs. performance.
E.g. Win7. It's not faster than Vista. It's marginally slower than
Vista. But it /feels/ faster because it's been tuned for UI
responsiveness.
Said responsiveness is still utter crap - BeOS on a Pentium/90 was
vastly more responsive in 1998 or so. But it's dead, because it was a
niche product and Windows was Good Enough.
> When I thus read such things, I become somewhat doubtful of such
> claims. It also doesn't help that IBM has made it so relegated
> and secluded to themselves and their direct customers, that
> inquiring minds have little insight into these (and hopefully
> truthful?) types of performance benchmarks and figures...
If you compare a machine with 4 CPUs of a single kind with one with
368 different CPUs of 27 different types, all running different OSes,
how do you compare performance?
If you just pick one - say the GP arithmetic-logic engine - then the
big box will look like utter crap by comparison. Its expensive chips
are slower. But its real power is that it has 3581^23 different
dedicated processors doing all these different things in different
chunks of different types of memory, all at once, and thus can run for
years on end at 100% utilisation while preserving the same response
time as with 1 task at 1% use.
Even TPC measures will be misleading in this sort of scenario, so, IBM
avoids benchmarks as much as possible.
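One way to see why single-number comparisons mislead: in the simplest
queueing model (M/M/1), response time is service time divided by
(1 - utilisation). A lone general-purpose engine degrades viciously as
it fills up; spread the same work across many dedicated engines, each
lightly loaded, and response stays flat. A sketch, with made-up
numbers:

    # M/M/1 sketch: R = S / (1 - rho). The shape of the curve is the
    # point, not any real machine's figures.
    def response_time(service_ms, utilisation):
        assert 0 <= utilisation < 1, "a single queue saturates at 100%"
        return service_ms / (1 - utilisation)

    for rho in (0.50, 0.90, 0.99):   # one shared engine filling up
        print(f"shared engine at {rho:.0%}: "
              f"{response_time(10, rho):.0f} ms")

    # the same work split across dedicated engines, each at 30%:
    print(f"dedicated engine: {response_time(10, 0.30):.1f} ms")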
> How does this work in combination with the "time sharing" operating
> principles, though? Please bear with me, as I don't have any
> direct experience myself.
I have virtually none myself. I freely admit - this is all
theoretical, paper knowledge from extensive reading, some of it
confidential IBM info. Nothing more.
It works well when you have something like a multi-tier client-server
model, when the server can be offloaded to one of these big engines.
Maybe you even have 2 or 3 layers of other server between you and the
back end, but eventually, there comes a point where you absolutely
positively have to **KNOW** that at the same time as 42.60 being
debited from Mrs Q Smith's account, it was also debiting exactly
543,432,573,759.43 in riyals from Royal Dutch Shell's account and
crediting it to HRH R Saud of Mecca, and both *WILL* go through even
if a bomb goes off next door.
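That both-or-nothing guarantee is the classic atomic transaction. A
minimal illustration, with sqlite3 standing in for the likes of IMS or
DB2, and invented account data:

    # All three updates commit together or roll back together.
    import sqlite3

    db = sqlite3.connect(":memory:")
    db.execute("CREATE TABLE accounts (name TEXT PRIMARY KEY, balance REAL)")
    db.executemany("INSERT INTO accounts VALUES (?, ?)",
                   [("Q Smith", 100.00), ("Shell", 1e12), ("R Saud", 0.0)])
    db.commit()

    try:
        with db:  # one atomic transaction
            db.execute("UPDATE accounts SET balance = balance - 42.60 "
                       "WHERE name = 'Q Smith'")
            db.execute("UPDATE accounts SET balance = balance - 543432573759.43 "
                       "WHERE name = 'Shell'")
            db.execute("UPDATE accounts SET balance = balance + 543432573759.43 "
                       "WHERE name = 'R Saud'")
    except sqlite3.Error:
        pass  # on any failure, none of the updates is applied

    print(db.execute("SELECT * FROM accounts").fetchall())

If the power dies mid-transfer, the database comes back with either
all of the updates or none of them - never half a debit.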
> Though, with terabytes of RAM, I think many other platforms would
> also fare well...
Bolt a big enough engine on it, a barn door will fly.
> Why isn't IBM more eager to speak of this and show the world what
> "z" is truly capable of? Why isn't YouTube loaded with videos
> showing these kinds of things, to name something?
It's tried. It tried years ago.
Have you *looked?*
http://www.youtube.com/user/IBMSystemZ
It's there. I guess it's stopped shouting about it 'cos word-of-mouth
sales were doing just fine.
I remember the era when every British computer mag had IBM ads in.
Now they're gone. The ads and the mags.
> I really don't get it: such a capable platform (I'm told), but
> absolutely no desire to expand and increase its user/install
> base?
It is expanding faster than it's done in years, I believe. Mainly on
Linux workloads.
You don't see it - but I may as well paraphrase Terry Pratchett and Neil Gaiman:
"It might, or might not, have helped Anathema get a clear view of
things if she'd been allowed to spot the very obvious reason why she
couldn't see Adam's aura.
"It was for the same reason that people in Trafalgar Square can't see
England."
It's there but they are not playing in the market where people care
what brand of tin they are buying.
If you even need to ask what kind of server to run, or whether to run
Windows Server 2012 with Hyper-V 3 versus vSphere Hypervisor, then
you're not a potential customer. So there is absolutely no point
advertising. Anyone who notices ads is not a potential customer.
These guys make sales by taking the finance director for a weekend in
Monte Carlo during the 24H and quietly mentioning possible deals
between courses at dinner.
> I guess that's another reason why the "GAMEframe" used those
> "Cell" processor 'blades'?
See billing, above.
>> But if you want to serve files on a LAN, or run a thousand instances
>> of MariaDB, Perl and Apache running some JSON queries and rendering
>> PHP, no, it's a stupid, ruinously expensive way to do that.
> I'll gladly take your word for it; it sounds like you have more
> experience with mainframes than I do.
Very little - but I know what they're good for, even if I don't swim
in the kind of waters where these whales lurk. I'm a minnow.
> But, what are your predictions for the future? I mean, it's exactly
> these things that are ever-expanding and becoming more and more
> common nowadays, aren't they?
My predictions? Big picture?
We are in the grip of the "worse is better" (q.v.) school and have
been since Unix was invented.
But Moore's Law has stopped buying us more CPU power. Now Koomey's Law
(q.v.) holds. Worse is Better, I think, will run out of steam. The MIT
school will finally prevail. Wheels will get reinvented but some old
powers may yet rise from the grave.
I think we might see some old /styles/ of tech re-invented, but in new
forms, free from restrictive patents and copyrights. I think the whole
era of Unix-like OSs written in C-like languages and compiled down to
object files will go away and something distantly related to Lisp
Machines, or Taos/Intent Elate or something similarly radical like
that, will take over.
But like the French Revolution, it's too soon to tell. (q.v.)
Some further reading:
http://liam-on-linux.livejournal.com/33746.html
--
Liam Proven • Profile: http://lproven.livejournal.com/profile
Email: lproven at cix.co.uk • GMail/G+/Twitter/Flickr/Facebook: lproven
MSN: lproven at hotmail.com • Skype/AIM/Yahoo/LinkedIn: liamproven
Tel: +44 20-8685-0498 • Cell: +44 7939-087884