2009/6/11 Pontus Pihlgren <pontus at update.uu.se>:
(Sorry for keeping this OT discussion going, but one of my questions is vaguely on topic)
These people think it's efficient to run a copy of Windows 2003 on a
server (which needs a couple of gig of RAM to work well) and then run
multiple VMs on that containing copies of Windows 2003 or other
flavours of Windows. They think virtualisation is new and clever and
it doesn't occur to them that not only is this wasteful of resources,
but it's a nightmare to keep all those copies patched and current.
I'm curious: what OSes and software did virtualisation before VMware/Xen/VirtualBox and the like?
Answered in detail by others, but I'd also point out some non-OS
hypervisors that were around long before VMware etc. SheepShaver on
BeOS in 1998, for instance.
http://www.bebox.nu/os.php?s=os/macos/index
SoftWindows, SoftPC and VirtualPC on the Mac could all be considered
VM environments, allowing one OS to run as an app under another, alien
OS on an alien platform.
http://en.wikipedia.org/wiki/SoftPC
OS/2 2.0 could run MS-DOS or Windows 3.x in VMs in the early 1990s.
Quarterdeck's DESQview could even be considered a virtualisation
tool on the PC back in the 1980s.
Also, why is it wasteful of resources?
To understand this, one has to consider some of the earlier VM
systems, which offer two more efficient approaches. The first is the
IBM mainframe style: a relatively simple hypervisor OS - such as IBM
VM, currently z/VM - hosts the virtual machines, while the more
full-function guest OSs running on it provide the actual
functionality needed by user applications.
http://en.wikipedia.org/wiki/VM_(operating_system)
This way, the host OS and the guest OS are different, with relatively
little duplication of function between them.
The second is the operating-system-level virtualisation found in OSs
such as Solaris, with its Containers.
http://en.wikipedia.org/wiki/Solaris_Containers
http://en.wikipedia.org/wiki/Operating_system-level_virtualization
Here, essentially, a single kernel runs multiple independent
userlands, allowing near-total isolation between running processes,
with much more efficient resource sharing between them.
Parallels' Virtuozzo provides similar functionality on Windows:
http://www.parallels.com/uk/products/virtuozzo/
And finally, why would keeping virtual installations up to date be any harder than non-virtual ones?
I think you may be missing the point. It's not that VMs are any harder
to maintain - they're not - but if you're running 10 copies of Windows
on a box rather than 1 doing 10 tasks, then that's 10 copies that must
be patched and updated - so 10x the maintenance workload of a single
OS instance. When people talk excitedly about server consolidation
using VMs, this is generally forgotten. It's the software that tends
to take lots of maintenance, not the hardware, and if you go from a
datacentre with 50 copies of Windows on 50 machines to 3 or 4 honking
great servers running all those as guests, you *still* have 50 copies
of Windows to maintain. The work level doesn't drop much at all - you
just save space and electricity.
And even that is a partly illusory saving, because much of the power
and resources that a computer will use in its typical working life of
a few years is spent in building the thing. So by replacing multiple
working hardware boxes with a single big new machine to run the same
workloads, you're wasting all the sunk cost of manufacturing those
boxes, while "spending" a load more non-recoverable resources to make
the new one.
So it's not all that "green", either.
As for inefficiency, the point is this: duplicating functionality is wasteful.
If the Windows kernel and userland need a gig of RAM and, say,
500MHz's worth of dedicated CPU bandwidth to run effectively - *plus*
the resources used by the apps running on it - then if you run one copy of
Windows, it gets all the resources of the box. If you use that one
copy to run VMs, though, and in each VM is another full copy of
Windows, then each VM needs that gig of RAM and 500MHz of power,
*still* plus the resources needed for the app.
Let's say you're running 4 copies of Windows, in VMs, on a host copy
of Windows. That's 5 gig of RAM and 2,500MHz of CPU bandwidth blown on
all those copies of Windows, of which 4GB and 2,000MHz are going on
duplicate copies of code that is identical across all the VMs.
If, instead, you were running an OS that could partition itself so
that the 4 workloads all ran on the same shared kernel, but completely
isolated from one another, so that one could have one version of the
core libraries and another a different version, one could have Oracle
10 and another Oracle 11, say, just to pick examples out of the air,
then you would not be "wasting" all that RAM and CPU bandwidth on
multiple duplicated OS instances - instead, it would all be going on
your applications, meaning that each server could sustain a far higher
workload.
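
To put rough numbers on it, here's a back-of-the-envelope sketch in
Python (purely illustrative - the 1GB / 500MHz per-instance figures
are the same assumptions as above, not measurements):

# Assumed per-instance OS overhead, as above (not measured figures).
OS_RAM_GB = 1.0
OS_CPU_MHZ = 500
WORKLOADS = 4

# Full virtualisation: one host OS plus one guest OS per workload.
vm_ram = (WORKLOADS + 1) * OS_RAM_GB      # 5.0 GB
vm_cpu = (WORKLOADS + 1) * OS_CPU_MHZ     # 2500 MHz

# OS-level virtualisation: one shared kernel for all the workloads.
shared_ram = OS_RAM_GB                    # 1.0 GB
shared_cpu = OS_CPU_MHZ                   # 500 MHz

print("RAM freed for applications: %.1f GB" % (vm_ram - shared_ram))
print("CPU freed for applications: %d MHz" % (vm_cpu - shared_cpu))

On those assumptions, that's 4GB of RAM and 2,000MHz of CPU handed
back to the applications on a single box.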
Does that make my point clear?
--
Liam Proven • Profile: http://www.linkedin.com/in/liamproven
Email: lproven at cix.co.uk • GMail/GoogleTalk/Orkut: lproven at gmail.com
Tel: +44 20-8685-0498 • Cell: +44 7939-087884 • Fax: +44 870-9151419
AOL/AIM/iChat/Yahoo/Skype: liamproven • LiveJournal/Twitter: lproven
MSN: lproven at hotmail.com • ICQ: 73187508