Pontus Pihlgren wrote:
> (Sorry for keeping this OT discussion going, but one of my
> questions is vaguely on topic.)
>
>> These people think it's efficient to run a copy of Windows 2003
>> on a server (which needs a couple of gigs of RAM to work well) and
>> then run multiple VMs on that containing copies of Windows 2003
>> or other flavours of Windows. They think virtualisation is new
>> and clever and it doesn't occur to them that not only is this
>> wasteful of resources, but it's a nightmare to keep all those
>> copies patched and current.
>
> I'm curious, what OSes and software did virtualisation before
> VMware/Xen/VirtualBox and the like?
>
> Also, why is it wasteful of resources?
>
> And finally, why would keeping virtual installations up to date be
> any harder than non-virtual?
>
> /P
Let's stick to the realm of x86 virtualization here, because that's
the architecture I know best.
Starting off: generally, when you perform virtualization, your VM runs
(usually) under a host OS. This means the raw, over-the-wire memory
map of the virtual computer (i.e. what would appear on the raw
addressing pins of a real CPU) is virtualized not as the virtual
memory map of a process, but as a PART of that process's memory map.
That process's memory map is itself "virtualized" through the host's
virtual memory. And any OS running in the VM that's worth a damn is
using virtual memory as well.
Until about 2-3 years ago, x86 CPUs lacked the hardware to reliably
perform multiple levels of address translation. Also, a guest OS
performs actions that cannot be done as a normal process of the host
OS (and would crash the host if they were honored), and this requires
//emulation// of the guest CPU, even if it's only partial emulation.
One way to look at the memory complexity is this: say a process in
the guest OS writes to memory in its process space. That write is
translated like this (note that the page offset, the low 12 bits,
survives every step):

guest process virtual address (0x0bfe0030)
-> "raw" memory of the VM, i.e. what would be the physical
   address if the VM were real (0x00100030)
-> hypervisor's process space (0x0bef3030)
-> bare-metal physical address (0x00100030)
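That chain of translations can be modelled as a toy sketch. The page
tables here are plain Python dicts of page numbers and every address
is invented for illustration; the one real constraint it demonstrates
is that the page offset (the low 12 bits of a 4 KiB-paged address)
passes through every stage unchanged.

```python
# Toy model of the double translation a guest memory write goes
# through under software virtualization. All mappings are made up.

PAGE = 0x1000  # 4 KiB pages

def translate(addr, page_table):
    """Look up the page number; the page offset passes through."""
    page, offset = addr // PAGE, addr % PAGE
    return page_table[page] * PAGE + offset

guest_pt = {0x0bfe0: 0x00100}  # guest virtual -> guest "physical"
vm_ram   = {0x00100: 0x0bef3}  # guest "physical" -> hypervisor virtual
host_pt  = {0x0bef3: 0x00100}  # hypervisor virtual -> host physical

gva = 0x0bfe0030
gpa = translate(gva, guest_pt)  # emulated guest MMU (software)
hva = translate(gpa, vm_ram)    # locate it in the hypervisor process
hpa = translate(hva, host_pt)   # real MMU, done in hardware

print(hex(gpa), hex(hva), hex(hpa))  # -> 0x100030 0xbef3030 0x100030
```

The first two steps are what the hypervisor must do in software on a
pre-hardware-assist CPU; only the last one comes for free.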
Since x86 CPUs integrate the MMU directly into the addressing path of
the CPU, and virtual memory is "second nature" to the i386 and later,
the "hypervisor to bare metal" address translation is done entirely
in hardware; it's the same translation any normal program gets.
The "raw mem of the VM" is the virtual machine's equivalent of
bare-metal addressing, and producing it requires emulating the CPU's
addressing functions; the guest OS's processes (unless the guest OS
lacks virtual memory, like DOS or whatever) additionally require that
the emulated addressing be attached to an emulated MMU.
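This is what the "shadow page table" trick in pre-hardware-assist
hypervisors boils down to: collapse the software translation steps
into one composed mapping that the real MMU can walk directly. A
minimal sketch, with the same toy dict-of-page-numbers representation
and invented numbers as before:

```python
# Compose guest-virtual -> host-physical page mappings into one
# "shadow" table. A real hypervisor must rebuild or patch this
# whenever the guest edits its own page tables, which is a big
# source of overhead. All names and numbers are illustrative.

def build_shadow(guest_pt, vm_ram, host_pt):
    shadow = {}
    for gv_page, gp_page in guest_pt.items():
        hv_page = vm_ram[gp_page]          # where the VM's "RAM" lives
        shadow[gv_page] = host_pt[hv_page]  # real frame backing it
    return shadow

guest_pt = {0x0bfe0: 0x00100}  # guest virtual -> guest "physical"
vm_ram   = {0x00100: 0x0bef3}  # guest "physical" -> hypervisor virtual
host_pt  = {0x0bef3: 0x00100}  # hypervisor virtual -> host physical

shadow = build_shadow(guest_pt, vm_ram, host_pt)
print({hex(k): hex(v) for k, v in shadow.items()})
```

With the shadow table loaded into the hardware MMU, the guest's
ordinary loads and stores run at full speed; the cost moves to
keeping the shadow in sync with the guest's page tables.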
Also, you have to emulate "privileged" instructions, and those that
change the CPU mode. If you didn't, the hypervisor process would run
into a privilege trap, and if the host x86 actually honored the
instruction instead of trapping it, the whole bare-metal system
would crash.
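The trap-and-emulate idea can be sketched like this. Everything here
is hypothetical (the GuestCPU class, the instruction names, the
dispatch) and stands in for what a real hypervisor does when the
host CPU faults on a privileged instruction executed by the guest:

```python
# Hypothetical trap-and-emulate handler: instead of letting the
# real CPU honor a privileged instruction (which would affect the
# whole machine), the hypervisor applies its effect to a software
# model of the guest CPU only.

class GuestCPU:
    def __init__(self):
        self.interrupts_enabled = True
        self.halted = False

def emulate_privileged(cpu, instr):
    if instr == "cli":    # guest disables *its* interrupts, not the host's
        cpu.interrupts_enabled = False
    elif instr == "hlt":  # guest halts: park the VM, keep the host running
        cpu.halted = True
    else:
        raise NotImplementedError(instr)

cpu = GuestCPU()
emulate_privileged(cpu, "cli")
emulate_privileged(cpu, "hlt")
print(cpu.interrupts_enabled, cpu.halted)  # -> False True
```

Every such trap is a round trip through the hypervisor, which is a
big part of why unassisted virtualization is slow.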
A 2.8GHz Pentium 4 without the instruction-set extensions for
hardware-assisted virtualization will run a guest VM with roughly
the speed and performance of an 800MHz Pentium III, and the host OS
will suffer a nasty performance penalty (at least, other host-level
applications will "feel" it).
If you were to attempt to run a full virtualization engine on, e.g.,
a 486DX2, the guest would probably run so slowly that you could
debug each executed instruction in real time.