On Sat, Nov 17, 2012 at 06:31:06PM -0500, David Riley wrote:
> On Nov 17, 2012, at 17:53, Toby Thain <toby at telegraphics.com.au> wrote:
> > On 17/11/12 2:23 PM, Jecel Assumpcao Jr. wrote:
> > > For me it was PageRank that made all the difference. I remember when
> > Yes, you are quite right, but you need the endlessly scalable and
> > fault-tolerant architecture as well. So you could say two paradigm shifts
> > were involved. No wonder Google won. :)
> Are you saying Linux is more scalable and fault-tolerant than
> VMS? Because those may be fightin' words around here. :-)
Yes. It scales from tiny embedded machines (the smallest I use daily has IIRC
32 MB of memory and a 50 MHz PPC CPU) to massive supercomputers with more than
1024 CPUs running at (multiple) GHz clock speeds. It scales from machines with
32 MB of memory to machines with 256 GB and more. It scales from tiny
filesystems (a couple of MB, just enough to hold a minimized compressed system
image and kernel) to massive filesystems of dozens or even hundreds of TB.
With the right software setup, Linux-based installations scale from a single
node to tens of thousands of nodes.
And fault tolerance ... there is more than one way to cook that egg. You can
build hardware clusters that will not fail[1]. Or you can build software
environments that accept hardware failures as normal[0] and deal with them
gracefully. The latter approach is usually taken in very large-scale Linux
environments. Heck, we did it in our own cluster software for Linux.
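To make that second approach concrete, here is a minimal sketch (plain Python,
purely illustrative, nothing to do with our actual cluster software;
list_healthy_nodes and submit_to_node are made-up placeholders) of a scheduler
loop that treats a dead node as a routine event and simply resubmits the job
elsewhere:

    import random

    def run_with_failover(job, list_healthy_nodes, submit_to_node,
                          max_attempts=3):
        # Try the job on up to max_attempts different nodes; a node dying
        # mid-job is an expected event, not a reason to give up.
        tried = set()
        for _ in range(max_attempts):
            candidates = [n for n in list_healthy_nodes() if n not in tried]
            if not candidates:
                break
            node = random.choice(candidates)
            tried.add(node)
            try:
                return submit_to_node(node, job)  # success: hand back result
            except OSError:
                continue  # node unreachable or crashed: just pick another
        raise RuntimeError("job failed on %d node(s)" % len(tried))

The point is simply that the failure handling lives in the job scheduler, not
in exotic hardware.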
The art _has_ moved on from when VMS was king of scalability.
Sorry to burst your bubble.
Kind regards,
Alex.
[0] If you have a sufficiently large number of machines (10K+), the MTBF
_will_ catch up with you and you _will_ have failures every day (rough
numbers in the sketch below).
[1] Except when they do. Yes, human error _does_ count.
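For anyone who wants the back-of-the-envelope version of [0] (the three-year
per-machine MTBF here is an assumed figure, only meant to show the scale of
the effect):

    machines = 10000
    mtbf_years = 3.0                    # assumed per-machine MTBF
    failures_per_day = machines / (mtbf_years * 365)
    print(round(failures_per_day, 1))   # ~9.1 failures, every single day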
--
"Opportunity is missed by most people because it is dressed in overalls and
looks like work." -- Thomas A. Edison