Yeah, that's true. What you can buy today is
gigantic compared to what
was available not so long ago, but now everything is digitized, so stuff
we used to keep on paper, videotape, 8-tracks, and film negatives
is now on the hard drive. There's never enough room any more and there
probably won't ever be again.
A gas will expand to fill the available space (that's a general property
of gases; Boyle's Law is actually about pressure and volume).
Data, software, and commercial software products do the same.
Moore's Law (which is an observation, not a law) is popularly taken to
mean that speed and capacity will double every 18 months.
That isn't enough when content (and size of MICROS~1 products) doubles
every 17 months.
MICROS~1 seems to use Moore's Law as a design tool - always design
everything to require twice the speed and capacity of current technology.
If the product's speed and storage efficiency aren't adequate, throw
hardware at it.
Seriously though, part of the problem is that MICROS~1 seems to believe
(or at least used to believe) in treating programmers well. Programmers
always
use current state of the art, rather than using machines representative of
what the customers have. If a programmer fills the drive, they give them
a bigger one. If a programmer gripes about speed, they provide a faster
machine. Therefore, without necessarily any such INTENT, they are in a
world of unlimited resources, produce products that expect better
resources than what the customers have, and push the industry towards
following Moore's Law.
If a programmer has a flaky machine, it gets replaced immediately. Should
that really always be done, even for those writing exception handlers? An
unfortunate consequence is that those writing code to handle errors have
no experience with them, and SIMULATED errors do NOT behave like the real
thing. If MICROS~1 were to trade computers with us, the next round of
software would be more efficient (faster and using space more
effectively), and significantly less buggy, since they would have to put
up with slow performance, run out of disk and memory space, and
experience the bugs of the real world.
As an example of OS bugs directly related to MICROS~1 programmers not
having adequate real world bug experience, SMARTDRV became MANDATORY
starting with Windoze 3.10 Beta. When it would hit a sector read/write
error, it would have already told the installer that that part of the task
had been successfully completed! Therefore, it could not "Ignore" and
continue. It could not "Abort" or "Fail" that part of the task,
because
it had "alredy been completed". Therefore, the ONLY option for the
critical error handler was "RETRY". Even in cases where that could not
possibly succeed. Normal SOP would previously have been to IGNORE, record
the filename, complete the installation and then go back and manually
repair/replace THAT file. But THAT is not an option. The ONLY way out of
the RETRY loop is to reboot. BUT, since SMARTDRV wrote the DIRectory
sectors AFTER everything else (and that had not yet been done at the time
of the REBOOT), there is no trace of the aborted installation.
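To make that failure mode concrete, here is a toy write-behind cache in
C (my own sketch with made-up names, NOT anything from the real
SMARTDRV): the caller is told "success" as soon as the data lands in
RAM, so by the time the deferred disk write hits the bad sector there is
no caller left to offer Abort, Ignore, or Fail to, and Retry is the only
thing the critical error handler can do.

#include <stdio.h>
#include <string.h>

#define SECTOR 512

static char cache[SECTOR];          /* one cached sector, for illustration */
static int  cache_dirty = 0;

/* What the installer sees: the write "succeeds" the moment it is cached. */
static int cached_write(const char *data)
{
    memcpy(cache, data, SECTOR);
    cache_dirty = 1;
    return 0;                       /* reported as already completed */
}

/* Stand-in for the physical sector write; pretend the media is bad. */
static int disk_write(const char *data)
{
    (void)data;
    return -1;                      /* hard read/write error */
}

/* The deferred flush runs later, with no installer context left. */
static void flush_cache(void)
{
    int attempts = 0;
    while (cache_dirty) {
        if (disk_write(cache) == 0) {
            cache_dirty = 0;
        } else if (++attempts <= 3) {
            puts("Retry?");         /* Abort/Ignore/Fail make no sense now */
        } else {
            puts("...and so on, until you reboot.");
            break;
        }
    }
}

int main(void)
{
    char sector[SECTOR] = "installer payload";
    if (cached_write(sector) == 0)
        puts("Installer: sector written, moving on.");
    flush_cache();
    return 0;
}
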
On Windoze 3.10 beta installation, I hit a sector error that SpinRite,
etc. could not find. When I reported the problem, MS "beta Support"
declared it to be a "Hardware Problem - NOT OUR PROBLEM". I attempted to
point out that:
1) It IS the responsibility of the OS to properly handle hardware problems.
2) SMARTDRV installation should be OPTIONAL, or at least OVERRIDABLE,
such as a command line switch for SETUP to specify not installing or
activating it.
3) SMARTDRV's write caching will eventually cause significant data loss.
Because of its unpublicized "background" nature, OTHER PROGRAMS
will get blamed for the loss.
I was not invited to any further beta tests.
MS-DOS 6.00 added disk compression and loaded SMARTDRV (with
write-behind caching enabled) by default.
Most word processor and spreadsheet users were in the habit of saving
their file, and turning off the machine as soon as the DOS prompt
re-appeared. Of course, with write-caching, the file had not yet been
written when the flow of computrons ceased.
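The race is easy to sketch (again, a toy in C, not anything that
actually shipped): the save returns, the prompt comes back, and the
dirty sectors are still sitting in RAM when the power switch gets
flipped.

#include <stdio.h>

static int dirty_sectors = 0;       /* sectors still only in the RAM cache */

/* "Save" just hands the data to the write-behind cache and returns. */
static void save_file(void)
{
    dirty_sectors = 8;
}

int main(void)
{
    save_file();
    printf("C:\\>\n");              /* the prompt is already back... */
    /* ...and this is where the user turns the machine off. */
    printf("(%d dirty sectors never reached the disk)\n", dirty_sectors);
    return 0;
}
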
Soon, the press was flooded with stories of "Microsoft disk compression
causing data loss!"
InfoWorld, for example, ran dozens of such stories. They ran a test
suite that did a bunch of office macros, then did a cold boot, in a loop.
Of course, SMARTDRV's unflushed write cache was wiped by the reboots, and
data was
lost.
Bill Gates called the editor of InfoWorld and told them that their test
procedure was flawed. What else was he going to say? That there WAS a
known, intolerable problem, but that a DIFFERENT part of the OS was
really at fault, BIG TIME?
InfoWorld reported that phone call as "harassment" and
"intimidation".
Eventually, to maintain public image, MICROS~1 had to fix "the disk
compression bugs". The repair, which was released as a FREE "STEPUP"
upgrade from 6.00 to 6.20 (6.10 was in use by IBM), consisted of:
FIX DISK COMPRESSION BUGS:
1) Disable SMARTDRV write-caching by default
2) IF write-caching were deliberately re-enabled
a) delay re-display of the DOS prompt until the buffers had been written
b) disable the rearrangement of writes, so that whatever writes were done
would be in chronological order
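My reading of those two changes, sketched as a toy FIFO in C (the names
are mine, not SMARTDRV's): writes still get queued, but they are drained
oldest-first, and the queue is emptied before the prompt is redisplayed.

#include <stdio.h>

#define MAX_PENDING 32

static int pending[MAX_PENDING];    /* queued sector numbers, oldest first */
static int count = 0;

/* Writes still go into the cache... */
static void queue_write(int sector)
{
    if (count < MAX_PENDING)
        pending[count++] = sector;
}

/* ...but they are drained in chronological order, never rearranged. */
static void flush_in_order(void)
{
    for (int i = 0; i < count; i++)
        printf("writing sector %d\n", pending[i]);
    count = 0;
}

int main(void)
{
    queue_write(100);               /* file data */
    queue_write(2);                 /* directory sector, queued last */
    flush_in_order();               /* drain the cache BEFORE... */
    printf("C:\\>\n");              /* ...the prompt comes back */
    return 0;
}
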
(The differences between 6.20, 6.21, and 6.22 were unrelated to that
fix: 6.20 still had the DoubleSpace disk compression, 6.21 shipped
without it after losing the IP court case to Stac, and 6.22 was
re-released with a different compression program, DriveSpace.)
--
Grumpy Ol' Fred cisin at
xenosoft.com