On Thu, Jun 01, 2000 at 01:08:47PM -0700, Sellam Ismail wrote:
> Then the machine descriptor would reflect the actual machine that the
> software was designed to run on.
Who says it's software? What if it's a data disk? Or a totally unknown one?
Or something that's portable (e.g. an interpreted language). It just seems
like it could cause trouble if the image were hard-wired to claim it's from
one particular place. Especially if they get tagged incorrectly, then
utilities which *could* read the disk will refuse, and ones which *can't*
will screw themselves up trying because they believe the header.
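
To put that failure mode in concrete terms, here's a rough sketch in C. The
header layout, field names, and "FIMG" magic are purely hypothetical, invented
for illustration; the point is only the difference between trusting an origin
tag outright and treating it as a hint while checking the data you'll actually
use:

#include <stdint.h>
#include <string.h>

/* Hypothetical image header -- not any real or proposed layout. */
struct img_header {
    char     magic[4];      /* file identifier, e.g. "FIMG" */
    char     machine[16];   /* advisory origin tag; may be wrong or blank */
    uint16_t cylinders;
    uint8_t  heads;
    uint8_t  sectors;       /* sectors per track */
    uint16_t sector_size;   /* bytes per sector */
};

/* Risky: refuse anything whose tag doesn't match, trust anything that does,
 * so a mislabeled image either gets rejected or blindly mis-read. */
int can_read_trusting_tag(const struct img_header *h)
{
    return strncmp(h->machine, "DECmate II", sizeof h->machine) == 0;
}

/* Safer: treat the tag as a hint and sanity-check the geometry that will
 * actually be used, so a wrong label neither blocks a capable utility nor
 * leads an incapable one astray. */
int can_read_checking_geometry(const struct img_header *h)
{
    if (memcmp(h->magic, "FIMG", 4) != 0)
        return 0;
    if (h->cylinders == 0 || h->heads == 0 ||
        h->sectors == 0 || h->sector_size == 0)
        return 0;
    /* geometry a garden-variety 5.25" drive could plausibly hold */
    return h->cylinders <= 80 && h->heads <= 2 &&
           h->sectors <= 10 && h->sector_size <= 512;
}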
> I'd hate to make a "standard" that doesn't address all the various needs
> of every different floppy format and then end up having to extend the
> standard later or not have it adopted due to its limited usefulness.
The opposite extreme is to not have it adopted because it's too complex to
be practical. It's definitely a good thing to anticipate future needs, but
I wouldn't get too hung up with the notion that this format will be all
things to all people. There will always be a few oddballs out there which
won't fit the framework, whatever it is.
> We have the ability right now to think, argue, recommend, specify and
> commit and so we might as well try to do it as rightly as possible.
Absolutely, that's why I jumped in. I've written a bunch of floppy
utilities, and picturing extending any of them to work with verbose tagged
free-form text files with redundant header descriptors and lots of magic
numbers gives me a headache. One aspect of doing it right is doing it so
that it can be implemented cleanly.
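
As a rough sketch of what "implemented cleanly" could mean in practice, here's
a tiny reader, again assuming an entirely hypothetical fixed 16-byte header
(the struct, field names, and "FIMG" magic are invented for illustration):
one read, a couple of sanity checks, and then a plain loop over the sector
data, which is about the amount of code a small native utility can afford.

#include <stdint.h>
#include <stdio.h>
#include <string.h>

/* Hypothetical fixed-layout 16-byte header, not a real or proposed format. */
struct simple_header {
    char     magic[4];       /* "FIMG" in this sketch */
    uint16_t cylinders;
    uint8_t  heads;
    uint8_t  sectors;        /* sectors per track */
    uint16_t sector_size;    /* bytes per sector */
    uint8_t  reserved[6];
};

int main(int argc, char **argv)
{
    struct simple_header h;
    FILE *in;
    unsigned char buf[1024];
    unsigned long nsec, i;

    if (argc != 2) {
        fprintf(stderr, "usage: %s image\n", argv[0]);
        return 1;
    }
    if ((in = fopen(argv[1], "rb")) == NULL) {
        perror(argv[1]);
        return 1;
    }
    /* A real tool would read the fields byte by byte to dodge padding and
     * byte-order surprises; this is just the shape of the thing. */
    if (fread(&h, sizeof h, 1, in) != 1 || memcmp(h.magic, "FIMG", 4) != 0) {
        fprintf(stderr, "not a recognized image\n");
        return 1;
    }
    if (h.sector_size == 0 || h.sector_size > sizeof buf) {
        fprintf(stderr, "unsupported sector size\n");
        return 1;
    }
    nsec = (unsigned long)h.cylinders * h.heads * h.sectors;
    for (i = 0; i < nsec; i++) {
        if (fread(buf, h.sector_size, 1, in) != 1) {
            fprintf(stderr, "image truncated at sector %lu\n", i);
            return 1;
        }
        /* ...hand buf to whatever actually writes the floppy... */
    }
    fclose(in);
    return 0;
}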
> I envision all the post- and pre-processing will be done on a more modern
> host, such as a standard PC running Linux or whatnot. I would never want
> the processing done on the target machine, especially if this standard
> turns into a big messy markup language.
Careful, this is a *very* common pitfall these days. "I'll just assume
that everyone in the world has access to the same modern hardware and
software that I do."
This was my point about Teledisk (especially since it's a bit flakey even
on a real PC). If I have a Pro/350 and I want to write a DECmate II disk,
there's no technical reason why I can't do it (the low-level format is the
same so it's trivial), so why create an artificial limitation by depending
on a big complicated C program which doesn't run on the Pro?
> This is a circular specification. What you need in order to create a
> "no-nonsense" archive IS the standard we are attempting to define, because
> every machine will invariably have some wacky formats that won't allow a
> sensible, straight-forward archive to be made. This needs to be taken
> into account.
IMHO, that's *their* problem. If a disk format is so weird that there isn't
even any way to decide what would come first in a binary snapshot, then *that*
format should have a big tangled mass of header descriptors etc. But that
doesn't mean that *every* format needs to have a hairy wad of descriptors
intermingled with the data.
As an analogy, I *really* like the Kermit file transfer protocol.
It's designed to be a very good compromise between capability and ease of
implementation. There are lots of possible bells and whistles but most of
the time you can get away without them. It has a few basic assumptions which
don't hold true for *all* systems (files are a linear sequence of 8-bit bytes,
filenames can be represented in ASCII, the serial line can pass any ASCII
printing character through), but they fit the vast majority of real systems.
It works around most of the common problems in file transfer, but it's simple
enough that you can write a basic native Kermit implementation for almost
any architecture in a few days.
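
As a small illustration of how little machinery the basic case needs, here's
the single-character block check from the Kermit protocol definition (the
type-1 checksum); the helper names and the sample bytes are mine, but the
arithmetic is the published formula:

#include <stdio.h>

/* Kermit's "tochar" transform: make a 6-bit value printable by adding
 * 32 (space). */
static int tochar(int x)
{
    return x + 32;
}

/* Type-1 (single character) Kermit block check: sum every packet character
 * between the SOH mark and the check itself, fold the two high bits back
 * in, keep the low 6 bits, and make the result printable. */
static int kermit_check1(const unsigned char *p, int len)
{
    int s = 0, i;

    for (i = 0; i < len; i++)
        s += p[i];
    return tochar((s + ((s >> 6) & 3)) & 63);
}

int main(void)
{
    /* a made-up packet body: everything between the SOH and the check char */
    unsigned char body[] = { '%', ' ', 'S', '~', '*', ' ', '@', '-', '#' };

    printf("check char: %c\n", kermit_check1(body, sizeof body));
    return 0;
}

That's the flavor of the whole minimal protocol: small fixed packets,
printable encoding, a simple check.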
John Wilson
D Bit