On Thu, 1 Jun 2000, John Wilson wrote:
> Who says it's software? What if it's a data disk? Or a totally unknown
> one? Or something that's portable (e.g. an interpreted language). It
> just seems like it could cause trouble if the image were hard-wired to
> claim it's from one particular place. Especially if they get tagged
> incorrectly, then utilities which *could* read the disk will refuse,
> and ones which *can't* will screw themselves up trying because they
> believe the header.
In that case perhaps we can have generic group meta codes, such as "CP/M",
etc.
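
Purely as an illustration (these field names aren't in any draft, I'm
making them up for discussion), the header could carry a coarse family
tag alongside an optional, ignorable specific one:

    Format-Group:  CP/M          ; generic family code a tool can act on
    Format-Hint:   Kaypro II     ; optional detail, safe to ignore
    Format-Group:  unknown       ; legal when nothing can be assumed

A utility that doesn't recognize the hint can still trust the group, and
one that recognizes neither just treats the image as raw sectors.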
> The opposite extreme is to not have it adopted because it's too complex
> to be practical. It's definitely a good thing to anticipate future
> needs, but I wouldn't get too hung up with the notion that this format
> will be all things to all people. There will always be a few oddballs
> out there which won't fit the framework, whatever it is.
Yes, but in its current iteration it is not very complex at all.
Detailed, yes. Complex, no. It's being designed to allow a very simple,
straightforward archive to be created in the case of no special
considerations (i.e. a "standard" floppy disk) while still being
powerful enough to allow a very bizarre format to be described as well.
I think the balance is being achieved.
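
To make the "simple case" concrete, here is roughly the level of detail
I have in mind for a plain-vanilla diskette (hypothetical syntax and
field names, not the draft itself):

    Media:     5.25in DSDD
    Geometry:  40 cylinders, 2 heads, 9 sectors x 512 bytes
    Data:      raw sector dump, cylinder-major, follows the header

Only an oddball disk would need to pile per-track or per-sector
descriptors on top of that.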
> Absolutely, that's why I jumped in. I've written a bunch of floppy
> utilities, and picturing extending any of them to work with verbose
> tagged free-form text files with redundant header descriptors and lots
> of magic numbers is giving me a headache. One aspect of doing it right
> is doing it so that it can be implemented cleanly.
Agreed.
> > I envision all the post- and pre-processing will be done on a more
> > modern host, such as a standard PC running Linux or whatnot. I would
> > never want the processing done on the target machine, especially if
> > this standard turns into a big messy markup language.
>
> Careful, this is a *very* common pitfall these days. "I'll just assume
> that everyone in the world has access to the same modern hardware and
> software that I do."
I don't imagine anyone will be attempting to create a multi-gigabyte
archive on a Sinclair ZX80. The point is this archive will be carried
forward onto ever more powerful computers, and constraining it so that
it must be feasible on technology that was passed by long ago makes no
sense to me.
Linux will run on a 386, a 68K Mac, an Atari ST, and the Amiga. I'm
satisfied with that.
> This was my point about Teledisk (especially since it's a bit flakey
> even on a real PC). If I have a Pro/350 and I want to write a DECmate
> II disk, there's no technical reason why I can't do it (the low-level
> format is the same so it's trivial), so why create an artificial
> limitation by depending on a big complicated C program which doesn't
> run on the Pro?
Take the specification and write an archive application that will run on
the Pro/350. As stated above, the standard as it is being defined is not
difficult to implement. And as I mentioned before, I'm considering
writing a DOS application to implement the standard once it's near
completion. The point is I don't see this as a legitimate concern. The
standard is not currently constructed to require a computer with gobs of
memory, even if it does evolve into a markup language.
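
To put a number on "gobs of memory": a bare-bones extractor for the
simple case needs one sector-sized buffer and nothing more. Here's a
rough C sketch (the header fields are placeholders I invented, not the
real spec):

/*
 * Sketch only: shows that unpacking a simple image needs just one
 * sector-sized buffer, so it would fit a DOS box or a Pro/350-class
 * machine.  The header format here is invented for illustration.
 */
#include <stdio.h>

#define SECTOR_SIZE 512

int main(int argc, char **argv)
{
    FILE *in, *out;
    char line[128];
    unsigned char sector[SECTOR_SIZE];
    long nsectors = 0, i;

    if (argc != 3) {
        fprintf(stderr, "usage: unarc <archive> <rawdisk>\n");
        return 1;
    }
    in  = fopen(argv[1], "rb");
    out = fopen(argv[2], "wb");
    if (in == NULL || out == NULL) {
        fprintf(stderr, "can't open files\n");
        return 1;
    }

    /* Read header lines until a blank line; pick out the one field we
       care about and ignore the rest (unknown fields cost nothing). */
    while (fgets(line, sizeof line, in) != NULL && line[0] != '\n')
        sscanf(line, "Sectors: %ld", &nsectors);   /* made-up field */

    /* Copy the data one sector at a time -- the working set never
       grows past 512 bytes no matter how big the disk is. */
    for (i = 0; i < nsectors; i++) {
        if (fread(sector, 1, SECTOR_SIZE, in) != SECTOR_SIZE)
            break;
        fwrite(sector, 1, SECTOR_SIZE, out);
    }

    fclose(in);
    fclose(out);
    return 0;
}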
> IMHO, that's *their* problem. If a disk format is so weird that there
> isn't even any way to decide what would come first in a binary
> snapshot, then *that* format should have a big tangled mass of header
> descriptors etc. But that doesn't mean that *every* format needs to
> have a hairy wad of descriptors intermingled with the data.
I agree, and the way I am seeing the standard evolve will not require
massive headers for standard formatted disks. It may not look like it now
but that is what is in the back of my mind as we move forward with this.
We're still really in the gathering phase so don't get frustrated just
yet.
As stated in so many words before, the standard will be designed
intelligently enough to archive a standard diskette in a simple,
straightforward manner, but also allow the complexity needed to archive
a completely non-standard diskette.
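
And to show the other end of the scale, a disk with one oddball track
might add nothing more than an override block (invented syntax again):

    Track: cyl 0, head 0
      Sectors:    26 x 128 bytes, FM   ; e.g. an 8-inch style boot track
      Interleave: 2

Anything not mentioned in an override falls back to the disk-wide
geometry, so the common case stays one short header.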
> As an analogy, I *really* like the Kermit file transfer protocol. It's
> designed to be a very good compromise between capability and ease of
> implementation. There are lots of possible bells and whistles but most
> of the time you can get away without them. It has a few basic
> assumptions which don't hold true for *all* systems (files are a
> linear sequence of 8-bit bytes, filenames can be represented in ASCII,
> the serial line can pass any ASCII printing character through), but
> they fit the vast majority of real systems. It works around most of
> the common problems in file transfer, but it's simple enough that you
> can write a basic native Kermit implementation for almost any
> architecture in a few days.
We'll try to develop this standard in the same spirit.
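
For anyone who hasn't looked at it, part of why Kermit is so quick to
implement is that a basic packet is just printable ASCII with a
one-character arithmetic checksum. A rough sketch of building one, from
memory of the published protocol description (so check the details
against the Kermit book before trusting them):

/*
 * Rough sketch of basic Kermit packet framing; details are from memory
 * of the published protocol, so verify before relying on them.
 */
#include <stdio.h>

#define SOH 0x01                        /* packet mark */

static int tochar(int x) { return x + 32; }  /* encode 0..94 as printable */

/* Build a basic packet: SOH LEN SEQ TYPE DATA... CHECK CR.
   Returns the packet length, or -1 if the data won't fit. */
static int make_packet(char *pkt, int seq, char type,
                       const char *data, int datalen)
{
    int i, sum = 0, p = 0;

    if (datalen > 90)                   /* keeps LEN printable */
        return -1;

    pkt[p++] = SOH;
    pkt[p++] = tochar(datalen + 3);     /* SEQ + TYPE + CHECK + data */
    pkt[p++] = tochar(seq & 63);
    pkt[p++] = type;
    for (i = 0; i < datalen; i++)
        pkt[p++] = data[i];

    /* Single-character arithmetic checksum over everything after SOH. */
    for (i = 1; i < p; i++)
        sum += (unsigned char)pkt[i];
    pkt[p++] = tochar((sum + ((sum >> 6) & 3)) & 63);
    pkt[p++] = '\r';
    return p;
}

int main(void)
{
    char pkt[128];
    /* "~* @-#" is a dummy data field, not real Send-Init parameters. */
    int n = make_packet(pkt, 0, 'S', "~* @-#", 6);
    if (n > 0)
        fwrite(pkt, 1, n, stdout);
    return 0;
}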
Sellam International Man of Intrigue and Danger
-------------------------------------------------------------------------------
Looking for a six in a pile of nines...
Coming soon: VCF 4.0!
VCF East: Planning in Progress
See http://www.vintage.org for details!