It was thus said that the Great Bob Shannon once stated:
Vintage Computer Festival wrote:
Anything simpler would involve having files take
up static areas of the
disk, perhaps defined by track boundaries, with a fixed number of directory
entries (as defined by the total number of file areas) and a limited file
size (as defined by the size of each file area).
This latter approach is more like what I'm thinking of.
Either fix the file size at creation time, or, even simpler, set all files to
some length; longer files can be implemented (later?) as a linear set of
files, a file plus its extensions. Maybe allocate 64 kbytes to each file
'slot'.
Make the slots too big, and you limit the number of files available and
waste much of the space on the disk. Make them too small, and you get
overflow (a file that is too big for its slot).
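The slot-plus-extension idea above could be sketched as a fixed directory
table on disk; everything here (field names, the 16-bit slot index, the
sentinel value) is my own illustration, not any actual on-disk format:

```c
#include <stdint.h>

#define SLOT_SIZE 65536u    /* 64 kbyte file slot, as suggested above */
#define NO_EXT    0xFFFFu   /* sentinel: this slot has no extension */

/* One directory entry per slot; sizes are illustrative. */
struct dir_entry {
    char     name[12];    /* fixed-length file name */
    uint16_t used;        /* nonzero if the slot is allocated */
    uint16_t next_slot;   /* index of the extension slot, or NO_EXT */
    uint32_t length;      /* bytes actually used within this slot */
};

/* A file's total size is the sum over its chain of extension slots. */
uint32_t file_size(const struct dir_entry *dir, uint16_t slot)
{
    uint32_t total = 0;
    while (slot != NO_EXT) {
        total += dir[slot].length;
        slot = dir[slot].next_slot;
    }
    return total;
}
```

Directory lookup stays trivial (scan a fixed-size table), and growing a
file just means claiming a free slot and linking it in.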
Fast, efficient (disk space usage), easy (to implement): pick any two 8-)
I have come up with a system that is easy to implement and quite efficient
in its use of disk space, but don't bother measuring the speed because it
will be S-L-O-W ... if you want fast and efficient, then possibly a B-tree
or B*-tree disk format would be the way to go, but that's not too easy to
implement. Using a file per track is fast and easy, but doesn't use space
efficiently.
It also depends upon the size of the disk you are going to use---up to a
few megabytes, the MS-DOS FAT system (the 12- or 16-bit variant) will
probably be the best bet---it is well documented, with the ability to
interoperate with most systems.
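For what it's worth, the 12-bit variant packs two 12-bit FAT entries into
three bytes, which is the only fiddly part of reading it; a small sketch of
decoding one entry from an in-memory copy of the table (the function name
is mine):

```c
#include <stdint.h>

/* Read entry n from a packed FAT-12 table: entry n lives at byte
   offset n * 1.5; even entries take the low 12 bits of the 16-bit
   little-endian word there, odd entries take the high 12 bits. */
uint16_t fat12_entry(const uint8_t *fat, uint16_t n)
{
    uint32_t off = n + n / 2;                       /* n * 1.5 bytes */
    uint16_t v   = fat[off] | (uint16_t)(fat[off + 1] << 8);
    return (n & 1) ? (uint16_t)(v >> 4) : (uint16_t)(v & 0x0FFF);
}
```

Following a file's cluster chain is then just repeated calls to this until
an end-of-chain value (0xFF8 or above) turns up.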
-spc (my two bits worth ... )