Vintage Computer Festival wrote:
> One such way:
> Have some portion of the disk set aside for a fixed number of directory
> entries (the "directory").
> Each directory entry has a certain number of characters for a filename (12
> is good), a file type byte, a status byte, and a pointer to the portion of
> the disk where the file is stored.
> The file data is then stored on consecutive sectors, with the last one or
> two words (depending on word size implemented) pointing to the next sector
> of the file. Zero values mean "end of file".
If you have a pointer to form a linked list, why use consecutive sectors?
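For concreteness, here is roughly what that directory entry and sector
chain look like in C. The 256-byte sector and 16-bit sector numbers are
my assumptions; the quoted scheme leaves word size open.

#include <stdint.h>

#define SECTOR_SIZE  256   /* assumption: Apple DOS 3.3-style sectors */
#define FILENAME_LEN 12

/* One fixed-size directory entry, per the quoted description. */
struct dir_entry {
    char     name[FILENAME_LEN];  /* space-padded filename           */
    uint8_t  type;                /* file type byte                  */
    uint8_t  status;              /* e.g. 0 = entry free, 1 = in use */
    uint16_t first_sector;        /* first data sector of the file   */
};

/* Each data sector gives up its last two bytes to link to the next
 * sector of the file; a zero link means end of file. */
struct data_sector {
    uint8_t  data[SECTOR_SIZE - 2];
    uint16_t next;                /* 0 = end of file                 */
};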
> There also needs to be a map somewhere specifying which sectors are
> free/used.
Now things are getting complicated! This one component adds a major
design decision I'd rather avoid: how to allocate unused sectors
(linear versus first-fit).
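For scale, though, the map itself is tiny; the decision is really just
the scan policy. A first-fit sketch, assuming Apple DOS 3.3's
560-sector geometry (35 tracks x 16 sectors):

#include <stdint.h>

#define NUM_SECTORS 560   /* 35 tracks x 16 sectors, per Apple DOS 3.3 */

/* 1 bit per sector, 1 = free; a real DOS would load this from the
 * disk's map sector rather than start from all-used. */
static uint8_t free_map[(NUM_SECTORS + 7) / 8];

/* First-fit: take the first free sector found scanning from sector 0.
 * Returns a sector number, or -1 if the disk is full. */
int alloc_sector(void)
{
    for (int s = 0; s < NUM_SECTORS; s++) {
        if (free_map[s / 8] & (1u << (s % 8))) {
            free_map[s / 8] &= ~(1u << (s % 8));  /* mark it used */
            return s;
        }
    }
    return -1;
}

void free_sector(int s)
{
    free_map[s / 8] |= (1u << (s % 8));           /* mark it free */
}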
> This is basically how most early microcomputer DOS works. I take my
> example from Apple DOS 3.3.
> Anything simpler would involve having files take up static areas of the
> disk, perhaps defined by track boundaries, with a fixed number of directory
> entries (as defined by the total number of file areas) and a limited file
> size (as defined by the size of each file area).
This latter approach is more like what I'm thinking of.
Either fix the file size at creation time, or, even simpler, set all
files to one fixed length; longer files can then be implemented
(later?) as a linear set of files, a file plus its extensions. Maybe
allocate 64 kbytes to each file 'slot'.
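The appeal of fixed slots is that finding a file's data becomes pure
arithmetic. A sketch; the sector size and directory-area size here are
placeholders of mine:

#include <stdint.h>

#define SECTOR_SIZE 256              /* assumption                      */
#define DIR_SECTORS 4                /* assumption: directory area size */
#define SLOT_SIZE   (64u * 1024u)    /* 64 kbytes per file slot         */

/* Directory entry n owns slot n: the directory area comes first, then
 * slot 0, slot 1, ... back to back. No map or chain to maintain. */
uint32_t slot_offset(unsigned n)
{
    return DIR_SECTORS * SECTOR_SIZE + n * SLOT_SIZE;
}

A file longer than 64 kbytes would then be slot n plus its extension
slots, but the common case stays a single multiply and add.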
New files always get allocated at the end, as in the Northstar DOS
example Patrick Rigney described.
The trick then becomes: how do you efficiently 'squeeze' the disk to
recover the space from deleted files?
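One answer is a classic compaction pass: walk the directory in order,
copy each live file's slot down over the gap left by deleted ones, and
fix up its entry. A sketch reusing the definitions from the slot
fragment above (disk[] and copy_slot() are hypothetical helpers):

#include <stdint.h>
#include <string.h>

/* Reuses SLOT_SIZE, slot_offset() and struct dir_entry from the
 * sketches above; 'disk' as a flat byte array is hypothetical. */
extern uint8_t disk[];

static void copy_slot(unsigned dst, unsigned src)
{
    memcpy(disk + slot_offset(dst), disk + slot_offset(src), SLOT_SIZE);
}

/* Squeeze: slide live files down over deleted ones, left to right,
 * so the free slots end up contiguous at the end again. Assumes
 * directory entry i owns slot i, as in the slot sketch. */
void squeeze(struct dir_entry dir[], unsigned num_entries)
{
    unsigned dst = 0;
    for (unsigned src = 0; src < num_entries; src++) {
        if (dir[src].status == 0)   /* deleted or never used: skip  */
            continue;
        if (src != dst) {
            copy_slot(dst, src);    /* move the 64 KB of file data  */
            dir[dst] = dir[src];    /* move the directory entry too */
            dir[src].status = 0;    /* old home is now free         */
        }
        dst++;
    }
}

The pass is linear in disk size and needs no scratch space, at the
cost of rewriting every live slot past the first hole.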