On Sun, 27 Jul 2003, Patrick Rigney wrote:
> The thing I really don't like about this approach is that any file that
> uses only a few allocation units still wastes an entire sector with a
> mostly empty map block. If you look around on these disks, I'll bet you
> find a lot of mostly-empty map blocks, which is just wasted space.
I agree, but then the idea, I imagine, was that ever-larger storage media
would render this concern obsolete. The linked-list approach in DOS 3.3
was much more efficient.
> I still think it's a better and more efficient way to go, and not much
> more work to code than the simpler schemes being suggested.
I agree that it's simple, but not that it's inefficient. Even on a small
micro, finding the first "0" bit in an arbitrarily long bit string takes
only a few cycles: you would just look for the first byte that is not 255.
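For the sake of illustration, that byte-at-a-time scan might look like the
sketch below. The function name, bit layout (LSB = lowest-numbered unit),
and in-memory map are my assumptions, not anything specified in the thread:

```c
#include <stdint.h>

/* Sketch of the scan described above: skip whole bytes that are 0xFF
 * (all eight units allocated), then pick out the first clear bit.
 * Returns the allocation-unit index, or -1 if the map is full. */
int first_free_unit(const uint8_t *map, int nbytes)
{
    for (int i = 0; i < nbytes; i++) {
        if (map[i] == 0xFF)
            continue;                 /* every unit in this byte is taken */
        for (int b = 0; b < 8; b++)
            if (!(map[i] & (1u << b)))
                return i * 8 + b;     /* first clear bit found */
    }
    return -1;                        /* no free units anywhere */
}
```

The outer loop is the whole trick: on a mostly-full map it compares one byte
per eight allocation units, and the bit-by-bit inner loop runs at most once.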
> Brute force approaches will bleed time badly as the filesystem fills.
> And if you free a sparse or fragmented file in a large filesystem, it
> can require resetting a lot of scattered bits in that map, which can in
> turn require a lot of reads and writes to the map blocks.
I don't share the concern. With efficient coding, the overhead is
negligible.
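For what it's worth, clearing the bits for a freed file is an equally small
loop; the real cost is the map-block I/O, and even that can be batched so
each touched map sector is rewritten once. A hypothetical in-memory sketch
(the function name and layout are assumptions):

```c
#include <stdint.h>

/* Hypothetical sketch: free a file by clearing one bit per allocation
 * unit. Operates on an in-memory copy of the map; on disk you would
 * mark each touched map sector dirty and write it back once, no matter
 * how many of its bits were cleared. */
void free_units(uint8_t *map, const int *units, int nunits)
{
    for (int i = 0; i < nunits; i++)
        map[units[i] / 8] &= ~(1u << (units[i] % 8));
}
```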
--
Sellam Ismail Vintage Computer Festival
------------------------------------------------------------------------------
International Man of Intrigue and Danger
http://www.vintage.org
* Old computing resources for business and academia at www.VintageTech.com *