At 11:36 PM 3/9/2011, vintagecoder at aol.com wrote:
I understand that, and it makes sense that "deleting" data or trying to
overwrite a filesystem record doesn't necessarily do what we think. But filling
the drive to capacity with a utility like dd, using zeros or random data,
has to work, because you can read the data back, so it's really there.
If the "file system" were smart, then after you did that it could just point you
to one zeroed block all day long, if it knew that's what you wanted to see.
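That "one zeroed block" behavior is essentially what sparse files already do. Here's a
minimal Python sketch, assuming a POSIX filesystem with sparse-file support (the file
name and size are just examples): a range you never wrote reads back as zeros even
though almost nothing is allocated on disk.

  import os

  path = "sparse_demo.img"          # hypothetical demo file
  size = 1024 * 1024 * 1024         # claim 1 GiB

  # Seek past the end and write a single byte; the untouched range becomes
  # a "hole" the filesystem serves up as zeros without storing anything.
  with open(path, "wb") as f:
      f.seek(size - 1)
      f.write(b"\0")

  st = os.stat(path)
  print("apparent size:", st.st_size)              # 1 GiB, what ls -l shows
  print("allocated on disk:", st.st_blocks * 512)  # typically a few KB

  # Reading the hole returns zeros even though nothing was ever written there.
  with open(path, "rb") as f:
      chunk = f.read(4096)
  print("reads back as zeros:", chunk == b"\0" * 4096)

  os.remove(path)

So "I can read the data back" doesn't by itself prove the bits are physically
sitting on the platter.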
In virtualized environments, there's great advantage to be gained
by letting an OS think it has much more space than it is actually
using. It might look like you have 1 TB, but your disk-drive container
file is only as large as the storage you're actually using. This
"thin provisioning" lets many virtual servers run on a single physical
machine, and lets the real storage be reallocated among a dozen servers
that merely think they might need it. It's oversubscription at the file
system level.
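The same sketch works at the container-file level (again Python, again assuming a
POSIX filesystem with sparse-file support; the file name, sizes, and offsets are made
up for illustration): the "virtual disk" advertises a terabyte, but the host only
spends real blocks on the regions a guest actually writes.

  import os

  disk = "guest_disk.img"            # hypothetical thin-provisioned container
  apparent_size = 1 << 40            # advertise 1 TB to the guest

  # Create the container without writing any data: it costs almost nothing.
  with open(disk, "wb") as f:
      f.truncate(apparent_size)

  def usage(path):
      st = os.stat(path)
      return st.st_size, st.st_blocks * 512

  print("after create:", usage(disk))   # 1 TB apparent, ~0 allocated

  # The guest writes a few megabytes at scattered offsets; only those
  # regions get backed by real storage on the host.
  with open(disk, "r+b") as f:
      for offset in (0, 10 << 30, 500 << 30):
          f.seek(offset)
          f.write(os.urandom(1 << 20))  # 1 MiB per region

  print("after writes:", usage(disk))   # apparent size unchanged, ~3 MiB allocated

  os.remove(disk)

Real hypervisor formats (qcow2, VMDK, and so on) add their own metadata, but the
economics are the same: capacity promised up front, storage spent only as needed.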
Technology is moving away from one user, one hard drive, and away from any
one-to-one correspondence between what was once written and what you might
find if you look deeply at the low level. Forensics may become much more
difficult because of it. Cloud storage throws it for a loop, too, as do
removable storage, encrypted file systems within file systems, wireless
network storage, and so on.
- John