On Wed, Feb 1, 2023 at 1:41 AM emanuel stiebler via cctalk <
cctalk(a)classiccmp.org> wrote:
On 2023-02-01 00:00, Chuck Guzis via cctalk wrote:
On 1/31/23 20:16, Ali via cctalk wrote:
If you look at the specs for SSDs or any flash
medium for that matter,
they're rated in terms of *write* cycles, which is why you don't want to
abuse that.
right
But in most OSes you can check the SMART data to get an idea.
However, it may well be that writing is the only
way to refresh cells,
as reading won't, if I understand flash operation correctly.
Reading ensures that the cells are checked. If they fall below specific
thresholds, they will be copied to another block.
Indeed. It triggers the usual reliability engine in the drive. All data in
NAND is
stored with a number of redundant bits so that up to N errors can happen
and the data can be recovered. All NAND has a read error signal that
triggers well below N so that the data is all read and reconstructed (this
is normal on all reads), so it can then be written directly to the new
space.
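To make the "redundant bits" point concrete: here is a toy sketch of redundancy-based single-error correction using a Hamming(7,4) code. This is only illustrative; real NAND controllers use far stronger BCH or LDPC codes over whole pages, but the principle (store extra parity bits, detect a syndrome, correct the flipped bit on read) is the same.

```python
# Toy Hamming(7,4): 4 data bits protected by 3 parity bits.
# Any single flipped bit in the 7-bit codeword can be corrected on read.

def encode(nibble):
    """Encode 4 data bits [d1, d2, d3, d4] into a 7-bit codeword."""
    d1, d2, d3, d4 = nibble
    p1 = d1 ^ d2 ^ d4   # parity over codeword positions 1, 3, 5, 7
    p2 = d1 ^ d3 ^ d4   # parity over codeword positions 2, 3, 6, 7
    p3 = d2 ^ d3 ^ d4   # parity over codeword positions 4, 5, 6, 7
    return [p1, p2, d1, p3, d2, d3, d4]

def decode(word):
    """Correct up to one bit error, then return the 4 data bits."""
    w = list(word)
    s1 = w[0] ^ w[2] ^ w[4] ^ w[6]
    s2 = w[1] ^ w[2] ^ w[5] ^ w[6]
    s3 = w[3] ^ w[4] ^ w[5] ^ w[6]
    error_pos = s1 + 2 * s2 + 4 * s3  # 0 = clean; else 1-based flipped bit
    if error_pos:
        w[error_pos - 1] ^= 1
    return [w[2], w[4], w[5], w[6]]
```

Flip any one of the seven bits and decode() still recovers the original nibble; a controller's "read error signal that triggers well below N" is analogous to seeing a nonzero syndrome while correction is still easy.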
Also, this isn't usually done on a PAGE basis inside the NAND, but on an
entire erase block basis, because it's a leading indicator of problems to
come. Writing this data to a freshly erased block will mean it can be
read for some time into the future.
But
rewriting a sector or block of a file doesn't usually write back to the
original, because of the write-leveling firmware in the drive.
right
Right.
JEDEC requires
data retention of a consumer drive for at least 1 year,
which doesn't sound like much; real retention is probably much longer.
Retention in case of power off.
If the power is applied all the time, the internal controller "can"
check the quality of the cells automatically (but this really depends on
the controller, the controller version, and the OS has to choose the right
strategy. And controllers have improved a lot lately.)
The OS might not have a choice. All the SSDs that I've used in the
past decade at $WORK have not exposed any of this to the host, not
even enough stats to know when it's going on in real time... let alone
the ability to pause these operations for a little while until we're off
peak for the day...
And JEDEC retention is also at 20C, when the NAND is maximally
worn, with certain data access / write patterns leading up to that
wear. Most other wear patterns, especially archival ones, can
lead one to expect a much longer retention in all but the tiniest
processes storing 3 or 4 bits per cell.
You can write
a script that write-refreshes every file on the drive.
Please don't :)
Just tell the controller to run a refresh ...
It would be even worse, since it could also trigger additional writes: as
the new LBAs are written, the old erase blocks they were in become almost
empty, and new erase blocks will be needed for the LBAs that are flooding
the drive. The writes, and the amplification writes, will cause far more
wear and tear on the drive than a read scan of the whole drive (assuming
you are too impatient to just leave the drive powered for the internal
refresh...)
The easiest
thing is to buy a second drive and ping-pong the data
between them periodically. That way, if one fails, you still have the
other for backup.
Disagree here, just run a compare between the two drives.
a) It will read all files, and the controller checks them in the
background (and will move them, if necessary).
b) You know that after the compare you still have the data twice, on
independent drives.
c) You have less wear on both drives.
A read could trigger a move due to read disturb, but that's only after
tens of thousands of reads (or more).
Warner