On Wed, Feb 1, 2023, 2:52 PM Paul Koning via cctalk <cctalk(a)classiccmp.org> wrote:

> On Feb 1, 2023, at 3:20 PM, Fred Cisin via cctalk <cctalk(a)classiccmp.org> wrote:
>
>> On Wed, 1 Feb 2023, Ali via cctalk wrote:
>>> But does that matter? If the main purpose is to be able to refresh the
>>> data so it is readable, does it matter that the data is not in the same
>>> block, as long as it is readable?
>> Ah, but most of that sort of memory has a finite number of cycles, and
>> wears out due to use. Testing it is heavy usage, and brings about an
>> even earlier end of life.
>>
>> Could we call that a "nosocomial" ("not so comical") deterioration? :-?
> It's well known that flash memory (and NVRAM generally) has write limits.
> I don't know of any read limits. Some other memories have write limits as
> well, though they are far larger and generally far less known. I think some
> of the ferroelectric and phase-change non-volatile memory types that seem
> to emerge from time to time -- FRAM, for example -- have write limits.
> Modern high-density HDAs also do, I believe, because the heads come closer
> to the surface during writes and as a result are more likely to touch the
> platter.
>
> But read limits? I'm not sure about that. What sort of numbers are we
> talking about?
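As a rough editorial illustration of how write limits translate into device lifetime (the numbers below are hypothetical examples, not figures from this thread), rated program/erase cycles, capacity, daily host writes, and write amplification combine as:

```python
# Back-of-the-envelope NAND write-endurance estimate.
# All numbers are illustrative assumptions, not any drive's actual specs.

def endurance_years(capacity_gb, pe_cycles, writes_gb_per_day, write_amp):
    """Years until the rated program/erase cycles are exhausted,
    assuming perfect wear leveling across the whole device."""
    total_writable_gb = capacity_gb * pe_cycles          # raw endurance budget
    effective_daily_gb = writes_gb_per_day * write_amp   # host writes amplified by the FTL
    return total_writable_gb / effective_daily_gb / 365.0

# e.g. a 1 TB drive rated for 3000 P/E cycles, 50 GB/day of host
# writes, and a write amplification factor of 3:
years = endurance_years(1000, 3000, 50, 3)
```

Even with pessimistic assumptions, write endurance tends to outlast ordinary desktop workloads by decades, which is why read disturb rather than write wear is the interesting question for archival use.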
Read disturb in NAND is a thing. However, it kicks in only after millions or
hundreds of millions of page reads within a single erase block (EB). Most
FTLs I've seen will treat it like any other "too many bits in error" read
when it happens. Some drives keep count and adjust voltage thresholds to
compensate. A few try to proactively garbage collect, though the benefits of
that are slim to nil for most workloads.

Warner

P.S. Some QLC NAND has worse read disturb, but it's only about 100x worse.
It can come up in high-volume applications, but not in simple archival
reading.
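The proactive bookkeeping described above can be sketched as follows. This is a minimal editorial illustration, not any particular FTL's design; the counter threshold and the relocation hook are assumptions.

```python
# Hedged sketch of per-erase-block read-disturb tracking in an FTL.
# READ_DISTURB_LIMIT is an illustrative threshold, not a datasheet value;
# real limits are in the millions of reads per erase block.
READ_DISTURB_LIMIT = 100_000

class EraseBlock:
    def __init__(self, num):
        self.num = num          # block number within the NAND die
        self.read_count = 0     # page reads since the block was last erased

def on_page_read(block, relocate):
    """Count each page read against its erase block; once the illustrative
    limit is reached, proactively relocate (garbage collect) the block's
    valid data and reset the counter, before read disturb flips bits."""
    block.read_count += 1
    if block.read_count >= READ_DISTURB_LIMIT:
        relocate(block)         # copy valid pages elsewhere, then erase
        block.read_count = 0
```

Drives that skip this and simply rely on ECC catch the problem lazily: the disturbed block eventually fails a normal read with "too many bits in error" and gets rewritten then.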
> If all else fails there's core memory, which as far as I remember is
> pretty much unlimited for both read and write.
>
> paul