On 2023-02-02 04:38, David Brownlee wrote:
That reminds me (looks at 43.5T of ZFS pool that has not had a scrub
since 2021).
It can be nice to have a filesystem which handles redundancy and also
the option to occasionally read all the data, check the end-to-end
checksums (in the unlikely case a device returns a successful read
with bad data), and fix up everything. It does not eliminate the need
for remote copies, but it gives a little extra confidence that the
master copy is still what it should be :)
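For anyone who has not set this up, the scrub itself is just a couple
of zpool commands; a minimal sketch, assuming a pool named "tank" and
leaving the scheduling to cron:

    # kick off a scrub; it runs in the background and can take hours
    # on a pool this size
    zpool scrub tank

    # check progress / results later; -x prints only pools with problems
    zpool status tank
    zpool status -x

Dropping the scrub line into a monthly cron job is usually enough to
keep on top of it.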
So, what else do you guys use to make sure your data is safe for the
years to come?