On Thu, 2 Feb 2023 at 11:54, emanuel stiebler via cctalk
<cctalk(a)classiccmp.org> wrote:
> On 2023-02-02 04:38, David Brownlee wrote:
>> That reminds me (looks at 43.5T of zfs pool that has not had a
>> scrub since 2021).
>>
>> It can be nice to have a filesystem which handles redundancy and
>> also the option to occasionally read all the data, check
>> end-to-end checksums (in the unlikely case a device returns a
>> successful read with bad data), and fix everything up. It does not
>> eliminate the need for remote copies, but gives a little extra
>> confidence that the master copy is still what it should be :)
>
> So, what else do you guys use to make sure your data is safe for
> the years to come?
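
(For anyone who wants to follow along at home, kicking off a scrub
and checking on it is roughly the below - "tank" is a stand-in for
whatever your pool is called:

    zpool scrub tank        # read all data, verify checksums, and
                            # repair from redundancy where possible
    zpool status -v tank    # scrub progress/result, plus any files
                            # with unrecoverable errors

)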
Code which can be public lives in GitHub, code which cannot be
public in a free GitLab account (code which evokes Cthulhuian
mindworms on reading and should never be shared with others is kept
with the other locally backed-up files).
The data on the main machine is held on 6 disks in ZFS raidz2 (it
takes three disks failing to lose data), synced to two remote
machines (ZFS in a simple config to give integrity checking but
without local redundancy).
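
Very roughly this shape (pool and device names made up for
illustration):

    # main machine: 6-disk raidz2, survives any two disk failures
    zpool create tank raidz2 da0 da1 da2 da3 da4 da5

    # each remote: single disk, no local redundancy, but reads are
    # still checksummed so corruption is at least detected
    zpool create backup da0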
Sync is via Syncthing with staggered file versioning (it keeps 365
days of changes for any given file). Most data is pushed only from
the main machine, with the remotes also able to sync between
themselves, but some folders are set to sync between all machines.
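
The versioning bit is per-folder - from memory the config.xml
fragment looks something like this (folder id and path made up,
maxAge is in seconds, so 31536000 is 365 days):

    <folder id="data" path="/tank/data" type="sendonly">
        <versioning type="staggered">
            <!-- keep old versions of changed files for 365 days -->
            <param key="maxAge" val="31536000"/>
        </versioning>
    </folder>

(type="sendreceive" instead on the folders which sync between all
machines.)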
Biggest vulnerability would be an exploit in Syncthing, or some
common OS-level exploit, as all the data is online. ("A backup is
not a backup when it's online, it's just a copy")
David