Hello - this post will probably seem Greek to nearly everyone here, but I'm hoping there are a few of you out there who work at Amazon AWS/Google GCE/Microsoft Azure or have a NAS/SAN at home.
The problem - I have a growing archive of Michigan Athletics video (4-5TB now). Eventually I can expand my NAS (Windows Server 2016 using ReFS - to hell with RAID card firmware bugs) to 32TB (8x8TB in RAID1), but I will run out of room at some point. I like to leave one spare 'dual bay' worth of free space open in case I need to pull two hard drives and replace them with two larger ones. For example, with 8x4TB (16TB usable), I'd leave 4TB free so I can swap 2x4TB for 2x8TB, ending up with 2x8TB + 6x4TB and giving the NAS room to 'breathe' and grow.
The big problem I'm facing - cold storage. I'd ideally like to move older seasons onto pairs of cheap 2TB hard drives at $40 apiece (far cheaper in the long run than Amazon Glacier). Each file saved to the disks would get a checksum (to detect bitrot), and then I'd have a program compare the checksums on the two disks for every file and report any file where they disagree.
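For the curious, here's roughly what I have in mind - a minimal Python sketch, not a polished tool (the E:/archive and F:/archive paths are just placeholders for the two cold-storage disks):

    import hashlib
    from pathlib import Path

    def hash_tree(root):
        # Return {relative path: SHA-256 hex digest} for every file under root.
        hashes = {}
        for path in Path(root).rglob("*"):
            if path.is_file():
                h = hashlib.sha256()
                with open(path, "rb") as f:
                    # Read in 1 MiB chunks so huge game files don't blow up memory.
                    for chunk in iter(lambda: f.read(1 << 20), b""):
                        h.update(chunk)
                hashes[str(path.relative_to(root))] = h.hexdigest()
        return hashes

    # Hash both cold-storage copies and report any file whose checksums disagree
    # (or that exists on only one of the two disks).
    disk_a = hash_tree("E:/archive")
    disk_b = hash_tree("F:/archive")
    for name in sorted(disk_a.keys() | disk_b.keys()):
        if disk_a.get(name) != disk_b.get(name):
            print(f"MISMATCH or missing on one disk: {name}")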
For the CompSci guys: apparently PAR3 uses Reed-Solomon error-correction codes - is it worth generating these recovery records for a single 15-20GB file (roughly the average size of a football game)?
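(From what I can tell, the mature tooling is par2cmdline, which implements PAR2 rather than PAR3, but it's the same Reed-Solomon idea. At 10% redundancy, a 20GB game costs about 2GB of recovery data and can repair up to roughly that much damage. The filename below is just an example:

    par2 create -r10 game.mkv.par2 game.mkv
    par2 verify game.mkv.par2
)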
Does anyone know of an application (freeware or otherwise) that creates checksums of each file on a drive or directory, writes them to a text file, and can later re-scan and compare the stored historical checksums against freshly computed ones?
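If nothing off-the-shelf turns up, this part is scriptable too - a rough sketch of the write-then-re-verify idea (the manifest name checksums.txt and the script name are arbitrary):

    import hashlib
    import sys
    from pathlib import Path

    def sha256_of(path):
        # SHA-256 of a single file, read in 1 MiB chunks.
        h = hashlib.sha256()
        with open(path, "rb") as f:
            for chunk in iter(lambda: f.read(1 << 20), b""):
                h.update(chunk)
        return h.hexdigest()

    root, manifest = Path(sys.argv[2]), Path("checksums.txt")

    if sys.argv[1] == "create":
        # Walk the directory and record "hash<TAB>relative path" per file.
        with open(manifest, "w") as out:
            for p in sorted(root.rglob("*")):
                if p.is_file():
                    out.write(f"{sha256_of(p)}\t{p.relative_to(root)}\n")
    elif sys.argv[1] == "verify":
        # Re-hash every recorded file and flag changed or missing ones.
        for line in open(manifest):
            old_hash, name = line.rstrip("\n").split("\t", 1)
            p = root / name
            if not p.is_file():
                print(f"MISSING: {name}")
            elif sha256_of(p) != old_hash:
                print(f"CHANGED (possible bitrot): {name}")

Save it as, say, checkdir.py, run "python checkdir.py create D:/archive" once to build the manifest, then "python checkdir.py verify D:/archive" on each later scrub.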