Verify that your master is correct, and then re-write the entire backup from time to time. Why? Because hard disks just sitting around do develop errors or lose data, and a full re-write is better than piecemeal checking and updating (which you do in between the big re-writes).
Out of curiosity, how do you verify your master data?
If you exclusively use DNG files, I think those can be validated against an internal verification hash, which would be close enough to an ideal solution. For native RAW files and for derivative work stored in TIFF or PSD/PSB files, however, you would need another approach: a kind of reference repository so that each file can be compared against a known good duplicate. A file system check with utilities such as chkdsk (on Windows) or fsck (BSD and Unix, possibly including Macs) will only verify file system metadata, not file contents (unless the file system itself checksums data, as ZFS or BTRFS do). There might be other solutions I am not aware of, hence my question about how you verify your master data.
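To make the idea of a reference repository concrete, here is a minimal sketch of how one could be built and checked with a simple SHA-256 manifest. It is only an illustration, not an existing tool: the script name, the manifest format (one "<hash>  <relative path>" line per file) and the paths are all assumptions of mine.

```python
#!/usr/bin/env python3
"""Build or verify a SHA-256 manifest for a photo archive (illustrative sketch)."""
import hashlib
import sys
from pathlib import Path

def sha256_of(path: Path, chunk_size: int = 1 << 20) -> str:
    """Hash a file in chunks so large RAW/TIFF files do not fill memory."""
    h = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            h.update(chunk)
    return h.hexdigest()

def build_manifest(root: Path, manifest: Path) -> None:
    """Record a hash for every file under `root`."""
    with manifest.open("w", encoding="utf-8") as out:
        for file in sorted(p for p in root.rglob("*") if p.is_file()):
            out.write(f"{sha256_of(file)}  {file.relative_to(root)}\n")

def verify_manifest(root: Path, manifest: Path) -> int:
    """Re-hash every recorded file and report missing or corrupted ones."""
    errors = 0
    for line in manifest.read_text(encoding="utf-8").splitlines():
        recorded_hash, rel_path = line.split("  ", 1)
        file = root / rel_path
        if not file.is_file():
            print(f"MISSING  {rel_path}")
            errors += 1
        elif sha256_of(file) != recorded_hash:
            print(f"CORRUPT  {rel_path}")
            errors += 1
    return errors

if __name__ == "__main__":
    # Usage: verify_archive.py build|verify <archive root> <manifest file>
    action, root, manifest = sys.argv[1], Path(sys.argv[2]), Path(sys.argv[3])
    if action == "build":
        build_manifest(root, manifest)
    else:
        sys.exit(1 if verify_manifest(root, manifest) else 0)
```

You would run the "build" step once against the master while you still trust it, keep the manifest somewhere safe, and then re-run "verify" periodically against both the master and the backups.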
I still don't see how rewriting your backups would thwart the danger of file corruption on the copies, since you do not mention verifying the copy afterwards. You might have a fresher copy of your data, but as long as it has not been validated against the master data, the claim that it is identical to the original is still an untested assumption.
Bit rot can happen at the time the files are copied (defective sector on the target disk, defective cable, lousy power supply unit, etc.) and might not be detectable unless the file is read again and compared against a known good duplicate. There may also be disk sectors that progressively lose their magnetic polarisation, but that is addressed simply by rereading the sectors (provided a sector can still be read, the disk will rewrite individual bits with weak polarisation, so that the degradation does not continue unnoticed), so here again there is no benefit to a rewrite done by the end user. An SSD is another story, with proprietary internal wear-levelling algorithms that may move data to other cells, but similar behaviour for preventing bit rot would be expected.
A verification that the copy is identical to the master data would, however, ensure that the data is there and is valid. That is why I suggested performing an actual in-place validation of the data instead of a rewrite. Such a rewrite would IMO only make sense when data needs to be migrated to a different disk, as suggested by Bob.
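For completeness, this is roughly what such a validation of a copy against the master could look like. Again just a sketch: the two directory paths are examples of mine, and in practice a tool such as rsync with its --checksum option can perform a similar comparison.

```python
#!/usr/bin/env python3
"""Validate that a backup copy is byte-identical to the master archive (sketch)."""
import filecmp
import sys
from pathlib import Path

def compare_trees(master: Path, backup: Path) -> int:
    """Return the number of files that are missing from, or differ in, the backup."""
    problems = 0
    for src in sorted(p for p in master.rglob("*") if p.is_file()):
        dst = backup / src.relative_to(master)
        if not dst.is_file():
            print(f"MISSING  {dst}")
            problems += 1
        elif not filecmp.cmp(src, dst, shallow=False):
            # shallow=False forces a byte-by-byte comparison instead of
            # trusting size and modification time alone.
            print(f"DIFFERS  {dst}")
            problems += 1
    return problems

if __name__ == "__main__":
    # Usage: validate_backup.py <master root> <backup root>
    master, backup = Path(sys.argv[1]), Path(sys.argv[2])
    sys.exit(1 if compare_trees(master, backup) else 0)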
Cheers,
Fabien