Short answer for question 1: yes.
Insanely long answer:
Do a search for silent data corruption. There are lots of articles on it, and it's the reason more modern file systems in development are integrating chunk-based checksums, RAID 1, and background scrubbing. It's known that SDC can corrupt anything on the disk: file system structures, metadata, or actual file data.
File system corruption can exhibit itself in all sorts of ways depending on the nature of the corruption. Files, directories or an entire disk could be inaccessible. But it's possible the file system will point to bogus allocation blocks - so when you access the file, the file itself appears corrupt even though its data blocks are OK on-disk.
Metadata and data corruption can likewise manifest in variable ways.
JHFS+/X does not use any checksumming for data, metadata, or the journal. It depends entirely on the drive mechanism's error detection and correction. If the disk doesn't detect an error, or thinks it has corrected one but hasn't, the file system accepts the bogus data as completely valid; it has no way of verifying it's correct.
Once corruption sneaks into a primary system, with a cascading backup scheme without archiving like the one you describe, it's just a matter of time before every recycled backup contains the corruption too. Mind you, this likely affects a small number of files, and that's what makes it insidious: it's not like a disk failure, where you have a clear, cut-and-dried indicator that you need to fall back on backups.
Consider 3-4 corrupt files that are never accessed: they get migrated to the next primary storage system, not just to your backups. Over several years, a few more files become corrupted silently, and each migration copies all files, including the previously corrupted ones.
Here's an example of using the DNG 1.2 spec's checksum feature. It cannot fix corruption, but it can most likely detect it, because an MD5 hash of the image data is embedded at the time the DNG is created.
http://dpbestflow.org/data-validation/dng-validation

The ZFS and Btrfs file systems use checksumming to identify whether a file is corrupt or altered (even an embedded virus would cause a mismatch), and use their chunk-based RAID 1 feature to retrieve a good copy and replace the bad one automatically.
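To make the idea concrete, here's a rough sketch in Python of the kind of hashing that underlies DNG validation. The function name and chunking strategy are my own, and note the actual DNG feature hashes only the raw image data inside the file, not the file as a whole; this sketch just hashes an entire file:

```python
import hashlib

def file_md5(path, chunk_size=1 << 20):
    """Compute the MD5 digest of a file, reading it in 1 MiB chunks
    so large raw files don't need to fit in memory at once."""
    h = hashlib.md5()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            h.update(chunk)
    return h.hexdigest()
```

Record the digest when the file is first created, and any later mismatch tells you the bits have changed since then, whatever the cause.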
It's a huge reason why I'm not a fan of RAID 0 with conventional file systems: it's asking for trouble. Yes, you get speed, but RAID 0 arrays should be strictly for speed, not longer-term storage. I'd consider them useful for active working files and scratch space, so they don't need to be particularly large, except for video people.
A ZFS (or eventually Btrfs, once stable) based NAS is affordable, such as a FreeNAS or TrueNAS product. It can take periodic snapshots, akin to rolling Time Machine backups, so you can go back in time to retrieve earlier versions, and those snapshots can also be remotely rsync'd to a second NAS off-site.
A smaller scale solution might be one of the products from Ten's Complement, which is a ZFS product for Mac OS X using DAS (direct attached storage).
Really the problem is that, short of per-file checksums, you don't have a practical way of detecting corruption, and you need detection before you can correct anything (by replacing the file with a known good copy that isn't corrupt).
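A poor man's version of per-file checksumming is a manifest you build once and re-verify periodically. Here's a minimal sketch, with function names of my own invention; it reads each file whole, so for very large archives you'd want chunked hashing instead:

```python
import hashlib
import json
import os

def build_manifest(root):
    """Walk a directory tree and record an MD5 digest for every file,
    keyed by path relative to the root."""
    manifest = {}
    for dirpath, _, filenames in os.walk(root):
        for name in filenames:
            path = os.path.join(dirpath, name)
            with open(path, "rb") as f:
                digest = hashlib.md5(f.read()).hexdigest()
            manifest[os.path.relpath(path, root)] = digest
    return manifest

def verify(root, manifest):
    """Re-hash every file in the manifest and return the relative paths
    whose current digest no longer matches the recorded one."""
    bad = []
    for rel, digest in sorted(manifest.items()):
        path = os.path.join(root, rel)
        with open(path, "rb") as f:
            if hashlib.md5(f.read()).hexdigest() != digest:
                bad.append(rel)
    return bad
```

You'd save the manifest (e.g. with `json.dump`) alongside or apart from the archive, then run `verify` before each backup rotation; any file it flags should be restored from an older backup rather than propagated forward.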
I have done a very rudimentary test: intentionally *deleting* a byte from a DNG, and Camera Raw refuses to open the image. If I change (corrupt) a single byte instead, I get a "file is damaged" message, but the file can still be opened and edited. I haven't tried this in Lightroom, but I suspect similar results.
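If you want to repeat that single-byte-corruption experiment yourself, something like this will do it. The function is my own sketch; it's destructive, so only run it on a disposable copy of a file:

```python
def flip_byte(path, offset):
    """Corrupt one byte of a file in place by XOR-ing it with 0xFF.
    The file length is unchanged, mimicking a silent bit-rot event."""
    with open(path, "r+b") as f:
        f.seek(offset)
        original = f.read(1)
        f.seek(offset)
        f.write(bytes([original[0] ^ 0xFF]))
```

Flip a byte somewhere in the middle of a copy of a DNG, then try opening it in Camera Raw or Lightroom to see how the application reacts.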
Last, I would only buy Advanced Format (so-called AF) disks from now on. The larger physical sector size of 4096 bytes (compared to the 512 bytes conventional disks have used for the past 20+ years) allows a more efficient and effective error detection and correction scheme. Most new disks should be AF by now. (There are two kinds, 512e and 4Kn. The 512e drives are common and work just like conventional hard drives; 4Kn drives are only just starting to ship, and I'm not sure whether Mac OS X supports them yet.)
This probably raises more questions than it answers, but you do have at least a couple of ways of keeping some handle on detecting errors in DNGs by using the optional MD5 hash checksumming.