If performance is important then you want to avoid RAIDZ, because RAIDZ effectively has all of your hard drives acting as one (every I/O touches the whole group) rather than being able to drive them all independently. Great for data reliability, terrible for performance (RAIDZ is slower than non-mirror/non-RAID ZFS and slower than mirrored ZFS).
Solvable by striping multiple RAIDZ groups, as sketched below. And given that a single disk saturates GigE anyway, it's unlikely this is a real performance problem, let alone one without a workaround.
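For illustration, a minimal sketch of that layout; the pool and disk names here are hypothetical. ZFS stripes writes across all top-level vdevs, so two RAIDZ groups deliver roughly the combined throughput of both:

    # Two 3-disk RAIDZ1 groups in one pool; ZFS stripes data
    # across the two groups, so they work independently.
    zpool create tank \
        raidz da0 da1 da2 \
        raidz da3 da4 da5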
Similarly, ZFS is not immune to the classic RAID problem where a disk dying during a rebuild can take out your entire data set.
That is expected with single-parity RAID of any sort; however, single-parity RAIDZ is still more reliable than single-parity conventional RAID due to the lack of the write hole. (Some back-of-envelope rebuild arithmetic follows.)
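To put a rough number on the rebuild risk: a sketch, assuming a hypothetical 4-disk, 2 TB-per-disk single-parity array and the commonly quoted consumer-drive unrecoverable read error (URE) rate of one per 10^14 bits read:

    # A rebuild must read the 3 surviving disks: 6 TB ~= 4.8e13 bits.
    # P(at least one URE) = 1 - (1 - 1e-14)^(4.8e13) ~= 1 - e^(-0.48) ~= 38%
    echo '1 - e(4.8 * 10^13 * l(1 - 10^-14))' | bc -l

Checksummed parity at least guarantees that a URE surfaced during a rebuild is detected and reported rather than silently folded into the reconstructed data.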
The most significant feature of ZFS for photographers is that data is checksummed by the operating system before being stored on disk. This means that if the disk, or the path to the disk (including the controller), causes corruption, you'll find out and hopefully be able to correct it (if using mirroring or RAIDZ).
In the case of redundant data, whether mirrored or RAIDZ, any detected error is automatically corrected, the corrupt version automatically repaired. You'd need to check zpool status to learn of it; a sketch follows.
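If you'd rather go looking than wait, a minimal sketch, assuming a hypothetical pool named tank: a scrub walks every block and verifies it against its checksum, repairing from redundancy where possible, and the CKSUM column of the status output tallies what was caught.

    # Verify (and, where redundancy allows, repair) every block
    # in the pool against its stored checksum.
    zpool scrub tank
    # The CKSUM column counts checksum errors caught per device;
    # -v lists any files that could not be repaired.
    zpool status -v tank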
Hopefully more vendors will include data checksumming in their disk storage products and operating systems. But what you've got to ask yourself is: how likely is this to happen?
Sun, Oracle, Microsoft, and various Linux distros are all offering, or imminently offering, disk storage with file systems that include data checksumming. Missing is Apple.
The answer to that question is perhaps most easily found in this forum: how many posters here have complained about the operating system corrupting their images, or found that when they went back to an image from five years ago they couldn't read the file on their hard drive?
Flawed logic. The forum is not a scientific sample, and users have no reliable way of determining that a problem is the result of corrupt data. You assume the problem in every case is (a) restricted to an image, (b) visible as artifacts, and (c) the sort of "could not read the file" experience someone would post about on a forum, rather than going to a backup copy and moving along with the rest of their day.
I've had perhaps half a dozen image files, so far, unreadable. I'm a certified storage and file system geek (perhaps secondary to color geek) and I cannot tell you whether this was bit rot, silent data corruption, or file system corruption. But the files, as read, were not recognized as validly encoded image data by any image viewer, or any version of Photoshop going back to 5.5 (and I mean 5.5, not CS 5.5). The backups were also affected, presumably because all of the backups were copies of already-corrupted files. Fortunately, they were synthetic test images and were relatively easily recreated.

Nevertheless, if you think this problem is anything like a mouse, you've got one mouse with this anecdote, and as they say, where there's one mouse there's bound to be more. We really have no idea how big a problem this is based on forums, so I reject that premise entirely.
Clearly some significant companies think it's a problem, or there wouldn't be so much active development on Btrfs: Oracle, Red Hat, Fujitsu, IBM, HP, and others.
Here's some research data on the subject, a FAST '08 study of data corruption in the storage stack (Google has also done a study): http://research.cs.wisc.edu/adsl/Publications/corruption-fast08.html
You're more likely to find people complaining about hard drives failing outright than about data becoming corrupt, but that doesn't mean data corruption isn't a problem. One of the first USB CompactFlash adapters I used randomly corrupted data during long transfers, and this wasn't visible until I attempted to open the image in Lightroom. (At first I didn't realise that the USB adapter was at fault; I thought the camera had written out bad data!) A checksummed copy, sketched below, would have caught it at import time.
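That sort of flaky transfer path is exactly what an end-to-end checksum catches. A minimal sketch of doing it by hand with sha256sum; the paths here are hypothetical:

    # Record checksums while the files are still on the card...
    ( cd /mnt/card/DCIM && sha256sum *.CR2 ) > /tmp/card.sums
    cp /mnt/card/DCIM/*.CR2 ~/photos/import/
    # ...then verify the copies; a corrupted transfer shows up as FAILED.
    ( cd ~/photos/import && sha256sum -c /tmp/card.sums )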
It's an example of silent data corruption, which the paper above addresses and attributes, to a significant degree, to firmware-induced corruption. Message boards, unscientific samples though they are, are littered with people whose "hardware" RAID controller firmware corrupted their arrays, obliterating all data when a single disk failed because the data couldn't be reconstructed from (corrupted) parity.
So it's a bigger problem than we think it is, precisely because we assume the corruption would be obvious, and that's probably untrue. How many people get a RAID 1, 5, or 6 up and running and actually yank a drive, pop in a spare, and check that the array rebuilds correctly? Professionals do this; most people doing it themselves do not. They assume the reconstruction will work. And too many people consider RAID a backup solution rather than a way of increasing the availability of data. (A ZFS rebuild rehearsal is sketched below.)
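With ZFS the rehearsal is cheap enough that there's little excuse to skip it. A sketch, assuming a hypothetical pool tank with an unused spare disk da6:

    # Simulate a failure, swap in the spare, and watch the resilver
    # finish cleanly before trusting the pool with real data.
    zpool offline tank da3
    zpool replace tank da3 da6
    zpool status tank   # shows resilver progress and per-device CKSUM errors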