Having been involved in corporate data center support for more than a decade, and having worked with every level and type of storage hardware, I can give you my $.02.
First, avoid RAID 5/6 like smallpox. It's an outdated technology that only mattered when 9-gig SCSI drives cost $700 and using three drives vs. four made financial sense. I've seen RAID 5 parity failures wipe out terabytes of data on very expensive SAN clusters, and 99% of the time it's because a controller card went bad over a period of time and corrupted the parity stripe across multiple volumes. I've seen smart people lose jobs over this very issue, and while I have sympathy for them, I'm sick of arguing about it because I'm right. RAID 5/6 sucks and needs to be outlawed. If you want redundancy against HD failure, stick to RAID 1. RAID 10, maybe.....in theory RAID 10 (a stripe written across multiple 1:1 mirrors, while RAID 0+1 is the reverse, a mirror of stripes) should be just as hardy as RAID 1, but a faulty controller card can still kill data faster than a table saw goes through balsa wood.
I don't trust appliances - period. I kinda trust my $250,000 Hitachi SANs.....I don't trust anything else, because I've seen everything from desktops to million-dollar iSeries arrays take a crap over the weekend. Me knows better. So, what's a digital photographer supposed to do other than pay a fortune for online back-ups and a T3 to your house? Let's break it down - first is the added expense of RAID. The reason we use RAID is for HD redundancy, and we all know HDs fail. If you use some of the inexpensive JBOD / Linux solutions you get a lot of cheap storage, and often good performance ratios, *BUT* you lose the peace of mind of RAID. So, how do we get rid of the need for RAID and lessen the chance of HD failure...I'm getting to that :-)
All HDs are subject to MTTF (mean time to failure), and controller cards will fail as well. However, as many rodeos as I've been to and as many enterprise data centers as I've been in, one thing I've *never seen* is a HD fail while it's sitting unpowered. In theory a non-powered drive will last almost forever, or at least until gravity slowly bends the platters and actuators out of alignment. Don't laugh - HD orientation was a problem up until the mid 90's. The short form is - if you want to maximize HD reliability, don't plug them in. No worry about summer lightning strikes grounding through motherboard interfaces, or power supplies burning up and sending 110 VAC down the 12-volt rails.
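To put some rough numbers on why powered-on hours matter, here's a back-of-the-envelope sketch. It assumes a simple exponential failure model and a 1,000,000-hour MTBF spec-sheet figure - both are my illustrative assumptions, not anything measured:

```python
import math

MTBF_HOURS = 1_000_000   # assumed spec-sheet figure for a consumer drive
HOURS_3Y = 3 * 365 * 24  # three years of continuous spin

# Exponential failure model: P(failure within t hours) = 1 - exp(-t / MTBF)
p_one = 1 - math.exp(-HOURS_3Y / MTBF_HOURS)

# Chance that at least one drive in a 5-drive always-on pool fails in 3 years
p_pool = 1 - (1 - p_one) ** 5

print(f"single drive, 3 years powered on: {p_one:.1%}")
print(f"at least one of 5 drives:         {p_pool:.1%}")
```

A single drive comes out at roughly a 2-3% chance over three years, but across a five-drive pool that's already north of 10% - which is exactly why an always-on pool needs RAID, and an unplugged archive drive mostly doesn't.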
So, my suggestion to maximize speed, durability, and price per terabyte is to first split your data up between archive and working. Build a big, generic tower, stick gigabit Ethernet in it, and use your favorite OS to create a non-proprietary storage pool you can add to at will. When a drive fills up, archive it and unplug it. Or, use an external eSATA connector on your workstation, and as you archive data, unplug the drive and stick it in a cheap foam case for storage. As new HDs come out with more capacity, feel free to use them.
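One thing worth doing before a drive goes into the foam case: write a checksum manifest, so that when you plug it back in years later you can verify nothing rotted. A minimal sketch in Python (the mount point and file names are placeholders, not anything from my setup):

```python
import hashlib
import os

def sha256_of(path, bufsize=1 << 20):
    """Hash one file in chunks so big RAW files don't eat RAM."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        while chunk := f.read(bufsize):
            h.update(chunk)
    return h.hexdigest()

def write_manifest(mount_point, manifest_path):
    """Walk the drive and record 'digest  relative/path' per file."""
    with open(manifest_path, "w") as out:
        for root, _dirs, files in os.walk(mount_point):
            for name in files:
                full = os.path.join(root, name)
                rel = os.path.relpath(full, mount_point)
                out.write(f"{sha256_of(full)}  {rel}\n")

def verify_manifest(mount_point, manifest_path):
    """Return the list of files whose current hash no longer matches."""
    bad = []
    with open(manifest_path) as f:
        for line in f:
            digest, rel = line.rstrip("\n").split("  ", 1)
            if sha256_of(os.path.join(mount_point, rel)) != digest:
                bad.append(rel)
    return bad
```

Run `write_manifest("/mnt/archive01", "archive01.manifest")` before unplugging, keep the manifest file somewhere that isn't the drive itself, and run `verify_manifest` whenever the drive comes back online.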
If you just have to have 5 terabytes of online storage at all times, and have it be reliable and speedy to access, you are talking about a dedicated Fibre Channel or iSCSI SAN, and preferably enterprise level - period. Otherwise, putting your drives in an offline state for archiving negates the need for RAID.