Data is striped across all disks. Highest performance, with zero redundancy. Usable disk space is the same as the raw size.
The data is duplicated on two disks. Double read performance, same write performance, and you can survive losing a disk. Usable disk space is half of the raw size.
Parity is calculated and written across all disks. Higher read performance, poor write performance. You can survive losing 1 disk. With disk sizes larger than 4 TB, a rebuild may take several days and create a serious risk. Usable disk space is the raw size minus 1 disk.
2 parity blocks are calculated and written across all disks. Higher read performance, poor write performance. You can survive losing 2 disks. With disk sizes larger than 4 TB, a rebuild may take many days and increases risk while degrading your performance, which may not be acceptable for such a long period. Usable disk space is the raw size minus 2 disks.
3 parity blocks are calculated and written across all disks. Higher read performance, poor write performance. You can survive losing 3 disks! Especially useful with disks sized 10 TB and above, for which a rebuild may take several weeks. Usable disk space is the raw size minus 3 disks.
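The single-parity scheme described above can be illustrated with XOR: the parity block is the XOR of the data blocks, so any one lost block can be recomputed from the survivors. A minimal sketch (the block contents are made up for the example):

```python
# Single-parity reconstruction: parity = XOR of all data blocks,
# so any one missing block equals the XOR of parity and the survivors.
from functools import reduce

def xor_blocks(blocks):
    """Byte-wise XOR of equally sized blocks."""
    return bytes(reduce(lambda a, b: a ^ b, col) for col in zip(*blocks))

# Three data disks plus one parity disk (RAID 5 style, one stripe).
data = [b"AAAA", b"BBBB", b"CCCC"]
parity = xor_blocks(data)

# Disk 1 fails; rebuild its block from parity and the remaining disks.
survivors = [data[0], data[2], parity]
rebuilt = xor_blocks(survivors)
assert rebuilt == data[1]  # the lost block is recovered
```

Double and triple parity (RAID 6 and above) use more elaborate codes than plain XOR, so they can survive multiple failures, but the principle of recomputing lost blocks from the rest is the same.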
Usually a number of 2-disk mirrors, with a stripe set on top. Very high read and write performance. You can survive losing as many disks as you have mirrors, as long as you don't lose 2 disks in the same mirror. Rebuilds are very fast, often 30 minutes, with low performance degradation, minimising the period you are vulnerable. Using hot spares is advised, to take advantage of the short rebuild times. Usable disk space is half of the raw size.
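The survival rule above (any number of failures is fine as long as no mirror loses both of its disks) can be sketched as a small check; the disk numbering is illustrative:

```python
# RAID 10 survival: the array lives as long as every 2-disk mirror
# still has at least one working member.
def raid10_survives(num_mirrors, failed_disks):
    """Disks 2*i and 2*i+1 form mirror i; failed_disks is a set of indices."""
    for i in range(num_mirrors):
        if {2 * i, 2 * i + 1} <= failed_disks:
            return False  # both halves of one mirror are gone
    return True

# 4 mirrors (8 disks): losing one disk in every mirror is survivable...
print(raid10_survives(4, {0, 2, 4, 6}))  # True
# ...but losing both disks of mirror 0 is not.
print(raid10_survives(4, {0, 1}))        # False
```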
A number of RAID 5 sets, with a stripe set on top for speed. You can survive losing 1 disk in each RAID 5. Increases write performance compared to RAID 5.
RAID 50x2: A RAID 50 can be built using any number of RAID 5 subsets. Specifying RAID 50x2 (or 50/2) means that the RAID 50 is built from 2 RAID 5 subsets with a stripe on top. RAID 50x4 means 4 subsets; e.g. in a 16-slot system, RAID 50x4 means you have 4 RAID 5 groups with 4 disks in each, and a stripe on top.
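Usable capacity for a RAID 50 follows directly from the description: each RAID 5 subset gives up one disk to parity. A sketch of the arithmetic (the slot count and disk size are example values):

```python
def raid50_usable(total_disks, subsets, disk_tb):
    """Usable TB of a RAID 50: each RAID 5 subset loses 1 disk to parity."""
    # Each subset needs at least 3 disks, and disks must divide evenly.
    assert total_disks % subsets == 0 and total_disks // subsets >= 3
    return (total_disks - subsets) * disk_tb

# The 16-slot RAID 50x4 example: 4 subsets of 4 disks, 4 TB per disk.
print(raid50_usable(16, 4, 4))  # 48 (TB) -- 12 data disks' worth
```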
A number of RAID 6 sets, with a stripe set on top for speed. You can survive losing 2 disks in each RAID 6. Increases write performance compared to RAID 6.
RAID 60x2: A RAID 60 can be built using any number of RAID 6 subsets. Specifying RAID 60x2 means that the RAID 60 is built from 2 RAID 6 subsets with a stripe on top. RAID 60x3 means 3 subsets; e.g. in a 24-slot system, RAID 60x3 means you have 3 RAID 6 groups with 8 disks in each, and a stripe on top.
A storage box with two separate RAIDs. Example: RAID 6/6 means you have created two separate RAID 6 groups in the same storage box, which run completely independently. In some situations this may be preferred over e.g. RAID 60, if you need to be 99.999% sure that IO abuse of one RAID will not hurt the other. In most situations, RAID 60 with good monitoring is preferred, as separating the RAIDs sacrifices peak IO performance.
For large disk sets, the choice is usually between RAID 10 and RAID 60. RAID 10 provides the best performance, especially for writes, and your degraded and vulnerable periods are short. With RAID 60 you can always survive losing 2 disks, and if you use a large number of disks in each RAID 6 set, you waste only a minimum of space. But the cost is lower write performance.
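The space side of this trade-off can be made concrete by comparing usable capacity for the two layouts; the disk counts and sizes below are example values:

```python
def usable_raid10(total_disks, disk_tb):
    """RAID 10: half the raw capacity survives mirroring."""
    return total_disks * disk_tb / 2

def usable_raid60(total_disks, subsets, disk_tb):
    """RAID 60: each RAID 6 subset loses 2 disks to parity."""
    return (total_disks - 2 * subsets) * disk_tb

# 24 disks of 8 TB each: RAID 10 vs RAID 60x2 (2 subsets of 12 disks).
print(usable_raid10(24, 8))      # 96.0 (TB)
print(usable_raid60(24, 2, 8))   # 160 (TB) -- RAID 60 wastes far less space
```

The capacity gap is what you pay for with RAID 60's lower write performance and longer rebuilds.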
| Level | Description | Minimum number of drives | Space loss | Fault tolerance | Failure rate | Read performance | Write performance |
|-------|-------------|--------------------------|------------|-----------------|--------------|------------------|-------------------|
| RAID 0 | Stripe | 2 | None | None | High | High | Very high |
| RAID 1 | Mirror | 2 | Raw / 2 | Mirrored disks | Medium | High | Low |
| RAID 5 | 1 parity block, distributed | 3 | Raw - 1 disk | 1 disk | Medium | High | Low |
| RAID 6 | 2 parity blocks, distributed | 4 | Raw - 2 disks | 2 disks | Low | High | Very low |
| RAID 10 | Mirroring without parity, and block-level striping | 4 | Raw / 2 | 1 in each mirror | Low | High | High |
| RAID 50 | Block-level striping with distributed parity, and block-level striping | 6 | Raw - (1 disk * number of RAID 5 sets) | One per RAID 5 | Low | Unknown | Unknown |
| RAID 60 | Block-level striping with double distributed parity, and block-level striping | 8 | Raw - (2 disks * number of RAID 6 sets) | Two per RAID 6 | Very low | Unknown | Unknown |
| RAID 100 | Mirroring without parity, and two levels of block-level striping | 8 | Raw / 2 | 1 in each mirror | Low | Unknown | Unknown |
There are many more RAID levels, including further nested variants, but none that we use or support.
In a normal RAID the hot spare is passive, waiting until it is needed. In RAID 5E (Enhanced), RAID 5EE and similar levels, the hot spare is actively used in the array until it is needed as a spare. Running the hot spare Enhanced means you know that the disk works, but you are also wearing the disk down.
RAID 0 is called a striped vdev in ZFS. ZFS also checksums all data to prevent silent data corruption.
A mirror that also performs automatic checksums to prevent silent data corruption. Mirrored vdevs also support more than two disks per mirror, duplicating the data onto more than one extra disk.
Same as RAID 10, but with checksums.
ZFS RAIDs are called RAID-Z instead of RAID. RAID-Z uses copy-on-write, full-stripe writes and automatic checksums, and therefore does not suffer from the RAID write-hole problem: the inconsistency between data and parity that can occur when a conventional RAID system crashes in the middle of a write operation. However, writing the checksum data may cause slowdowns, as checksums are spread across all drives in the pool.
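The checksum-on-read protection mentioned above can be illustrated in miniature: store a checksum alongside each block and verify it when reading. This is only a sketch of the idea, not how ZFS implements it (ZFS uses stronger checksums such as fletcher4 or SHA-256; CRC32 is a stand-in here):

```python
import zlib

def write_block(data):
    """Store a block together with its CRC32 checksum."""
    return {"data": bytearray(data), "checksum": zlib.crc32(data)}

def read_block(block):
    """Verify on read; a mismatch means silent corruption was caught."""
    if zlib.crc32(bytes(block["data"])) != block["checksum"]:
        raise IOError("checksum mismatch: silent corruption detected")
    return bytes(block["data"])

block = write_block(b"important data")
block["data"][0] ^= 0x01            # simulate a silent bit flip on disk
try:
    read_block(block)
except IOError as e:
    print(e)  # checksum mismatch: silent corruption detected
```

A plain RAID without checksums would happily return the flipped bytes; the checksum turns silent corruption into a detectable (and, with redundancy, repairable) error.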