So, a question regarding FreeNAS 8...
Looks like ZFS provides the fault tolerance and recovery, and I see they now specifically support thin provisioning. What is the process for growing volumes? Do I simply add additional disks and the system absorbs them into the thinly provisioned storage pool? Or do I have to go through some process of building/incorporating them into the existing LUNs?
Thin provisioning applies to zvols (chunks of disk space that would typically be exported via iSCSI, Fibre Channel, or FCoE). In ZFS, they are created and managed by the "zfs" command, not the "zpool" command, since they are created on top of existing storage pools.
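As a quick sketch, a thin-provisioned (sparse) zvol is created with the -s flag to "zfs create"; the pool name "mypool", the volume name, and the 100G size below are just placeholders:

```shell
# Create a sparse (thin-provisioned) 100 GB zvol on the pool "mypool";
# -s means no space is reserved up front, -V sets the logical volume size.
zfs create -s -V 100G mypool/iscsivol

# Verify: volsize shows 100G, while refreservation is "none" for a sparse zvol.
zfs get volsize,refreservation mypool/iscsivol
```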
To add space to a ZFS pool, you would install the additional drives, then add them to your pool. Since redundancy is very important, it's best to add redundant groups of drives to the pool.
I'm not sure how the FreeNAS web interface does it right now. Here's a Solaris command-line example; in this case, let's pretend we have an eight-port SAS HBA, and we're going to create a pool using two sets of mirrored drives. The HBA has already been automatically configured as controller #4 (c4) in the Solaris device addressing scheme.
zpool create mypool mirror c4t0d0 c4t1d0 mirror c4t2d0 c4t3d0
Let's say we're getting more than 2/3 full, and we want to add two more mirrored pairs:
zpool add mypool mirror c4t4d0 c4t5d0 mirror c4t6d0 c4t7d0
A different way to grow the pool, if we don't want to use up more drive slots, is to swap the existing drives for larger ones. Assuming we have the original two sets of mirrors:
BACK UP YOUR DATA FIRST
zpool set autoexpand=on mypool
zpool offline mypool c4t0d0
(physically replace the c4t0d0 disk with a larger drive)
zpool replace -f mypool c4t0d0
(wait for new disk to resilver)
zpool offline mypool c4t1d0
(physically replace c4t1d0 disk with a larger drive)
zpool replace -f mypool c4t1d0
(wait for new disk to resilver)
(optionally replace c4t2d0 and c4t3d0 in the same manner, one at a time)
As soon as the second disk in the mirror has finished resilvering, the pool will autoexpand with the additional space on the new drives.
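You can confirm the pool picked up the new space from the command line (exact output columns vary by ZFS version):

```shell
# Show pool capacity and the autoexpand setting for "mypool"
zpool list -o name,size,free,autoexpand mypool

# If the pool did not grow on its own (e.g. autoexpand was off during the
# resilver), expansion can be triggered per device with "online -e":
zpool online -e mypool c4t0d0
zpool online -e mypool c4t1d0
```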
Note that most SAS HBAs support hot swap out of the box. AHCI-compatible SATA ports can also do hot swap, but on a Solaris system, it's not enabled by default for some reason, and the line:
set sata:sata_auto_online=1
must be added to the /etc/system file (and then the system needs to be rebooted). Hot swap is wonderful for this sort of thing, as the system can keep going while you remove and replace drives.
Food for thought: If you have two hot swap drive cages, you want to arrange your mirrors so that each mirrored pair is split between the two cages. That way, if one cage goes out, the entire pool remains available via the other cage.
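As a sketch, suppose cage 1's drives show up on controller c4 and cage 2's on c5 (hypothetical controller numbers); each mirror vdev then pairs one drive from each cage:

```shell
# Each mirror pairs a cage-1 drive (c4) with a cage-2 drive (c5), so losing
# an entire cage still leaves one side of every mirror online.
zpool create mypool mirror c4t0d0 c5t0d0 mirror c4t1d0 c5t1d0
```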
Always remember that RAID is not a substitute for backups.