Summary: I built a ZFS NAS with a pair of mirrored 1TB drives as the root pool. Unfortunately, it turns out you really don’t want to combine a large amount of NFS-served data with your root pool unless you have a separate ZFS Intent Log (ZIL), due to the NFS COMMIT performance bottleneck. Note that you can’t add a separate ZIL to a root pool either. Hence, I needed to separate my data from my root pool.
Quick version: use beadm to clone your BE to a new pool on a separate disk, then destroy the old BE, export/import your old pool to rename it, and repeat the process to a third pool to get the root pool back to its original name.
I installed two new 30GB OCZ SSDs and then proceeded as follows (rough command sketches for each step follow the list):
- Partition the new drives identically, with a slice 0 for your new root pool. For SSDs, leave a few GB unallocated to reduce write wear and extend the drive’s life. I also created a 3GB slice for a data ZIL to be added later.
- Create a new pool (e.g. rpool2) on s0 of one drive.
- Use beadm to clone the current boot environment to the new pool.
- Create swap and dump volumes of appropriate sizes in the new pool too (or snapshot and send/recv your existing ones if you want to be absolutely sure they’re identical).
- Install GRUB on s0 of the new drive.
- Activate and boot the new BE off the new drive.
- Edit /etc/vfstab and change the path for the swap area to your new swap volume. Use swap -d/-a to dynamically change the current swap area to match.
- Use dumpadm -d to redefine the dump area to your new dump volume too.
- Destroy the old BE (you might want to back it up with zfs send first!). Tip: if the BE has snapshots, “zfs destroy -r” on its dataset removes them along with it.
- Export your old rpool, then re-import with a new name (e.g. datapool).
- Create a new pool on s0 of the other new drive, called rpool.
- Repeat the process above to clone your current BE to the new rpool, and migrate swap/dump. (This is only necessary to get the root pool name back to the standard ‘rpool’; the name itself has no effect on operation.)
- Once booted off the new rpool BE, destroy the previous BE and root pool (rpool2), then attach that drive’s s0 slice to rpool as a mirror. (Don’t forget to install GRUB on that drive too.)
- I also added the s3 slices as a mirrored ZIL for datapool. Although it’s not ideal to share a drive between the root pool and another pool’s ZIL, I figure there is unlikely to be much direct contention (most of the rpool files will be read-only and cached), and the SSDs should be fairly low latency anyway.
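Command sketches: here’s roughly what each step looked like on my system. Everything below is a sketch rather than a recipe — the device names (c3t5d0 and c3t6d0 for the two SSDs), BE names, paths and sizes are placeholders for illustration.

I laid out the slices on the first SSD interactively with format, then copied its label to the second drive:

```
# Slice up the first SSD with format(1M): s0 for the root pool, s3 for
# the later ZIL, and a few GB left unallocated at the end for wear levelling.
format

# Copy the resulting VTOC label to the second SSD so both drives match.
prtvtoc /dev/rdsk/c3t5d0s2 | fmthard -s - /dev/rdsk/c3t6d0s2
```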
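Creating the temporary root pool and cloning the running BE into it (newbe is a made-up BE name):

```
# New pool on slice 0 of the first SSD
zpool create rpool2 c3t5d0s0

# Clone the currently active BE into the new pool
beadm create -p rpool2 newbe
```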
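Swap and dump volumes in the new pool; 4GB each is just an example size:

```
# Swap zvol, with the block size matched to the x86 page size
zfs create -V 4G -b 4k rpool2/swap

# Dump zvol; dumpadm adjusts its properties when it takes it over
zfs create -V 4G rpool2/dump
```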
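GRUB goes onto slice 0 of the new drive:

```
installgrub /boot/grub/stage1 /boot/grub/stage2 /dev/rdsk/c3t5d0s0
```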
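Activating and booting the new BE:

```
beadm activate newbe
init 6    # reboot; pick the new drive in the BIOS boot order if needed
```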
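Switching swap over, both persistently (vfstab) and on the running system:

```
# New /etc/vfstab entry for swap:
#   /dev/zvol/dsk/rpool2/swap   -   -   swap   -   no   -

# Then change the live swap configuration to match
swap -d /dev/zvol/dsk/rpool/swap
swap -a /dev/zvol/dsk/rpool2/swap
```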
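Same idea for the dump device:

```
dumpadm -d /dev/zvol/dsk/rpool2/dump
```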
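Backing up and destroying the old BE (the dataset, snapshot and backup path names are illustrative):

```
# Optional safety net: keep a copy of the old BE somewhere first
zfs snapshot -r rpool/ROOT/oldbe@backup
zfs send -R rpool/ROOT/oldbe@backup > /path/to/backup/oldbe.zfs

# Remove it; if beadm balks at snapshots, zfs destroy -r on the BE's
# dataset takes them out along with it
beadm destroy oldbe
```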
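Renaming the old pool is just an export followed by an import under a new name; this works because swap and dump have already moved off it and we’re now booted from rpool2:

```
zpool export rpool
zpool import rpool datapool
```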
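The second pass is the same dance onto the other SSD, purely to get the standard ‘rpool’ name back; once booted from it, the first SSD’s slice joins as a mirror:

```
# New pool with the standard name, on the second SSD
zpool create rpool c3t6d0s0
beadm create -p rpool newbe2
# ...recreate swap/dump, installgrub, activate and reboot as before...

# After booting from the new rpool:
beadm destroy newbe
zpool destroy rpool2

# Re-use the first SSD's s0 as the other half of the rpool mirror
zpool attach rpool c3t6d0s0 c3t5d0s0
installgrub /boot/grub/stage1 /boot/grub/stage2 /dev/rdsk/c3t5d0s0
```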
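Finally, the s3 slices become a mirrored log device for datapool:

```
zpool add datapool log mirror c3t5d0s3 c3t6d0s3
```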