zfs tuning
I’ve been testing a ZFS-based NAS. ZFS doesn’t come up much here, and it was new to me. The system we’re testing stumbled out of the gate, at least under our very aggressive B4M Fork growing-file workflow, but after much trial and error we discovered a key setting that unlocked performance.
zfs_txg_synctime_ms controls how often cached writes (the transaction group) are flushed to disk. It defaulted to 3 seconds on our system, but changing it to 1 second made all the difference.
Other settings:
zfs_vdev_max_pending was changed from 10 to 4.
atime, sync, and compression are all disabled.
Disabling sync may not have been a contributing factor; it seemed to make no difference either way.
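For anyone who wants to replicate this, here is a sketch of where those settings live on a Solaris-family system. The module tunables go in /etc/system and the dataset properties are set with the zfs command; the pool name "tank" below is a placeholder, not our actual pool name.

```
* /etc/system -- ZFS module tunables (takes effect after reboot)
set zfs:zfs_txg_synctime_ms = 1000
set zfs:zfs_vdev_max_pending = 4

# dataset properties (pool/dataset name is a placeholder)
zfs set atime=off tank
zfs set sync=disabled tank
zfs set compression=off tank
```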
We initially set up the vdevs (LUNs, in RAID terms) as 20 mirrored pairs but found six 6-disk RAID-Z2 vdevs (roughly RAID6) to be superior.
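Part of the appeal of the RAID-Z2 layout is capacity. A back-of-envelope comparison of the two layouts, using an illustrative 4TB drive size (not from our actual config):

```python
# Back-of-envelope usable capacity for the two pool layouts discussed.
# Vdev shapes are the ones from the post; drive size is illustrative.

def usable_fraction(data_disks, total_disks):
    """Fraction of raw capacity available for data in one vdev."""
    return data_disks / total_disks

# 20 mirrored pairs: each 2-disk vdev stores 1 disk's worth of data.
mirror_frac = usable_fraction(1, 2)   # 0.5

# Six 6-disk RAID-Z2 vdevs: each vdev stores 4 disks' worth of data.
raidz2_frac = usable_fraction(4, 6)   # ~0.667

drive_tb = 4  # illustrative drive size

mirror_usable = 20 * 2 * drive_tb * mirror_frac  # 40 drives: 160 TB raw, 80 TB usable
raidz2_usable = 6 * 6 * drive_tb * raidz2_frac   # 36 drives: 144 TB raw, 96 TB usable

print(f"mirrors: {mirror_usable:.0f} TB usable of {20 * 2 * drive_tb} TB raw")
print(f"raidz2 : {raidz2_usable:.0f} TB usable of {6 * 6 * drive_tb} TB raw")
```

Mirrors still win on random-read IOPS and resilver speed, so this is a trade-off, not a free lunch.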
The Oracle server is a beast: 16 cores, 256GB of RAM, and four 10-gigabit ports, with more as an option.
There is no time spent striping vdevs, which can take days with RAID. Also, when a drive fails, only the data that was actually on the drive is rebuilt (resilvered), not the entire drive as with RAID.
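The resilver advantage is easy to quantify. The numbers below are illustrative, not measured on our system:

```python
# Illustrative resilver math: ZFS rebuilds only allocated blocks,
# while a conventional RAID controller reconstructs every sector.

drive_tb = 4       # illustrative drive size
pool_full = 0.40   # illustrative pool utilization

zfs_resilver_tb = drive_tb * pool_full  # only live data is rebuilt
raid_rebuild_tb = drive_tb              # whole drive, allocated or not

print(f"ZFS resilvers ~{zfs_resilver_tb:.1f} TB; RAID rebuilds {raid_rebuild_tb} TB")
```

The gap shrinks as the pool fills up, of course; on a nearly full pool the two approaches converge.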
All parity calculation is done in software, and despite 35 streams of PRSQ reads and writes across 36 disks, the CPU was only 17% busy serving NFS to 33 Mac 10.9 clients.
I am impressed with the reporting available and how little impact it has on the system. Compared to Xsan this thing is an open book, and I could see when it approached its limits.
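The reporting I leaned on is mostly the stock ZFS tooling; for anyone curious, these are the standard commands ("tank" is a placeholder pool name):

```
zpool iostat -v tank 5   # per-vdev bandwidth and IOPS, refreshed every 5 seconds
zpool status tank        # pool health, resilver/scrub progress
zfs get all tank         # all dataset properties
arcstat 5                # ARC (read cache) hit rates; ships as arcstat.pl on some systems
```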
There isn’t much ZFS-based storage out there. I think it’s an option with Small Tree, and Nexsan may use it in their NAS offering. I may be wrong here, as searching “zfs” on either vendor’s site returns nothing.
So far I’m a ZFS fan.
Thanks
John
