I have a Solaris 11.3 install with a ZFS pool striped across five vdevs, each consisting of three 10K SAS disks. I've also configured a SLOG made up of mirrored vdevs. I've set my ZFS tuning parameters to the following:
zil_slog_limit: 1073741824
zfs_txg_timeout: 256
zfs_vdev_max_pending: 256
zfs_immediate_write_sz: 67108864
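(For reference, on Solaris I set these in /etc/system and rebooted; the lines below are a sketch of that file using the values above, assuming all four parameters live under the zfs module namespace. Current values can be spot-checked with mdb.)
set zfs:zil_slog_limit = 1073741824
set zfs:zfs_txg_timeout = 256
set zfs:zfs_vdev_max_pending = 256
set zfs:zfs_immediate_write_sz = 67108864
# read a live value from the running kernel, e.g.:
echo zfs_txg_timeout/D | mdb -k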
I'm experiencing much slower write performance than expected when writing to a file system with sync=always (I am trying to determine the best performance I can expect when this file system is mounted via NFS with sync enabled, for VM disk images). When I run
time dd if=/dev/urandom of=testfile bs=512 count=10000
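(For completeness, the dataset is prepared like this before running the dd above; the pool/dataset name is just a placeholder for mine. Setting sync=always forces every write through the ZIL, which is what directs the traffic to the SLOG.)
zfs set sync=always tank/vmtest    # placeholder name
zfs get sync tank/vmtest           # should report "always"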
I get around 100 IOPS per SLOG vdev (each a mirror of two 10K SAS disks), so the run takes about 100 seconds with one vdev in the SLOG and about 50 seconds with two. I don't have extra drives to try in the array, but this one-vs-two-vdev scaling suggests the SLOG vdevs are working. I have also used zpool iostat -v 5 to verify that they are the only devices receiving writes while the test runs (aside from the data disks when the SLOG gets flushed). The writes per second reported by zpool iostat approximately match the IOPS I've calculated from timing dd.
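(The arithmetic behind those IOPS figures is just the write count divided by elapsed time:)
10000 writes / ~100 s  ≈ 100 IOPS  (one SLOG vdev)
10000 writes / ~50 s   ≈ 200 IOPS total, ~100 IOPS per vdev  (two SLOG vdevs)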
I am under the impression that SLOGs are supposed to be written almost entirely sequentially; 100 IOPS is what I would expect for random writes.
Edit: I tried a similar test on a spare machine running FreeNAS, with two drives in a mirror plus an unmirrored SLOG. With an HDD as the SLOG, the pool's IOPS is random-I/O slow rather than merely sequential-I/O slow; with an SSD as the SLOG, I get >10k IOPS.
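(Roughly how that FreeNAS test pool was assembled; the device names below are placeholders for my disks, not the actual ones.)
zpool create testpool mirror da0 da1    # two-drive mirror
zpool add testpool log da2              # unmirrored SLOG device
zfs set sync=always testpool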
Is there something missing in my understanding and/or some parameters I need to change? Thanks in advance!