Tuesday, October 21, 2014

storage - Migrate zpool to new SAS controller



Long ago, we bought an Adaptec 31605 under the impression that: a) it could do true JBOD, and b) it was well supported on OpenSolaris. Both turned out to be incorrect. I'm now trying to move my zpool to NexentaStor Enterprise, but to do that they want us to swap our controller out for an LSI SAS 9201-16i.



I'm trying to figure out the best way to cheaply migrate the pool. The current zpool uses about 1TB across 14 SAS drives. The best I can come up with is:





  1. take the system offline
  2. set up three 1TB consumer-grade SATA drives as a temporary zpool on the on-board SATA ports
  3. zfs send all the data to the temporary pool (rough command sketch after this list)
  4. swap controllers and build a new zpool on the LSI adapter
  5. zfs send from the temporary pool to the new zpool
  6. bring the system back online
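Roughly, steps 2 through 5 might look like the following at the command line. The pool and device names (tank, temppool, c3t0d0 and friends) are placeholders rather than my real layout, and I'd want to dry-run the send/recv flags on a throwaway dataset before trusting them:

    # step 2: temporary pool on the on-board SATA ports (raidz of the three 1TB disks)
    zpool create temppool raidz c3t0d0 c3t1d0 c3t2d0

    # step 3: snapshot everything recursively and replicate it to the temporary pool
    zfs snapshot -r tank@migrate
    zfs send -R tank@migrate | zfs recv -Fdu temppool

    # step 4: swap controllers, then recreate the main pool on the LSI HBA
    # (zpool create tank ... with the 14 SAS drives)

    # step 5: replicate everything back to the rebuilt pool
    zfs send -R temppool@migrate | zfs recv -Fdu tank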



Is there anything I'm missing here, or thoughts on a better way to do it?




If I went this route, how long should I expect the process to take? My rudimentary calculation (1TB at 100MB/s is roughly 10,000 seconds) says about 3 hours to transfer. Can I get that kind of throughput with a zfs send/recv on consumer-grade drives?
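If it helps, the actual rate can be checked a few minutes into the transfer with zpool iostat (pool name assumed):

    # report I/O statistics for the temporary pool every 10 seconds
    zpool iostat temppool 10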


Answer



Your plan looks good; I imagine you won't have any trouble. You mentioned three 1TB disks for the temporary pool. I assume you planned to use single-parity raidz (2TB usable), but I'd recommend you consider a mirrored pair of 1.5TB or 2TB disks instead. That way, when the migration is complete, each disk holds a complete backup of your pool. Perfect for throwing in a safe deposit box in case of catastrophic failure.

As for speed, I get 90-130MB/sec on a mirrored pool of two 2TB SATA disks, so your 3 hours/TB figure (~100MB/sec) sounds reasonable. If you're paranoid like me, you may also want to scrub the temp pool before you reformat the SAS disks.
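If you go the mirrored route, something along these lines is all it takes; the disk names are only examples, and the scrub at the end verifies every block of the temporary copy before you wipe the SAS disks:

    # temporary pool as a two-way mirror: each disk ends up holding a full copy
    zpool create temppool mirror c3t0d0 c3t1d0

    # after the zfs send completes, verify the copy before reformatting the SAS disks
    zpool scrub temppool
    zpool status -v temppool    # check for errors once the scrub finishes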



Once you switch to letting ZFS handle whole disks without a layer of controller abstraction, the grass is truly greener. You can compare the performance of multiple controllers with the same disks, or just attach the disks to a new system if the hardware fails. I even temporarily imported a zfs pool into a VM under VMware ESXi using raw device mapping; no nonsense, it just worked.
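Moving the disks between controllers or machines really is just an export and an import; for example (pool name assumed):

    # on the old controller/system
    zpool export tank

    # after cabling the disks to the new HBA or machine
    zpool import          # scans attached disks and lists importable pools
    zpool import tank     # use -f if the pool wasn't cleanly exported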

