Tuesday, September 16, 2014

Windows Storage Server and SSDs in RAID 0 - Slow performance



We have a computer running Windows Storage Server 2008 R2 with six SSDs in RAID 0.




This storage computer has one PCI-E card with four Ethernet ports, and we connected it through a gigabit switch to other computers via iSCSI.



The problem is that we are not able to get high read/write speeds.



Using HD Tune directly on the storage computer we get around 500 MB/s, but over the iSCSI link (from another computer) we only get close to 200 MB/s.
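For comparison, a rough sequential-read check like the Python sketch below can be run against a large file on the local volume and again on the iSCSI-mounted volume from a client. The path and file size are placeholders, and this is only a crude stand-in for HD Tune, not an equivalent test:

```python
import os, time

# Rough sequential-read throughput check (a stand-in for HD Tune, not a
# replacement). Point TEST_FILE at a large file on the volume under test --
# first locally on the storage server, then on the iSCSI-mounted volume on a
# client. Use a file much larger than RAM so the OS page cache does not
# inflate the numbers.
TEST_FILE = r"D:\iscsi_test\bigfile.bin"   # hypothetical path
CHUNK = 8 * 1024 * 1024                    # 8 MiB reads, roughly sequential-workload sized

def sequential_read_mb_per_s(path):
    total = 0
    fd = os.open(path, os.O_RDONLY | getattr(os, "O_BINARY", 0))
    start = time.perf_counter()
    try:
        while True:
            buf = os.read(fd, CHUNK)
            if not buf:
                break
            total += len(buf)
    finally:
        os.close(fd)
    elapsed = time.perf_counter() - start
    return total / elapsed / (1024 * 1024)

if __name__ == "__main__":
    print(f"{sequential_read_mb_per_s(TEST_FILE):.0f} MB/s")
```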



We set up MPIO with multipath, enabled jumbo frames, and disabled IPv4 checksum offload.
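To confirm that jumbo frames actually work end to end (NICs, switch ports, and target all at a 9000-byte MTU), a ping with the don't-fragment bit set is usually enough. The sketch below just wraps the Windows ping command; the target address is a placeholder for one of the iSCSI portal IPs:

```python
import subprocess, sys

# Quick end-to-end jumbo-frame check from a Windows client to the iSCSI target.
# If any hop (client NIC, switch port, or target) is not set for a 9000-byte
# MTU, jumbo-frame settings on the server alone will not help.
TARGET_IP = "192.168.1.10"   # hypothetical iSCSI portal address
PAYLOAD = 8972               # 9000-byte MTU minus 20 (IP) and 8 (ICMP) header bytes

# Windows ping syntax: -f = don't fragment, -l = payload size, -n = count
result = subprocess.run(
    ["ping", "-f", "-l", str(PAYLOAD), "-n", "2", TARGET_IP],
    capture_output=True, text=True,
)
print(result.stdout)
if "Packet needs to be fragmented" in result.stdout:
    sys.exit("Jumbo frames are NOT getting through end to end.")
```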



EDIT




I don't care about data loss. I just need speed because this is a cache computer.



EDIT



Both server and client have four gigabit NICs (1 Gb/s per adapter), and multipath and MPIO are correctly configured, AFAIK.



EDIT



One thing I can't understand: we have a Dell EqualLogic storage array, and it also gets close to 200 MB/s using the same switch and configuration. How is that possible? The EqualLogic was supposed to be a lot slower than a 6-SSD RAID 0 array.




Also, I have read that a lot of storage appliances out there use four 1 Gb NICs and can easily get close to 500 MB/s, including one from Dell that has only SSDs, as you can see here.
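As a sanity check on what four 1 GbE links can deliver in theory, the back-of-the-envelope numbers look like this (the efficiency factor is only an assumed allowance for TCP/IP and iSCSI overhead, not a measured value):

```python
# Rough ceiling for 4 x 1 GbE with MPIO spreading traffic across all paths.
LINKS = 4
LINE_RATE_GBPS = 1.0      # per link, gigabits per second
EFFICIENCY = 0.94         # assumed TCP/IP + iSCSI overhead allowance

per_link_mb_s = LINE_RATE_GBPS * 1e9 * EFFICIENCY / 8 / 1e6
print(f"per link : ~{per_link_mb_s:.0f} MB/s")
print(f"4 links  : ~{LINKS * per_link_mb_s:.0f} MB/s")
# ~117 MB/s per link and ~470 MB/s aggregate: seeing ~200 MB/s suggests traffic
# is effectively riding only one or two paths, while ~500 MB/s is near the ceiling.
```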



EDIT



Also, I am thinking about dropping Windows Storage Server and giving OpenFiler a try. Should I consider this?


Answer



OK, problem solved.



It turned out to be NIC issues. We changed the NICs and updated them to the latest drivers, and now I am getting close to 500 MB/s. We tested the speed using SQL Server, and it is now great.




Thank you all.

