Connecting the controller to any of the three PCIe x16 slots yields choppy read performance of around 750 MB/sec.
The lowly PCIe x4 slot yields a steady 1.2 GB/sec read.
Everything else is held constant: same files, same Windows Server 2008 R2 OS, same 24-disk RAID6 array of Seagate ES.2 3TB drives on an LSI 9286-8e, same Dell R7610 Precision Workstation with A03 BIOS, same W5000 graphics card (no other cards), same settings, etc. I see very low CPU utilization in both cases.
SiSoft Sandra reports x8 at 5 GT/sec in the x16 slot and x4 at 5 GT/sec in the x4 slot, as expected.
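For reference, here is a rough back-of-the-envelope calculation of what those negotiated links should be able to carry. At 5 GT/sec (PCIe 2.0) each lane uses 8b/10b encoding, so it tops out at about 500 MB/sec of payload before protocol overhead. The sketch below is only a minimal illustration of that arithmetic; the ~15% allowance for TLP headers and flow control is my own assumption, not a measured figure.

```python
# Rough PCIe 2.0 bandwidth estimate (assumptions: 8b/10b encoding,
# ~15% extra protocol overhead for TLP headers / flow control).

def pcie2_bandwidth_mb_s(lanes, gt_per_sec=5.0, protocol_overhead=0.15):
    """Approximate usable one-way bandwidth in MB/sec for a PCIe 2.0 link."""
    raw_bits = gt_per_sec * 1e9           # raw transfer rate per lane, bits/sec
    payload_bits = raw_bits * 8 / 10      # 8b/10b encoding leaves 80% for data
    per_lane_mb = payload_bits / 8 / 1e6  # convert to MB/sec (~500 MB/sec)
    return lanes * per_lane_mb * (1 - protocol_overhead)

print(f"x4 at 5 GT/sec: ~{pcie2_bandwidth_mb_s(4):.0f} MB/sec")  # ~1700 MB/sec
print(f"x8 at 5 GT/sec: ~{pcie2_bandwidth_mb_s(8):.0f} MB/sec")  # ~3400 MB/sec
```

Either way, both links should comfortably exceed the 750 MB/sec seen in the x16 slots, so the negotiated link width itself does not look like the limiting factor.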
I'd like to be able to rely on the sheer speed of the x16 slots.
What gives? What can I try? Any ideas? Please assist.
Cross-posted from http://en.community.dell.com/support-forums/desktop/f/3514/t/19526990.aspx
Follow-up information
We did some more performance testing, reading from 8 SSDs connected directly (without an expander chip). This means that both SAS cables were utilized. We saw nearly double the performance, but it varied from run to run: 2.0, 1.8, 1.6, and 1.4 GB/sec were observed, then performance jumped back up to 2.0 GB/sec.
The SSD RAID0 tests were conducted in an x16 PCIe slot, with all other variables kept the same. It seems to me that we were getting roughly double the performance of the HDD-based RAID6 array.
Just for reference: the maximum possible read burst speed over a single channel of SAS 6 Gb/sec is about 570 MB/sec due to 8b/10b encoding and protocol limitations (a SAS cable provides four such channels).
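To put the SSD numbers in context, the same kind of arithmetic can be applied to the SAS side, using the ~570 MB/sec per-channel figure quoted above. This is only a sketch of the theoretical ceiling; it assumes the 8 SSDs were spread evenly across the two cables.

```python
# Aggregate SAS 6 Gb/sec ceiling, using the ~570 MB/sec per-channel burst
# figure quoted above (8b/10b encoding and protocol overhead already included).

PER_CHANNEL_MB_S = 570    # max read burst per SAS 6 Gb/sec channel
CHANNELS_PER_CABLE = 4    # a SAS cable carries four such channels

def sas_ceiling_mb_s(cables):
    """Theoretical aggregate read bandwidth for a given number of SAS cables."""
    return cables * CHANNELS_PER_CABLE * PER_CHANNEL_MB_S

print(f"one cable : ~{sas_ceiling_mb_s(1)} MB/sec")  # ~2280 MB/sec
print(f"two cables: ~{sas_ceiling_mb_s(2)} MB/sec")  # ~4560 MB/sec
```

With both cables in play the theoretical ceiling sits well above the ~2.0 GB/sec we observed, so the SAS wiring itself was not the bottleneck in the SSD test.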
Answer
Our resolution has been to go with an HP Gen8 server, but I suspect a Dell T620 might work as well. Both of these machines route all PCIe lanes on the planar itself, without riser cards. Testing shows good, reliable performance on the HP Gen8.