Wednesday, November 26, 2014

linux - SSD (Intel 530) read/write speed very slow with RAID 10

Explanation:



We have a Server:




  • Model: HP ProLiant DL160 G6

  • 4 x 240GB SSD (RAID-10)

  • 72GB DDR3 RAM

  • 2 x L5639


  • HP P410 RAID Controller (256MB, V6.40, Rom version: 8.40.41.00)



The SSDs are four brand-new 2.5" Intel 530 drives, rated at 540 MB/s read and 490 MB/s write.




  • CentOS 6

  • File systems are ext4




But this is the read-speed test result on the RAID 10 array:



[root@localhost ~]# hdparm -t /dev/sda

/dev/sda:
Timing buffered disk reads: 824 MB in 3.00 seconds = 274.50 MB/sec
[root@localhost ~]# hdparm -t /dev/mapper/vg_localhost-lv_root

/dev/mapper/vg_localhost-lv_root:
Timing buffered disk reads: 800 MB in 3.01 seconds = 266.19 MB/sec
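As a side note, hdparm can also report cached reads alongside buffered disk reads; comparing the two helps separate disk/controller throughput from memory speed. A quick sketch against the same device (requires root, and obviously only runs on the real hardware):

```shell
# -T measures cached reads (essentially RAM bandwidth), -t measures
# buffered disk reads. A huge gap between them is normal, but a low -t
# figure points at the disk/controller path rather than the page cache.
hdparm -tT /dev/sda
```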



And this is the write-speed result:



dd bs=1M count=512 if=/dev/zero of=test conv=fdatasync
512+0 records in
512+0 records out
536870912 bytes (537 MB) copied, 4.91077 s, 109 MB/s
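For what it's worth, the same dd pattern can be scaled up or down; the important part is conv=fdatasync, which forces the data to stable storage before dd reports, so the page cache doesn't inflate the number. A minimal sketch writing a throwaway file (the path is arbitrary):

```shell
# Write 16 MiB to a scratch file; conv=fdatasync makes dd flush the
# data to disk before printing its throughput figure.
dd bs=1M count=16 if=/dev/zero of=/tmp/ddtest conv=fdatasync
stat -c %s /tmp/ddtest   # 16 * 1048576 = 16777216 bytes
```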



We were hoping for ~1 GB/s read speed with RAID 10, but 270 MB/s isn't even the speed of a single disk!
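For context, the theoretical ceilings for this array can be sketched with shell arithmetic, using the drives' rated speeds (real-world numbers will be lower, and the controller can cap them further):

```shell
# 4-disk RAID 10: reads can be striped across all four members;
# writes go to both halves of each mirror, so only the two stripes
# count toward write throughput.
echo "max read:  $(( 4 * 540 )) MB/s"
echo "max write: $(( 2 * 490 )) MB/s"
```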



Questions:




  1. Why is it so slow?

  2. Is it because of the RAID Controller?



Update 1 - Same Read/Write Speed:




After changing some settings as mentioned in the answers, I have the result below:



(Does anyone know why it shows 4 GB/s instead of 400 MB/s as the read speed?!)



EDIT: It looks like the command was wrong and we should have used -s144g for this amount of RAM; that's why it shows 4 GB/s (as suggested in the comments by ewwhite).
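The sizing rule ewwhite pointed at can be sketched like this: the test file must be well beyond RAM (roughly 2x) or the read phases are served from the page cache instead of the disks:

```shell
# With 72 GB of RAM, a 56 GB test file fits entirely in the page
# cache, so iozone's "read" phases measure memory, not the array.
ram_gb=72
echo "iozone -t1 -i0 -i1 -i2 -r1m -s$(( ram_gb * 2 ))g"
```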



[root@192 ~]# iozone -t1 -i0 -i1 -i2 -r1m -s56g
Iozone: Performance Test of File I/O
Version $Revision: 3.408 $

Compiled for 64 bit mode.
Build: linux

Record Size 1024 KB
File size set to 58720256 KB
Command line used: iozone -t1 -i0 -i1 -i2 -r1m -s56g
Output is in Kbytes/sec
Each process writes a 58720256 Kbyte file in 1024 Kbyte records

Children see throughput for 1 initial writers = 135331.80 KB/sec
Children see throughput for 1 rewriters = 124085.66 KB/sec
Children see throughput for 1 readers = 4732046.50 KB/sec
Children see throughput for 1 re-readers = 4741508.00 KB/sec
Children see throughput for 1 random readers = 4590884.50 KB/sec
Children see throughput for 1 random writers = 124082.41 KB/sec


But the old hdparm -t /dev/sda command still shows:



Timing buffered disk reads: 810 MB in 3.00 seconds = 269.85 MB/sec




Update 2 (tuned-utils pack) - Read Speed is now 600MB/s:



Finally, some hope. We had disabled the RAID controller's cache and tried some other things earlier with no luck, but because we reloaded the server and installed the OS again, we had forgotten to install "tuned-utils" as suggested in ewwhite's answer. (Thank you, ewwhite, for suggesting this great package.)



After installing tuned-utils and choosing the enterprise-storage profile, the read speed is now ~600 MB/s, but the write speed is still very slow (~160 MB/s).
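For anyone reproducing this, applying the profile looks roughly like the following (a sketch for CentOS 6; package and profile names as in ewwhite's answer, and it requires root):

```shell
# Install tuned and switch to the enterprise-storage profile, which
# among other things raises block-device readahead and changes the
# I/O scheduler.
yum install -y tuned tuned-utils
tuned-adm profile enterprise-storage
tuned-adm active    # verify which profile is now active
```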



Here is the result for iozone -t1 -i0 -i1 -i2 -r1m -s144g command:



Children see throughput for 1 initial writers = 165331.80 KB/sec
Children see throughput for 1 rewriters = 115734.91 KB/sec
Children see throughput for 1 readers = 719323.81 KB/sec
Children see throughput for 1 re-readers = 732008.56 KB/sec
Children see throughput for 1 random readers = 549284.69 KB/sec
Children see throughput for 1 random writers = 116389.76 KB/sec


Even the hdparm -t /dev/sda command now shows:



Timing buffered disk reads: 1802 MB in 3.00 seconds = 600.37 MB/sec




Any suggestions for the very slow write speed?



Update 3 - Some information requested in comments:



Write speed is still very low (~150 MB/s, which isn't even a third of a single disk's rated speed).
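One hedged guess worth checking: the P410 keeps its write-back cache disabled unless a battery- or flash-backed cache module is installed, and that alone can cap write speed on this controller. Assuming HP's hpacucli utility is available, a read-only check would look like:

```shell
# Dump the controller configuration and inspect the cache lines;
# fields such as "Cache Status" show whether the write-back cache
# is actually in use (requires root).
hpacucli ctrl all show config detail | grep -i cache
```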



Output for df -h and fdisk -l:



[root@192 ~]# df -h
Filesystem            Size  Used Avail Use% Mounted on
/dev/sda1             441G  3.2G  415G   1% /
tmpfs                  36G     0   36G   0% /dev/shm


[root@192 ~]# fdisk -l
Disk /dev/sda: 480.0 GB, 480047620096 bytes
255 heads, 63 sectors/track, 58362 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Sector size (logical/physical): 512 bytes / 512 bytes

I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x00040c3c

Device Boot      Start         End      Blocks   Id  System
/dev/sda1   *        1       58363   468795392   83  Linux
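One more thing the fdisk output above suggests checking: partitions created with the old DOS defaults start at sector 63, which is not 4 KiB-aligned and can hurt SSD write performance. Displaying the partition table in sectors makes this visible (device name as above; needs the real hardware):

```shell
# -u lists partition boundaries in sectors instead of cylinders.
# A start sector of 63 is misaligned (not a multiple of 8); modern
# tools start the first partition at sector 2048 (1 MiB) instead.
fdisk -lu /dev/sda
```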
