Wednesday, December 21, 2016

MySQL on Linux out of memory



OS: Red Hat Enterprise Linux Server release 5.3 (Tikanga)

Architecture: Intel Xeon, 64-bit

MySQL: MySQL Server 5.5.20, Enterprise Edition (Advanced)

Application: Liferay

The database is about 200 MB and the server has 64 GB of RAM.
Memory consumption increases gradually until the server runs out of memory. Only a reboot releases the memory, and then consumption starts climbing again, reaching 63-64 GB in less than a day.

Parameter details:

key_buffer_size = 16M
innodb_buffer_pool_size = 3G
innodb_buffer_pool_instances = 3
max_connections = 1000
innodb_flush_method = O_DIRECT
innodb_change_buffering = inserts
read_buffer_size = 2M
read_rnd_buffer_size = 256K



This is a serious production server issue that I am facing.

What could be the reason behind this, and how can it be resolved?



This is the report from 2 PM today; Linux was rebooted yesterday at around 10 PM.



Output of free -m




             total       used       free     shared    buffers     cached
Mem:         64455      22053      42402          0       1544       1164
-/+ buffers/cache:      19343      45112
Swap:        74998          0      74998





Output of vmstat 2 5




procs -----------memory---------- ---swap-- -----io---- --system-- -----cpu------
 r  b   swpd     free     buff    cache   si   so    bi    bo    in    cs us sy id wa st
 0  0      0 43423976  1583700  1086616    0    0     1   173    22    27  1  1 98  0  0
 2  0      0 43280200  1583712  1228636    0    0     0   146  1265   491  2  2 96  1  0
 0  0      0 43421940  1583724  1087160    0    0     0   138  1469   738  2  1 97  0  0
 1  0      0 43422604  1583728  1086736    0    0     0  5816  1615   934  1  1 97  0  0
 0  0      0 43422372  1583732  1086752    0    0     0  2784  1323   545  2  1 97  0  0


Output of top -n 3 -b






top - 14:16:22 up 16:32, 5 users, load average: 0.79, 0.77, 0.93
Tasks: 345 total, 1 running, 344 sleeping, 0 stopped, 0 zombie
Cpu(s): 1.0%us, 0.9%sy, 0.0%ni, 98.1%id, 0.1%wa, 0.0%hi, 0.0%si, 0.0%st
Mem: 66002772k total, 22656292k used, 43346480k free, 1582152k buffers
Swap: 76798724k total, 0k used, 76798724k free, 1163616k cached

PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND
6434 mysql 15 0 4095m 841m 5500 S 113.5 1.3 426:53.69 mysqld

1 root 15 0 10344 680 572 S 0.0 0.0 0:03.09 init
2 root RT -5 0 0 0 S 0.0 0.0 0:00.01 migration/0
3 root 34 19 0 0 0 S 0.0 0.0 0:00.00 ksoftirqd/0
4 root RT -5 0 0 0 S 0.0 0.0 0:00.00 watchdog/0
5 root RT -5 0 0 0 S 0.0 0.0 0:00.01 migration/1


Answer



I had a similar issue. To figure out what was happening, I modified the mysqltuner.pl script to make it more verbose.




If you are using any variation of the my-innodb-heavy-4G.cnf config file, the bulk of MySQL's memory usage will be roughly:



memory usage = min(tmp_table_size, max_heap_table_size)
             + key_buffer_size + query_cache_size
             + innodb_buffer_pool_size + innodb_additional_mem_pool_size + innodb_log_buffer_size
             + max_connections *
               ( read_buffer_size + read_rnd_buffer_size
               + sort_buffer_size + thread_stack + join_buffer_size )



This sum does not include every factor; refer to the mysqltuner.pl script code (and run it) to see them all.
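
As a rough sanity check, you can also ask the server to evaluate the estimate above from its own current settings. This is only a sketch of that formula using standard MySQL 5.5 system variables; it ignores per-query allocations and other buffers:

SELECT ( LEAST(@@tmp_table_size, @@max_heap_table_size)
       + @@key_buffer_size + @@query_cache_size
       + @@innodb_buffer_pool_size + @@innodb_additional_mem_pool_size
       + @@innodb_log_buffer_size
       + @@max_connections * ( @@read_buffer_size + @@read_rnd_buffer_size
                             + @@sort_buffer_size + @@thread_stack
                             + @@join_buffer_size )
       ) / 1024 / 1024 / 1024 AS max_memory_gb;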



So it seems you need to significantly lower read_buffer_size, read_rnd_buffer_size, sort_buffer_size, thread_stack and join_buffer_size, since their sum is multiplied by max_connections = 1000.



Another solution is to lower max_connections a bit. With this much memory going to per-thread buffers, innodb_buffer_pool_size and the other InnoDB-related variables become a minor issue.
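
For example, a my.cnf sketch along these lines; the numbers below are purely illustrative assumptions, not tested recommendations, so adjust them to your own workload:

[mysqld]
# per-thread buffers: in the worst case each is multiplied by max_connections
read_buffer_size     = 256K
read_rnd_buffer_size = 256K
sort_buffer_size     = 256K
join_buffer_size     = 256K
thread_stack         = 256K
# cap concurrent connections at what the application actually needs
max_connections      = 300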



You can also try to figure out whether your application really needs large sort_buffer_size and join_buffer_size values. If not, lower them.
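
One way to check is to watch the sort-, join- and temporary-table-related status counters while the application runs; if they stay low, the large per-thread buffers are not buying you much:

SHOW GLOBAL STATUS LIKE 'Sort_merge_passes';
SHOW GLOBAL STATUS LIKE 'Select_full_join';
SHOW GLOBAL STATUS LIKE 'Created_tmp_disk_tables';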




Hope it helped.

