Thursday, February 22, 2018

MySQL server stops randomly. Is it possible that the system kills it during high load or when available memory is low?

I have an Ubuntu webserver (Apache + MySQL + PHP) on a very small machine on Amazon Web Services (EC2 micro instance). The website runs fine and is very fast, so our small amount of traffic doesn't seem to slow the server down at all.



Anyway, MySQL randomly goes down quite often (at least once a week) and I can't figure out why. Apache, on the other hand, keeps running fine. I have to log in via SSH and restart MySQL, and then everything runs fine again:



$ sudo service mysql status
mysql stop/waiting
$ sudo service mysql start
mysql start/running, process 25384



I've installed Cacti for performance monitoring, and I can see that every time MySQL goes down there is a single high peak in load average (up to 10, when it is normally below 1). This is strange because it doesn't seem to coincide with cron jobs or anything similar.



I also tried to inspect the MySQL logs: the slow query log (which I'm sure is enabled), /var/log/mysql.log and /var/log/mysql.err are all empty. I thought that maybe the system automatically shut it down because of low available memory; is that possible?
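One quick way to check this (a sketch, assuming a standard Ubuntu setup where kernel messages end up in /var/log/syslog and /var/log/kern.log) is to search for OOM-killer messages around the time MySQL died:

$ grep -i "out of memory" /var/log/syslog /var/log/kern.log
$ dmesg | grep -iE "oom|killed process"

If the kernel's OOM killer terminated mysqld, these should show a line naming the process that was killed.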



Now I'm trying to set up a bigger EC2 instance, but I just found something that looks critical (though I can't understand it) in /var/log/syslog. I've pasted the relevant part here (MySQL went down at 11:47).


Answer



Yes, it seems your box ran out of free RAM and the kernel's OOM killer terminated MySQL to protect system stability. Try an instance with more RAM!
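If upgrading the instance isn't possible right away, a common stopgap on a micro instance (a sketch only; the swap file path and size here are illustrative) is to add a swap file so the kernel can page memory out instead of killing mysqld:

$ sudo dd if=/dev/zero of=/swapfile bs=1M count=1024
$ sudo chmod 600 /swapfile
$ sudo mkswap /swapfile
$ sudo swapon /swapfile

To keep the swap file across reboots you would also add it to /etc/fstab. Another lever is shrinking MySQL's memory footprint in its configuration (for example lowering the InnoDB buffer pool size), at the cost of some performance.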

