Monday, September 26, 2016

debian - My VPS apache goes down often



I have a Debian VPS with 2 GB of RAM, which should be more than enough for my workload.
Recently, however, my website has been going down often, and I still can't pin down the exact reason.



At first I had 512 MB of RAM, which was really too small for my website; the logs showed the site using at least 450 MB. I upgraded to 2 GB hoping that would solve everything, but it hasn't helped.



Then I thought my website code might be running a huge process, and it actually was. So I rebuilt a simpler system to cut down that heavy processing. Still, the same problem persisted.



Now I'm thinking it might be a visitor-load problem. But there are fewer than 30 active visitors, and 2 GB of RAM should be enough to handle them all. When the site goes down, RAM usage is about 400-500 MB of the 2 GB, which to me confirms it's not a RAM problem.




So I'm really confused now. What else could it be?



The Apache error logs contain only PHP notices and other unimportant entries that have nothing to do with taking Apache down, but I'm sure it's an Apache problem, because SSH connects and works perfectly while the website is down.



What problems should I expect, or what else should I check? Could it be an Apache limit on concurrent visitors?


Answer



While I have little information about what is happening in terms of TCP handshaking or other network issues, it appears (from your comment) that more than 10 users are being processed concurrently, and the MaxClients directive in your apache.conf is too low to handle your traffic. I would increase that number. Since I don't know what kind of traffic your server receives, I'd set the value to at least 50 and raise it further if load testing reveals problems. You can run a load test with a free service such as Load Impact. [No affiliation]
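As a sketch, assuming the prefork MPM (Debian's default for Apache 2.2 with mod_php), the relevant section of apache.conf might be tuned like this; the exact values here are illustrative assumptions to adjust after load testing:

```apache
# Prefork MPM tuning -- illustrative values, assuming roughly 50
# concurrent requests; adjust after load testing and watching RAM.
<IfModule mpm_prefork_module>
    StartServers          5
    MinSpareServers       5
    MaxSpareServers      10
    MaxClients           50    # was too low; raise until queuing stops
    ServerLimit          50    # must be >= MaxClients
    MaxRequestsPerChild 500    # recycle children to contain PHP memory growth
</IfModule>
```

Rough sanity check: if each PHP-laden child uses on the order of 10-20 MB, 50 children fit comfortably within 2 GB of RAM, so memory should not be the new bottleneck at this setting.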



From http://httpd.apache.org/docs/2.2/mod/mpm_common.html#maxclients:





The MaxClients directive sets the limit on the number of simultaneous requests that will be served. Any connection attempts over the MaxClients limit will normally be queued [emphasis mine], up to a number based on the ListenBacklog directive. Once a child process is freed at the end of a different request, the connection will then be serviced.




Your connections appear to 'hang' because they are being queued for processing, although I don't doubt your server can handle a fair amount of concurrency.
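One quick way to check whether you are hitting the limit is to compare the number of running Apache worker processes against your MaxClients value while the site hangs. A minimal sketch, assuming Debian's `apache2` process name:

```shell
#!/bin/sh
# Count Apache worker processes. Debian names the binary "apache2";
# on other distributions it may be "httpd". If this count sits at your
# MaxClients value while the site hangs, new requests are being queued.
ps aux | grep -c '[a]pache2'
```

The `[a]pache2` pattern keeps grep from matching its own process. For a more detailed view, enabling mod_status and visiting /server-status shows a scoreboard of busy versus idle workers.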

