Thursday, August 9, 2018

web server - deleting linux cached ram




I have a webserver with 8GB of RAM running a pretty intensive PHP site (1 site) that does file manipulation, graphing, emailing, forums, you name it. The environment is far from static, which leads me to believe that very little could be gained from caching anything in RAM, since almost every request to the server creates new or updated pages. A lot of caching is also done client side, so we get a ton of 304 responses for images, JavaScript, and CSS.




Additionally, I do have language files that are written to flat files on the server, where having them cached in RAM definitely beats reading from disk. But there are only a handful of files like this.



In about two weeks I've gone from 98% free RAM to 4% free RAM. This occurred during a period in which we also pushed several large svn updates onto the server.
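For what it's worth, those percentages come from free; on a box like this most of the "used" memory is really reclaimable cache (column layout below assumes a recent procps free, and the figures are purely illustrative):

free -h
#               total    used    free   shared  buff/cache  available
# Mem:           7.8G    1.2G    300M     50M        6.3G       6.2G
# "free" is memory nobody has touched yet, "buff/cache" is page cache the
# kernel will hand back on demand, and "available" is the realistic figure
# for how much a new process could still allocate.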



My question is whether my server will be better tuned if I periodically clear my cache (I'm aware of Linus Torvalds' feeling about cache) using the following command:



sync; echo 3 > /proc/sys/vm/drop_caches
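For reference, the value written selects what gets dropped; these are the standard kernel meanings, not anything specific to my setup:

sync                                   # flush dirty pages to disk first
echo 1 > /proc/sys/vm/drop_caches      # drop the page cache only
echo 2 > /proc/sys/vm/drop_caches      # drop dentries and inodes
echo 3 > /proc/sys/vm/drop_caches      # drop both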



Or would I be better off editing the following file:



/proc/sys/vm/swappiness  


If I replace the default value of 60 with 30, I should have much less swapping going on and the kernel should reclaim stale cache instead.
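If I go the swappiness route, I assume something like the following would apply it (run as root; the change is temporary until reboot unless it also goes into /etc/sysctl.conf):

sysctl vm.swappiness=30                         # takes effect immediately
echo "vm.swappiness = 30" >> /etc/sysctl.conf   # persist across reboots
sysctl -p                                       # reload settings from /etc/sysctl.conf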



It sure feels good to see all that cache freed up by the first command, but I'd be lying if I told you it was good for a desktop environment. But what about a web server like the one I've described above? Thoughts?



EDIT: I'm aware that the system will reclaim memory from the cache as it needs it, but thanks for pointing that out for clarity. Am I imagining things when Apache slows down while most of the server's memory is sitting in cache? Or is that a different issue altogether?
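If it helps, next time the slowdown happens I can check whether the box is actually swapping rather than just holding cache, e.g. with vmstat:

vmstat 5 5
# Sustained non-zero values in the "si"/"so" columns mean pages are being
# swapped in and out, i.e. real memory pressure; a large "cache" column by
# itself is harmless.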



Answer



Clearing caches will hinder performance, not help. If the RAM were needed for something else, it would be used by something else, so all you are doing is lowering the cache hit ratio for a while after you've performed the clear.



If the data in cache is very out of date (i.e. it is stuff cached during an unusual operation), it will be replaced with "newer" data as needed, without you artificially clearing it.
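A quick way to see what the page cache buys you (the file path below is just a placeholder):

time cat /var/www/some-large-file > /dev/null   # first read comes from disk
time cat /var/www/some-large-file > /dev/null   # repeat is served from the page cache and is much faster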



The only normal reason to run sync; echo 3 > /proc/sys/vm/drop_caches is if you are about to run some I/O performance tests and want a known state to start from (running the cache drop between runs reduces differences in the results due to the cache being primed differently on each run).
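For example, a rough benchmarking loop might look like this (the test file and block size are placeholders):

for i in 1 2 3; do
    sync
    echo 3 > /proc/sys/vm/drop_caches             # start each run with a cold cache
    dd if=/var/tmp/testfile of=/dev/null bs=1M    # the I/O test itself (placeholder)
done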



The kernel will sometimes swap a few pages even though there is plenty of RAM it could claim back from cache/buffers, and tweaking the swappiness setting can stop that if you find it to be an issue for your server. You might see a small benefit from that, but you are likely to see a temporary performance drop from clearing cache and buffers artificially.
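To check whether that kind of incidental swapping is actually happening, the swap counters in /proc/meminfo give a quick picture (no particular setup assumed):

grep -i swap /proc/meminfo
# SwapTotal, SwapFree and SwapCached show how much has actually been pushed
# out to swap; a few MB is normal, steady growth is worth tuning swappiness for.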

