Saturday, April 2, 2016

Avoid Linux out-of-memory application teardown



I'm finding that on occasion my Linux box runs out of memory and it starts tearing down random processes to deal with it.



I'm curious what administrators do to avoid this. Is the only real solution to add more memory (will adding swap alone help?), or are there better ways to set up the box with software to avoid this (e.g., quotas or some such)?


Answer



By default Linux has a somewhat brain-damaged concept of memory management: it lets you allocate more memory than your system has, then randomly shoots a process in the head when it gets into trouble. (The actual semantics of what gets killed are more complex than that; Google "Linux OOM Killer" for lots of details and arguments about whether it's a good or bad thing.)







To restore some semblance of sanity to your memory management:




  1. Disable the OOM Killer (Put vm.oom-kill = 0 in /etc/sysctl.conf)

  2. Disable memory overcommit (Put vm.overcommit_memory = 2 in /etc/sysctl.conf).
    Note that this is a trinary value: 0 = "estimate if we have enough RAM", 1 = "always say yes", 2 = "say no if we don't have the memory". Both settings are shown together in the snippet below.
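
For reference, with both changes in place the relevant block of /etc/sysctl.conf would look something like this (assuming your kernel actually exposes the vm.oom-kill knob; the overcommit setting is the important one either way):

# Fail allocations up front rather than OOM-killing processes later
vm.oom-kill = 0
vm.overcommit_memory = 2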



These settings will make Linux behave in the traditional way (if a process requests more memory than is available, malloc() will fail and the requesting process is expected to cope with that failure).
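
To make "cope with that failure" concrete, here is a minimal C sketch of the application side; the 8 GiB request size is purely illustrative and assumes a 64-bit system, and none of this is part of the original answer:

#include <stdio.h>
#include <stdlib.h>

int main(void)
{
    /* With overcommit disabled, a request the kernel cannot back with
       RAM + swap fails here, instead of "succeeding" and having the
       process OOM-killed later when the pages are actually touched. */
    size_t want = (size_t)8 * 1024 * 1024 * 1024;   /* 8 GiB, illustrative */
    char *buf = malloc(want);

    if (buf == NULL) {
        fprintf(stderr, "could not allocate %zu bytes, backing off\n", want);
        return 1;   /* the application decides what "coping" means here */
    }

    /* ... use buf ... */
    free(buf);
    return 0;
}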



Reboot your machine to make it reload /etc/sysctl.conf, or use the proc file system to apply the new setting right away, without a reboot:




echo 2 > /proc/sys/vm/overcommit_memory 
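
If the sysctl utility is available (a safe assumption on most distributions), you can instead reload the file and confirm what is in effect:

sysctl -p                      # re-read /etc/sysctl.conf
sysctl vm.overcommit_memory    # print the value currently in effect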
