Thursday, January 26, 2017

linux - memory tuning with rails/unicorn running on ubuntu





I am running Unicorn on Ubuntu 11, Rails 3.0, and Ruby 1.8.7.



It is an 8-core EC2 box, and I am running 15 workers. CPU never seems to get pinned, and I seem to be handling requests pretty nicely.



My question concerns memory usage, and what concerns, if any, I should have about what I am seeing.



Here is the scenario:



Under constant load (about 15 reqs/sec coming in from nginx), each server in the 3-server cluster loses about 100 MB of free memory per hour. The slope is linear for about 6 hours, then it appears to level out, though each server still seems to lose about 10 MB/hour.




If I drop the page caches using the Linux command echo 1 > /proc/sys/vm/drop_caches, available free memory shoots back up to what it was when I started the Unicorns, and the memory-loss pattern begins again over the following hours.
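As a sanity check, the same counters that free reports can be read straight from /proc/meminfo without touching the caches; a minimal Ruby sketch (the field names are standard Linux, values in kB; nothing here is specific to my app):

# meminfo.rb -- print the counters behind free's output (values in kB)
wanted = %w[MemTotal MemFree Buffers Cached]
File.readlines("/proc/meminfo").each do |line|
  name, value = line.split(":")
  puts "#{name.ljust(10)} #{value.strip}" if wanted.include?(name)
end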



Before (output of free, values in kB):




             total       used       free     shared    buffers     cached
Mem:       7130244    5005376    2124868          0     113628     422856
-/+ buffers/cache:    4468892    2661352
Swap:     33554428          0   33554428



After dropping the caches:




             total       used       free     shared    buffers     cached
Mem:       7130244    4467144    2663100          0        228      11172
-/+ buffers/cache:    4455744    2674500
Swap:     33554428          0   33554428



My Ruby code does use memoization, and I'm assuming Ruby/Rails/Unicorn keeps its own caches... what I'm wondering is: should I be worried about this behaviour?
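For clarity, this is the kind of memoization I mean; a hypothetical sketch (the class and lookup are made up), where the cache lives until the worker exits, so each of the 15 workers slowly grows its own copy:

# Hypothetical per-worker memoization: the class-level cache is never
# released while the worker process lives.
class CurrencyConverter
  def self.rate_for(currency)
    @rates ||= {}
    @rates[currency] ||= expensive_lookup(currency)
  end

  # Stand-in for a slow API call or DB query.
  def self.expensive_lookup(currency)
    sleep 0.1
    currency.hash % 100
  end
end

puts CurrencyConverter.rate_for("EUR")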



FWIW, my Unicorn config:




worker_processes 15

listen "#{CAPISTRANO_ROOT}/shared/pids/unicorn_socket", :backlog => 1024
listen 8080, :tcp_nopush => true
timeout 180

pid "#{CAPISTRANO_ROOT}/shared/pids/unicorn.pid"

# Use copy-on-write-friendly GC where the runtime supports it (e.g. REE).
GC.respond_to?(:copy_on_write_friendly=) and GC.copy_on_write_friendly = true

before_fork do |server, worker|
  STDERR.puts "XXXXXXXXXXXXXXXXXXX BEFORE FORK"
  print_gemfile_location

  # Drop connections inherited from the (preloaded) master; each worker
  # re-establishes its own in after_fork below.
  defined?(ActiveRecord::Base) and ActiveRecord::Base.connection.disconnect!
  defined?(Resque) and Resque.redis.client.disconnect

  # Zero-downtime deploys: once the new master is forking workers,
  # ask the old master to quit.
  old_pid = "#{CAPISTRANO_ROOT}/shared/pids/unicorn.pid.oldbin"
  if File.exists?(old_pid) && server.pid != old_pid
    begin
      Process.kill("QUIT", File.read(old_pid).to_i)
    rescue Errno::ENOENT, Errno::ESRCH
      # already killed
    end
  end

  File.open("#{CAPISTRANO_ROOT}/shared/pids/unicorn.pid.ok", "w") { |f| f.print($$.to_s) }
end

after_fork do |server, worker|
  defined?(ActiveRecord::Base) and ActiveRecord::Base.establish_connection
  defined?(Resque) and Resque.redis.client.connect
end



Is there a need to experiment with enforcing more stringent garbage collection using OobGC (http://unicorn.bogomips.org/Unicorn/OobGC.html)? Or is this just normal behaviour, where the system will empty the caches by itself when it needs more memory, without me manually running that drop_caches command? Basically, is this normal, expected behaviour?
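For reference, the linked docs show OobGC being enabled as Rack middleware; a minimal sketch of what I'd try in config.ru (the interval of 5 is illustrative, and MyApp::Application is a placeholder for the real Rails 3 app constant):

# config.ru -- sketch only
require ::File.expand_path('../config/environment', __FILE__)
require 'unicorn/oob_gc'

# Run GC outside the request/response cycle, after every 5 requests
# (5 is a guess to tune, not a recommendation).
use Unicorn::OobGC, 5

run MyApp::Application  # placeholder app constant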



tia


Answer



This is the line that matters (specifically the last column):



-/+ buffers/cache:    4468892    2661352



You'll note that this number barely changed when you dropped your caches (2661352 kB before versus 2674500 kB after).
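That column is simply free memory with buffers and page cache added back, which is why dropping caches barely moves it:

-/+ buffers/cache free = free + buffers + cached
before: 2124868 + 113628 + 422856 = 2661352 kB
after:  2663100 +    228 +  11172 = 2674500 kB

The ~13 MB difference between those two figures is noise compared with the 4468892 kB (about 4.3 GiB) your application processes are actually using.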




The OS will take care of freeing those buffers and caches itself when running applications demand more memory. For what you're doing, it isn't productive to micromanage how the OS handles its memory, particularly given that you appear to have plenty.

