I'm using an Amazon EC2 C4.large instance (3.75 GB of memory in total) running Amazon-Linux-2015-09-HVM. Memory usage grows day by day, as if there were a memory leak, so I killed all of my programs and every memory-hungry process (Nginx, PHP-FPM, Redis, MySQL, sendmail). Strangely, the memory was not released and usage is still very high.
The line -/+ buffers/cache: 3070 696 shows the actual used/free memory with buffers and cache excluded, i.e. only 696 MB is really free:
$ free -m
             total       used       free     shared    buffers     cached
Mem:          3767       3412        354          4        138        203
-/+ buffers/cache:       3070        696
Swap:            0          0          0
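Before concluding this is a user-space leak, it may help to check whether the kernel itself is holding the memory, e.g. in slab caches. A quick look via /proc/meminfo (the field names below are standard; slabtop may need to be installed separately):

$ grep -E 'MemFree|Buffers|^Cached|Slab|SReclaimable|SUnreclaim' /proc/meminfo
$ slabtop -o | head -15    # largest kernel slab caches, if slabtop is available

A large SUnreclaim value would point at kernel-side allocations rather than at any of the killed daemons.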
As you can see, after the kill only a few user processes are left, and the largest uses just 0.1% of memory:
$ ps aux --sort=-resident|head -30
USER PID %CPU %MEM VSZ RSS TTY STAT START TIME COMMAND
root 32397 0.0 0.1 114232 6672 ? Ss 08:04 0:00 sshd: ec2-user [priv]
ec2-user 32399 0.0 0.1 114232 4032 ? S 08:04 0:00 sshd: ec2-user@pts/0
ntp 2329 0.0 0.1 23788 4020 ? Ss Dec06 0:00 ntpd -u ntp:ntp -p /var/run/ntpd.pid -g
ec2-user 32400 0.0 0.0 113572 3368 pts/0 Ss 08:04 0:00 -bash
rpcuser 2137 0.0 0.0 39828 3148 ? Ss Dec06 0:00 rpc.statd
root 2303 0.0 0.0 76324 2944 ? Ss Dec06 0:00 /usr/sbin/sshd
root 2089 0.0 0.0 247360 2676 ? Sl Dec06 0:00 /sbin/rsyslogd -i /var/run/syslogd.pid -c 5
root 1545 0.0 0.0 11364 2556 ? Ss Dec06 0:00 /sbin/udevd -d
root 1 0.0 0.0 19620 2540 ? Ss Dec06 0:00 /sbin/init
ec2-user 1228 0.0 0.0 117152 2480 pts/0 R+ 10:32 0:00 ps aux --sort=-resident
root 2030 0.0 0.0 9336 2264 ? Ss Dec06 0:00 /sbin/dhclient -q -lf /var/lib/dhclient/dhclient-eth0.leases -pf /var/run/dhclient-eth0.pid eth0
rpc 2120 0.0 0.0 35260 2264 ? Ss Dec06 0:00 rpcbind
root 2071 0.0 0.0 112040 2116 ? S
root 1667 0.0 0.0 11308 2064 ? S Dec06 0:00 /sbin/udevd -d
root 1668 0.0 0.0 11308 2040 ? S Dec06 0:00 /sbin/udevd -d
root 2373 0.0 0.0 117608 2000 ? Ss Dec06 0:00 crond
ec2-user 1229 0.0 0.0 107912 1784 pts/0 S+ 10:32 0:00 head -30
root 2100 0.0 0.0 13716 1624 ? Ss Dec06 0:09 irqbalance --pid=/var/run/irqbalance.pid
root 2432 0.0 0.0 4552 1580 ttyS0 Ss+ Dec06 0:00 /sbin/agetty ttyS0 9600 vt100-nav
root 2446 0.0 0.0 4316 1484 tty6 Ss+ Dec06 0:00 /sbin/mingetty /dev/tty6
root 2439 0.0 0.0 4316 1464 tty3 Ss+ Dec06 0:00 /sbin/mingetty /dev/tty3
root 2437 0.0 0.0 4316 1424 tty2 Ss+ Dec06 0:00 /sbin/mingetty /dev/tty2
root 2444 0.0 0.0 4316 1416 tty5 Ss+ Dec06 0:00 /sbin/mingetty /dev/tty5
root 2434 0.0 0.0 4316 1388 tty1 Ss+ Dec06 0:00 /sbin/mingetty /dev/tty1
root 2441 0.0 0.0 4316 1388 tty4 Ss+ Dec06 0:00 /sbin/mingetty /dev/tty4
dbus 2160 0.0 0.0 21768 232 ? Ss Dec06 0:00 dbus-daemon --system
root 2383 0.0 0.0 15372 144 ? Ss Dec06 0:00 /usr/sbin/atd
root 2106 0.0 0.0 4384 88 ? Ss Dec06 0:16 rngd --no-tpm=1 --quiet
root 2 0.0 0.0 0 0 ? S Dec06 0:00 [kthreadd]
No single process is using much memory, yet only 696 MB of the 3.75 GB total is free. Is this a bug in EC2 or Amazon Linux? On another T2.micro instance I run, killing Nginx/MySQL/PHP-FPM does release the memory and the free number jumps back up.
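To double-check that user processes really don't account for the usage, you can sum the RSS column of ps; this is a rough sketch (RSS counts shared pages more than once, so the total is an upper bound):

$ ps aux | awk 'NR>1 {sum+=$6} END {printf "total RSS: %.0f MB\n", sum/1024}'

Given the listing above (the largest RSS is about 6.5 MB, sorted descending), the total should come to a few tens of megabytes, nowhere near the 3070 MB reported as used.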
I'd appreciate any help.
Answer
I don't have a C4.large instance handy to check my theory, so I may be shooting in the dark, but have you checked the stats for the Xen balloon driver?
Here's a dramatic explanation of the possible mechanism: http://lowendbox.com/blog/how-to-tell-your-xen-vps-is-overselling-memory/
And here's documentation of the various sysfs paths that will give you more information: https://www.kernel.org/doc/Documentation/ABI/stable/sysfs-devices-system-xen_memory
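For reference, a minimal check using the paths from that document (assuming the balloon device shows up as xen_memory0, which may vary by kernel):

$ cat /sys/devices/system/xen_memory/xen_memory0/target_kb
$ cat /sys/devices/system/xen_memory/xen_memory0/info/current_kb

If current_kb is substantially below the instance's nominal 3.75 GB (~3932160 kB), the hypervisor has ballooned memory away from the guest, and no amount of killing processes inside the VM will get it back.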