The non-mapped virtual memory stat for our mongo primary has always been constant, and we never gave it much thought until yesterday, when a series of accidental full-collection scans from a poorly designed query caused a serious slowdown: the mongod process was pinned at 100% CPU and every query took tens of seconds.
After offloading the offending query to our secondaries, the performance problems disappeared, but the non-mapped virtual memory more than doubled and hasn't gone down since. It used to hold at about 600MB; now it sits at about 1.4GB. The increase was immediate, correlated exactly with the slowdown, and the value hasn't changed at all since.
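For reference, the figure can be reproduced from db.serverStatus(): as far as I understand, it is roughly mem.virtual minus mem.mappedWithJournal (or mem.mapped when journaling is off). A minimal sketch in Python, assuming pymongo and a placeholder localhost URI:

    # Sketch: approximate the "non-mapped virtual memory" figure from serverStatus.
    # Assumes pymongo, a placeholder localhost URI, and that the stat is roughly
    # mem.virtual minus mem.mappedWithJournal (mem.mapped when journaling is off).
    from pymongo import MongoClient

    client = MongoClient("mongodb://localhost:27017")  # placeholder URI
    mem = client.admin.command("serverStatus")["mem"]

    mapped = mem.get("mappedWithJournal") or mem.get("mapped", 0)  # values are in MB
    non_mapped_mb = mem["virtual"] - mapped
    print(f"virtual={mem['virtual']}MB  mapped={mapped}MB  non-mapped~{non_mapped_mb}MB")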
The number of connections has been completely constant throughout, so we can rule that out as the cause.
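(This can be confirmed from the same serverStatus output; each open connection keeps a thread stack that contributes to virtual, non-mapped memory. A quick check, again assuming pymongo and a placeholder localhost URI:)

    # Quick check of connection counts; each open connection keeps a thread
    # stack that contributes to virtual (non-mapped) memory.
    from pymongo import MongoClient

    client = MongoClient("mongodb://localhost:27017")  # placeholder URI
    conns = client.admin.command("serverStatus")["connections"]
    print(f"current={conns['current']}  available={conns['available']}")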
What might cause this? Is it a problem? Should we be concerned?
Running on Ubuntu 12.04 64-bit on an EC2 instance.
Answer
Because virtual address space is effectively free, processes rarely bother to release it or minimize its usage once it has been reserved. As long as the resident set size stays reasonable, I wouldn't worry about it.
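If you want to confirm that, here is a minimal sketch (again assuming pymongo and a placeholder localhost URI) that samples resident and virtual memory over time, so you can see the resident set stay flat even while the virtual figure stays inflated:

    # Sketch: sample resident vs. virtual memory periodically to confirm the
    # resident set stays reasonable; URI and interval are placeholders.
    import time

    from pymongo import MongoClient

    client = MongoClient("mongodb://localhost:27017")  # placeholder URI
    while True:
        mem = client.admin.command("serverStatus")["mem"]
        print(f"resident={mem['resident']}MB  virtual={mem['virtual']}MB")
        time.sleep(60)  # sample once a minute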