I have the following problem: a program has a bug like the following:

#include <cstdlib>
#include <cstring>

int main() {
    for (;;) {
        // allocate 1KB over and over and never free it
        char *p = (char*)std::malloc(1024);
        std::memset(p, 1, 1024);
    }
}
It keeps allocating memory until my system starts swapping out pages of other applications in favor of that program, and I can't do anything on the box anymore. I've hit this problem several times with different applications (today it was with Moonlight 2 beta in Firefox). I thought the problem was that the program causes other programs' memory to be swapped out, so that it can use more physical memory.
Naturally I looked into ulimit and found two relevant settings:

-m    the maximum resident set size
-v    the size of virtual memory
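
As far as I understand, these map to the kernel's rlimits; here is a minimal sketch (assuming Linux, where ulimit -m corresponds to RLIMIT_RSS and ulimit -v to RLIMIT_AS) that prints what a process actually sees:

#include <sys/resource.h>
#include <cstdio>

int main() {
    struct rlimit rss, as;
    // RLIM_INFINITY prints as a very large number if no limit is set
    if (getrlimit(RLIMIT_RSS, &rss) == 0)
        std::printf("RLIMIT_RSS (ulimit -m): soft=%llu hard=%llu\n",
                    (unsigned long long)rss.rlim_cur,
                    (unsigned long long)rss.rlim_max);
    if (getrlimit(RLIMIT_AS, &as) == 0)
        std::printf("RLIMIT_AS  (ulimit -v): soft=%llu hard=%llu\n",
                    (unsigned long long)as.rlim_cur,
                    (unsigned long long)as.rlim_max);
}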
I read that the first denotes the total physical memory the process can use at once. To me, this seems more sensible than limiting the total virtual memory size, because virtual memory may be shared, or may not matter at all because it's swapped out anyway. So, after looking at typical resident set sizes with top (they range up to around 120MB for a usual Firefox session), I added the following to my .bashrc:
# limit usage to 256MB physical memory out of 1GB
ulimit -m 262144
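
For completeness, the same cap can presumably also be set from inside a process; a minimal sketch (again assuming Linux, and note that setrlimit() takes bytes while ulimit -m takes kilobytes):

#include <sys/resource.h>
#include <cstdio>

int main() {
    struct rlimit rl;
    rl.rlim_cur = 256UL * 1024 * 1024;  // soft limit: 256MB, in bytes
    rl.rlim_max = 256UL * 1024 * 1024;  // hard limit: 256MB, in bytes
    if (setrlimit(RLIMIT_RSS, &rl) != 0)
        std::perror("setrlimit");
    // ... the rest of the program would run under this limit ...
}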
But after running my test snippet above, it still brought my system down, and I had to wait around five minutes until the terminal recognized my ^C key presses. Usually, if I don't react within the first few seconds in these situations, I can only press the reset button, which I really don't like. So does anyone have a strategy for solving this? Why doesn't the physical limiting work? It seems to me that with it in place, other applications should still have enough physical memory to react sensibly.
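
In case it helps, here is a variant of the test snippet that reports when (and whether) a limit actually kicks in, since std::malloc returns a null pointer once an enforced limit is reached:

#include <cstdio>
#include <cstdlib>
#include <cstring>

int main() {
    for (unsigned long mb = 0; ; ++mb) {
        char *p = (char*)std::malloc(1024 * 1024);  // 1MB per iteration
        if (!p) {
            // if a limit is enforced, allocation eventually fails here
            std::printf("allocation failed after ~%lu MB\n", mb);
            return 1;
        }
        std::memset(p, 1, 1024 * 1024);  // touch the pages so they become resident
    }
}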