I've troubleshot the heck out of this today, and I can't seem to find any information on how to determine exactly what is happening.
Basically, on my development server, another developer is causing CLOSE_WAIT connections that tie up one or more apache2 processes for several hours unless I restart apache2.
strace on any of the affected processes yields no information beyond the fact that it attached.
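For reference, this is roughly how I've been attaching (a minimal sketch; <PID> stands in for the hanging apache2 process ID):

    strace -f -p <PID>
    # only prints that it attached; if the worker is stuck in a syscall
    # that never returns, nothing more appears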
mod_proxy is not enabled.
KeepAlive is on, KeepAliveTimeout is 15 seconds, MaxKeepAliveRequests is 100.
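For completeness, those settings correspond to these directives (on Debian they typically live in /etc/apache2/apache2.conf; the location may differ on other setups):

    KeepAlive On
    KeepAliveTimeout 15
    MaxKeepAliveRequests 100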
From what I've been reading, this may or may not be an Apache issue at all, just how CLOSE_WAIT works: the remote end has already sent its FIN, and the socket sits in CLOSE_WAIT until the local application (Apache, in this case) closes its side of the connection.
I just can't believe that a server would be crippled so easily by a connection the remote host has already half-closed, especially with no intervention for well over an hour.
Any tips? I'm about to pull my hair out.
Edit: Also, there are no unusual entries in any of the Apache log files.
Edit 2: lsof -i shows only a single CLOSE_WAIT connection per hanging process. (That's what has been bothering me about this: most other discussions talk about many CLOSE_WAIT connections, while I only have one per process.)
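In case it helps, this is roughly how I've been checking which worker owns the half-closed socket (the exact flags are just what I've been using; run as root to see other users' processes):

    lsof -i -nP | grep CLOSE_WAIT
    # alternatively, ss can filter by TCP state and show the owning process:
    ss -tnp state close-wait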
The nature of the code being run (PHP) doesn't really lend itself to leaving connections open or needing to close them explicitly. I can run the same code he is executing, with the same session data, and not end up with a hanging process.
Answer
Try enabling mod_status. In its config (on Debian, /etc/apache2/mods-enabled/status.conf), add your IP address to the Allow from line and set
ExtendedStatus On
Then visit your server's default host website, and append /server-status/ to the end of the URL. That should give you more information on what the server is up to.
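Something like this in status.conf should do it (Apache 2.2-style access control; the 192.0.2.x address is just a placeholder for your own IP):

    <IfModule mod_status.c>
        ExtendedStatus On
        <Location /server-status>
            SetHandler server-status
            Order deny,allow
            Deny from all
            Allow from 127.0.0.1 192.0.2.10
        </Location>
    </IfModule>

With ExtendedStatus on, each worker row on the status page shows its state plus the client and the request it is (or was last) handling, which should point at whatever the other developer is hitting.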
Sorry that this isn't really a fix, but more a way to get more information! I wasn't able to just comment on your question.