[CLUE-Tech] Memory woes

David Anselmi anselmi at americanisp.net
Wed Jul 31 18:02:03 MDT 2002


Mike Staver wrote:
> Occasionally I log onto my linux box only to find most of my
> services have stopped.  This has happened about 3 times in the last
> month, and it's concerning me more and more.  I have 1.2 gigs of memory
> in this machine, and about 512 megs of swap.  More than enough I would
> think.  I'm running Red Hat 7.3 with the latest red hat build of
> 2.4.18-5.  I found out what caused the services to shut down when I
> ran dmesg:
> 
[...]
> 
> Has anyone else had similar problems with the 2.4.18-5 kernel, or more
> specifically, the Red Hat builds of this on 7.3?  By having 1.2 gigs of
> memory and that swap space, I thought for sure I was safe.  

Don't have Red Hat, don't have gigs of memory, don't have out of memory 
errors ;-)  Here are some ideas:

Check your other logs to see whether anything odd corresponds to the out 
of memory errors.  For example, another program may break and log errors 
as it grows out of control.  It doesn't seem like open but unused TCP 
connections would use all your memory, but maybe, and the apache logs 
might show that.  Before you run out of memory you probably start hitting 
swap heavily and your performance nosedives, so that may show up too.
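
Something like this should show when it happened and how many connections 
were open, assuming syslog goes to /var/log/messages on your box (the 
kernel's out of memory messages usually say "Out of Memory", so grep for 
that):

    # look for the kernel's OOM messages in the syslog archives
    grep -i "out of memory" /var/log/messages*

    # count the TCP connections currently established
    netstat -tn | grep ESTABLISHED | wc -l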

You can record top (or maybe ps) output periodically to see how fast 
your memory usage grows--is it every 10 days after reboot, or are things 
fine for a while and then whamo! your memory disappears in 3 seconds? 
That may also tell you what processes are using it all so you can 
investigate them further.
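
A rough sketch of that, assuming a root cron job and a made-up log path:

    # snapshot memory usage every 10 minutes (root's crontab)
    */10 * * * * (date; free; ps aux | sort -rn -k 4 | head -15) >> /var/log/mem-snapshot.log

The ps pipeline sorts on the %MEM column, so the top of each snapshot 
shows the biggest consumers, and comparing snapshots over a day or two 
should show whether the growth is slow and steady or sudden.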

What could be the problem?  The usual question about what has changed 
recently applies--maybe your kernel build is bad.  You seem to have a 
lot running on that box, maybe something has a memory leak or just 
behaves badly under load.
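
Once you have a suspect you can keep an eye on just that one process (the 
pid here is made up):

    # print the Vm* lines from /proc for one process, once a minute
    watch -n 60 'grep Vm /proc/1234/status'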

Apache kills its children after a certain number of requests so that if 
they leak memory they don't leak all of it.  The default 
MaxRequestsPerChild of 100 is probably too small, but a gadzillion is too big.
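
In httpd.conf that's a single directive; the number here is only an 
example to tune:

    # recycle each child after this many requests so a leak can't grow forever
    MaxRequestsPerChild 1000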

In Java you can set the maximum heap size so that the VM doesn't use up 
all your memory.  Although Java garbage collection is nice for 
programmers, it may not return free heap memory to the OS (newer VMs may 
be better about this), and it is possible to keep references to objects 
you no longer need so they can never be gc'd.
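
The cap goes on the java command line; the numbers and class name here 
are made up:

    # start with a 64 MB heap and never let it grow past 256 MB
    java -Xms64m -Xmx256m SomeServerClass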

I expect ColdFusion has its own set of memory pitfalls.  Ditto for MySQL.

There are tips on load testing in the O'Reilly "Web Performance Tuning" 
book, maybe that would help.  You can get it from the library but for 
your job it may be worth owning.

It would be interesting to know what's really happening.  Seems like 
Linux should kill off what's using all the memory and keep everything 
else running (which may be what's happening, just not in the order you 
might like).  But I don't know much about Linux behavior when out of 
memory so it would be nice to hear what you find out.

Dave



