Apache Update

Thanks everyone for the helpful comments yesterday on Apache vs. lighttpd. I’ve taken the first few steps towards improving things. I’ve eliminated mod_php5, switched all PHP to FastCGI, and switched from the prefork to the event MPM.
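
For anyone curious about the mechanics, the switch looks roughly like this on a Debian-style Apache 2.2 box. This is only a sketch; the document root and wrapper path are illustrative, not necessarily what I used:

    # sketch: replace mod_php with FastCGI under the event MPM (Debian-style layout)
    #   apt-get install apache2-mpm-event php5-cgi libapache2-mod-fcgid
    #   a2dismod php5 && a2enmod fcgid
    # then hand .php files off to php5-cgi in the vhost config:
    <Directory /var/www>
        Options +ExecCGI
        AddHandler fcgid-script .php
        FcgidWrapper /usr/bin/php5-cgi .php   # spelled FCGIWrapper in older mod_fcgid releases
    </Directory>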

The event MPM docs say that it’s incompatible with mod_ssl, but it worked for me, and a bit of googling turned up mailing list posts from 2006 saying that incompatibility had been fixed.

The only real glitch was that egroupware inexplicably depends on libapache2-mod-php5. It works fine without it, but I had to create a fake libapache2-mod-php5 package that provides libapache2-mod-php5 to convince the system not to remove egroupware when I switched to the event MPM.
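
(If you need to do the same, the usual tool for this sort of dummy package is equivs: feed it a control file along these lines with equivs-build, then install the result with dpkg -i. The field values here are purely illustrative.)

    # equivs control file sketch for a dummy package
    Section: misc
    Priority: optional
    Package: fake-libapache2-mod-php5
    Provides: libapache2-mod-php5
    Description: dummy package satisfying egroupware's dependency
     PHP is actually served via FastCGI (php5-cgi), so mod_php itself
     is not installed.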

I went from 28 Apache processes to 4 Apache processes plus an average of around 2 php5-cgi processes (which, despite the name, actually do grok FastCGI). Three of my Apache processes now have an RSS of about 20M, and the fourth about 30M. The php5-cgi processes are at around 40M. My old Apache processes ranged from 18M to over 40M. So there is some memory savings here, though not drastic.
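
(Those are just eyeballed RSS numbers, gathered with something along the lines of the commands below; as a commenter notes further down, RSS counts shared pages in every process, so treat them as rough.)

    # per-process RSS in kB; shared pages are counted in every process listed
    ps -C apache2 -o pid,rss,comm
    ps -C php5-cgi -o pid,rss,comm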

My Redmine processes, two of them, each use over 50M. Ruby on Rails is still leaving a bad taste in my mouth. At least it’s not a Java app; it seems a lot of those allocate 2GB before they ever start accepting connections.

I’ll see how this goes for a while, but for now I doubt that moving to lighttpd (or similar) will yield enough benefit to make it worth the effort. But there may be some benefit in inserting a caching proxy in front of Apache, so I may yet do that.
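
If I do try the caching proxy, one possibility would be nginx in front of Apache. A rough sketch only; the port, cache path, and sizes are made up:

    # nginx as a caching reverse proxy in front of Apache (sketch only)
    # note: plain proxying does not cache; the proxy_cache* lines have to be explicit
    http {
        proxy_cache_path /var/cache/nginx keys_zone=apache_cache:10m;
        server {
            listen 80;
            location / {
                proxy_pass http://127.0.0.1:8080;   # Apache moved to another port
                proxy_cache apache_cache;
                proxy_cache_valid 200 10m;
            }
        }
    }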

13 thoughts on “Apache Update”

  1. Please file a bug on egroupware about that. Maybe someone will fix it if they adopt it. You might want to be that person, or make a QA upload.

  2. The reverse proxy (like pound or nginx) is not a caching proxy! The name can be a little deceiving, but the only thing they do is manage requests and make sure the right httpd handles each one. They are not supposed to cache anything (well, they usually _can_, but it’s not their reason for existence).

  3. Before blaming Rails, I would suggest you read a bit about deployment with Ruby on Rails.

    I’m currently using Redmine with apache + passenger, in a virtual machine with 256MB and we’re good. I just set Apache and Passenger to use only 2 instances, which works perfectly for us.
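
    Concretely, that is just a Passenger directive or two in the Apache config, something like this (the numbers are simply what fits our small VM):

        # Phusion Passenger: cap the number of application instances kept around
        PassengerMaxPoolSize 2
        PassengerPoolIdleTime 300   # seconds an idle instance may linger before shutdown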

    If you would like to reduce your memory footprint per Redmine instance, you could use Ruby Enterprise Edition, which would save you about 33% of memory per instance…

    We are currently using the worker MPM, but I haven’t tried the event MPM.

    You could possibly get a smaller footprint than 50MB (or even 33% less) with other languages (Perl, PHP, Python), but there would be trade-offs.

    Rails allows the developer to focus on application logic instead of boilerplate code. This helps produce cleaner, tested code, and that is one reason why Redmine is awesome.

    But RAM is really cheap nowadays, and I wouldn’t worry about 10 instances of Redmine in a cluster using less than 500MB, if you are expecting that much traffic…

  4. You do know that using RSS as a measure of memory efficiency is rather useless, right? That the Apache processes may be sharing 20-30M of resident pages, all of which will still appear in RSS?

    As such, if it was a lack of memory that was tipping you over before, it will probably still tip you over now.

      1. PSS (proportional set size, where each page’s size is divided by the number of processes that share that page) from /proc/$PID/smaps is probably a better metric.
        Summing up the other fields, such as Shared_Clean, Shared_Dirty, Private_Clean, and Private_Dirty, can also give you a good idea of the process’s memory usage.
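
        For example, a quick way to total a process’s PSS (the value is in kB):

            # sum the Pss lines from smaps for one process; $PID is the process ID
            awk '/^Pss:/ { total += $2 } END { print total " kB" }' /proc/$PID/smaps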

    1. If Redmine (i.e. Rails) forks a couple of processes to get its work done, then theoretically most of the Rails code that the two processes share should live on copy-on-write pages. In practice, what happens is that the mark-and-sweep garbage collector writes its marks into all of the shared pages, causing them to become dirty and unshared. Ruby Enterprise Edition refactors the garbage collector so that the marking is done on a few pages kept separate from the rest of the heap. That way all of the shared Rails code, which isn’t going to change and isn’t going to get garbage collected, stays shared between the two processes.

      (If you launch the two Redmine processes separately, then you’re SOL, but I doubt that’s why you have two Redmine processes running.)

  5. Would suggest looking into running your Rails apps with Ruby Enterprise Edition, as well as mod_rack aka mod_rails. Have done the same thing on my slice, switching to the event MPM. Would also consider running Apache on another port with nginx or even lighttpd in front to cache/serve static files. The less Apache has to spin up processes, the more you’re going to save in the long run.

  6. You could also try JRuby or Ruby 1.9 in the hope that they would reduce Rails memory usage by using Java threads or native threads, respectively. JRuby has a higher baseline footprint, but its memory usage will probably not grow very much with more HTTP requests… I haven’t tested either of them, but then I really don’t have any problems with memory at work… Give it a try.
