
An Introduction to Apache 2.0

By Ryan Bloom
Posted May 28, 2000


Most people who follow Apache development with any regularity know that the Apache Group has recently been focusing on Apache 2.0. There have been many changes to the Apache code since Version 1.3. Some of these changes make an administrator's job easier and some make it harder; however, the changes are all designed to make Apache the most flexible and portable Web server available. This column will try to explain some of the new concepts that Apache 2.0 introduces and how it differs from 1.3.

Apache 2.0 has already been through three alpha releases. In this article, Ryan Bloom of the Apache Group previews Apache 2.0 and explains why it will make life easier for every Webmaster on the Internet.

Multi-Processing Modules
The first major change in Apache 2.0 is the introduction of Multi-Processing Modules (MPMs). To understand the need for MPMs, it helps to look at how Apache 1.3 works. Apache 1.3 is a pre-forking server: when Apache is started, the original process forks a specified number of copies of itself, and those copies actually handle the requests. As more requests come in, more copies are forked. The original process does nothing other than monitor the new processes to make sure there are enough of them. This model works well on Unix variants and most mainframes, but it doesn't work as well on Windows. The original Windows port rewrote the section of code that created the child processes; on Windows, this section created just one child process, which then used multiple threads to serve the requests. This separation between Unix and Windows was done with #ifdefs, making the code very hard to maintain.
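To make the pre-forking model concrete, here is a stripped-down sketch of a pre-forking server in C. This is not Apache's code; the child count, port, and one-line response are invented for illustration. The division of labor, though, is the same: the children sit in accept() and serve requests, while the parent only forks them and watches over them.

#include <arpa/inet.h>
#include <netinet/in.h>
#include <stdlib.h>
#include <string.h>
#include <sys/socket.h>
#include <sys/wait.h>
#include <unistd.h>

#define NUM_CHILDREN 5          /* the "StartServers" of this toy server */
#define PORT 8080

static void child_main(int listen_fd)
{
    for (;;) {
        int conn = accept(listen_fd, NULL, NULL);   /* block until a request arrives */
        if (conn < 0)
            continue;
        const char *resp = "HTTP/1.0 200 OK\r\nContent-Length: 3\r\n\r\nok\n";
        write(conn, resp, strlen(resp));            /* "handle" the request */
        close(conn);
    }
}

int main(void)
{
    struct sockaddr_in addr;
    int one = 1;
    int listen_fd = socket(AF_INET, SOCK_STREAM, 0);

    setsockopt(listen_fd, SOL_SOCKET, SO_REUSEADDR, &one, sizeof(one));
    memset(&addr, 0, sizeof(addr));
    addr.sin_family = AF_INET;
    addr.sin_addr.s_addr = htonl(INADDR_ANY);
    addr.sin_port = htons(PORT);
    bind(listen_fd, (struct sockaddr *)&addr, sizeof(addr));
    listen(listen_fd, 128);

    /* Pre-fork the children that will actually serve requests. */
    for (int i = 0; i < NUM_CHILDREN; i++) {
        if (fork() == 0) {
            child_main(listen_fd);
            exit(0);
        }
    }

    /* The parent does nothing but monitor: when a child dies, replace it. */
    for (;;) {
        if (wait(NULL) > 0 && fork() == 0) {
            child_main(listen_fd);
            exit(0);
        }
    }
}

In the real server this parent/child logic is where the Unix and Windows code paths diverged in 1.3, which is exactly the part Apache 2.0 factors out into an MPM.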

When work started on Apache 2.0, the Apache Group had many goals. One was to support every platform that 1.3 supported; another was to add new platforms. As work began, the developers realized that these goals were unreachable if all of the code was shared between platforms; an abstraction layer was necessary if the project was going to be manageable. From this realization, MPMs were born. The basic job of an MPM is to start the server processes and map incoming requests onto an execution primitive. Whether that primitive is a thread or a process is left up to the MPM developer, and the decision should be based on which primitive the target platform supports best.
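The following toy sketch is not Apache's actual MPM interface, but it shows the idea behind the abstraction. The server core talks only to a small table of function pointers, and whatever sits behind that table decides whether a connection is handed to a new process or a new thread. All the names here (mpm, handle_connection, and so on) are made up for illustration.

#include <pthread.h>
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>

typedef struct mpm {
    const char *name;
    /* Map one incoming connection onto an execution primitive. */
    void (*handle_connection)(int conn_fd);
} mpm;

static void serve(int conn_fd)
{
    printf("serving fd %d in pid %d\n", conn_fd, (int)getpid());
}

/* A prefork-style "MPM": each connection is served by a separate process. */
static void prefork_handle(int conn_fd)
{
    if (fork() == 0) {
        serve(conn_fd);
        exit(0);
    }
}

/* A threaded "MPM": each connection is served by a separate thread. */
static void *thread_start(void *arg)
{
    serve((int)(long)arg);
    return NULL;
}

static void threaded_handle(int conn_fd)
{
    pthread_t tid;
    pthread_create(&tid, NULL, thread_start, (void *)(long)conn_fd);
    pthread_detach(tid);
}

static const mpm prefork_mpm  = { "prefork",  prefork_handle  };
static const mpm threaded_mpm = { "threaded", threaded_handle };

int main(void)
{
    /* The core is built against exactly one MPM; swapping the table,
     * not the server code, changes the concurrency model. */
    const mpm *active = &threaded_mpm;
    for (int fd = 3; fd < 6; fd++)      /* pretend file descriptors */
        active->handle_connection(fd);
    sleep(1);                           /* let the threads finish */
    return 0;
}

Compile with -pthread. The point of the exercise is that main() never changes; only the table it is compiled against does, which is roughly the relationship between the Apache 2.0 core and its MPMs.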

Each of these MPMs (prefork, mpmt_pthread, and dexter) has strengths and weaknesses. For example, the prefork MPM will be more robust than either of the two hybrid MPMs on the same platform. The reason is simple: if a child process terminates unexpectedly, connections will be lost, and how many are lost depends on which MPM is used. With the prefork MPM, one connection will be lost. With the mpmt_pthread MPM, no more than 1/n of the connections will be lost, where n is the number of child processes. With the dexter MPM, the number of lost connections depends on the OS the server is running on. This robustness comes at a price, however: scalability. The prefork MPM is the least scalable, followed by mpmt_pthread, and then dexter. Which MPM to use depends on what the site requires. If a site must run a lot of untrusted third-party modules, it should most likely use the prefork MPM, because an unstable module will do the least damage there. However, if a site will do nothing but serve static Web pages, doesn't require any modules, and needs to handle thousands of hits per second, then dexter is probably the correct choice.
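To see what that 1/n figure means in practice, here is a quick back-of-the-envelope comparison. The numbers (250 prefork children versus 10 threaded children running 25 threads each) are invented purely for illustration.

#include <stdio.h>

int main(void)
{
    int prefork_procs    = 250; /* prefork: 250 children, 1 connection each */
    int hybrid_procs     = 10;  /* mpmt_pthread-style: 10 children ...      */
    int threads_per_proc = 25;  /* ... with 25 threads each                 */

    int total = prefork_procs * 1;  /* 250 concurrent connections either way */

    /* prefork: a crash costs exactly the one connection that child held. */
    printf("prefork:      lose %d of %d connections\n", 1, total);

    /* hybrid: a crash costs every connection in that child, i.e. up to
     * 1/n of the total, where n is the number of child processes. */
    printf("mpmt_pthread: lose up to %d of %d connections (1/%d)\n",
           threads_per_proc, hybrid_procs * threads_per_proc, hybrid_procs);
    return 0;
}

With these made-up numbers, a single crash costs one connection out of 250 under prefork, but as many as 25 of 250 under the threaded hybrid.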
