HTTP Compression Speeds up the Web

A browser that is capable of receiving compressed content indicates this in all of its requests for documents by supplying the following request header field when it asks for something:

  Accept-Encoding: gzip, compress
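As a sketch (not taken from the article), the exchange looks like this: the browser advertises the encodings it can decode, and a cooperating server compresses the body and labels it with the Content-Encoding response header. The headers and page content below are hypothetical.

```python
import gzip

# Hypothetical request: the browser advertises what it can decode.
request_headers = {"Accept-Encoding": "gzip, compress"}

original = b"<html><body>Hello, compressed web!</body></html>"

# A server that honors the header replies with a gzip'd body and
# labels it via the Content-Encoding response header.
response_headers = {"Content-Encoding": "gzip"}
response_body = gzip.compress(original)

# The browser inflates the body before rendering it.
if response_headers.get("Content-Encoding") == "gzip":
    page = gzip.decompress(response_body)
```

The compressed body is typically a fraction of the original's size, which is the entire point of the negotiation.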

  • When the Web Server sees that request field, it knows that the browser can receive compressed data in one of only two formats: either standard GZIP or the UNIX "compress" format. It is then up to the Server to compress the response data using one of those methods (if it is capable of doing so).

  • If a compressed static version of the requested document is found on the Web Server's hard drive in one of the formats the browser says it can handle, the Server can simply send the pre-compressed version of the document instead of the much larger uncompressed original.

  • If no static document matching any of the compressed formats the browser says it can "Accept" is found on the disk, the Server can either send the original uncompressed version of the document or attempt to compress it in "real-time" and send the newly compressed, much smaller version back to the browser.

    Most popular Web Servers are still unable to do this final step.

    • The Apache Web Server, which has 61 percent of the Web Server market, is still incapable of providing any real-time compression of requested documents, even though all modern browsers have been requesting compressed content, and have been capable of receiving it, for more than two years.

    • Microsoft's Internet Information Server is equally deficient: if it finds a pre-compressed version of a requested document it might send it, but it has no real-time compression capability.

      IIS 5.0 uses an ISAPI filter to support GZIP compression. It works as follows: the user requests a page; the server sends the page and then stores a compressed copy of it in a temporary folder. The next time a user requests the page, the server sends the copy stored in the temp directory.

      The filter then tries to keep the pages in the temp directory current at all times; if one is stale, it fetches the current page and compresses it again.

    • IBM's WebSphere Server has some limited support for real-time compression, but the feature has "appeared" and "disappeared" across various release versions of WebSphere.

    • The very popular Squid proxy server from NLANR also has no dynamic compression capabilities, even though it is the de-facto standard proxy-caching software used just about everywhere on the Internet.

    The original designers of the HTTP protocol did not foresee today's reality, in which so many people use the protocol that every single byte counts. The heavy use of pre-compressed graphics formats such as .GIF, and the relative difficulty of further reducing that graphics content, make it even more important that all other exchange formats be optimized as much as possible.

    The same designers also did not foresee that most HTTP content from major online vendors would be generated dynamically, so there is often no chance for a "static" compressed version of the requested document(s) to exist. The public IETF Content-Encoding mechanism is still not a "complete" specification for the reduction of Internet content, but it does work, and the performance benefits achieved by using it are both obvious and dramatic.

    What is GZIP?

  • This article was originally published on Oct 13, 2000
