
Improving mod_perl Driven Site's Performance -- Part II: Benchmarking Applications Page 2

By Stas Bekman
Posted Dec 16, 2000


Here are the numbers from Michael Parker's mod_perl presentation at the Perl Conference (August 1998). The script is a standard hit counter, but it logs the counts into a MySQL relational database:

    Benchmark: timing 100 iterations of cgi, perl...  [rate 1:28]

    cgi: 56 secs ( 0.33 usr 0.28 sys = 0.61 cpu)
    perl: 2 secs ( 0.31 usr 0.27 sys = 0.58 cpu)

    Benchmark: timing 1000 iterations of cgi,perl...  [rate 1:21]

    cgi: 567 secs ( 3.27 usr 2.83 sys = 6.10 cpu)
    perl: 26 secs ( 3.11 usr 2.53 sys = 5.64 cpu)

    Benchmark: timing 10000 iterations of cgi, perl   [rate 1:21]

    cgi: 6494 secs (34.87 usr 26.68 sys = 61.55 cpu)
    perl: 299 secs (32.51 usr 23.98 sys = 56.49 cpu)

We don't know what server configurations were used for these tests, but I guess the numbers speak for themselves.

The source code of the script used to be available online, but it isn't anymore :( Still, you can reproduce a similar performance speedup with pretty much any CGI script written in Perl; a hypothetical counter along the same lines is sketched below.
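For illustration only, such a counter might look roughly like the following. This is not the original script; the DSN, table layout, and credentials are invented for the sketch:

  #!/usr/bin/perl -w
  # counter.pl -- a hypothetical hit counter in the spirit of the benchmarked
  # script: it increments a per-page counter stored in a MySQL table and
  # prints the new value.  The DSN, table and credentials are made up.
  use strict;
  use DBI;

  my $dbh = DBI->connect('DBI:mysql:database=counters;host=localhost',
                         'counter_user', 'secret', { RaiseError => 1 });

  # Assumes: CREATE TABLE hits (page VARCHAR(255) PRIMARY KEY, count INT)
  my $page = $ENV{SCRIPT_NAME} || '/';
  $dbh->do('INSERT INTO hits (page, count) VALUES (?, 1)
            ON DUPLICATE KEY UPDATE count = count + 1', undef, $page);
  my ($count) = $dbh->selectrow_array(
      'SELECT count FROM hits WHERE page = ?', undef, $page);
  $dbh->disconnect;

  print "Content-type: text/plain\n\n";
  print "This page has been viewed $count times\n";

Run as a plain CGI script and then under mod_perl (e.g. Apache::Registry), a script of this kind should show the same kind of gap as in the table above.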

Benchmarking Response Times with ApacheBench

ApacheBench (ab) is a tool for benchmarking your Apache HTTP server. It is designed to give you an idea of how your current Apache installation performs. In particular, it shows you how many requests per second your Apache server is capable of serving. The ab tool comes bundled with the Apache source distribution.

Let's try it. We will simulate 10 users concurrently requesting a very light script at www.example.com/perl/test.pl. Each simulated user makes 10 requests.
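Any trivial script will do as the target. A minimal sketch of such a test.pl (its body is made up here, so your document length will differ from the one reported below) could be:

  #!/usr/bin/perl -w
  # test.pl -- a deliberately light script to benchmark, e.g. run under
  # Apache::Registry.  The body is invented; any tiny response will do.
  use strict;

  print "Content-type: text/plain\n\n";
  print "Hello from test.pl, the time is ", scalar localtime, "\n";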


  % ./ab -n 100 -c 10 http://www.example.com/perl/test.pl

The results are:

  Document Path:          /perl/test.pl
  Document Length:        319 bytes

  Concurrency Level:      10
  Time taken for tests:   0.715 seconds
  Complete requests:      100
  Failed requests:        0
  Total transferred:      60700 bytes
  HTML transferred:       31900 bytes
  Requests per second:    139.86
  Transfer rate:          84.90 kb/s received

  Connection Times (ms)
                min   avg   max
  Connect:        0     0     3
  Processing:    13    67    71
  Total:         13    67    74

We can see that under a load of ten concurrent users our server is capable of processing 140 requests per second. Of course, this figure is valid only for the particular script under test. We can also learn the average processing time, which in this case was 67 milliseconds. Other numbers reported by ab may or may not be of interest to you.

For example, if we believe that the script /perl/test.pl is not efficient, we can try to improve it and run the benchmark again to see whether the change yields any improvement in performance.
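To make such before/after comparisons easier, you can wrap ab in a tiny script that extracts just the requests-per-second figure. This is merely a convenience sketch; it assumes ab is in your PATH and that its report contains the "Requests per second" line shown above:

  #!/usr/bin/perl -w
  # bench.pl -- run ab several times against one URL and print only the
  # requests-per-second figure from each run, to ease before/after
  # comparisons.  Assumes ab is in the PATH.
  use strict;

  my $url  = shift || 'http://www.example.com/perl/test.pl';
  my $runs = 3;

  for my $run (1 .. $runs) {
      my $report = `ab -n 100 -c 10 $url 2>&1`;
      my ($rps)  = $report =~ /Requests per second:\s+([\d.]+)/;
      printf "run %d: %s requests/sec\n", $run, defined $rps ? $rps : 'n/a';
  }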

Benchmarking Response Times with httperf

httperf is a utility written by David Mosberger. Just like ApacheBench, it measures the performance of the web server.

A sample command line is shown below:

  httperf --server hostname --port 80 --uri /test.html \
          --rate 150 --num-conn 27000 --num-call 1 --timeout 5

This command causes httperf to use the web server on the host named hostname, listening on port 80. The web page being retrieved is /test.html and, in this simple test, the same page is retrieved repeatedly. The rate at which requests are issued is 150 per second. The test involves initiating a total of 27,000 TCP connections, and on each connection one HTTP call is performed. A call consists of sending a request and receiving a reply.
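In practice, runs like this are usually repeated at several request rates to find the point where the server stops keeping up with the offered load. A rough sketch of such a sweep, reusing only the options shown above (the host, URI, and rates are placeholders):

  #!/usr/bin/perl -w
  # sweep.pl -- run httperf at a few increasing request rates against the
  # same URI, so the offered and achieved rates can be compared by hand.
  # Host, URI and rates are placeholders for this sketch.
  use strict;

  my $host = 'hostname';
  my $uri  = '/test.html';

  for my $rate (50, 100, 150) {
      print "=== offered rate: $rate requests/sec ===\n";
      system('httperf', '--server', $host, '--port', 80,
             '--uri', $uri, '--rate', $rate,
             '--num-conn', 10 * $rate,   # keep each run about 10 seconds long
             '--num-call', 1, '--timeout', 5) == 0
          or warn "httperf exited with status $?\n";
  }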


