
Staying Out of Deep Water: Performance Testing Using HTTPD-Test’s Flood Page 2

Configuring Flood

Flood is configured through an XML file that defines the parameters for testing the Web site. When testing, Flood uses a profile, which defines how a given list of URLs is accessed. Requests are generated by one or more farmers, which are in turn members of one or more farms. You can see this more clearly in the illustration below.

As illustrated in the graphic, we have one farm, which specifies two sets of five farmers. Farmer Joe
uses ProfileA and a list of five URLs, and farmer Bob uses ProfileB with a list of three URLs. The farmers request the URLs directly from the Web server. Flood uses threads to create the farmers, and then collates the information the farmers collect into a single data file for later processing.

The XML file contains definitions for four main elements: the URL lists, profiles, farmers and farms.

The URL list is just that: a list of URLs to be accessed. URLs can be straight requests or specific request types (GET, HEAD, and POST are supported, as is the ability to supply accompanying data for dynamically driven sites).
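As a rough sketch, modelled on the sample files shipped with Flood (the hostnames here are placeholders, and the exact element and attribute names should be checked against the DTD in your distribution), a URL list looks something like this:

<urllist>
  <name>Test Hosts</name>
  <description>Pages to request during the test</description>
  <url>http://test.example.com/index.html</url>
  <url>http://test.example.com/java.html</url>
  <url method="POST" payload="name=value">http://test.example.com/form.cgi</url>
</urllist>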

Profiles define which URL list to use, how its URLs should be accessed, what type of socket to use, and how the information should be reported.
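A profile, again sketched along the lines of the sample configurations, ties a named URL list to an access pattern, socket type, and report format:

<profile>
  <name>SimpleProfile</name>
  <description>Round-robin through the URL list</description>
  <useurllist>Test Hosts</useurllist>
  <profiletype>round_robin</profiletype>
  <socket>generic</socket>
  <report>relative_times</report>
</profile>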

Farmers are responsible for the actual request process. The only configurable elements are the profile to use and the number of times to process the profile. Profiles are executed sequentially by each farmer but can be repeated, so you would end up accessing, for example, urla, urlb, urla, urlb, and so on.
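A farmer definition is correspondingly brief, something like the following, where a count of 10 means the farmer works through the profile (and hence its URL list) ten times over:

<farmer>
  <name>Joe</name>
  <count>10</count>
  <useprofile>SimpleProfile</useprofile>
</farmer>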

Farms specify the number of farmers to create and when. By increasing the number of
farmers created by a farm, the number of simultaneous requests is increased. Additional
settings enable you to create a number of initial farmers, and then increase that number at regular intervals. For example, you could initially create two farmers, then add a new farmer every five seconds up to a maximum of 20. Depending on your URL list and server performance, this could result in a slow rise to 20 simultaneous accesses for a given period, and then a slow fall back to zero. Alternatively, it could give the effect of a regular number of users accessing a set number of pages for a longer duration, with peaks of five or six simultaneous requests.

Note: The current version of Flood supports only one Farm, and it must be called ‘Bingo’. You can, however, specify multiple farmer definitions within the single farm, which achieves the same basic effect.
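Putting that note into practice, a farm definition along the following lines reproduces the earlier example: start with two farmers and add another every five seconds up to a total of twenty. The count, startcount, and startdelay attributes are taken from the sample configurations, so verify them against your copy of Flood:

<farm>
  <name>Bingo</name>
  <usefarmer count="20" startcount="2" startdelay="5">Joe</usefarmer>
</farm>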

By tuning the farm, farmer, and URL list parameters, you can control the number of requests, the number of simultaneous requests, the overall duration (as a function of the URL list, repeat count, and number of farmers), and how the requests are spread over the duration of the test. This lets you test very specific situations.
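As a rough illustration, a five-URL list, a farmer repeat count of 10, and a farm of 20 farmers would generate in the region of 5 x 10 x 20 = 1,000 requests in total, with at most 20 of them in flight at any one time.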

The three basic (and rough) rules of configuration with Flood to remember are:

  • The URL list defines what your farmers visit.
  • The repeat count for a farmer defines a number of users accessing your site.
  • The farmer count for a farm defines the number of simultaneous users.

A sample configuration for Flood is included in the examples folder of the distribution; round-robin.xml is probably the easiest one to start with. This article, however, will not discuss the specifics of editing the XML, or even of processing the data file generated.
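(For reference, running a test is simply a matter of passing the configuration file to the flood binary and capturing its output, typically something along the lines of flood round-robin.xml > results.out, with the analysis scripts from the examples directory then run against the captured output.)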

Instead, we will examine how to tune the parameters to test different types of Web sites. To help you understand the implications of the next section, here's a quick look at the results of the analyze-relative script from the examples directory. In this case, it shows the results of a test on an internal server:

Slowest pages on average (worst 5):
   Average times (sec)
connect write   read    close   hits   URL
0.0022  0.0034  0.0268  0.0280  100    http://test.mcslp.pri/java.html
0.0020  0.0028  0.0183  0.0190  700    http://www.mcslp.pri/
0.0019  0.0033  0.0109  0.0120  100    http://test.mcslp.pri/random.html
0.0022  0.0031  0.0089  0.0107  100    http://test.mcslp.pri/testr.html
0.0019  0.0029  0.0087  0.0096  100    http://test.mcslp.pri/index.html
Requests: 1200 Time: 0.14 Req/Sec: 9454.08

From these results you can see the average connect, write (request), read (response), and close times for a single page. You also get a basic idea of the number of requests handled per second by the server.
