The Perl You Need to Know: Benchmarking Perl (Page 6)
          print timestr(timediff($t1, $t0)), "\n";
        }

Because the timer variables are declared with my, you can re-use $t0 and $t1 inside each
subroutine without conflict. Remember to modify the label for each benchmark so that you can
tell which subroutine each measurement applies to. When the script runs in a Web
environment, for instance, the resulting Web page will contain a rundown of each subroutine
and its execution time. Because this technique does not perform many iterations, invoke the
script several times to get a sense of how consistent the measurements are.
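Putting the pieces together, a complete "stopwatch" wrapper following the pattern above might look like this (the subroutine name and its workload are placeholders standing in for real application code):

```perl
#!/usr/bin/perl
use strict;
use warnings;
use Benchmark qw(timediff timestr);

# Take a timestamp before and after the work, then report the
# difference with a label identifying the subroutine.
sub build_report {
    my $t0 = Benchmark->new;

    my $total = 0;
    $total += $_ for 1 .. 100_000;    # stand-in for real work

    my $t1 = Benchmark->new;
    print "build_report: ", timestr(timediff($t1, $t0)), "\n";
    return $total;
}

build_report();
```

Each subroutine you instrument gets its own label in the print statement, so the output identifies which measurement belongs to which routine.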

Ultimately, the goal of adding stopwatches to your script is to find any abnormally slow
subroutines or sections of code. This diagnosis points you to trouble spots quickly. In the
long run, focusing on the unusually slow sections of code will yield greater efficiency than
attempting to benchmark alternatives for every routine in the script.

Troubleshooting in Context

Finding a slow segment of code is one thing; identifying the hold-up, and a solution, can be
quite another. As earlier examples showed, the way you use the Perl language itself is often
the source of slow performance. There can be a variety of ways to address a problem in Perl,
and some are much faster than others. As a general rule of thumb, Perl performs faster when
you rely on built-in functions; for example, we saw that the map function was much faster than
a foreach loop of our own. The lesson here is "don't re-invent the wheel": let Perl do the
work whenever possible.
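As a quick sanity check of that rule of thumb, the Benchmark module's cmpthese function can race the two approaches directly; the relative speeds it reports will vary with your Perl version and data size, so treat this as a sketch rather than a definitive result:

```perl
#!/usr/bin/perl
use strict;
use warnings;
use Benchmark qw(cmpthese);

my @nums = (1 .. 100);

# Run each sub 20,000 times and print a comparison chart
# showing iterations per second and relative speed.
cmpthese(20_000, {
    'map'     => sub {
        my @squares = map { $_ * $_ } @nums;
    },
    'foreach' => sub {
        my @squares;
        foreach my $n (@nums) {
            push @squares, $n * $n;
        }
    },
});
```

cmpthese prints a table comparing the two labeled subs, which is a convenient way to settle "which idiom is faster here?" questions empirically.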

Equally important to identifying slow Perl syntax, though, are considerations beyond the
language itself. For example, if a particular subroutine is accessing data, consider where the
data is stored. Data retrieved from disk will likely be slower than data retrieved from memory.
Beyond that, the retrieval time of data on disk can vary considerably, depending on whether
the data is stored in a simple, slow database or a fast, indexed one. In any case, the delay in
retrieving data will inflate the execution time of such a subroutine through no fault of the Perl
code itself; the blame lies with the larger architecture.
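One common remedy is to pay the disk cost once and serve later lookups from memory. Here is a minimal sketch of that idea, assuming a tab-delimited data file; the file name, format, and the lookup subroutine are all hypothetical:

```perl
#!/usr/bin/perl
use strict;
use warnings;

# First lookup for a given file reads it from disk and loads a
# hash; every later lookup is served from the in-memory cache.
my %cache;

sub lookup {
    my ($file, $key) = @_;
    unless (exists $cache{$file}) {
        open my $fh, '<', $file or die "Cannot open $file: $!";
        while (my $line = <$fh>) {
            chomp $line;
            my ($k, $v) = split /\t/, $line, 2;
            $cache{$file}{$k} = $v;
        }
        close $fh;
    }
    return $cache{$file}{$key};
}
```

A stopwatch around lookup would show the first call dominated by disk I/O and subsequent calls running far faster, which is exactly the kind of context a raw timing number cannot convey on its own.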

Similarly, when a Perl subroutine retrieves data from a database, does it establish a connection
to the database on each invocation, or can it rely on an already-established connection
(known as a "persistent connection")? Connection setup is another bottleneck that can inflate
the execution time of a segment of code.
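A minimal sketch of the persistent-connection idea uses DBI's connect_cached, which returns the existing live handle when called again with identical arguments instead of reconnecting; the DSN and the users table here are placeholders, and under mod_perl the Apache::DBI module provides a similar effect transparently:

```perl
#!/usr/bin/perl
use strict;
use warnings;
use DBI;

sub get_dbh {
    # connect_cached returns the same live handle on repeat
    # calls with identical arguments, avoiding the cost of a
    # fresh connection on every invocation.
    return DBI->connect_cached(
        'dbi:SQLite:dbname=:memory:', '', '',   # placeholder DSN
        { RaiseError => 1 },
    );
}

sub fetch_user {
    my ($id) = @_;
    my $sth = get_dbh()->prepare(
        'SELECT name FROM users WHERE id = ?'
    );
    $sth->execute($id);
    my ($name) = $sth->fetchrow_array;
    return $name;
}
```

With this structure, only the first call pays the connection cost; a stopwatch around fetch_user will then measure the query itself rather than connection overhead.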

Portions of code that make network requests are subject to all of the delays inherent in
networks. Once again, reducing these bottlenecks may not involve changing the Perl code at
all, but changing the architecture of the system, whether that means other pieces of software
or the hardware itself. Ultimately, the point is that measuring the execution time of code is
important, but interpreting it in context is even more important: the code itself may, or may
not, be to blame for a bottleneck.
