Howto: Performance Benchmark a Web Server

Published June 9, 2006 · 34 comments · Last updated November 13, 2008


You can benchmark Apache, IIS and other web servers with the Apache benchmarking tool called ab. Recently I was asked to run performance benchmarks for different web servers.

It is true that benchmarking a web server is not an easy task. From how to benchmark a web server:

"First, benchmarking a web server is not an easy thing. To benchmark a web server, the time it takes to serve a single page is not important: you don't care if a user can have his page in 0.1 ms or in 0.05 ms, as nobody can notice such delays over the Internet.

What is important is the average time it takes when you have the maximum number of users on your site simultaneously. Another important thing is how much more time it takes when there are 2 times more users: a server that takes 2 times longer for 2 times more users is better than another that takes 4 times longer for the same amount of users."

Here are a few tips for carrying out the procedure, along with an example:

Apache Benchmark Procedures

  • Use the same hardware configuration and kernel (OS) for all tests
  • Use the same network configuration. For example, use a 100Mbps port for all tests
  • First, record the server load using the top or uptime command
  • Take at least 3-5 readings and use the best result
  • After each test, reboot the server and carry out the test on the next configuration (web server)
  • Again, record the server load using the top or uptime command
  • Carry out tests using both static html/php files and dynamic pages
  • It is also important to test with both Non-KeepAlive and KeepAlive (the Keep-Alive extension provides long-lived HTTP sessions, which allow multiple requests to be sent over the same TCP connection)
  • Also don't forget to carry out tests using fast-cgi and/or perl
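The checklist above can be sketched as a small wrapper script. This is only a sketch: the URL, request count and concurrency are example values, and the actual ab invocation is left commented out so the script is safe to dry-run on a box without a target server:

```shell
#!/bin/sh
# Wrap the benchmarking checklist in a repeatable script.
# The URL, request count and concurrency are example values.
URL="http://202.54.200.1/snkpage.html"
AB_CMD="ab -n 1000 -c 5 $URL"

echo "=== server load before test ==="
uptime 2>/dev/null || true

for run in 1 2 3; do
    echo "--- run $run: $AB_CMD ---"
    # Uncomment on a real test client (keep each result for a best-of-N pick):
    # $AB_CMD > "run$run.txt"
done

echo "=== server load after test ==="
uptime 2>/dev/null || true
```

Between web server configurations, reboot and rerun the same script so every candidate is measured under identical conditions.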

Webserver Benchmark Examples:

Let us see how to benchmark Apache 2.2 and lighttpd 1.4.xx web servers.

Static Non-KeepAlive test for Apache web server

i) Note down the server load using the uptime command:
$ uptime

ii) Create a small static html page as follows (snkpage.html) in /var/www/html (or use your own webroot), assuming that the server IP is 202.54.200.1:

<!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 4.0 Transitional//EN">
<html>
<head>
<title>Webserver test</title>
</head>
<body>
This is a webserver test page.
</body>
</html>
 

Log in to a Linux/BSD desktop computer and type the following command:
$ ab -n 1000 -c 5 http://202.54.200.1/snkpage.html
Where,

  • -n 1000: ab will send 1000 requests to the server 202.54.200.1 for the benchmarking session
  • -c 5: the concurrency level, i.e. ab will keep 5 requests outstanding at a time against the server 202.54.200.1

For example, if you want to send 10 requests, type the following command:
$ ab -n 10 -c 2 http://www.somewhere.com/

Output:

This is ApacheBench, Version 2.0.41-dev <$Revision: 1.141 $> apache-2.0
Copyright (c) 1996 Adam Twiss, Zeus Technology Ltd, http://www.zeustech.net/
Copyright (c) 1998-2002 The Apache Software Foundation, http://www.apache.org/
Benchmarking www.somewhere.com (be patient).....done
Server Software:
Server Hostname:        www.somewhere.com
Server Port:            80
Document Path:          /
Document Length:        16289 bytes
Concurrency Level:      1
Time taken for tests:   16.885975 seconds
Complete requests:      10
Failed requests:        0
Write errors:           0
Total transferred:      166570 bytes
HTML transferred:       162890 bytes
Requests per second:    0.59 [#/sec] (mean)
Time per request:       1688.597 [ms] (mean)
Time per request:       1688.597 [ms] (mean, across all concurrent requests)
Transfer rate:          9.59 [Kbytes/sec] received
Connection Times (ms)
              min  mean[+/-sd] median   max
Connect:      353  375  16.1    386     391
Processing:  1240 1312  52.1   1339    1369
Waiting:      449  472  16.2    476     499
Total:       1593 1687  67.7   1730    1756
Percentage of the requests served within a certain time (ms)
  50%   1730
  66%   1733
  75%   1741
  80%   1753
  90%   1756
  95%   1756
  98%   1756
  99%   1756
 100%   1756 (longest request)

Repeat the above command 3-5 times and save the best reading.
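Picking the best of several readings can be scripted. The sketch below fabricates three sample run files in the shape of real ab output (the numbers are made up), then extracts the highest requests-per-second figure:

```shell
#!/bin/sh
# Pick the best "Requests per second" reading out of several saved runs.
# The three files generated below stand in for real "ab ... > runN.txt" output.
for i in 1 2 3; do
    printf 'Requests per second:    %s [#/sec] (mean)\n' "$((300 + i * 25)).50" > "run$i.txt"
done

# Field 4 of the "Requests per second" line is the figure itself.
best=$(grep -h 'Requests per second' run[123].txt | awk '{ print $4 }' | sort -rn | head -1)
echo "best of 3 runs: $best requests/sec"
```

With real runs you would replace the printf loop with actual `ab ... > run$i.txt` invocations and keep the extraction pipeline as-is.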

Static Non-KeepAlive test for lighttpd web server

First, reboot the server:
# reboot

Stop the Apache web server. Now configure lighttpd, copy /var/www/html/snkpage.html to the lighttpd webroot, and run the command (from another Linux/BSD system):
$ ab -n 1000 -c 5 http://202.54.200.1/snkpage.html

iii) Plot a graph using a spreadsheet program or gnuplot.
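As a sketch of this step, the best readings for each concurrency level can be collected into a CSV and fed to gnuplot. The figures below are made-up placeholders (columns: concurrency, apache req/sec, lighttpd req/sec), and the plot only runs when gnuplot happens to be installed:

```shell
#!/bin/sh
# Collect best-of-N readings per concurrency level into a CSV.
# Columns: concurrency, apache req/sec, lighttpd req/sec (placeholder data).
cat > results.csv <<'EOF'
1,210,260
5,195,250
10,180,240
EOF

# Plot to a PNG if gnuplot is available on this machine.
if command -v gnuplot >/dev/null 2>&1; then
    gnuplot <<'EOF'
set datafile separator ","
set terminal png
set output "bench.png"
plot "results.csv" using 1:2 with lines title "apache", \
     "results.csv" using 1:3 with lines title "lighttpd"
EOF
fi
```

The same CSV also imports directly into any spreadsheet program.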

How do I carry out a web server static KeepAlive test?

Use the -k option to enable the HTTP KeepAlive feature of the ab test tool. For example:
$ ab -k -n 1000 -c 5 http://202.54.200.1/snkpage.html

Use the above procedure with php, fast-cgi and dynamic pages to benchmark the web server.

Please note that 1000 requests is a small number; you need to send a bigger number of requests (i.e. the number of hits you want to test). For example, the following command will send 50000 requests:
$ ab -k -n 50000 -c 2 http://202.54.200.1/snkpage.html

How do I save the result as comma separated values?

Use the -e option, which writes a comma separated values (CSV) file containing, for each percentage (from 1% to 100%), the time (in milliseconds) it took to serve that percentage of the requests:
$ ab -k -n 50000 -c 2 -e apache2r1.csv http://202.54.200.1/snkpage.html
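The -e file is easy to post-process. A minimal sketch, using a few fabricated rows in the same percentage,milliseconds layout, that pulls out the median (50th percentile) service time:

```shell
#!/bin/sh
# ab -e writes "Percentage served,Time in ms" rows, one per percentile.
# The rows below are fabricated samples in that layout.
cat > apache2r1.csv <<'EOF'
Percentage served,Time in ms
50,1730
95,1756
99,1756
EOF

# The 50% row is the median response time.
median=$(awk -F, '$1 == 50 { print $2 }' apache2r1.csv)
echo "median response time: ${median} ms"
```

Run the same extraction over one CSV per server configuration to compare medians side by side.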

How do I import the result into Excel or gnuplot so that I can create graphs?

Use the above command, or the -g option as follows:
$ ab -k -n 50000 -c 2 -g apache2r3.txt http://202.54.200.1/snkpage.html
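The -g file is tab-separated, one row per request, with a ttime column holding the total time in milliseconds. A sketch that averages that column (the two data rows below are fabricated samples mimicking the layout):

```shell
#!/bin/sh
# ab -g writes tab-separated rows with columns: starttime, seconds,
# ctime, dtime, ttime (total ms), wait. The data rows are fabricated.
printf 'starttime\tseconds\tctime\tdtime\tttime\twait\n'  > apache2r3.txt
printf 'start\t0\t12\t14\t26\t13\n'                      >> apache2r3.txt
printf 'start\t1\t13\t15\t28\t14\n'                      >> apache2r3.txt

# Average the ttime column (field 5), skipping the header row.
mean=$(awk -F'\t' 'NR > 1 { s += $5; n++ } END { printf "%.1f", s / n }' apache2r3.txt)
echo "mean total time: ${mean} ms"
```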

Put the following files in your webroot (/var/www/html or /var/www/cgi-bin) directory and use the ab command as before.

Sample test.pl file

#!/usr/bin/perl
use strict;
use warnings;

# Run "perl -v" on the server and send the output back as an HTML page.
my $command = `perl -v`;
my $title   = "Perl Version";

print "Content-type: text/html\n\n";
print "<html><head><title>$title</title></head>\n<body>\n\n";
print "<h1>$title</h1>\n";
print $command;
print "\n\n</body></html>";

Run the ab command as follows:
$ ab -n 3000 -c 5 http://202.54.200.1/cgi-bin/test.pl

Sample psql.php (php+mysql) file

<html>
<head><title>Php+MySQL</title></head>
<body>
<?php
   // Replace USERNAME, PASSWORD, DATABASE and TABLENAME with your own values.
   $link = mysql_connect("localhost", "USERNAME", "PASSWORD")
      or die("Could not connect: " . mysql_error());
   mysql_select_db("DATABASE");

   $query = "SELECT * FROM TABLENAME";
   $result = mysql_query($query);

   // Print every column of every row.
   while ($line = mysql_fetch_array($result))
   {
      foreach ($line as $value)
      {
         print "$value\n";
      }
   }

   mysql_close($link);
?>
</body>
</html>
 

Run the ab command as follows:
$ ab -n 1000 -c 5 http://202.54.200.1/psql.php



1 Better Programmer June 10, 2006 at 4:24 am

1689ms per page view? That’s 1.7 seconds, an appalling figure for a production website… Doctor, heal thyself

You really need to spend some time profiling your web app. Repeat after me:

It ‘just works’ is not enough — it must work well!

Reply

2 nixCraft June 10, 2006 at 1:56 pm

Better Programmer,

LOL, the above output is not from a real box. It is just included so that readers can understand the output.

Reply

3 Sean June 10, 2006 at 7:28 pm

Thanks for the look at “ab”. I agree the more important metric is the average response time under production load.

Based on some scripts I use myself, I wrote a tutorial on how to monitor the response time of a real world load (though there’s nothing saying it couldn’t be used alongside ab or siege)

http://www.ibm.com/developerworks/edu/dw-esdd-webperfrrd-i.html

I’ve also got an article going up on the same site in the near future that uses truss/strace to profile Apache and the configuration, in case you’re really concerned about performance.

Reply

4 nixCraft June 10, 2006 at 8:20 pm

Sean,

Thanks for sharing information and tutorial.

There is lot of discussion going on about Sun Solaris dtrace http://www.sun.com/bigadmin/content/dtrace/

Unfortunately, it is not available for Linux :(

Reply

5 Zydoon June 13, 2006 at 2:20 am

Hello,

for better monitoring of the webserver behavior, you can take a look at ganglia, it’s more accurate than ps, top or uptime (even if it is better used for clusters)

I suggest httperf, I find it better than ab, just because I can play scenarios for testing.

And finally, thank you for this introduction of ab, I’m giving it a try (I’m benchmarking a web cluster).
Zydoon.

Reply

6 nixCraft June 13, 2006 at 9:21 am

Zydoon,

Thanks for suggestion.

Reply

7 juantomas May 21, 2007 at 11:57 am

I wonder if you know about an script for gnuplot to process the information obtained with the g option.

Thanks for the post good work!!

Reply

8 Sakthi November 23, 2007 at 12:26 pm

Sir,
Now I am using Apache ab to benchmark the search server on a website. Currently I can use n number of requests and n concurrency to search the same word. My problem is that I want to search n number of words with n number of requests and concurrency; please give me a solution.

Thanks in advance
M.Sakthi

Reply

9 Kapil Krishnan CPK November 27, 2007 at 5:22 am

Hi,

I am getting different results for the same command
ab -n 300 -c 2 http://203.168.1.15/KAPIL/queryTest.php
at different times. May I please know why it happens like this?

Thanks & Regards

Kapil Krishnan CPK

Reply

10 nixCraft November 27, 2007 at 8:08 pm

Is it a production box?

Reply

11 China Landscape May 7, 2008 at 2:39 am

What is the ab command, exactly?

Reply

12 Testing Geek July 24, 2008 at 2:58 pm

Thanks for the tutorial. Was looking for information on the simple tools for apache benchmarking and google helped me reach here.

Reply

13 Stromanbieter August 21, 2008 at 3:08 pm

Does it make a difference to run the ‘ab’ process on the same server as the webserver (that should be benchmarked) or should the ‘ab’ process be started from another server?

Reply

14 Jan August 28, 2008 at 7:27 am

The best would be to run the ab test from another server. This creates the most realistic context for the test.

Reply

15 Autarch August 30, 2008 at 11:17 am

Any suggestions for accurately testing a large number of concurrent users?

Reply

16 stromanbieter September 5, 2008 at 6:08 pm

with the parameter “-c” (concurrency) you can set the number of multiple requests to perform at a time. Default is one request at a time.

Reply

17 Kapil Krishnan CPK September 23, 2008 at 4:30 am

Thanks everyone. Actually it was taking random values at different instances, but the performance is still the same.

If you want to know more about it, just download the Apache source code and check /mod/source/mod_backtrace.c,
/mod/source/mod_performance.c etc.

These C files will give you more details.

Regards

Kapil Krishnan CPK

Reply

18 Roger Campbell September 29, 2008 at 3:21 pm

There are a bunch of commercial apps with Load Runner (HP) being probably the most well known and expensive. In addition, here are some free tools that improve on ab functionality quite a bit but take more time to understand and use.

http://www.webload.org
jakarta.apache.org/jmeter/
http://www.pushtotest.com

My company (awebstorm.com) is working on providing this as a web service for release by the end of 2008.

Roger

Reply

19 Gail Wiseman December 29, 2008 at 12:26 pm

Most of the output of ab is self-explanatory, but I wonder if there is a complete description of this output.

Thanks,
Gail

Reply

20 Josh Bodily January 29, 2009 at 9:08 pm

I think that the extension should be .csv, and not .cvs in your example, otherwise you won’t be able to open it with any spreadsheet apps.

Reply

21 Nung July 15, 2009 at 3:10 am

If I want to test on Windows (apache), what is the method?

Reply

22 fantonio July 18, 2009 at 2:56 am

Very good!

Reply

23 Pierre August 31, 2009 at 7:47 am

Here is a way to let ab produce a CSV file that covers a range of concurrencies (like 0-1,000), saving you the hassle of running ab 1,000 times (and merging results).

It has been used to benchmark Apache, IIS 5.1 and 7.0, Nginx, Cherokee, Rock and TrustLeap G-WAN, see:

http://trustleap.ch/

You then just have to import the CSV file into Open Office to generate charts!


#include <stdio.h>   // printf, fopen, fread
#include <stdlib.h>  // system, atoi
#include <string.h>  // strstr, memset
#include <windows.h> // Sleep(); on Linux use <unistd.h> and usleep()

#define LOOP 1000
#define ITER 1
// ----------------------------------------------------------------------------
int main(int argc, char *argv[])
{
    int i, j, nbr, best;
    char str[256], buff[4000];
    FILE *f, *fo = fopen("test.txt", "wb");

    for(i = 0; i < LOOP; i++)
    {
        /* build the ab command line; the URL and request count were lost
           to HTML escaping in the original comment -- point this at your
           own server and options */
        sprintf(str, "ab -n 1000 -c %u http://127.0.0.1/ > res.txt", i ? i : 1);

        for(best = 0, j = 0; j < ITER; j++)
        {
            system(str);
            Sleep(40);

            // get the information we need from res.txt
            if(!(f = fopen("res.txt", "rb")))
            {
                printf("Can't open file\n");
                return 1;
            }
            memset(buff, 0, sizeof(buff) - 1);
            fread(buff, 1, sizeof(buff) - 1, f);
            fclose(f);

            nbr = 0;
            if(*buff)
            {
                char *p = strstr(buff, "Requests per second:");
                if(p) // "Requests per second:    14863.00 [#/sec] (mean)"
                {
                    p += sizeof("Requests per second:") - 1;
                    while(*p == ' ') p++;
                    nbr = atoi(p);
                }
            }
            if(best < nbr) best = nbr;
        }

        // save the best reading for this concurrency level
        printf("%u,%u\n", i, best);
        fprintf(fo, "%u,%u\n", i, best);
    }
    fclose(fo);
    return 0;
}
// ----------------------------------------------------------------------------

Reply

24 Trilitheus September 7, 2009 at 2:08 pm

I’ve changed this slightly – I think the ::ffff: is something to do with IPv6, whether from a remote host or localhost – I may be wrong – anyhoo….

I changed the netstat line to this:
netstat -ntu | awk '{print $5}' | sed 's/^::ffff://' | cut -f1 -d: | sort | uniq -c | sort -nr

note the extra sed 's/^::ffff://' which converts the lines with the funny bits into the same form as the others. This was the simplest and fastest way I could think of to strip it out so the rest of the code works as expected.
Hope this helps anyone who was getting a headache with this.

Reply

25 Jek October 16, 2009 at 12:51 am

Your server is really slow…

My results on my server:

Here is the PHP page it uses:

<?php
// List every available hash algorithm and the hash of the "s" query parameter.
$algos = hash_algos();
foreach ($algos as $hash) {
    echo $hash . ": " . hash($hash, $_GET['s']) . "<br>";
}
?>

The command I used:
ab -g gnuplot.gp -n 1000 -c 10 -q http://192.168.1.70/test.php

Here is AB’s results:

Benchmarking 192.168.1.70 (be patient)…..done

Server Software: Apache/2.2.9
Server Hostname: 192.168.1.70
Server Port: 80

Document Path: /test.php
Document Length: 3109 bytes

Concurrency Level: 10
Time taken for tests: 2.788 seconds
Complete requests: 1000
Failed requests: 0
Write errors: 0
Total transferred: 3444000 bytes
HTML transferred: 3109000 bytes
Requests per second: 358.74 [#/sec] (mean)
Time per request: 27.875 [ms] (mean)
Time per request: 2.788 [ms] (mean, across all concurrent requests)
Transfer rate: 1206.55 [Kbytes/sec] received

Connection Times (ms)
min mean[+/-sd] median max
Connect: 2 13 4.1 12 41
Processing: 5 15 4.5 14 43
Waiting: 5 14 4.4 13 43
Total: 13 28 5.6 26 59

Percentage of the requests served within a certain time (ms)
50% 26
66% 28
75% 29
80% 30
90% 33
95% 38
98% 48
99% 52
100% 59 (longest request)

Reply

26 andreas November 13, 2009 at 3:19 pm

thanks for the tips

Reply

27 Ian December 18, 2009 at 4:12 pm

Jek,
looks like you’re testing your home network, buddy (http://192.168.1.70/test.php) – try the IP address that the outside world will see

Reply

28 Ittech December 19, 2009 at 2:45 am

Hello,

Thanks for your tutorial.
Is this also available on Apache on Windows? How do I run the test? Where should I enter the command to run the test?

Thanks a lot for answering ;)

Reply

29 Mahmud Ahsan January 2, 2010 at 2:58 pm

Really very helpful. Thanks

Reply

30 Bala March 7, 2010 at 2:51 am

This tutorial is very useful. Thanks for writing this

Reply

31 NavaTux April 8, 2010 at 5:49 pm

very nice ;-) keep moving ;-) thanks ;-)
One humble request: do you know how to use tsung with an erlang server? Please forward to navaneethanit@gmail.com

Reply

32 Aryashree Pritikrishna July 15, 2011 at 11:30 am

Thanks for suggestion.

Reply

33 John S.S. August 13, 2011 at 10:11 pm

Thank you for taking the time to post this procedure.

Reply

34 wilson September 15, 2012 at 4:30 am

Hey man i keep getting an “SSL write failed – closing connection” error.

How can I test on an https / SSL site?
I’ve already tried using the -f option with SSLv3, and ALL.

Reply
