
Howto: Performance Benchmark a Web Server

You can benchmark Apache, IIS, and other web servers with the Apache benchmarking tool called ab. Recently I was asked to run performance benchmarks for different web servers.

It is true that benchmarking a web server is not an easy task. From how to benchmark a web server:

“First, benchmarking a web server is not an easy thing. To benchmark a web server, the time it will take to give a page is not important: you don’t care if a user can have his page in 0.1 ms or in 0.05 ms, as nobody can have such delays on the Internet.

What is important is the average time it will take when you have the maximum number of users on your site simultaneously. Another important thing is how much more time it will take when there are 2 times more users: a server that takes 2 times more for 2 times more users is better than another that takes 4 times more for the same amount of users.”

Here are a few tips on how to carry out the procedure, along with an example:

Apache Benchmark Procedures

  • Use the same hardware configuration and kernel (OS) for all tests
  • Use the same network configuration. For example, use a 100Mbps port for all tests
  • First, record the server load using the top or uptime command
  • Take at least 3-5 readings and use the best result
  • After each test, reboot the server and carry out the test on the next configuration (web server)
  • Again, record the server load using the top or uptime command
  • Carry out tests using static html/php files and dynamic pages
  • It is also important to carry out tests using both the Non-KeepAlive and KeepAlive features (the Keep-Alive extension provides long-lived HTTP sessions, which allow multiple requests to be sent over the same TCP connection)
  • Also, don’t forget to carry out tests using fast-cgi and/or perl
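The “take at least 3-5 readings and use the best result” step above can be scripted; here is a minimal sketch that keeps the best requests-per-second figure. The three “Requests per second” lines are canned sample readings standing in for saved ab reports, not real output:

```shell
#!/bin/sh
# Keep the best of several ab readings. The three "Requests per second"
# lines below are canned sample readings standing in for saved ab reports.
best=$(printf '%s\n' \
  'Requests per second:    358.74 [#/sec] (mean)' \
  'Requests per second:    412.10 [#/sec] (mean)' \
  'Requests per second:    390.02 [#/sec] (mean)' |
  awk '/Requests per second/ { if ($4 > max) max = $4 } END { print max }')
echo "best: $best"   # prints: best: 412.10
```

In practice you would `grep 'Requests per second'` out of each saved ab report instead of the printf lines.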

Webserver Benchmark Examples:

Let us see how to benchmark Apache 2.2 and lighttpd 1.4.xx web servers.

Static Non-KeepAlive test for Apache web server

i) Note down the server load using the uptime command:
$ uptime

ii) Create a small static html page as follows and save it as snkpage.html in /var/www/html (or use your own webroot):

<!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 4.0 Transitional//EN">
<title>Webserver test</title>
This is a webserver test page.

Log in to a Linux/BSD desktop computer and type the following command (replace server-ip with your server's address):
$ ab -n 1000 -c 5 http://server-ip/snkpage.html

  • -n 1000: ab will send 1000 requests to the server for the benchmarking session
  • -c 5: the concurrency number, i.e. ab will perform 5 requests at a time

For example, if you want to send 10 requests, type the following command:
$ ab -n 10 -c 2 http://www.somewhere.com/


This is ApacheBench, Version 2.0.41-dev <$Revision: 1.141 $> apache-2.0
Copyright (c) 1996 Adam Twiss, Zeus Technology Ltd, http://www.zeustech.net/
Copyright (c) 1998-2002 The Apache Software Foundation, http://www.apache.org/

Benchmarking www.somewhere.com (be patient).....done

Server Software:
Server Hostname:        www.somewhere.com
Server Port:            80

Document Path:          /
Document Length:        16289 bytes

Concurrency Level:      1
Time taken for tests:   16.885975 seconds
Complete requests:      10
Failed requests:        0
Write errors:           0
Total transferred:      166570 bytes
HTML transferred:       162890 bytes
Requests per second:    0.59 [#/sec] (mean)
Time per request:       1688.597 [ms] (mean)
Time per request:       1688.597 [ms] (mean, across all concurrent requests)
Transfer rate:          9.59 [Kbytes/sec] received

Connection Times (ms)
              min  mean[+/-sd] median   max
Connect:      353  375  16.1    386     391
Processing:  1240 1312  52.1   1339    1369
Waiting:      449  472  16.2    476     499
Total:       1593 1687  67.7   1730    1756

Percentage of the requests served within a certain time (ms)
  50%   1730
  66%   1733
  75%   1741
  80%   1753
  90%   1756
  95%   1756
  98%   1756
  99%   1756
 100%   1756 (longest request)

Repeat the above command 3-5 times and save the best reading.

Static Non-KeepAlive test for lighttpd web server

First, reboot the server:
# reboot

Stop the Apache web server. Now configure lighttpd, copy /var/www/html/snkpage.html to the lighttpd webroot, and run the command (from the other Linux/BSD system):
$ ab -n 1000 -c 5 http://server-ip/snkpage.html

iii) Plot a graph using a spreadsheet or gnuplot.

How do I carry out Web server Static KeepAlive test?

Use the -k option, which enables the HTTP KeepAlive feature in the ab test tool. For example:
$ ab -k -n 1000 -c 5 http://server-ip/snkpage.html

Use the above procedure to create php, fast-cgi and dynamic pages, and benchmark the web server with them.

Please note that 1000 requests is a small number; you need to send a bigger number of requests (i.e. the hits you want to test). For example, the following command will send 50000 requests:
$ ab -k -n 50000 -c 2 http://server-ip/snkpage.html

How do I save the result as comma separated values (CSV)?

Use the -e option, which writes a comma separated values (CSV) file containing, for each percentage (from 1% to 100%), the time (in milliseconds) it took to serve that percentage of the requests:
$ ab -k -n 50000 -c 2 -e apache2r1.csv http://server-ip/snkpage.html
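The -e file has one "Percentage served,Time in ms" row per percentile, so a given percentile can be pulled out with awk. A minimal sketch (sample.csv and its three rows are made-up stand-ins for a real ab CSV file):

```shell
#!/bin/sh
# Pull one percentile out of an ab -e CSV file. sample.csv and its three
# rows are made-up stand-ins for a real ab -e output file.
cat > sample.csv <<'EOF'
Percentage served,Time in ms
50,26.1
95,38.4
100,59.0
EOF
awk -F, '$1 == 95 { print "95th percentile:", $2, "ms" }' sample.csv
# prints: 95th percentile: 38.4 ms
rm -f sample.csv
```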

How do I import result into excel or gnuplot programs so that I can create graphs?

Use the above command, or the -g option, as follows:
$ ab -k -n 50000 -c 2 -g apache2r3.txt http://server-ip/snkpage.html
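On recent ab versions the -g file is a tab-separated table with columns starttime, seconds, ctime, dtime, ttime and wait, which makes it easy to post-process with awk before plotting. A minimal sketch (gp.txt and its two data rows are made-up sample data) that averages the total-time column:

```shell
#!/bin/sh
# Average the total-time (ttime) column of an ab -g file before plotting.
# gp.txt and its two data rows are made-up samples of the tab-separated
# format: starttime, seconds, ctime, dtime, ttime, wait.
printf 'starttime\tseconds\tctime\tdtime\tttime\twait\n' >  gp.txt
printf 'x\t1\t12\t14\t26\t13\n'                          >> gp.txt
printf 'x\t1\t14\t16\t30\t15\n'                          >> gp.txt
awk 'NR > 1 { sum += $5; n++ } END { printf "mean ttime: %.1f ms\n", sum/n }' gp.txt
# prints: mean ttime: 28.0 ms
rm -f gp.txt
```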

Put the following files in your webroot (/var/www/html or /var/www/cgi-bin) directory and use the ab command as before.

Sample test.php file

<?php
$command = `perl -v`;
$title = "Perl Version";
header("Content-Type: text/html");
print "<html><head><title>$title</title></head>\n<body>\n\n";
print "<h1>$title</h1>\n";
print $command;
print "\n\n</body></html>";
?>

Run ab command as follows:
$ ab -n 3000 -c 5 http://server-ip/test.php

Sample psql.php (php+mysql) file

<?php
$link = mysql_connect("localhost", "USERNAME", "PASSWORD");
$query = "SELECT * FROM TABLENAME";
$result = mysql_query($query);
while ($line = mysql_fetch_array($result)) {
    foreach ($line as $value) {
        print "$value\n";
    }
}
mysql_close($link);
?>

Run ab command as follows:
$ ab -n 1000 -c 5 http://server-ip/psql.php


Comments (35)
  • Better Programmer June 10, 2006, 4:24 am

    1689ms per page view? That’s 1.7 seconds, an appalling figure for a production website… Doctor, heal thyself.

    You really need to spend some time profiling your web app. Repeat after me:

    It ‘just works’ is not enough — it must work well!

  • nixCraft June 10, 2006, 1:56 pm

    Better Programmer,

    LOL, the above output is not from a real box. It is just included so that readers can understand the output.

  • Sean June 10, 2006, 7:28 pm

    Thanks for the look at “ab”. I agree the more important metric is the average response time under production load.

    Based on some scripts I use myself, I wrote a tutorial on how to monitor the response time of a real world load (though there’s nothing saying it couldn’t be used alongside ab or siege)


    I’ve also got an article going up on the same site in the near future that uses truss/strace to profile Apache and the configuration, in case you’re really concerned about performance.

  • nixCraft June 10, 2006, 8:20 pm


    Thanks for sharing information and tutorial.

    There is lot of discussion going on about Sun Solaris dtrace http://www.sun.com/bigadmin/content/dtrace/

    Unfortunately, it is not available for Linux :(

  • Zydoon June 13, 2006, 2:20 am


    For better monitoring of the web server's behavior, you can take a look at ganglia; it's more accurate than ps, top or uptime (even if it is better suited to clusters).

    I suggest httperf; I find it better than ab, just because I can play scenarios for testing.

    And finally, thank you for this introduction of ab, I’m giving it a try (I’m benchmarking a web cluster).

  • nixCraft June 13, 2006, 9:21 am


    Thanks for suggestion.

  • juantomas May 21, 2007, 11:57 am

    I wonder if you know about a script for gnuplot to process the information obtained with the -g option.

    Thanks for the post, good work!!

  • Sakthi November 23, 2007, 12:26 pm

    Now I am using Apache ab to benchmark the search server on a website. Currently I can use any number of requests and any concurrency to search the same word. My problem is that I want to search n different words with n requests and n concurrency; please give me a solution.

    Thanks in advance

  • Kapil Krishnan CPK November 27, 2007, 5:22 am


    I am getting different results for the same command
    ab -n 300 -c 2
    at different times. May I please know why it happens like this?

    Thanks & Regards

    Kapil Krishnan CPK

  • nixCraft November 27, 2007, 8:08 pm

    Is it a production box?

  • China Landscape May 7, 2008, 2:39 am

    What is the ab command exactly ?

  • Testing Geek July 24, 2008, 2:58 pm

    Thanks for the tutorial. I was looking for information on simple tools for Apache benchmarking and Google helped me reach here.

  • Stromanbieter August 21, 2008, 3:08 pm

    Does it make a difference to run the ‘ab’ process on the same server as the webserver (that should be benchmarked) or should the ‘ab’ process be started from another server?

  • Jan August 28, 2008, 7:27 am

    The best would be to run the ab test from another server. This creates the most realistic context for the test.

  • Autarch August 30, 2008, 11:17 am

    Any suggestions for accurately testing large numbers of concurrent users?

  • stromanbieter September 5, 2008, 6:08 pm

    With the parameter -c (concurrency) you can set the number of multiple requests to perform at a time. The default is one request at a time.

  • Kapil Krishnan CPK September 23, 2008, 4:30 am

    Thanks everyone. Actually it was taking random values at different instances, but the performance is still the same.

    If you want to know more about it, just download the Apache source code and check /mod/source/mod_backtrace.c, /mod/source/mod_performance.c, etc.

    These C files will give you more details.


    Kapil Krishnan CPK

  • Roger Campbell September 29, 2008, 3:21 pm

    There are a bunch of commercial apps, with Load Runner (HP) probably being the most well known and expensive. In addition, there are some free tools that improve on ab functionality quite a bit but take more time to understand and use.


    My company (awebstorm.com) is working on providing this as a web service for release by the end of 2008.


  • Gail Wiseman December 29, 2008, 12:26 pm

    Most of the output of ab is self-explanatory, but I wonder if there is a complete description of this output.


  • Josh Bodily January 29, 2009, 9:08 pm

    I think that the extension should be .csv, and not .cvs in your example, otherwise you won’t be able to open it with any spreadsheet apps.

  • Nung July 15, 2009, 3:10 am

    If I want to test Apache on Windows, what is the method?

  • fantonio July 18, 2009, 2:56 am

    Very good!

  • Pierre August 31, 2009, 7:47 am

    Here is a way to let ab produce a CSV file that covers a range of concurrencies (like 0-1,000), saving you the hurdle of running ab 1,000 times (and merging results).

    It has been used to benchmark Apache, IIS 5.1 and 7.0, Nginx, Cherokee, Rock and TrustLeap G-WAN, see:


    You then just have to import the CSV file into OpenOffice to generate charts!


    #include <stdio.h>
    #include <string.h>
    #include <stdlib.h>
    #include <windows.h> // for Sleep(); use usleep() on Linux/BSD

    #define LOOP 1000
    #define ITER 1
    // ----------------------------------------------------------------------------
    int main(int argc, char *argv[])
    {
        int i, j, nbr, best;
        char str[256], buff[4000];
        FILE *f, *fo = fopen("test.txt", "wb");

        for(i = 0; i < LOOP; i++)
        {
            // run ab at this concurrency (the URL is a placeholder)
            sprintf(str, "ab -n 1000 -c %d http://server-ip/ > res.txt", i ? i : 1);

            for(best = 0, j = 0; j < ITER; j++)
            {
                system(str);
                Sleep(40);

                // get the information we need from res.txt
                if(!(f = fopen("res.txt", "rb")))
                {
                    printf("Can't open file\n");
                    return 1;
                }
                memset(buff, 0, sizeof(buff) - 1);
                fread(buff, 1, sizeof(buff) - 1, f);
                fclose(f);

                char *p = (char*)strstr(buff, "Requests per second:");
                if(p) // "Requests per second:    14863.00 [#/sec] (mean)"
                {
                    p += sizeof("Requests per second:") - 1;
                    while(*p == ' ') p++;
                    nbr = atoi(p);
                    if(best < nbr) best = nbr;
                }
            }
            // save data
            printf("%u,%u\n", i, best);
            fprintf(fo, "%u,%u\n", i, best);
        }
        fclose(fo);
        return 0;
    }
    // ----------------------------------------------------------------------------

  • Trilitheus September 7, 2009, 2:08 pm

    I’ve changed this slightly – I think the ::ffff: prefix is something to do with IPv6 – whether from a remote host or localhost – I may be wrong – anyhoo….

    I changed the netstat line to this:
    netstat -ntu | awk '{print $5}' | sed 's/^::ffff://' | cut -f1 -d: | sort | uniq -c | sort -nr

    Note the extra sed 's/^::ffff://', which converts the lines with the funny bits into the same form as the others. This was the simplest and fastest way I could think of to strip it out so the rest of the code works as expected.
    Hope this helps anyone who was getting a headache with this.
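    A quick way to check the amended pipeline (the two netstat lines below are canned samples; one carries the ::ffff: prefix, one does not):

```shell
#!/bin/sh
# Check the amended pipeline on two canned netstat lines: the sed strips
# the IPv6-mapped ::ffff: prefix so both rows count as one client host.
printf '%s\n' \
  'tcp 0 0 10.0.0.1:80 ::ffff:192.168.0.9:5311 ESTABLISHED' \
  'tcp 0 0 10.0.0.1:80 192.168.0.9:5317 ESTABLISHED' |
  awk '{print $5}' | sed 's/^::ffff://' | cut -f1 -d: | sort | uniq -c | sort -nr
# prints: 2 192.168.0.9
```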

  • Jek October 16, 2009, 12:51 am

    Your server is really slow…

    My results on my server:

    Here is the PHP page it uses:

    $algos = hash_algos();
    foreach ($algos as $hash) {
        echo $hash . ": " . hash($hash, $_GET['s']) . "\n";
    }

    The command I used:
    ab -g gnuplot.gp -n 1000 -c 10 -q

    Here is AB’s results:

    Benchmarking (be patient)…..done

    Server Software: Apache/2.2.9
    Server Hostname:
    Server Port: 80

    Document Path: /test.php
    Document Length: 3109 bytes

    Concurrency Level: 10
    Time taken for tests: 2.788 seconds
    Complete requests: 1000
    Failed requests: 0
    Write errors: 0
    Total transferred: 3444000 bytes
    HTML transferred: 3109000 bytes
    Requests per second: 358.74 [#/sec] (mean)
    Time per request: 27.875 [ms] (mean)
    Time per request: 2.788 [ms] (mean, across all concurrent requests)
    Transfer rate: 1206.55 [Kbytes/sec] received

    Connection Times (ms)
    min mean[+/-sd] median max
    Connect: 2 13 4.1 12 41
    Processing: 5 15 4.5 14 43
    Waiting: 5 14 4.4 13 43
    Total: 13 28 5.6 26 59

    Percentage of the requests served within a certain time (ms)
    50% 26
    66% 28
    75% 29
    80% 30
    90% 33
    95% 38
    98% 48
    99% 52
    100% 59 (longest request)

  • andreas November 13, 2009, 3:19 pm

    thanks for the tips

  • Ian December 18, 2009, 4:12 pm

    Looks like you're testing your home network, buddy (try the IP address that the outside world will see).

  • Ittech December 19, 2009, 2:45 am


    Thanks for your tutorial.
    Is this also available for Apache on Windows? How do I run the test, and where should I enter the command?

    Thanks a lot for answering ;)

  • Mahmud Ahsan January 2, 2010, 2:58 pm

    Really very helpful. Thanks

  • Bala March 7, 2010, 2:51 am

    This tutorial is very useful. Thanks for writing this

  • NavaTux April 8, 2010, 5:49 pm

    Very nice ;-) keep moving ;-) thanks ;-)
    One humble request: do you know how to use tsung for an Erlang server? Please forward to navaneethanit@gmail.com

  • Aryashree Pritikrishna July 15, 2011, 11:30 am

    Thanks for suggestion.

  • John S.S. August 13, 2011, 10:11 pm

    Thank you for taking the time to post this procedure.

  • wilson September 15, 2012, 4:30 am

    Hey man, I keep getting an “SSL write failed – closing connection” error.

    How can I test on an https / SSL site?
    I’ve already tried using the -f option with SSLv3, and ALL.

  • Nobody December 28, 2014, 8:06 pm

    Thanks for the great article. I just wanted to point out that psql.php is an extremely misleading name since it led me to think that the script uses PostgreSQL rather than php and MySQL.
