Apache2 mod_fastcgi: Connect to External PHP via UNIX Socket or TCP/IP Port

Posted in Categories: Apache, CentOS, fedora linux, Howto, lighttpd, Networking, php, RedHat/Fedora Linux, Security, Tips, Troubleshooting, Tuning. Last updated December 30, 2008.

Once mod_fastcgi is configured and running, FastCGI can connect to backends via UNIX sockets or TCP/IP networking. This is useful for spreading load among various backends. For example, PHP can be served from 192.168.1.10 and Python / Ruby on Rails from 192.168.1.11. This is only possible with mod_fastcgi.
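As a minimal sketch, the FastCgiExternalServer directive points Apache at a PHP FastCGI daemon running elsewhere; the file paths, handler name, and the 192.168.1.10:9000 address below are assumptions for illustration:

LoadModule fastcgi_module modules/mod_fastcgi.so
# /var/www/php5-fcgi does not need to exist on disk; it is just a mapping target
FastCgiExternalServer /var/www/php5-fcgi -host 192.168.1.10:9000
# For a daemon on the local box, a UNIX socket works instead
# (the socket path is relative to FastCgiIpcDir):
# FastCgiExternalServer /var/www/php5-fcgi -socket php5-fcgi.sock
Alias /php5-fcgi /var/www/php5-fcgi
AddHandler php-fcgi .php
Action php-fcgi /php5-fcgi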

mod_compress: Lighttpd Gzip Compression To Improve Download and Browsing Speed

Posted in Categories: Apache, High performance computing, Howto, lighttpd, Linux, News, php, UNIX. Last updated April 26, 2008.

Gzip compression reduces response times by reducing the size of the HTTP response. This document describes gzipping HTTP traffic, which can reduce the response size by about 70%. Approximately 90% of today’s Internet traffic travels through browsers that claim to support compression.
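A minimal lighttpd.conf sketch for mod_compress (the cache directory path is an assumption; create it and make it writable by the lighttpd user):

server.modules += ( "mod_compress" )
compress.cache-dir = "/var/cache/lighttpd/compress/"
compress.filetype  = ( "text/plain", "text/html", "text/css", "text/javascript" )

lighttpd caches the compressed copies on disk, so files are not recompressed on every request.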

CentOS 5 Apache 2.2.3 files failing to download or corrupted download file issue

Posted in Categories: Apache, CentOS, File system, lighttpd, Linux, Storage, Tips, Troubleshooting. Last updated December 1, 2007.

Recently, I noticed something strange about Apache 2.2.3 running on the 64-bit version of CentOS Linux 5. We have a centralized NFS server, and all three web servers are load balanced by a hardware front end (another box running LVS).

All the Apache servers pick up files via NFS, i.e., DocumentRoot is mounted over NFS. Small files of 2 MB or 5 MB downloaded correctly, but large files failed to download. Another problem was that some clients reported the file did download but could not be opened due to corruption.

After some investigation and a little bit of googling, I came across the solution. You need to disable the following two options:

  • EnableMMAP – This directive controls whether the httpd may use memory-mapping if it needs to read the contents of a file during delivery. By default, when the handling of a request requires access to the data within a file — for example, when delivering a server-parsed file using mod_include — Apache memory-maps the file if the OS supports it.
  • EnableSendfile – This directive controls whether httpd may use the sendfile support from the kernel to transmit file contents to the client. By default, when the handling of a request requires no access to the data within a file — for example, when delivering a static file — Apache uses sendfile to deliver the file contents without ever reading the file if the OS supports it.

However, these two directives are known to cause problems with a network-mounted DocumentRoot (e.g., NFS or SMB): the kernel may be unable to serve the network file through its own cache. So open httpd.conf on all boxes and add the following:
EnableMMAP off
EnableSendfile off
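If only part of the document tree lives on NFS, the same change can be scoped to that directory instead of being applied globally (the path below is hypothetical):

<Directory "/var/www/html/nfs-share">
    EnableMMAP Off
    EnableSendfile Off
</Directory>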

Just restart the web server and voila!
# service httpd restart
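To confirm the fix, pull a large file through the web server and compare its checksum against the copy on the NFS share (the URL and file name here are only examples):

# wget http://www.example.com/downloads/big-file.iso
# md5sum big-file.iso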

Configure an Apache web server for core dump on segmentation faults

Posted in Categories: Apache, FreeBSD, Linux, Troubleshooting. Last updated May 9, 2006.

Recently I noticed that my Apache error log shows the server generating segmentation faults. After doing a little research, I learned that there is no simple way to find the cause of this problem. The errors read as follows:

[Mon May 8 11:20:09 2006] [notice] Apache/2 (WebAppBETA) child pid 1256 exit signal Segmentation fault (11)
[Mon May 8 11:23:12 2006] [notice] Apache/2 (WebAppBETA) child pid 1301 exit signal Segmentation fault (11)

The problem is that our application development team has hacked (i.e., modified) the Apache 2.0 source tree for an application my company is developing. To get to the bottom of this, I was asked to configure the Linux system so that Apache dumps core files on segmentation faults.

Apache Core Dump

Apache supports the CoreDumpDirectory directive, which controls the directory to which Apache attempts to switch before dumping core. So all I need to do is put a line as follows in httpd.conf:

Open httpd.conf:
# vi httpd.conf
Add the following line to the main config section:
CoreDumpDirectory /tmp/apache2-gdb-dump
Create a directory /tmp/apache2-gdb-dump:
# mkdir -p /tmp/apache2-gdb-dump
Set permissions:
# chown httpd:appserver /tmp/apache2-gdb-dump
# chmod 0777 /tmp/apache2-gdb-dump

Please note that we are using the user httpd and the group appserver here. Replace them with your actual Apache user:group combination.

And restart the Apache web server:
# /etc/init.d/httpd restart
Or send SIGSEGV (signal 11) to an Apache child PID to force a test crash:
# kill -11 14658
Now you should see core dumps in the /tmp/apache2-gdb-dump directory:
# ls /tmp/apache2-gdb-dump

How do I read the core dump files created by Apache on Linux systems?

Well, I am not a developer, but the developers are using gdb and other techniques to analyze the core dumps. Read the gdb man page for more information.
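As a rough sketch, point gdb at the httpd binary that produced the crash together with the core file, then ask for a backtrace (the binary path and core file name are assumptions; yours may differ):

# gdb /usr/sbin/httpd /tmp/apache2-gdb-dump/core.1256
(gdb) bt full
(gdb) quit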

I hope to get a new patched version of Apache by next week. Another interesting thing I noticed is that you only need to configure core dumps on Linux. We are also using FreeBSD for testing, and it writes core dumps into the ServerRoot directory.

If Apache starts as root and switches to another user, the Linux kernel disables core dumps even if the directory is writable for the process. Apache (2.0.46 and later) enables core dumps on Linux 2.4 and beyond, but only if you explicitly configure a CoreDumpDirectory. 🙂
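If no core files appear even with CoreDumpDirectory set, the core file size limit of the shell that starts Apache may also be zero. A sketch for RHEL/CentOS-style init scripts, assuming /etc/init.d/httpd sources /etc/sysconfig/httpd:

# echo 'ulimit -c unlimited' >> /etc/sysconfig/httpd
# service httpd restart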