Optimizing Linux code, applications and programs – system performance


Wringing the value out of every processor cycle on your machine requires a variety of approaches. Sure, your code has to be efficient, but you also need your disks configured correctly, among a multitude of other things. Swayam Prakash provides a guide to some of the lower-hanging fruit you can pick.

From the article:

Performance optimization in Linux doesn’t always mean what we might think. It’s not just a matter of outright speed; sometimes it’s about tuning the system so that it fits into a small memory footprint. You’d be hard-pressed to find a programmer who does not want to make programs run faster, regardless of the platform. Linux programmers are no exception; some take an almost fanatical approach to the job of optimizing their code for performance. As hardware becomes faster, cheaper, and more plentiful, some argue that performance optimization is less critical, particularly people who try to enforce deadlines on software development.

Note that this article is all about application optimization and not about server-level optimization.

Optimizing Linux System Performance

How to optimize a web page for a faster and better experience


You may have noticed that most of my web pages are loading a bit faster. Here is what I did:

a) Moved CSS code to its own file and included it at the top of each page

b) Removed unnecessary (read: fancy web 2.0 stuff) external JavaScript snippets

c) Moved external JavaScript to the bottom of the page/template. For example, the Google Analytics JS code now loads at the bottom of each page.

d) Turned on Apache gzip/mod_deflate compression

e) Turned on WordPress caching

f) Turned on PHP script caching (I’m using eAccelerator)

g) Tweaked MySQL settings: turned on the query cache and other optimizations (see the sample my.cnf snippet after this list)

h) If possible, switch to lighttpd, or use squid / lighttpd as a caching server in front of good old Apache
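For example, the MySQL query cache from item (g) can be turned on with a few lines in my.cnf. This is only a sketch with illustrative values; tune the sizes to match your workload and available RAM:

# /etc/my.cnf (illustrative values)
[mysqld]
query_cache_type = 1
query_cache_size = 32M
query_cache_limit = 1M

Restart MySQL after editing the file, then run SHOW VARIABLES LIKE 'query_cache%'; from the mysql client to confirm the new settings.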

If you have tons of cash to burn (assuming that your web app demands performance):

  • Consider using a CDN (Content Delivery Network) such as Akamai or SAVVIS.
  • Server load balancing

However, there are some external JS snippets, such as Google AdSense, which slow down the loading of a webpage. In a few months I may roll out a new template and I will try to fix this issue 🙂

I’m interested in hearing about other people’s experiences with web page optimization. Feel free to share your tips.

Howto Load balance applications under Linux


This guide provides some insights into load balancing Linux applications, including architectures, choices between load balancers, and scaling apps with load balancing.

From the article:
Originally, the Web was mostly static content, quickly delivered to a few users who spent most of their time reading between rare clicks. Now, we see real applications which hold users for tens of minutes or hours, with little content to read between clicks and a lot of work performed on the servers. The users often visit the same sites, which they know perfectly and don’t spend much time reading. They expect immediate delivery while they unconsciously inflict huge loads on the servers at every single click. These new dynamics have created a new need for high performance and permanent availability. I hope to complete this article soon with a deeper HTTP analysis and with architecture examples.

=> Making applications scalable with Load Balancing
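The article goes into architectures in depth; as a quick, concrete illustration, here is a minimal HAProxy configuration sketch that round-robins HTTP traffic across two backend web servers. The server names, IP addresses, and timeouts are placeholders, not recommendations:

# /etc/haproxy/haproxy.cfg (minimal sketch, placeholder addresses)
defaults
    mode http
    timeout connect 5s
    timeout client 30s
    timeout server 30s

frontend www
    bind *:80
    default_backend webservers

backend webservers
    balance roundrobin
    server web1 192.168.1.11:80 check
    server web2 192.168.1.12:80 check

With this in place, clients talk only to the load balancer on port 80, and failed backends are taken out of rotation by the health checks.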

Speed up Apache 2.0 web access or downloads with mod_deflate


You can speed up downloads or web page access time with the Apache mod_deflate module. The mod_deflate module provides the DEFLATE output filter that allows output from your server to be compressed before being sent to the client over the network.

This decreases the amount of data transmitted over the network, resulting in a faster web experience and faster downloads for visitors.

Make sure mod_deflate is included with your Apache server (by default it is now installed with all modern distros).

How can I speed up downloads from my Apache 2.0 server?

Open the httpd.conf file using a text editor such as vi:
# vi httpd.conf

Append the following line:
LoadModule deflate_module modules/mod_deflate.so

Append the following configuration inside a <Location /> directive:
<Location />
AddOutputFilterByType DEFLATE text/html text/plain text/xml
....
...
</Location>

The above lines only compress HTML, plain text, and XML files. Here is the configuration from one of my production boxes:
<Location />
...
...
AddOutputFilterByType DEFLATE text/plain
AddOutputFilterByType DEFLATE text/xml
AddOutputFilterByType DEFLATE application/xhtml+xml
AddOutputFilterByType DEFLATE text/css
AddOutputFilterByType DEFLATE application/xml
AddOutputFilterByType DEFLATE image/svg+xml
AddOutputFilterByType DEFLATE application/rss+xml
AddOutputFilterByType DEFLATE application/atom+xml
AddOutputFilterByType DEFLATE application/x-javascript
AddOutputFilterByType DEFLATE application/x-httpd-php
AddOutputFilterByType DEFLATE application/x-httpd-fastphp
AddOutputFilterByType DEFLATE application/x-httpd-eruby
AddOutputFilterByType DEFLATE text/html
...
...
</Location>

Close and save the file. Next, restart the Apache web server. All of the above file types should now be compressed by mod_deflate:
# /etc/init.d/httpd restart
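To verify that compression is actually working, request a page with an Accept-Encoding header and look for Content-Encoding: gzip in the response headers. Replace the URL below with your own server (this assumes curl is installed):
# curl -s -o /dev/null -D - -H 'Accept-Encoding: gzip' http://your-server.example.com/ | grep -i content-encoding
If mod_deflate is active for that URL, the command should print a Content-Encoding: gzip line.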

You can also specify a particular directory and enable compression only for HTML files. For example, for the /static/help/ directory:
<Directory "/static/help">
AddOutputFilterByType DEFLATE text/html
</Directory>

In real life, there is little point in compressing file types that are already compressed, such as MP3s or images; doing so just burns CPU for almost no saving. If you don’t want to compress images, archives, or media files, add the following to your configuration:
SetOutputFilter DEFLATE
SetEnvIfNoCase Request_URI \.(?:gif|jpe?g|png)$ no-gzip dont-vary
SetEnvIfNoCase Request_URI \.(?:exe|t?gz|zip|bz2|sit|rar)$ no-gzip dont-vary
SetEnvIfNoCase Request_URI \.pdf$ no-gzip dont-vary
SetEnvIfNoCase Request_URI \.avi$ no-gzip dont-vary
SetEnvIfNoCase Request_URI \.mov$ no-gzip dont-vary
SetEnvIfNoCase Request_URI \.mp3$ no-gzip dont-vary
SetEnvIfNoCase Request_URI \.mp4$ no-gzip dont-vary
SetEnvIfNoCase Request_URI \.rm$ no-gzip dont-vary

Please note that this processing takes additional CPU and memory on your server as well as on the client browser, so you must decide which documents are worth compressing (thanks to mdxp). One way to decide is to measure how well each content type actually compresses, as shown below.
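mod_deflate can record per-request compression ratios so you can see which content types benefit. The following sketch is based on the DeflateFilterNote and CustomLog directives from the Apache documentation; the log file name is just an example:

DeflateFilterNote Input instream
DeflateFilterNote Output outstream
DeflateFilterNote Ratio ratio
LogFormat '"%r" %{outstream}n/%{instream}n (%{ratio}n%%)' deflate
CustomLog logs/deflate_log deflate

After the server has handled some traffic, scan logs/deflate_log: requests whose output size is close to the input size (a ratio near 100%) gain little from compression and are good candidates for exclusion.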


Mount a Linux filesystem on a SAN from multiple nodes at the same time


If you try to mount an ext3 Linux filesystem on a SAN from multiple nodes at the same time, you will be in serious trouble.

SAN-based storage allows multiple nodes to connect to the same devices at the same time, but ext2/ext3 are not cluster-aware file systems. Mounting them concurrently can lead to disasters such as kernel panics, server hangs, and data corruption.

You need to use a file system that offers:

  1. Support for clusters, moderate scale-out, and shared SAN volumes
  2. A symmetric parallel cluster file system with journaling
  3. POSIX access controls

Both GFS (Red Hat Global File System) and Lustre (a scalable, secure, robust, highly available cluster file system) can be used with SAN-based storage to let multiple nodes access the same devices at the same time.
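For example, assuming a two-node Red Hat cluster named webcluster is already configured with the DLM lock manager and both nodes see the shared LUN as /dev/sdb1 (the cluster name, file system name, and device are hypothetical), a GFS2 volume could be created and mounted roughly like this:
# mkfs.gfs2 -p lock_dlm -t webcluster:shared01 -j 2 /dev/sdb1
# mount -t gfs2 /dev/sdb1 /mnt/shared

Here -p selects the lock protocol, -t takes clustername:fsname, and -j creates one journal per node; the mount command must be repeated on every node that needs access.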

Many newbies get confused because Linux offers a number of file systems. This paper (Linux File System Primer) discusses these file systems, why there are so many, and which ones are best to use for which workloads and data.