
Linux Scalability

Install Linux On Intel Xeon 7400 Dunnington

Dunnington is Intel's first six-core (hexa-core) CPU. It features a single-die design with three unified 3 MB L2 caches (resembling three merged 45 nm dual-core Wolfdale dies), 96 KB of L1 data cache, and 16 MB of L3 cache. It uses a 1066 MHz FSB, fits into Tigerton's mPGA604 socket, and is compatible with the Caneland chipset. These processors support DDR2-1066 (533 MHz) and have a maximum TDP below 130 W. They are intended for blades and other stacked computer systems.

I have a Sun Blade X6450 server for an ERP application:

The four-socket Sun Blade X6450 server module features Intel Xeon 7000 series processors and up to 192 GB of memory. With 24 DIMM slots per server module, it gives you 50 percent more memory capacity than competing blade servers, making it an ideal fit for virtualization and server consolidation, HPC, database, and enterprise applications. Fill a Sun Blade 6048 chassis for a remarkable 11 TFLOPS of peak performance and up to 1,152 processing cores per rack.

Fig.01: Sun Blade X6450 Server Module showing the internals

I tried an older RHEL version and it failed to work because of the old kernel. So I called Red Hat support, and they told me to use kernel 2.6.18-92.1.10 or above. The problem was that my client did not have RHEL 5.2 media or a license (the server came with Solaris 10). So I asked my client to get RHEL 5.2. Unfortunately, their local software vendor was out of stock of the RHEL software, and it took 5 days to get the media kit.

Finally, I installed it on the Sun Blade server, and it is working fine now. I wish I had known about the Intel Xeon 7400 support problem earlier; it would have saved some time, effort, traveling, and money on my part. I should have gone through the server datasheet: this server only works with RHEL 4.7 or 5.2 (32/64 bit) and above.

pssh: Run Command On Multiple SSH Servers

I've already written about the tentakel tool and a shell script hack to run a single command on multiple Linux / UNIX / BSD servers. This is useful for saving time when running UNIX commands on multiple machines. Linux.com has published an article about a new and better tool called pssh:

If you want to increase your productivity with SSH, you can try a tool that lets you run commands on more than one remote machine at the same time. Parallel ssh, Cluster SSH, and ClusterIt let you specify commands in a single terminal window and send them to a collection of remote machines where they can be executed.
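As a quick illustration (the file name hosts.txt is just an example; it should list one server per line), pssh can run the same command on every listed host and display the results inline:
# pssh -h hosts.txt -l root -i "uptime"
# pssh -h hosts.txt -l root -i "yum -y update"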

Red Hat / CentOS Linux 4: Setup Device Mapper Multipathing

Multipath I/O is a fault-tolerance and performance enhancement technique whereby there is more than one physical path between the CPU in a computer system and its mass storage devices through the buses, controllers, switches, and bridge devices connecting them.

A simple example would be a SCSI disk connected to two SCSI controllers on the same computer, or a disk connected to two Fibre Channel ports. Should one controller, port, or switch fail, the operating system can route I/O through the remaining controller transparently, with no changes visible to the applications other than perhaps incremental latency.

This is useful for:

  1. Dynamic load balancing
  2. Traffic shaping
  3. Automatic path management
  4. Dynamic reconfiguration

Linux device-mapper

In the Linux kernel, the device-mapper serves as a generic framework to map one block device onto another. It forms the foundation of LVM2 and EVMS, software RAIDs, dm-crypt disk encryption, and offers additional features such as file-system snapshots.

Device-mapper works by processing data passed in from a virtual block device that it itself provides, and then passing the resulting data on to another block device.
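For instance, here is a minimal, purely illustrative linear mapping built with dmsetup; it assumes a spare disk /dev/sdb and maps its first 2048 sectors (1 MB) to a new virtual device /dev/mapper/example:
# echo "0 2048 linear /dev/sdb 0" | dmsetup create example
# dmsetup table example
# dmsetup remove example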

How do I setup device-mapper multipathing in CentOS / RHEL 4 update 2 or above?

Open /etc/multipath.conf file, enter:
# vi /etc/multipath.conf
Make sure the default blacklist section, which blacklists every device, is commented out (otherwise multipath will ignore all of your disks):

devnode_blacklist {
        devnode "*"
}

Make sure the default_path_grouping_policy option in the defaults section is set to failover. Here is my sample config:

defaults {
       multipath_tool  "/sbin/multipath -v0"
       udev_dir        /dev
       polling_interval 10
       default_selector        "round-robin 0"
       default_path_grouping_policy    failover
       default_getuid_callout  "/sbin/scsi_id -g -u -s /block/%n"
       default_prio_callout    "/bin/true"
       default_features        "0"
       rr_min_io              100
       failback                immediate
}

Save and close the file. Type the following command to load drivers:
# modprobe dm-multipath
# modprobe dm-round-robin

Start the service, enter:
# /etc/init.d/multipathd start
The multipath command is used to detect multiple paths to devices for fail-over or performance reasons and to coalesce them:
# multipath -v2
Turn on the service so it starts at boot:
# /sbin/chkconfig multipathd on
Finally, create device maps from partition tables:
# kpartx -a /dev/mapper/mpath#
You need to use fdisk on the underlying disks such as /dev/sdc.
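For example, assuming the multipathed device shows up as /dev/mapper/mpath0 and one of its underlying paths is /dev/sdc, a typical (illustrative) sequence looks like this:
# fdisk /dev/sdc
# kpartx -a /dev/mapper/mpath0
# mkfs.ext3 /dev/mapper/mpath0p1
# mount /dev/mapper/mpath0p1 /mnt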


  • See the man pages for kpartx, multipath, udev, dmsetup, and hotplug

Helmer: A Linux Commodity Computing Cluster in an IKEA Helmer Cabinet

Rendering is the process of generating an image from a model by means of computer programs; POV-Ray is one such free rendering program. This article explains how to build a home Linux render cluster using commodity computing techniques:

3D computer rendering is very CPU intensive, and the best way to speed up slow renders is usually to distribute them across more computers. Render farms are usually very large, expensive, and consume a lot of energy. I wanted to build something that could be put in my home, not make too much noise, run on very little energy... and be dirt cheap. Big problem? :) No, computer parts cost almost nothing these days; it is just a matter of finding fun stuff to play with.

Fig.01: Helmer Linux Cluster

=> This is the story of Helmer: a Linux cluster in an IKEA Helmer cabinet.

Linux Market Will Rise From $21 Billion To $49 Billion in 2011

I'm not surprised at all. Linux runs on everything from tiny phones to large server systems. According to an IDC prediction, spending on the Linux ecosystem will rise from $21 billion in 2007 to more than $49 billion in 2011, driven by rising enterprise deployments of Linux server operating systems.

Linux server deployments are expanding from infrastructure-oriented applications to more commercially oriented database and enterprise resource-planning workloads "that historically have been the domain of Microsoft Windows and Unix," noted IDC analysts in a white paper commissioned by the nonprofit Linux Foundation.

"The early adoption of Linux was dominated by infrastructure-oriented workloads, often taking over those workloads from an aging Unix server or Windows NT 4.0 server that was being replaced," according to the report's authors, Al Gillen, Elaina Stergiades and Brett Waldman. These days, however, Linux is increasingly being "viewed as a solution for wider and more critical business deployments."

=> Linux Ecosystem Spending To Exceed $49 Billion

Load Balancer Open Source Software

I've worked with various load balancing systems (LBS). They are complex pieces of hardware and software. In this post I will highlight some open source load balancing software. But what is load balancing?
It is simply a technique used to spread load / services across two or more servers. For example, a busy e-commerce or banking website uses a load balancer to increase reliability, throughput, uptime, and response time, and to improve resource utilization. You can use the following software as an advanced load balancing solution for web, cache, DNS, mail, FTP, and auth servers, VoIP services, etc.

Linux Virtual Server (LVS)

LVS is the ultimate open source Linux load sharing and balancing software. You can easily build a high-performance and highly available server for Linux using it. From the project page:

Virtual server is a highly scalable and highly available server built on a cluster of real servers. The architecture of server cluster is fully transparent to end users, and the users interact with the cluster system as if it were only a single high-performance virtual server.
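To give a rough idea of how LVS is driven (the virtual IP 192.168.0.100 and real servers 10.0.0.2 and 10.0.0.3 are made-up values), a web service can be balanced with ipvsadm using round-robin scheduling and NAT forwarding:
# ipvsadm -A -t 192.168.0.100:80 -s rr
# ipvsadm -a -t 192.168.0.100:80 -r 10.0.0.2:80 -m
# ipvsadm -a -t 192.168.0.100:80 -r 10.0.0.3:80 -m
# ipvsadm -L -n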

=> Project Web Site

Red Hat Cluster Suite

It is a high availability cluster software implementation from Linux leader Red Hat. It provides two types of services (a brief example follows the list):

  1. Application / Service Failover - Create n-node server clusters for failover of key applications and services
  2. IP Load Balancing - Load balance incoming IP network requests across a farm of servers
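For example (the service name httpd-svc and node name node2.example.com are hypothetical), cluster state can be checked with clustat, and a service can be manually relocated to another node with clusvcadm:
# clustat
# clusvcadm -r httpd-svc -m node2.example.com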

=> Product web page

The High Availability Linux Project

Linux-HA provides sophisticated high-availability (failover) capabilities on a wide range of platforms, supporting several tens of thousands of mission critical sites.

=> Project web site

Ultra Monkey

Ultra Monkey is a project to create load balanced and highly available network services, for example, a cluster of web servers that appears as a single web server to end users. The service may be for end users across the world connected via the internet, or for enterprise users connected via an intranet.

Ultra Monkey makes use of the Linux operating system to provide a flexible solution that can be tailored to a wide range of needs. From small clusters of only two nodes to large systems serving thousands of connections per second.

=> Project web site

Personally, I've worked with both LVS and Red Hat Cluster Suite, and I highly recommend them.

Linux Xen High Availability Clusters Configuration Tutorial

Xen is one of the leading virtualization software packages. You can use Xen virtualization to implement HA clusters. However, there are a few issues you must be aware of when handling failures in a high-availability environment. This article explains configuration options using Xen:

The idea of using virtual machines to build highly available clusters is not new. Some software companies claim that virtualization is the answer to your HA problems; of course that's not true. Yes, you can reduce downtime by migrating virtual machines to another physical machine for maintenance purposes or when you think hardware is about to fail, but if an application crashes you still need to make sure another application instance takes over the service. And by the time your hardware fails, it's usually already too late to initiate the migration.

So, for each and every application you still need to decide whether you want it constantly available, whether you can afford for the application to be down for some time, or whether your users won't mind having to log in again when one server fails.
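To give a concrete idea of the live migration mentioned above (the domain name vm01 and target host node2 are placeholders, and relocation must be enabled in xend-config.sxp on the target host), a live migration with the classic xm toolstack is a single command:
# xm migrate --live vm01 node2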

=> Using Xen for High Availability Clusters [onlamp.com]