Red Hat / CentOS Linux 5.x: Perl Performance Bug Fix Available

The Perl version supplied with RHEL has a bug that makes some code run at least 100 times slower than it should. Red Hat has now released updated perl packages that fix this performance issue. Previously, the only solution was to install your own Perl under /usr/local or another location. This fix takes care of the performance penalty.
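
If you are on RHEL / CentOS 5.x, picking up the fixed packages is a one-liner (assuming a standard yum setup):
# yum update perl
Long-running Perl daemons should be restarted afterwards so they load the updated interpreter.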

What To Do: Users Still Want Telnet

TELNET (TELecommunication NETwork) is a network protocol used over the Internet or on local area network (LAN) connections. It was developed in the late 1960s with RFC 15. Telnet is a rather old way to log into a remote system, and it has a serious security problem: everything, including passwords, travels over the wire in clear text. Most admins will recommend OpenSSH (secure shell) for all remote activities. But you may find users who still demand telnet over ssh because they are comfortable with it. Some users have scripts written in the 90s and don't want to change them. So what do you do when users demand telnet?
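
If you must keep telnet around, one common compromise (just a sketch; the network range below is a placeholder) is to restrict it to trusted hosts with xinetd. On Red Hat style systems the telnet server is controlled via /etc/xinetd.d/telnet:
# yum install telnet-server
# vi /etc/xinetd.d/telnet
Set disable = no and add a line such as only_from = 192.168.1.0/24 so logins are only accepted from the LAN, then reload xinetd:
# /etc/init.d/xinetd reload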

CentOS / Red Hat Enterprise Linux 5.2 Poor NFS Performance and Solution

A few days ago I noticed that NFS performance between a web server node and the NFS server went down by 50%. NFS was already optimized, and the only thing that had changed was the updated Red Hat v5.2 kernel. I noticed the same trend on the CentOS 5.2 64 bit edition.
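
A quick way to check for such a regression from the client side (the mount point /mnt/nfs and the file size are just placeholders) is to time a large sequential write and read against the export:
# dd if=/dev/zero of=/mnt/nfs/testfile bs=1M count=1024 conv=fsync
# dd if=/mnt/nfs/testfile of=/dev/null bs=1M
Compare the MB/s figures before and after the kernel update, and drop caches between runs (echo 3 > /proc/sys/vm/drop_caches) so the read is not served from the page cache.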

Red Hat / CentOS Linux 4: Setup Device Mapper Multipathing

Multipath I/O is a fault-tolerance and performance enhancement technique whereby there is more than one physical path between the CPU in a computer system and its mass storage devices through the buses, controllers, switches, and bridge devices connecting them.

A simple example would be a SCSI disk connected to two SCSI controllers on the same computer, or a disk connected to two Fibre Channel ports. Should one controller, port or switch fail, the operating system can route I/O through the remaining controller transparently, with no change visible to the applications other than perhaps incremental latency.

This is useful for:

  1. Dynamic load balancing
  2. Traffic shaping
  3. Automatic path management
  4. Dynamic reconfiguration

Linux device-mapper

In the Linux kernel, the device-mapper serves as a generic framework to map one block device onto another. It forms the foundation of LVM2 and EVMS, software RAIDs, dm-crypt disk encryption, and offers additional features such as file-system snapshots.

Device-mapper works by processing data passed in from a virtual block device, which it provides itself, and then passing the resultant data on to another block device.
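
To see this mapping at its simplest, here is a minimal sketch (not multipath related; /dev/sdc and the name "demo" are placeholders) that maps the first 1024 sectors of a real disk onto a new virtual block device:
# echo "0 1024 linear /dev/sdc 0" | dmsetup create demo
# dmsetup ls
# dmsetup remove demo
The new device shows up as /dev/mapper/demo, and reads and writes to it are redirected to the start of /dev/sdc. Multipathing uses the same framework with the multipath target instead of linear.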

How do I setup device-mapper multipathing in CentOS / RHEL 4 update 2 or above?

Open the /etc/multipath.conf file, enter:
# vi /etc/multipath.conf
Make sure the following block exists and is commented out (by default it blacklists all devices, which disables multipath):

#devnode_blacklist {
#        devnode "*"
#}

Make sure the default_path_grouping_policy option in the defaults section is set to failover. Here is my sample config:

defaults {
       multipath_tool  "/sbin/multipath -v0"            # multipath command run by multipathd
       udev_dir        /dev                             # directory where udev creates device nodes
       polling_interval 10                              # seconds between path health checks
       default_selector        "round-robin 0"          # I/O selector used within a path group
       default_path_grouping_policy    failover         # one path per priority group (fail-over only)
       default_getuid_callout  "/sbin/scsi_id -g -u -s /block/%n"  # returns a unique device id
       default_prio_callout    "/bin/true"              # no path priority callout
       default_features        "0"                      # no extra device-mapper features
       rr_min_io              100                       # I/Os sent down a path before switching
       failback                immediate                # fail back to the preferred group at once
}

Save and close the file. Type the following commands to load the required kernel modules:
# modprobe dm-multipath
# modprobe dm-round-robin

Start the service, enter:
# /etc/init.d/multipathd start
The multipath command detects multiple paths to devices for fail-over or performance reasons and coalesces them:
# multipath -v2
Turn on the service at boot:
# /sbin/chkconfig multipathd on
Finally, create device maps from partition tables:
# kpartx -a /dev/mapper/mpath#
You need to use fdisk on the underlying disks such as /dev/sdc.
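
To verify the result (device names such as mpath0 will vary with your setup), list the multipath topology and the mapped devices:
# multipath -ll
# ls /dev/mapper/
multipath -ll prints each multipath device with its path groups and the state of every underlying path, which is an easy way to confirm that fail-over is wired up correctly.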

References:

  • man pages: kpartx, multipath, udev, dmsetup and hotplug

How To Measure Linux Filesystem I/O Performance With iozone

IOzone is a filesystem benchmark tool. The benchmark generates and measures a variety of file operations. Iozone has been ported to many platforms and runs under many operating systems, including Windows, UNIX, Linux and BSD. This article gives you a jumpstart on benchmarking a filesystem using iozone, a free filesystem benchmark utility, under Linux.
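
As a quick taste (the output file name is an example; pick the -g limit to be at least twice your RAM so you measure the disk rather than the page cache), full automatic mode with an Excel-style report looks like this:
# iozone -a -g 4g -R -b iozone-results.xls
Here -a runs the whole matrix of record and file sizes for all tests, -R produces an Excel-compatible report, and -b writes it to the named spreadsheet file.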