Red Hat / CentOS Linux 4: Setup Device Mapper Multipathing


Multipath I/O is a fault-tolerance and performance enhancement technique whereby there is more than one physical path between the CPU in a computer system and its mass storage devices through the buses, controllers, switches, and bridge devices connecting them.

A simple example would be a SCSI disk connected to two SCSI controllers on the same computer, or a disk connected to two Fibre Channel ports. Should one controller, port, or switch fail, the operating system routes I/O through the remaining path transparently; applications see no change other than perhaps some added latency.

This is useful for:

  1. Dynamic load balancing
  2. Traffic shaping
  3. Automatic path management
  4. Dynamic reconfiguration
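
For example, you can check whether two SCSI devices are really two paths to the same LUN by comparing the WWID that scsi_id reports for each (sdb and sdc are placeholder device names here):
# /sbin/scsi_id -g -u -s /block/sdb
# /sbin/scsi_id -g -u -s /block/sdc
If both commands print the same ID, the two devices are two paths to one LUN and are candidates for multipathing.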

Linux device-mapper

In the Linux kernel, the device-mapper serves as a generic framework to map one block device onto another. It forms the foundation of LVM2 and EVMS, software RAIDs, dm-crypt disk encryption, and offers additional features such as file-system snapshots.

Device-mapper works by processing data passed in from a virtual block device that it provides itself, and then passing the resulting data on to another block device.
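
As a quick illustration of that mapping idea (not required for multipath setup), you can create a simple linear device-mapper target over an existing block device with dmsetup; /dev/sdb is only a placeholder:
# echo "0 $(blockdev --getsz /dev/sdb) linear /dev/sdb 0" | dmsetup create example
# ls -l /dev/mapper/example
# dmsetup remove example
The first command maps the whole of /dev/sdb to a new virtual block device named example; the last command tears the mapping down again.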

How do I set up device-mapper multipathing in CentOS / RHEL 4 update 2 or above?

Open the /etc/multipath.conf file, enter:
# vi /etc/multipath.conf
The stock configuration blacklists every device. Make sure the following section is commented out so that multipath can see your devices:

devnode_blacklist {
        devnode "*"
}
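
Once the catch-all entry is commented out, you may still want to blacklist local, non-multipathed devices. A narrower blacklist along these lines is common (the patterns are examples; adjust them to your own hardware):

devnode_blacklist {
        devnode "^(ram|raw|loop|fd|md|dm-|sr|scd|st)[0-9]*"
        devnode "^hd[a-z]"
        devnode "^sda$"
}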

Make sure the default_path_grouping_policy option in the defaults section is set to failover. Here is my sample config:

defaults {
       multipath_tool  "/sbin/multipath -v0"
       udev_dir        /dev
       polling_interval 10
       default_selector        "round-robin 0"
       default_path_grouping_policy    failover
       default_getuid_callout  "/sbin/scsi_id -g -u -s /block/%n"
       default_prio_callout    "/bin/true"
       default_features        "0"
       rr_min_io              100
       failback                immediate
}
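
Optionally, you can bind a WWID to a friendly alias in a multipaths section so the device shows up as /dev/mapper/mylun instead of mpath0. The WWID and alias below are placeholders; use the value that scsi_id reports for your LUN:

multipaths {
       multipath {
               wwid    3600508b4000156d700012000000b0000
               alias   mylun
       }
}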

Save and close the file. Type the following commands to load the drivers:
# modprobe dm-multipath
# modprobe dm-round-robin
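You can confirm that the modules loaded (the module names appear with underscores):
# lsmod | grep dm_multipath
# lsmod | grep dm_round_robin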

Start the service, enter:
# /etc/init.d/multipathd start
The multipath command detects multiple paths to devices and coalesces them for fail-over or performance purposes:
# multipath -v2
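To review the resulting multipath devices and the state of each underlying path, run:
# multipath -ll
# dmsetup ls --target multipath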
Turn on the service at boot:
# /sbin/chkconfig multipathd on
Finally, create device maps from partition tables:
# kpartx -a /dev/mapper/mpath#
You need to use fdisk on the underlying disks such as /dev/sdc.
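
A typical sequence, assuming the first multipath device came up as mpath0 and one of its underlying paths is /dev/sdc, might look like this (device names are examples only; the new partition typically appears as /dev/mapper/mpath0p1):
# fdisk /dev/sdc
# kpartx -a /dev/mapper/mpath0
# mkfs.ext3 /dev/mapper/mpath0p1
# mount /dev/mapper/mpath0p1 /mnt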

References:

  • man pages: kpartx, multipath, udev, dmsetup and hotplug

Howto: Create Shared Storage on Suse Linux using OCFS2 and Xen Virtualization


Arun Singh shows us how to create shared storage on SUSE Linux Enterprise Server 10 using OCFS2 (Oracle Cluster File System v2) and Xen virtualization technology. Enterprise-grade shared storage can cost a lot of money, but no expensive shared storage is used here. The information provided also works with real shared storage:

This paper will help you understand the steps involved in creating shared storage without using expensive shared storage hardware. Using this information you can create shared storage that all Xen guest OSes and the host can use, avoiding copying files between guests. Hope you will find this paper useful.

You can easily port these instructions to Red Hat or any other Linux distro. You can also use Red Hat's Global File System (GFS) instead. We often use Fibre Channel or iSCSI devices for GFS shared storage.

Creating shared storage on SUSE Linux Enterprise Server 10 using Xen and OCFS2 [novell.com]

On a related note, there is also an article about creating a highly available VMware Server environment on a Debian Etch system.