
Multipath I/O is a fault-tolerance and performance enhancement technique whereby there is more than one physical path between the CPU in a computer system and its mass storage devices through the buses, controllers, switches, and bridge devices connecting them.

A simple example would be a SCSI disk connected to two SCSI controllers on the same computer, or a disk connected to two Fibre Channel ports. Should one controller, port, or switch fail, the operating system can route I/O through the remaining controller transparently, with no change visible to applications other than perhaps incremental latency.

This is useful for:

  1. Dynamic load balancing
  2. Traffic shaping
  3. Automatic path management
  4. Dynamic reconfiguration

Linux device-mapper

In the Linux kernel, the device-mapper serves as a generic framework to map one block device onto another. It forms the foundation of LVM2 and EVMS, software RAIDs, dm-crypt disk encryption, and offers additional features such as file-system snapshots.

Device-mapper works by processing data passed in from a virtual block device that it itself provides, and then passing the resultant data on to another block device.
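To make this concrete, here is a minimal sketch of a linear mapping created by hand with dmsetup (the device /dev/sdb and the 204800-sector size are assumptions; adjust them for your system). The resulting /dev/mapper/demo is a virtual block device that simply forwards I/O to the first 100 MB of /dev/sdb:

# echo "0 204800 linear /dev/sdb 0" | dmsetup create demo
# dmsetup table demo
# dmsetup remove demo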

How do I set up device-mapper multipathing in CentOS / RHEL 4 update 2 or above?

Open the /etc/multipath.conf file, enter:
# vi /etc/multipath.conf
Make sure the following block is commented out; as shipped, it blacklists all devices, which disables multipathing:

#devnode_blacklist {
#        devnode "*"
#}
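If you only want multipath to ignore specific local devices rather than everything, you can instead list them individually (a sketch; the patterns below cover IDE disks and a local /dev/sda, adjust them to your hardware):

devnode_blacklist {
        devnode "^hd[a-z]"
        devnode "^sda$"
}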

Make sure the default_path_grouping_policy option in the defaults section is set to failover. Here is my sample config:

defaults {
       multipath_tool  "/sbin/multipath -v0"
       udev_dir        /dev
       polling_interval 10
       default_selector        "round-robin 0"
       default_path_grouping_policy    failover
       default_getuid_callout  "/sbin/scsi_id -g -u -s /block/%n"
       default_prio_callout    "/bin/true"
       default_features        "0"
       rr_min_io              100
       failback                immediate
}
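Optionally, you can give a particular device a friendly alias with a multipaths section (a sketch; the WWID shown is a placeholder, substitute the value reported by scsi_id for your device):

multipaths {
       multipath {
               wwid    <WWID-from-scsi_id>
               alias   mpath0
       }
}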

Save and close the file. Type the following command to load drivers:
# modprobe dm-multipath
# modprobe dm-round-robin
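You can confirm that the modules loaded:

# lsmod | grep dm_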

Start the service, enter:
# /etc/init.d/multipathd start
The multipath command detects multiple paths to devices and coalesces them for fail-over or performance reasons:
# multipath -v2
Turn the service on at boot:
# /sbin/chkconfig multipathd on
Finally, create device maps from partition tables:
# kpartx -a /dev/mapper/mpath#
Note that you need to run fdisk on the underlying disks (such as /dev/sdc) to create partitions; kpartx then maps those partition tables onto the corresponding /dev/mapper/mpath# device.
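For example (a sketch with illustrative device names; your multipath device may be mpath0 and its first partition mpath0p1), a full round looks like this:

# fdisk /dev/sdc
# multipath -ll
# kpartx -a /dev/mapper/mpath0
# mkfs.ext3 /dev/mapper/mpath0p1
# mount /dev/mapper/mpath0p1 /mnt/data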

References:

  • man pages: kpartx, multipath, udev, dmsetup and hotplug

Security Update for Red Hat Linux Kernel

Red Hat has issued a security update for its kernel that fixes issues in the following packages. This update has been rated as having important security impact on RHEL 4.x / 5.x, and you are advised to update your systems as soon as possible.

=> Updated GFS-kernel, gnbd-kernel, dlm-kernel, cmirror-kernel, cman-kernel, Virtualization_Guide, Cluster_Administration, and Global_File_System packages that fix module loading and other issues under RHEL 4.x and 5.x are available now.

How do I update my system?

Simply type the following command:
# yum update
Sample output:

Loading "rhnplugin" plugin
Loading "security" plugin
rhel-x86_64-server-vt-5   100% |=========================| 1.2 kB    00:00
rhel-x86_64-server-5      100% |=========================| 1.2 kB    00:00
Skipping security plugin, no data
Setting up Update Process
Resolving Dependencies
Skipping security plugin, no data
--> Running transaction check
---> Package kernel.x86_64 0:2.6.18-92.1.6.el5 set to be installed
---> Package kernel-devel.x86_64 0:2.6.18-92.1.6.el5 set to be installed
---> Package kernel-headers.x86_64 0:2.6.18-92.1.6.el5 set to be updated
---> Package Deployment_Guide-en-US.noarch 0:5.2-11 set to be updated
--> Finished Dependency Resolution
--> Running transaction check
---> Package kernel.x86_64 0:2.6.18-53.1.21.el5 set to be erased
---> Package kernel.x86_64 0:2.6.18-92.1.6.el5 set to be installed
---> Package kernel-devel.x86_64 0:2.6.18-92.1.6.el5 set to be installed
---> Package kernel-headers.x86_64 0:2.6.18-92.1.6.el5 set to be updated
---> Package Deployment_Guide-en-US.noarch 0:5.2-11 set to be updated
---> Package kernel-devel.x86_64 0:2.6.18-53.1.21.el5 set to be erased
--> Finished Dependency Resolution
Dependencies Resolved
=============================================================================
 Package                 Arch       Version          Repository        Size
=============================================================================
Installing:
 kernel                  x86_64     2.6.18-92.1.6.el5  rhel-x86_64-server-5   16 M
 kernel-devel            x86_64     2.6.18-92.1.6.el5  rhel-x86_64-server-5  5.0 M
Updating:
 Deployment_Guide-en-US  noarch     5.2-11           rhel-x86_64-server-5  3.5 M
 kernel-headers          x86_64     2.6.18-92.1.6.el5  rhel-x86_64-server-5  880 k
Removing:
 kernel                  x86_64     2.6.18-53.1.21.el5  installed          75 M
 kernel-devel            x86_64     2.6.18-53.1.21.el5  installed          15 M
Transaction Summary
=============================================================================
Install      2 Package(s)
Update       2 Package(s)
Remove       2 Package(s)
Total download size: 25 M
Is this ok [y/N]: y
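Since a new kernel (2.6.18-92.1.6.el5) was installed, reboot into it and verify the running version:

# shutdown -r now
# uname -r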

Arun Singh shows us how to create shared storage on SUSE Linux Enterprise Server 10 using OCFS2 (Oracle Cluster File System v2) and Xen virtualization technology. Enterprise-grade shared storage can cost a lot of money, but no expensive shared storage is used here. The information provided works with real shared storage as well:

This paper helps you understand the steps involved in creating shared storage without buying expensive shared-storage hardware. Using this information you can create shared storage used by all Xen guest OSes and the host, avoiding copying files between guests. Hope you will find this paper useful.

You can easily port these instructions to Red Hat or any other Linux distribution. You can also use Red Hat's Global File System (GFS); Fibre Channel or iSCSI devices are often used for GFS shared storage.
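As a rough sketch of what the OCFS2 side involves (the device /dev/sdb1, label, mount point, and four node slots are assumptions to adapt; the cluster must already be defined in /etc/ocfs2/cluster.conf), formatting and mounting the shared volume looks like this:

# mkfs.ocfs2 -b 4K -C 32K -N 4 -L shared01 /dev/sdb1
# mount -t ocfs2 /dev/sdb1 /srv/xen-shared

Every node that mounts the volume needs the O2CB cluster service running (/etc/init.d/o2cb).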

Creating shared storage on SUSE Linux Enterprise Server 10 using Xen and OCFS2 [novell.com]

On a related note, there is also an article about creating a highly available VMware Server environment on a Debian Etch system.