Linux tgtadm: Setup iSCSI Target (SAN)

The Linux target framework (tgt) aims to simplify the creation and maintenance of various SCSI target drivers (iSCSI, Fibre Channel, SRP, and so on). Its key goals are clean integration into the SCSI mid-layer and implementing a large portion of tgt in user space.

The developer of IET is also helping to develop the Linux SCSI target framework (stgt), which looks like it might lead to an iSCSI target implementation with an upstream kernel component. An iSCSI target can be useful to:

a] Set up stateless servers / clients (used in diskless setups).
b] Share disks and tape drives with remote clients over a LAN, WAN, or the Internet.
c] Set up a SAN (storage array).
d] Set up a load-balanced web cluster using a cluster-aware Linux file system, and so on.

In this tutorial you will learn how to build a fully functional Linux iSCSI SAN using the tgt framework.
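To give you a feel for what is involved, here is a minimal sketch of exporting a disk with tgtadm. The IQN and the backing device /dev/sdb are placeholder values for this example, and the tgtd daemon must already be running. The first command creates target id 1, the second attaches /dev/sdb as LUN 1, the third allows all initiators to connect (restrict this by IP address in production), and the last shows the resulting configuration:

# tgtadm --lld iscsi --op new --mode target --tid 1 -T iqn.2012-05.com.example:storage.disk1
# tgtadm --lld iscsi --op new --mode logicalunit --tid 1 --lun 1 -b /dev/sdb
# tgtadm --lld iscsi --op bind --mode target --tid 1 -I ALL
# tgtadm --lld iscsi --op show --mode target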

Seagate Barracuda: 1.5TB Hard Drive Launched

Wow, this is a large desktop hard disk for storing movies, TV shows, music / MP3s, and photos. You can also load multiple operating systems using VMware or other virtualization software for testing purposes. The drive comes with a 5-year warranty and a 300MB/s interface transfer rate. But how reliable is a 1.5TB hard disk?

CentOS / Red Hat Enterprise Linux 5.2 Poor NFS Performance and Solution

A few days ago I noticed that NFS performance between a web server node and the NFS server had dropped by 50%. NFS was already tuned, and the only recent change was the updated Red Hat kernel in v5.2. I noticed the same trend on the CentOS 5.2 64-bit edition.
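For reference, client-side NFS mounts in this kind of web-cluster setup are usually tuned with larger read and write block sizes, along these lines (the hostname, export path, and values are illustrative, not the fix for the 5.2 regression itself):

# mount -t nfs -o rsize=32768,wsize=32768,hard,intr nfs1.example.com:/var/www /var/www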

Red Hat / CentOS Linux 4: Setup Device Mapper Multipathing

Multipath I/O is a fault-tolerance and performance enhancement technique whereby there is more than one physical path between the CPU in a computer system and its mass storage devices through the buses, controllers, switches, and bridge devices connecting them.

A simple example would be a SCSI disk connected to two SCSI controllers on the same computer, or a disk connected to two Fibre Channel ports. Should one controller, port, or switch fail, the operating system can route I/O through the remaining controller transparently, with no changes visible to applications other than perhaps incremental latency.

This is useful for:

  1. Dynamic load balancing
  2. Traffic shaping
  3. Automatic path management
  4. Dynamic reconfiguration

Linux device-mapper

In the Linux kernel, the device-mapper serves as a generic framework to map one block device onto another. It forms the foundation of LVM2 and EVMS, software RAID, and dm-crypt disk encryption, and it offers additional features such as file system snapshots.

Device-mapper works by processing data passed in from a virtual block device that it itself provides, and then passing the resulting data on to another block device.
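You can see this mapping idea in isolation with dmsetup (see the references at the end). This is a minimal sketch; /dev/sdb is a placeholder disk, and the table line maps the first 2048 sectors (512 bytes each) of a new virtual device named "example" straight onto it:

# echo "0 2048 linear /dev/sdb 0" | dmsetup create example
# dmsetup ls
# dmsetup remove example

The multipath target used below is just another such mapping: one virtual block device sitting on top of several physical paths.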

How do I set up device-mapper multipathing in CentOS / RHEL 4 update 2 or above?

Open the /etc/multipath.conf file, enter:
# vi /etc/multipath.conf
Make sure the following default blacklist block is commented out. As shipped, it blacklists all devices (devnode "*"), which prevents multipath from detecting any paths:

#devnode_blacklist {
#        devnode "*"
#}

Make sure the default_path_grouping_policy option in the defaults section is set to failover. Here is my sample config:

defaults {
       multipath_tool  "/sbin/multipath -v0"
       udev_dir        /dev
       polling_interval 10
       default_selector        "round-robin 0"
       default_path_grouping_policy    failover
       default_getuid_callout  "/sbin/scsi_id -g -u -s /block/%n"
       default_prio_callout    "/bin/true"
       default_features        "0"
       rr_min_io              100
       failback                immediate
}

Save and close the file. Type the following command to load drivers:
# modprobe dm-multipath
# modprobe dm-round-robin
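To confirm both modules loaded (lsmod lists them with underscores, as dm_multipath and dm_round_robin), enter:
# lsmod | grep dm_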

Start the service, enter:
# /etc/init.d/multipathd start
The multipath command detects multiple paths to devices and coalesces them for fail-over or performance reasons:
# multipath -v2
Turn the service on at boot:
# /sbin/chkconfig multipathd on
Finally, create device maps from the partition tables:
# kpartx -a /dev/mapper/mpath#
Note that you run fdisk on the underlying disk (such as /dev/sdc); kpartx then makes those partitions available through the multipath device.
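Putting the last two steps together, assuming the coalesced device came up as mpath0 (a placeholder name; check the output of multipath -ll for yours) and you created one partition on it:

# multipath -ll
# kpartx -a /dev/mapper/mpath0
# mkfs.ext3 /dev/mapper/mpath0p1
# mount /dev/mapper/mpath0p1 /mnt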

References:

  • man pages: kpartx, multipath, udev, dmsetup, and hotplug