Accessing A Single iSCSI LUN From Multiple Linux Systems


Q. I’d like to share iSCSI storage with our 3 node web server cluster. How can multiple systems access a single iSCSI LUN under the Linux operating system? Can I connect multiple servers to a single iSCSI LUN?


A. Short answer – no, not with a regular (non-cluster) file system.

Long answer – it is possible for multiple systems to access a single iSCSI LUN, but only with a cluster-aware file system such as GFS or OCFS2. iSCSI itself provides no file locking, so mounting an ordinary file system (e.g. ext3) on the same LUN from more than one node can cause serious data corruption. Alternatively, you can mount the LUN on one server and share it with the others over NFS or Samba. To share a LUN directly among the nodes, use a cluster-aware file system.
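As a rough sketch of the cluster-aware approach, the commands below attach a shared LUN with open-iscsi and mount it as OCFS2 on every node. The target address, IQN, device name, and mount point are all hypothetical examples – substitute your own values:

```shell
# Run on EACH cluster node: discover and log in to the iSCSI target
# (example portal IP and IQN -- replace with your own).
iscsiadm -m discovery -t sendtargets -p 192.168.1.10
iscsiadm -m node -T iqn.2008-01.com.example:storage.lun1 -p 192.168.1.10 --login

# Format ONCE, from a single node only, with one slot per cluster member:
mkfs.ocfs2 -N 3 -L webdata /dev/sdb

# With the o2cb cluster stack configured and running on all three nodes,
# each node can then mount the same LUN concurrently:
mount -t ocfs2 /dev/sdb /var/www/shared
```

The OCFS2 distributed lock manager is what makes the concurrent mounts safe; without it, the same `mount` on a second node is exactly the corruption scenario described above.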


Posted by: Vivek Gite

The author is the creator of nixCraft and a seasoned sysadmin, DevOps engineer, and trainer on the Linux operating system and Unix shell scripting. Get the latest tutorials on SysAdmin, Linux/Unix and open source topics via RSS/XML feed or weekly email newsletter.

4 comments

  1. Using NFS or Samba will allow all the hosts to access the same file system, but that solution comes with a single point of failure in the NFS or SMB server. The iSCSI server itself is a single point of failure as well. Storage on a highly-available SAN with multiple paths to the end nodes has proven to be an effective solution for this scenario.

  2. To alleviate the single point of failure in an iSCSI SAN, it’s best to, as mentioned above, use multiple paths, but also to use DRBD to mirror data from one storage box to another, and Heartbeat to make sure that a working machine takes over when the active one fails. There are a couple of good FAQs on how to do that too. Even better, both DRBD and Heartbeat are available in yum.
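    A minimal sketch of the DRBD side of this setup, assuming two storage boxes and a resource named `storage` already defined in `/etc/drbd.d/` (the resource name and the idea of DRBD backing the iSCSI target's block device are illustrative assumptions):

    ```shell
    # On BOTH storage boxes, after defining the resource configuration:
    drbdadm create-md storage     # write DRBD metadata on the backing device
    drbdadm up storage            # attach the disk and connect to the peer

    # On the box that should start out as the active (primary) copy:
    drbdadm -- --overwrite-data-of-peer primary storage

    # Heartbeat is then configured to export the iSCSI target from
    # whichever box currently holds the DRBD primary role, so a dead
    # primary fails over to the surviving mirror.
    ```

    Protocol C (fully synchronous replication) is the usual choice here, since an asynchronous mirror could lose acknowledged writes on failover.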

  3. I have been using an iSCSI SAN in production for over a year with open-iscsi, heartbeat, and samba. The way I look at it is that you need to go with the simplest solution possible. I have used DRBD and OCFS2, and to be quite honest I think that those solutions have too many moving parts. With DRBD in particular, I have had several issues including complete breakage during host upgrades, resulting in many hours lost recovering from backups. By comparison, samba has had a relatively clean upgrade process for me over the past several years. I’m also not sure how DRBD addresses the situation described in the original question. It sounds like the scenario involves a single iSCSI storage device. Though with multiple storage devices, DRBD could be applicable for synchronization.

    With the open-iscsi, heartbeat, and samba solution, you simply use open-iscsi to make the SAN available to the servers, heartbeat to manage the SAN mountpoint and samba daemon startup, and samba to provide file-level access to the cluster. Block-level access via OCFS2 is a killer feature, but it comes at the cost of added configuration complexity and runtime overhead.
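    For illustration, the Heartbeat piece of this arrangement can be a single `haresources` line; the node name, floating IP, device, and mount point below are hypothetical:

    ```shell
    # /etc/ha.d/haresources -- one line per resource group (V1-style config).
    # Node "web1" normally owns the floating IP, mounts the iSCSI-backed
    # file system, then starts Samba; on failover the standby node takes
    # over the whole group in the same order:
    #
    #   web1 IPaddr::192.168.1.50 Filesystem::/dev/sdb1::/srv/share::ext3 smb
    ```

    Because only one node mounts the LUN at a time, a plain file system such as ext3 is safe here, which is exactly why this is simpler than the OCFS2 route.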

    Multipathing is completely different. If you need maximum uptime, then add the additional hardware (Ethernet cards and switch) and configure, e.g., device bonding. I suppose there’s also a marginal throughput benefit to this approach depending on many factors, such as the RAID type, disc partitioning, file system, and I/O pattern; taking advantage of it from multiple nodes at once requires block-level access (a cluster file system) to the shared LUN.
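    A sketch of the multipath variant, assuming two NICs per node, two portals on the SAN, and `device-mapper-multipath` installed (the portal addresses and IQN are example values):

    ```shell
    # Log in to the SAME target through both portals (one per NIC/subnet):
    iscsiadm -m node -T iqn.2008-01.com.example:storage.lun1 -p 192.168.1.10 --login
    iscsiadm -m node -T iqn.2008-01.com.example:storage.lun1 -p 192.168.2.10 --login

    # The kernel now sees the LUN twice (e.g. /dev/sdb and /dev/sdc);
    # multipathd coalesces both paths into one /dev/mapper device.
    # List the resulting map and the state of each path:
    multipath -ll
    ```

    The cluster file system is then built on the `/dev/mapper` device rather than on either raw path, so the loss of one NIC or switch is transparent to the nodes.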
