CentOS / Red Hat Linux: Install and manage iSCSI Volume

Last updated: February 18, 2011

Internet SCSI (iSCSI) is a network protocol that allows you to use the SCSI protocol over TCP/IP networks. It is a good alternative to Fibre Channel-based SANs and gives you access to SAN storage over Ethernet. You can easily manage, mount and format an iSCSI volume under Linux.

Open-iSCSI Project

The Open-iSCSI project is a high-performance, transport-independent, multi-platform implementation of iSCSI. Open-iSCSI is partitioned into user and kernel parts.

Instructions are tested on:
[a] RHEL 5
[b] CentOS 5
[c] Fedora 7
[d] Debian / Ubuntu Linux

Install Required Package

iscsi-initiator-utils RPM package – provides the iSCSI daemon (iscsid), as well as the utility programs used to manage it. iSCSI is a protocol for distributed disk access using SCSI commands sent over Internet Protocol networks. This package is available under Red Hat Enterprise Linux / CentOS / Fedora Linux and can be installed using the yum command:
# yum install iscsi-initiator-utils
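
To verify that the package was installed, you can query the RPM database (an optional sanity check):
# rpm -q iscsi-initiator-utils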

A note about Debian / Ubuntu Linux

If you are using Debian / Ubuntu Linux, install the open-iscsi package by entering:
$ sudo apt-get install open-iscsi

iSCSI Configuration

There are three steps needed to set up a system to use iSCSI storage:

  1. iSCSI startup, using the init script or manual startup. You need to edit and configure iSCSI via the /etc/iscsi/iscsid.conf file.
  2. Discover targets.
  3. Automate target logins for future system reboots.

Before you begin, you also need to obtain the iSCSI username, password and storage server IP address (the target host).

Step # 1: Configure iSCSI

Open /etc/iscsi/iscsid.conf with the vi text editor:
# vi /etc/iscsi/iscsid.conf
Set up the username and password:
node.session.auth.username = My_ISCSI_USR_NAME
node.session.auth.password = MyPassword
discovery.sendtargets.auth.username = My_ISCSI_USR_NAME
discovery.sendtargets.auth.password = MyPassword

Where,

  • node.session.* is used to set a CHAP username and password for initiator authentication by the target(s).
  • discovery.sendtargets.* is used to set a discovery session CHAP username and password for initiator authentication by the target(s) (see the note below).
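
Note that the CHAP settings above only take effect when the corresponding authentication method is enabled. Assuming your target requires CHAP for both discovery and normal sessions, the matching directives (also visible in the stock iscsid.conf excerpt quoted in the comments below) are:
node.session.auth.authmethod = CHAP
discovery.sendtargets.auth.authmethod = CHAP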

You may also need to tweak and set other options; refer to the iscsid.conf man page for more information. Now start the iscsi service:
# /etc/init.d/iscsi start

Step # 2: Discover targets

Now use the iscsiadm command, a command-line tool that allows discovery of and login to iSCSI targets, as well as access to and management of the open-iscsi database. If your storage server IP address is 192.168.1.5, enter:
# iscsiadm -m discovery -t sendtargets -p 192.168.1.5
# /etc/init.d/iscsi restart
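
Depending on your node.startup setting, the initiator may not log in to a discovered target automatically; in that case log in explicitly before looking for a new disk (see the reader comment further below for a working example). The target IQN shown here is only a placeholder:
# iscsiadm -m node -T iqn.2008-01.com.example:storage.lun1 -p 192.168.1.5 --login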

A new block device should now appear under the /dev directory. To obtain the new device name, type:
# fdisk -l
or
# tail -f /var/log/messages
Output:

Oct 10 12:42:20 ora9is2 kernel:   Vendor: EQLOGIC   Model: 100E-00           Rev: 3.2 
Oct 10 12:42:20 ora9is2 kernel:   Type:   Direct-Access                      ANSI SCSI revision: 05
Oct 10 12:42:20 ora9is2 kernel: SCSI device sdd: 41963520 512-byte hdwr sectors (21485 MB)
Oct 10 12:42:20 ora9is2 kernel: sdd: Write Protect is off
Oct 10 12:42:20 ora9is2 kernel: SCSI device sdd: drive cache: write through
Oct 10 12:42:20 ora9is2 kernel: SCSI device sdd: 41963520 512-byte hdwr sectors (21485 MB)
Oct 10 12:42:20 ora9is2 kernel: sdd: Write Protect is off
Oct 10 12:42:20 ora9is2 kernel: SCSI device sdd: drive cache: write through
Oct 10 12:42:20 ora9is2 kernel:  sdd: unknown partition table
Oct 10 12:42:20 ora9is2 kernel: sd 3:0:0:0: Attached scsi disk sdd
Oct 10 12:42:20 ora9is2 kernel: sd 3:0:0:0: Attached scsi generic sg3 type 0
Oct 10 12:42:20 ora9is2 kernel: rtc: lost some interrupts at 2048Hz.
Oct 10 12:42:20 ora9is2 iscsid: connection0:0 is operational now

/dev/sdd is my new block device.
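
You can also confirm that the iSCSI session itself is up by listing active sessions:
# iscsiadm -m session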

Step # 3: Format and Mount iSCSI Volume

You can now partition and create a filesystem on the target using the usual fdisk and mkfs.ext3 commands:
# fdisk /dev/sdd
# mke2fs -j -m 0 -O dir_index /dev/sdd1

OR
# mkfs.ext3 /dev/sdd1

Tip: If your volume is large (for example, 1TB), run mkfs.ext3 in the background using nohup:
# nohup mkfs.ext3 /dev/sdd1 &

Mount the new partition:
# mkdir /mnt/iscsi
# mount /dev/sdd1 /mnt/iscsi
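
Verify that the new filesystem is mounted:
# df -h /mnt/iscsi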

Step # 4: Mount iSCSI drive automatically at boot time

First, make sure the iscsi service is turned on at boot time:
# chkconfig iscsi on
Open the /etc/fstab file and append the following entry:
/dev/sdd1 /mnt/iscsi ext3 _netdev 0 0
Save and close the file.
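
On RHEL / CentOS 5 the netfs service is what actually mounts _netdev entries at boot (see the reader note about netfs in the comments below), so make sure it is enabled as well. You can then test the new entry without rebooting by unmounting the volume and remounting everything listed in fstab:
# chkconfig netfs on
# umount /mnt/iscsi
# mount -a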


56 comments

  1. I am trying to follow your document; I ran these steps and got these errors.
    Any good solution?
    # /etc/init.d/iscsi start
    Checking iscsi [ OK ]
    Loading iscsi driver: FATAL: Module iscsi_sfnet not found. [FAILED]

    #/sbin/iscsi-ls -l
    iSCSI driver is not loaded

  2. Hiya, well done, thanks.
    Works perfectly with Scientific Linux 5 (32 and 64 bit) and an MSA 2012i; I just used xfs for my installation.

    cheers/zdrowka!

  3. When I run iscsiadm I get the following error. Help!

    10.42.40.198 is the ip address of the server running iscsi and iscsid.

    [[email protected] iscsi]# iscsiadm -m discovery -t sendtargets -p 10.42.40.198
    iscsiadm: cannot make connection to 10.42.40.198:3260 (111)
    iscsiadm: connection to discovery address 10.42.40.198 failed
    iscsiadm: cannot make connection to 10.42.40.198:3260 (111)
    iscsiadm: connection to discovery address 10.42.40.198 failed
    iscsiadm: cannot make connection to 10.42.40.198:3260 (111)
    iscsiadm: connection to discovery address 10.42.40.198 failed

    Paras.

    1. I had the same issue. Turns out SELinux is preventing access. I don’t really require that on this box, so I set it to permissive.

  4. How do you tell which targets are associated with which device? The discovery process found both iSCSI targets on the ZFS server I’m using, and I want to make sure I’m making a new filesystem on the right one…

  5. Robert and Paras: Is a network ACL installed on the iSCSI target system?

    Is the connection working properly? If you are using, for example, Openfiler as your
    target system, check the status of the connection using “netstat -an | grep 3260”; you will see the remote system trying to connect to your filer.

    Or you can add a high debug level to your iscsiadm command:
    iscsiadm -d 9 -m discovery -t sendtargets -p 192.168.1.5

    Bjoern

  6. You can also label the devices; this way, if they change or are renamed etc. on reboot, you will be mounting by label and not by device name.

    Run e2label on the device, e.g.:
    e2label /dev/vg4/lv4 /home4

    Then in /etc/fstab:
    LABEL=/home4 /home4 ext3 _netdev 0 0

  7. Great guide, wanted to say thanks. This saved me God knows how many hours of research. Also wanted to point out a great utility,
    lshw,
    which can be installed using
    yum install lshw
    This makes it about 100 times easier to find the new device name.

  8. Little note: the “netfs” service must be on in chkconfig on Red Hat, else the mount points are not mounted.

  9. Thanks for the guide,

    Maybe a stupid question, but I don’t understand where I can find or set the initiator name? Or don’t I need it? What should I set on the storage side?

    Thank you,
    Jap

  10. Robert and Paras, you probably get this error because iscsi and iscsid are not started on your client nodes; reboot those (or use service iscsi start; service iscsid start) and you will be able to discover your devices.

    I had the same error; chkconfig showed me that they were on – but a reboot of the clients solved my problems.

  11. Good job !!!
    But when I check my fs on my DAS, this is the result:

    real 0m24.005s
    user 0m0.000s
    sys 0m0.000s
    ##########
    0+10 records in
    0+10 records out

    real 0m0.001s
    user 0m0.000s
    sys 0m0.001s
    ##########
    0+10 records in
    0+10 records out

    real 0m0.001s
    user 0m0.000s
    sys 0m0.001s
    ##########
    0+9 records in
    0+9 records out

    real 0m0.535s
    user 0m0.000s
    sys 0m0.001s

    I do not know why the response time is sometimes very long

    Thx

  12. I was able to mount and use the device but after stopping the iscsi service and unmounting the device, I wasn’t able to mount it again.

  13. additional:

    This is how it looks like when I issue fdisk -l

    Disk /dev/sdn: 143.0 GB, 143034155008 bytes
    255 heads, 63 sectors/track, 17389 cylinders
    Units = cylinders of 16065 * 512 = 8225280 bytes

    Device Boot Start End Blocks Id System
    /dev/sdn1 1 17389 139677111 83 Linux

    Disk /dev/sdn1: 143.0 GB, 143029361664 bytes
    255 heads, 63 sectors/track, 17388 cylinders
    Units = cylinders of 16065 * 512 = 8225280 bytes

    Disk /dev/sdn1 doesn’t contain a valid partition table

    Disk /dev/sdo: 968.0 GB, 968016265216 bytes
    255 heads, 63 sectors/track, 117687 cylinders
    Units = cylinders of 16065 * 512 = 8225280 bytes

    Device Boot Start End Blocks Id System
    /dev/sdo1 1 117687 945320796 83 Linux

    The one with the problem is /dev/sdo1. I’m expecting it to look like sdn1.
    (sdo1 and sdn1 are partitions of the sdo and sdn devices.)

    I am using RHEL 4 AS, 2.6.9-42.ELsmp,

    iscsitarget-kernel-smp-0.4.12-6_2.6.9_42.EL
    iscsitarget-0.4.12-6
    iscsi-initiator-utils-4.0.3.0-4

    Regards,
    Jay A

  14. I’m having a problem…

    My iSCSI volume is 5GB on a RAID5 array. I could find no way to successfully partition this, so I simply formatted the whole volume. When I do fdisk -l I get “Disk /dev/sdb doesn’t contain a valid partition table”, but this doesn’t seem to be a major issue.

    When I start up the server, iscsi finds the target, /dev/sdb is created and then mounted via fstab. I can connect to the volume fine.

    If I restart iscsi for any reason, I lose the disk – it disappears and is re-detected as /dev/sdc instead.

    I am also unable to unmount it without getting “device is busy”.

    Any ideas?

  15. Hi I have the same problem as Paras Pradhan except my error code is:

    [[email protected] ~]# iscsiadm -m discovery -t sendtargets -p 192.168.2.0
    iscsiadm: cannot make connection to 192.168.2.0:3260 (101)
    iscsiadm: connection to discovery address 192.168.2.0 failed
    iscsiadm: cannot make connection to 192.168.2.0:3260 (101)
    iscsiadm: connection to discovery address 192.168.2.0 failed
    iscsiadm: cannot make connection to 192.168.2.0:3260 (101)
    iscsiadm: connection to discovery address 192.168.2.0 failed
    iscsiadm: cannot make connection to 192.168.2.0:3260 (101)
    iscsiadm: connection to discovery address 192.168.2.0 failed
    iscsiadm: cannot make connection to 192.168.2.0:3260 (101)
    iscsiadm: connection to discovery address 192.168.2.0 failed
    iscsiadm: connection login retries (reopen_max) 5 exceeded

    I tried service iscsi restart; service iscsid restart and tried the command again, but I am still not able to connect to it. Did I configure /etc/iscsi/iscsid.conf correctly?

    # to CHAP. The default is None.
    node.session.auth.authmethod = CHAP

    # To set a CHAP username and password for initiator
    # authentication by the target(s), uncomment the following lines:
    node.session.auth.username = [our username goes here]
    node.session.auth.password = [our password goes here]

    # To set a CHAP username and password for target(s)
    # authentication by the initiator, uncomment the following lines:
    node.session.auth.username_in = [our username goes here]
    node.session.auth.password_in = [our password goes here]

    # To enable CHAP authentication for a discovery session to the target
    # set discovery.sendtargets.auth.authmethod to CHAP. The default is None.
    #discovery.sendtargets.auth.authmethod = CHAP

    # To set a discovery session CHAP username and password for the initiator
    # authentication by the target(s), uncomment the following lines:
    #discovery.sendtargets.auth.username = username
    #discovery.sendtargets.auth.password = password

    # To set a discovery session CHAP username and password for target(s)
    # authentication by the initiator, uncomment the following lines:
    #discovery.sendtargets.auth.username_in = username
    #discovery.sendtargets.auth.password_in = password

    Thanks,
    Eric Averitt

  16. I’m using CentOS 5.

    It works perfectly 99% of the time, but my problem is that if I lose the connection to the target for any reason, if iscsi is restarted, or if the volume is unmounted, the only way to get it to work properly again is to reboot the server. It doesn’t seem as reliable as people say.

  17. Hi,

    I do not use a partition anymore. I have been able to mount and umount the iSCSI volume and use it for 19 days now without a problem.

    Chris:
    Do you really need to restart the server? How about just restarting the iscsi services?

    Rgds,
    Jay A

  18. Hi Jay,
    As I mentioned in my earlier message, if I restart the iscsi service, the device becomes detected as sdc instead of sdb.

  19. I’ve decided to reduce the size of my iSCSI targets, change the block size back to 512 and try again. I wasn’t happy with being unable to partition the 5TB volume. I now have four iSCSI targets to connect to, each around 1.3 TB.

    I want to ensure that these are connected the same way each time. I see no way in the config file to specify each target – it seems to just auto-discover, and I don’t want to auto-discover. How can I specify the target’s name or IQN so that it always gets the same device name each time? The device name appears to be dynamically created on boot. What happens if it discovers one device before another the next time it boots? Will it apply a different device name than before? I need to be able to specify this somehow.

    On the storage device config I can choose a target name and a CHAP username and password for each target – each has its own IQN. I can’t work out how to use CHAP authentication to connect to four separate targets. The config file only has an option to specify one username and password, and there is no way of specifying the target. It doesn’t make sense.

    I need to do the following…

    1. When the iscsi service is started, connect to four target devices using a different CHAP username and password for each one.
    2. Ensure that each device always has the same device name.

    If someone could point me in the direction of a tutorial or document that shows specifically how to do this, I would be very grateful. Thank you for your time.

  20. How to make sure multiple targets always connect correctly…

    I’ve been looking at this for a couple of days and here are my findings. The above tutorial is great if you have only one target iSCSI volume. If you have multiple targets to connect to, there are a few things to consider.

    1. When the targets are discovered, they won’t be discovered in the same order you set them up on your storage server.
    2. When you reboot the server or restart iSCSI, the targets will not always be assigned the same device names each time.
    3. The order in which the targets are discovered by the iscsi service is not necessarily the order in which they are assigned device names – so don’t assume that each discovered node will be assigned /dev/sdb, /dev/sdc, /dev/sdd in turn – they won’t.

    So, it is extremely difficult (or impossible) to identify which target volume applies to which device name if you enable all your targets at the same time and discover them all at once. To overcome the above issues, you simply need to set up, discover and configure each target volume one at a time – then label the partitions and mount them using the label instead of the device name. Here is how I did it…

    1. Create the first iSCSI target on your storage server. On some devices you will be able to create all of them and set all but one to disabled for now. I’m using a Thecus N8800. I have 12TB split into 8 iSCSI volumes. The important thing at this step is to ensure that you only have your first iSCSI target enabled for discovery.

    2. Follow the instructions in “Step # 1: Configure iSCSI” and “Step # 2: Discover targets” in the main tutorial above.

    Side note: I have yet to understand the node.session. and discovery.sendtargets. authentication options. On my storage device I have the option to create a CHAP username and password for each target, but I don’t know how this equates to either node.session. or discovery.sendtargets., or how to set this up in the config file for multiple targets with different usernames. So I currently have CHAP authentication disabled completely.

    3. From fdisk -l, note the device name and give it a label…

    # e2label /dev/sdb1 iscsi001

    4. Enable the second iSCSI target on your storage device and run the first step from “Step # 2: Discover targets” in the above tutorial…

    # iscsiadm -m discovery -t sendtargets -p 192.168.x.x
    # /etc/init.d/iscsi restart

    5. When you run fdisk -l now, you will see that your first target volume has been renamed to /dev/sdc1 and your new one has taken its place as /dev/sdb1.

    Complete your partitioning and formatting of the new volume and then label it…

    # e2label /dev/sdb1 iscsi002

    Of course, the labels will remain, no matter what the device name. So if you read the label…

    # e2label /dev/sdc1

    the response will be…

    iscsi001

    # e2label /dev/sdb1

    iscsi002

    Repeat all the above steps for all your targets so that each one has a device label.

    Now mount the target volumes using the label instead of the device name…

    mkdir /mnt/iscsi001;
    mkdir /mnt/iscsi002;
    mkdir /mnt/iscsi003;
    mkdir /mnt/iscsi004;

    In /etc/fstab

    LABEL=iscsi001 /mnt/iscsi001 ext3 _netdev 0 0
    LABEL=iscsi002 /mnt/iscsi002 ext3 _netdev 0 0
    LABEL=iscsi003 /mnt/iscsi003 ext3 _netdev 0 0
    LABEL=iscsi004 /mnt/iscsi004 ext3 _netdev 0 0

    From the command line

    mount -L iscsi001 /mnt/iscsi001

    Hope this helps anyone else who has the same dilemma :-)

    1. I hope this will solve my problem. When I reboot, the target changes device names, so I have to change my mount point; i.e. right now my volume is on /dev/sdk1 but before rebooting it was on /dev/sdn1 – so it does not remount from my fstab, and therefore I had to log in and mount it (and change my fstab, but that is really futile). I am concerned that this will not work because you are linking the label to the device name itself – unless it is actually adding the label to the volume; I have never used e2label before. I guess I will find out when I reboot in a year or two.

    2. Yes, labeling helps.
      If you always need the same /dev/sd* names, though, then use udev rules to assign the /dev/sd* name to a particular device.

  21. I was successful in creating the iSCSI volume and even used e2label to label /dev/sdc1. But every time I reboot the machine it does not mount. I have to log on and run mount -a.

    Please advise on how to make it mount after every reboot. It seems like it could not find /dev/sdc1 during boot up.

    Thanks!

  22. Thanks. This helped me get iSCSI set up very quickly.

    To get the IDs of the targets:

    udevinfo -q symlink -n /dev/sd??

    I tried adding them in fstab but that stopped the system from booting. I guess it was due to iscsi not being loaded at that point.

    For my scenario it was perfectly acceptable for me to put the mounts into /etc/rc.local

  23. Please Help!
    After doing “Step # 2: Discover targets” without errors (I think so?),
    when I use the fdisk -l command I can’t see the new drive from the iSCSI target.

    What should i do next?

    I’m Using CentOS 5.3 Server

    1. There’s a step missing from these instructions, at least for my case. I actually have to login to the targets before they appear, so you need to do:

      iscsiadm -m node -T (target) -p (portal) --login

      For example:

      iscsiadm -m node -T iqn.2011-09.org.foobar.san:my.iscsi.target.01 -p 192.168.1.66 --login

  24. Is it possible to get the target back to read-write mode so I can umount the target without rebooting? iSCSI shows that the connection is in a “running” state, but the device is marked read-only so remount doesn’t help.

  25. This was so helpful.
    I’d add tune2fs to create a label for the LUN, but otherwise, super helpful.
    How about creating one that shows you how to get rid of a LUN?

    You may have and I’m going to look for it.

    Thanks!

  26. If you want to create and mount a volume larger than 2 TB, use parted or gparted. Don’t use fdisk. Fdisk and the old-style disk labels it creates can’t handle volumes larger than 2 TB.

    2TB and larger volumes have to be created with GPT (GUID Partition Table) labels. In trying to set up a 9TB storage volume with CentOS 6 this bit me pretty hard, as I wasn’t aware of that limitation in fdisk.

    Hope this helps someone out there.

  27. How do I add another iSCSI LUN (a secondary iSCSI LUN) without restarting the iSCSI service?

    If I restart the iscsi service with “service iscsi restart” to detect the new LUN, the existing LUNs get unmounted, affecting running applications on the node.

    Please let me know if there is any process to resolve the issue.

    Thank you,
    Venkat

    1. You need multipath in that case. Then restarting iscsi will not affect the mounts.

      This document is over 5 years old and is extremely badly written. I strongly suggest anyone not follow this or you will run into problems.

  28. I can’t tell you how many times your site has saved my bacon in the last 5 years.

    Good Job and Well done.

  29. I assigned one LUN to a particular server, but when I run fdisk -l I see 2 LUNs on that server – that is the problem. I then unmounted the LUN and deleted the storage, but when I run fdisk -l the unmounted LUN still shows up on the same server.
    I also added a new 100GB LUN from a Dell MD3200i storage array, but the customer needs 130GB, so I used the increase option to add an extra 30GB to that LUN, and it took 4 hours. What is the solution?

  30. Yeah, it’s good info, thanks for the info.
    But I have a small query: a server has shared a drive with a client, and the client wants to access it but doesn’t know the server name and IP. In such cases, how can we access that drive?

    Thanks in advance.
