Linux Fibre Channel over Ethernet implementation code released

Posted in Categories Linux, Linux Scalability, Linux Virtualization, Networking, Open source coding, Storage; last updated December 18, 2007

Intel has just released source code for Fibre Channel over Ethernet (FCoE). It provides some Fibre Channel protocol processing as well as the encapsulation of FC frames within Ethernet packets. FCoE allows systems with an Ethernet adapter and a Fibre Channel Forwarder (FCF) to log in to a Fibre Channel fabric; the FCF is a “gateway” that bridges the LAN and the SAN. That fabric login was previously reserved exclusively for Fibre Channel HBAs. This technology reduces complexity in the data center by aiding network convergence. It is targeted at 10Gbps Ethernet NICs but will work on any Ethernet NIC that supports pause frames. Intel will provide a Fibre Channel protocol processing module as well as an Ethernet-based transport module. The Open-FC module acts as an LLD (low-level driver) for SCSI, and the Open-FCoE transport uses net_device to send and receive packets.

This is good news. Bandwidth and throughput are comparable for copper and fiber Ethernet, though with copper you need to stay within about 15m of the switch. This solution should bring down cost: for example, one could connect 8-10 servers to a central database server over 10G Ethernet, and a few more applications like that are easy to imagine.

=> Open FCoE project home page

Can I boot My Linux Server from iSCSI or SAN or NAS network attached storage?

Posted in Categories High performance computing, Howto, Linux, Linux Scalability, Storage; last updated November 12, 2007

My previous article related to iSCSI storage and NAS storage brought a couple of questions. An interesting question from my mail bag:

I’ve 5 Debian Linux servers with HP SAN box. Should I boot from SAN?

No. Use centralized network storage for shared data or high-availability configurations only. Technically you can boot and configure a system this way; however, I don’t recommend booting from SAN or any other central server unless you need diskless nodes:

[a] Use local storage – Always use local storage for the /boot and / (root) filesystems

[b] Keep it simple – Booting from SAN volumes is a complicated procedure. Most operating systems are not designed for this kind of configuration; you need to modify scripts and the boot procedure.

[c] SAN booting support – Your SAN vendor must support booting a Linux server from the SAN. You need to configure the HBA and SAN according to the vendor’s specification, and you depend totally upon the SAN vendor for drivers and firmware (HBA BIOS) to get things working properly. General principle – don’t put all your eggs in one basket, err, one vendor ;)

[d] Other factors – A proper Fibre Channel topology must be used. Make sure multipathing and redundant SAN links are in place, the boot disk LUN is dedicated to a single host, and so on.

As you can see, the complications start to pile up, hence I don’t recommend booting from SAN.
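Following point [a], it only takes a moment to confirm what actually backs your root filesystem; local disks typically show up as /dev/sd* or /dev/nvme* devices, while SAN LUNs usually appear behind multipath or vendor-specific device names:

```shell
# Show the device backing the root filesystem (POSIX df)
df -P /

# On Linux, findmnt (from util-linux) prints just the source device:
#   findmnt -n -o SOURCE /
```

If the source device turns out to be a multipathed SAN LUN rather than a local disk, you are already in the complicated territory described above.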

Linux Increase Process Identifiers Limit with /proc/sys/kernel/pid_max

Posted in Categories Howto, Linux, Linux Scalability, Networking, Troubleshooting, Tuning; last updated April 16, 2014

Yesterday I wrote about increasing the local port range with the net.ipv4.ip_local_port_range proc file. There is also the /proc/sys/kernel/pid_max file, which specifies the value at which PIDs wrap around (i.e., the value in this file is one greater than the maximum PID). The default value for this file, 32768, results in the same range of PIDs as on earlier kernels (<=2.4). The value in this file can be set to any value up to 2^22 (PID_MAX_LIMIT, approximately 4 million).
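A minimal sketch of checking and raising the limit (4194304 is 2^22, the PID_MAX_LIMIT ceiling; the write commands require root, so they are shown commented out):

```shell
# Current wrap-around value (one greater than the maximum PID)
cat /proc/sys/kernel/pid_max

# Raise it at runtime (requires root), up to the 2^22 ceiling:
#   sysctl -w kernel.pid_max=4194304
# Persist the change across reboots:
#   echo 'kernel.pid_max = 4194304' >> /etc/sysctl.conf
```

Raising pid_max is mostly useful on large boxes running tens of thousands of processes or threads, where the default 32768 range can actually be exhausted.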

Mount a Linux filesystem on a SAN from multiple nodes at the same time

Posted in Categories CentOS, FAQ, File system, Gentoo Linux, Hardware, High performance computing, Linux, Linux Scalability, RedHat/Fedora Linux, Storage; last updated November 12, 2007

If you try to mount an ext3 Linux filesystem on a SAN from multiple nodes at the same time, you will be in serious trouble.

SAN-based storage allows multiple nodes to connect to the same devices at the same time, but ext2/ext3 are not cluster-aware file systems. Mounting them concurrently from several nodes can lead to disasters such as kernel panics, server hangs, and data corruption.

You need to use a file system that is:

  1. Useful in clusters for moderate scale-out and shared SAN volumes
  2. A symmetrical parallel cluster file system, with journaling
  3. Capable of POSIX access controls

Both GFS (Red Hat Global File System) and Lustre (a scalable, secure, robust, highly available cluster file system) can be used with SAN-based storage, allowing multiple nodes to connect to the same devices at the same time.
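As a sketch, a cluster node’s /etc/fstab entry for a shared GFS volume might look like the fragment below (the device path and mount point are assumptions for illustration; a real deployment also needs the cluster infrastructure, such as the lock manager and fencing, configured before any node mounts the volume):

```
# Hypothetical /etc/fstab line: shared SAN LUN formatted as GFS
/dev/vg_san/lv_shared   /mnt/shared   gfs   defaults   0 0
```

With a cluster-aware file system like this, every node can mount the same LUN read-write, which is exactly what ext2/ext3 cannot do safely.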

Many newbies get confused because Linux offers a number of file systems. This paper (Linux File System Primer) discusses these file systems, why there are so many, and which ones are best to use for which workloads and data.