Linux File System Limitations For High Performance Computing

Linux file systems have a number of limitations that make them a poor choice for large, high-performance computing environments. This article explains some of the pros and cons of Linux and older UNIX file systems:

I am frequently asked by potential customers with high I/O requirements if they can use Linux instead of AIX or Solaris.

No one ever asks me about high-performance I/O, whether high IOPS or high streaming throughput, on Windows or NTFS, because it isn't possible. Windows and the NTFS file system, which hasn't changed much since it was released almost 10 years ago, can't scale given their current structure. The NTFS on-disk layout and allocation methodology do not allow it to efficiently support multi-terabyte file systems, much less file systems in the petabyte range, and that's no surprise, since this is not Microsoft's target market.

=> Linux File Systems: You Get What You Pay For

What do you think?

Posted by: Vivek Gite

The author is the creator of nixCraft and a seasoned sysadmin, DevOps engineer, and trainer in the Linux operating system and Unix shell scripting. Get the latest tutorials on SysAdmin, Linux/Unix and open source topics via RSS/XML feed or weekly email newsletter.

1 comment

  1. My company builds a VOD (video-on-demand) server based on IBM or HP 2U servers.

    We use SAS disks with our own software RAID 5 or RAID 6. The file system we use is XFS.
    We use a slightly modified kernel (memory locked so other processes can't touch it, realtime priorities, etc.).
    We get 18 Gb/s of throughput with almost no fluctuation, so we are able to serve 10,000 MPEG-4 streams from one server for several months.

    Linux is perfect for us, and we are a high-end storage user!

    I believe NetApp also runs Linux inside their appliances…
