Linux File System Limitations For High Performance Computing

Linux file systems have a number of limitations that make them a poor choice for large, high-performance computing environments. This article explains some of the pros and cons of Linux and older UNIX file systems.

I am frequently asked by potential customers with high I/O requirements if they can use Linux instead of AIX or Solaris.

No one ever asks me about high-performance I/O (high IOPS or high streaming throughput) on Windows or NTFS, because it isn't practical. NTFS, which has changed little since it was released almost 10 years ago, cannot scale given its current structure. Its on-disk layout, allocation methodology, and metadata structure do not allow it to efficiently support multi-terabyte file systems, much less file systems in the petabyte range, and that's no surprise, since that isn't Microsoft's target market.

=> Linux File Systems: You Get What You Pay For

What do you think?


I'm Vivek Gite, and I write about Linux, macOS, Unix, IT, programming, infosec, and open source. Subscribe to my RSS feed or email newsletter for updates.

1 comment
  • G May 17, 2008 @ 16:19

    My company builds a VOD (video on demand) server based on IBM or HP 2U servers.

    We use SAS disks with our own software RAID 5 or RAID 6. The file system we use is XFS.
    We use a slightly modified kernel (memory locked so other processes can't touch it, realtime priorities, etc.).
    We get 18 Gb/s of throughput with almost no fluctuation, so we are able to serve 10,000 MPEG-4 streams from one server for several months.

    Linux is perfect for us, and we are a high-end storage user!

    I believe NetApp also runs Linux inside their appliances…
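
A setup like the one the commenter describes can be sketched with stock Linux tools. This is a hypothetical illustration, not the commenter's actual configuration: the device names (`/dev/sd[b-g]`), chunk size, and mount options are all assumptions, and these commands require root and will destroy data on the listed disks.

```shell
# Build a 6-disk software RAID 6 array with a 256 KiB chunk size.
# (Device names are placeholders; adjust for your hardware.)
mdadm --create /dev/md0 --level=6 --raid-devices=6 --chunk=256 \
    /dev/sdb /dev/sdc /dev/sdd /dev/sde /dev/sdf /dev/sdg

# Create an XFS file system. su/sw tell XFS the RAID stripe unit and
# stripe width (4 data disks in a 6-disk RAID 6) so it can align
# allocations with the array geometry.
mkfs.xfs -d su=256k,sw=4 /dev/md0

# Mount with options commonly used for large streaming workloads:
# noatime avoids access-time writes, inode64 spreads inodes across
# the whole volume on large file systems.
mount -o noatime,inode64 /dev/md0 /srv/vod
```

The stripe-alignment hints (`su`/`sw`) matter for streaming workloads because they let XFS issue full-stripe writes, avoiding the read-modify-write penalty of RAID 6.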
