Linux File System Limitations For High Performance Computing

May 11, 2008

Linux file systems have a number of limitations that make them a poor choice for large, high-performance computing environments. This article explains some of the pros and cons of Linux and older UNIX file systems:

I am frequently asked by potential customers with high I/O requirements if they can use Linux instead of AIX or Solaris.

No one ever asks me about high-performance I/O -- high IOPS or high streaming I/O -- on Windows and NTFS, because it isn't possible. NTFS, which hasn't changed much since it was released almost 10 years ago, can't scale given its current structure. Its file system layout, allocation methodology and on-disk structure do not allow it to efficiently support multi-terabyte file systems, much less file systems in the petabyte range, and that's no surprise, since this isn't Microsoft's target market.

=> Read the full article: Linux File Systems: You Get What You Pay For

What do you think?


1 comment

1. G, May 17, 2008 at 4:19 pm

My company builds a VOD (video on demand) server based on IBM or HP 2U servers.

We use SAS disks with our own software RAID 5 or RAID 6, and the file system we use is XFS.
We run a slightly modified kernel (memory locked so other processes can't touch it, realtime priorities, etc.).
We get 18 Gb/s of throughput with almost no fluctuation, so we are able to serve 10,000 MPEG-4 streams from one server for several months.

Linux is perfect for us, and we are high-end storage users!

I believe NetApp also runs Linux inside their appliances…
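
For readers wondering what memory locking and realtime priorities look like in practice, below is a minimal userspace sketch using mlockall() and sched_setscheduler(). The commenter's changes are made inside the kernel itself, so this is only an approximation of the same ideas, and the SCHED_FIFO priority of 50 is an arbitrary assumption.

    /* Pin the process in RAM and give it a realtime scheduling class,
     * a userspace approximation of the tuning described above.
     * Needs root (or CAP_IPC_LOCK and CAP_SYS_NICE) to succeed. */
    #include <sched.h>
    #include <stdio.h>
    #include <stdlib.h>
    #include <sys/mman.h>

    int main(void)
    {
        /* Lock all current and future pages so the streamer is never paged out. */
        if (mlockall(MCL_CURRENT | MCL_FUTURE) != 0) {
            perror("mlockall");
            return EXIT_FAILURE;
        }

        /* Move this process into the SCHED_FIFO realtime class. */
        struct sched_param sp = { .sched_priority = 50 };
        if (sched_setscheduler(0, SCHED_FIFO, &sp) != 0) {
            perror("sched_setscheduler");
            return EXIT_FAILURE;
        }

        printf("memory locked, running under SCHED_FIFO\n");
        /* ... the streaming I/O loop would go here ... */
        return EXIT_SUCCESS;
    }

With the pages locked and the process in a realtime class, the streaming threads are neither paged out nor preempted by ordinary workloads, which is the kind of steady throughput the commenter reports.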
