Trying out XFS

I’ve used most of the different filesystems available for Linux. My most recent favorite has been JFS, but problems like I/O starvation when running find have really been annoying me lately. To summarize, here is my experience with filesystems:

  • ext2: very slow, moderately unreliable
  • ext3: somewhat slow but reliable
  • reiserfs: fast, unreliable (cross-linked data after crash issues)
  • jfs: usually fast, somewhat unreliable (similar issues after a crash, plus weird charset issues)

The one major Linux FS not in that list is XFS. So I decided to give it a whirl, switching my 40GB /home on one machine to XFS. So far, it’s been good.

There are two articles about XFS at IBM developerWorks that I found useful. There’s also a helpful filesystem comparison from Novell.

7 thoughts on “Trying out XFS”

  1. Do you have any links about reiserfs?

    * reiserfs: fast, unreliable (cross-linked data after crash issues)

    in particular about these “cross-linked data after crash issues”?

    I’d like to understand how much is real and how much is just FUD…

    p

    1. The first article (from IBM) I linked to mentions:

      With ReiserFS, an unexpected reboot can result in recently modified files containing portions of previously deleted files. Besides the obvious data loss, this could also theoretically pose a security threat. In contrast, XFS ensures that any unwritten data blocks are zeroed on reboot, when the XFS journal is replayed. Thus, missing blocks are filled with null bytes, eliminating the security hole — a much better approach.

      I have personally experienced this on many occasions. I once had serious corruption when the machine crashed while dpkg was running, which resulted in binary data being inserted into some of the files in /var/lib/dpkg. In short, I would not use reiserfs for important work.

      1. I should also add that the article goes on to mention strategies XFS uses to minimize the frequency of this problem occurring.

    1. The reason you believe this is that your software is buggy: it might check the return value of the write() system call, but not the return value of close(), and hence assume the file has been written when it actually hasn’t due to a lack of disk space. XFS delays allocation of disk resources to avoid fragmentation: writes are buffered for a certain period of time so the filesystem has a better chance of knowing how large the file is going to be, and can then pick a contiguous extent of free space on disk in which to store it. This means writes can succeed when the disk is full, but close will fail (see the sketch at the end of this thread).

      1. According to POSIX’s description of write(2):

        If a write() requests that more bytes be written than there is room for (for example, the process’ file size limit or the physical end of a medium), only as many bytes as there is room for shall be written. For example, suppose there is space for 20 bytes more in a file before reaching a limit. A write of 512 bytes will return 20. The next write of a non-zero number of bytes would give a failure return (except as noted below).

        In fact, close(2) isn’t even allowed to return any error codes that would indicate you ran out of disk space.

        If XFS behaves the way you describe, then it’s at fault, not the application. write(2) should only return as many bytes as it can guarantee will be written to disk when the file descriptor is closed.
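
        For what it’s worth, here is a minimal C sketch (mine, not from either article; the filename is just a placeholder) of the defensive pattern this sub-thread is arguing about: loop until write() has consumed the whole buffer, since POSIX permits short writes, then check fsync() and close() as well, so that an error deferred by delayed allocation cannot slip past unnoticed.

        /* robust_write.c -- check write(), fsync(), and close() */
        #include <errno.h>
        #include <fcntl.h>
        #include <stdio.h>
        #include <stdlib.h>
        #include <unistd.h>

        /* Write the whole buffer, retrying on short writes and EINTR. */
        static int write_all(int fd, const char *buf, size_t len)
        {
            while (len > 0) {
                ssize_t n = write(fd, buf, len);
                if (n < 0) {
                    if (errno == EINTR)
                        continue;   /* interrupted early: just retry */
                    return -1;      /* real error, e.g. ENOSPC */
                }
                buf += n;           /* short write: advance and retry */
                len -= (size_t)n;
            }
            return 0;
        }

        int main(void)
        {
            const char msg[] = "hello, xfs\n";
            int fd = open("testfile", O_WRONLY | O_CREAT | O_TRUNC, 0644);
            if (fd < 0) {
                perror("open");
                return EXIT_FAILURE;
            }
            if (write_all(fd, msg, sizeof msg - 1) < 0) {
                perror("write");
                close(fd);
                return EXIT_FAILURE;
            }
            /* fsync() forces buffered data to disk; with delayed
               allocation this is where a deferred out-of-space error
               is most likely to surface. */
            if (fsync(fd) < 0) {
                perror("fsync");
                close(fd);
                return EXIT_FAILURE;
            }
            /* close() can fail too; ignoring it can hide lost data. */
            if (close(fd) < 0) {
                perror("close");
                return EXIT_FAILURE;
            }
            return EXIT_SUCCESS;
        }

        Whether close() may legally report an out-of-space condition is exactly what is in dispute above, but checking its return value (and calling fsync() first) costs little and catches the problem under either reading.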

