IBM Support

What is a FAN?

Technical Blog Post


Abstract

What is a FAN?

Body

Our industry is full of acronyms, and sometimes spelling out what an acronym stands for is not enough to explain it fully.

It reminds me of an old story within IBM. A customer engineer (or "CE" for short) was repairing an air-cooled server and found that the failing part was a "FAN". Not knowing what this stood for, he looked up the acronym in the official "IBM list of acronyms" and found that it stood for "Forced Air Network". Apparently, so many people did not realize that a FAN was just a "fan" that an entry had to be added to remind people what this little motorized propeller was for.

This brings me to Tony Asaro's Fun with FAN blog entry, which mentions yet another definition for FAN, that of "File Area Network". The concept is not new, but some developments this year help make it more of a reality.

  1. IBM's General Parallel File System (GPFS) was enhanced earlier this year with cool ILM-like functionality borrowed from SAN File System, such as policy-based data placement, movement and automatic expiration. This can include policies that place data on the fastest Fibre Channel drives at first, then move it to slower, less costly SATA disks after a few months, when fewer access requests are expected (see the sketch after this list).
  2. IBM has paired up N series with SAN Volume Controller (SVC), so that an N series gateway can now provide iSCSI, CIFS and NFS access to virtual disks presented from SVC. The problem with NAS appliances in the past is that once they fill up, moving files to newer technologies is awkward and difficult. With SVC, file systems can now be moved from one physical disk system to another, all while applications are reading and writing data.
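To make the placement-and-migration idea in item 1 concrete, here is a minimal Python sketch of that kind of tiering policy. It is illustrative only: the pool names and the 90-day threshold are assumptions, and real GPFS policies are written in GPFS's own SQL-like rule language rather than Python.

```python
from datetime import datetime, timedelta

# Hypothetical storage pools -- stand-ins for fast Fibre Channel and cheaper SATA disks.
FAST_POOL = "fc_pool"
SLOW_POOL = "sata_pool"
AGE_THRESHOLD = timedelta(days=90)   # assumed "few months" cutoff

def placement_pool(filename: str) -> str:
    """Placement rule: new files always land on the fast pool first."""
    return FAST_POOL

def migration_pool(last_access: datetime, now: datetime) -> str:
    """Migration rule: files not accessed within the threshold move to the slow pool."""
    return SLOW_POOL if (now - last_access) > AGE_THRESHOLD else FAST_POOL

if __name__ == "__main__":
    now = datetime(2007, 12, 1)
    print(migration_pool(datetime(2007, 5, 1), now))    # old file    -> sata_pool
    print(migration_pool(datetime(2007, 11, 15), now))  # recent file -> fc_pool
```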
To better understand the importance of this, consider the first "FAN": the mainframe z/OS operating system using DFSMS. The mainframe uses the concept of "data sets". A data set can be a stream of fixed 80-character records (representing the original punched cards), a library of related documents, or a random-access database. All mainframes in a system complex, or "sysplex" for short, could look up the location of any data set and access it directly. Data sets could be moved from one disk system to another, migrated off to tape, and brought back to disk, all without rewriting any applications.

To join the rest of the world, new types of data set were created for the z/OS operating system, known as HFS and zFS. These held file systems in the sense we know them today, comparable to the hierarchical organization of files on Windows, Linux and UNIX platforms. These could be linked and mounted together into larger hierarchical structures across the sysplex.

The concept of files and file systems is fairly new. Prior to this, applications read and wrote directly in terms of blocks, typically fixed-length multiples of 512 bytes. For a while, database management systems offered a choice: direct block access or file-level access. The former may have offered slightly better performance, but the latter was easier to administer. Without a file system, specialized tools were often required to diagnose and fix problems on block-oriented "raw logical" volumes.
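As a rough illustration of the difference, here is a short Python sketch contrasting block-level and file-level reads. The device path, offsets and block number are hypothetical placeholders; block access needs the right permissions and knowledge of the on-disk layout, which is exactly why those specialized tools were needed.

```python
import os

BLOCK_SIZE = 512  # classic fixed block size

def read_block(device_path: str, block_number: int) -> bytes:
    """Block-level access: seek to a fixed 512-byte block on a raw volume.
    device_path is a placeholder -- adjust for your own system."""
    fd = os.open(device_path, os.O_RDONLY)
    try:
        os.lseek(fd, block_number * BLOCK_SIZE, os.SEEK_SET)
        return os.read(fd, BLOCK_SIZE)
    finally:
        os.close(fd)

def read_record(file_path: str, offset: int, length: int) -> bytes:
    """File-level access: the file system resolves the path and finds the data."""
    with open(file_path, "rb") as f:
        f.seek(offset)
        return f.read(length)
```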

This launched a "my file system is better than yours" war which continues today. The official standard is POSIX, but every file system tries to gain some proprietary advantage by offering unique features. Sun's file system offers support for "sparse" files, which is ideal for certain mathematical processing of tables. Microsoft's NTFS offers built-in compression, designed for the laptop user. IBM's JFS2 and Linux's EXT3 file systems support journaling, which tracks updates to file system structures in a separate journal to minimize data corruption in the event of a power outage, and thus speeds up the reboot process. Anyone who has ever waited for a "Scan Disk" or "fsck" process to finish knows what I'm talking about. Of course, if an application deviates from POSIX standards and exploits some unique feature of a file system, it limits its portability and market appeal.
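For a concrete sense of what a "sparse" file is, the short Python sketch below creates one by seeking far past the end of a file before writing. On file systems that support sparse files, the skipped region (the "hole") occupies no disk blocks; the exact savings depend on the file system, and st_blocks is only reported on UNIX-like systems.

```python
import os

path = "sparse_demo.dat"   # scratch file; removed at the end

with open(path, "wb") as f:
    f.seek(1_000_000_000)   # skip ~1 GB without writing any data (the "hole")
    f.write(b"end")

st = os.stat(path)
print("apparent size:", st.st_size)  # ~1 GB logical size
# st_blocks counts 512-byte blocks actually allocated (UNIX-like systems only);
# on a sparse-capable file system this is tiny compared to the apparent size.
print("allocated    :", getattr(st, "st_blocks", 0) * 512)

os.remove(path)
```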

The two competing NAS protocols also differ. The Common Internet File System (CIFS) was developed initially by IBM and Microsoft to provide interoperability between DOS, Windows and OS/2. Meanwhile, the Network File System (NFS) was the darling of nearly every UNIX and Linux distribution, and even has clients on operating platforms as diverse as MacOS, i5/OS, and z/OS. Today, nearly every platform supports one or both of these standards.

Bottom line: file systems are here to stay. Any slight advantage to using raw logical volumes for databases and applications is losing out to the robust set of file system utilities that can be used across a broad set of platforms and applications.


[{"Business Unit":{"code":"BU054","label":"Systems w\/TPS"},"Product":{"code":"HW206","label":"Storage Systems"},"Component":"","Platform":[{"code":"PF025","label":"Platform Independent"}],"Version":"","Edition":"","Line of Business":{"code":"LOB26","label":"Storage"}}]

UID

ibm16162603