Ken Gibson has written a four-part series about where the storage industry is going, on his Storage Thoughts blog. You can find the four parts here (Part 1, Part 2, Part 3, Part 4).
His analysis of the storage industry is based on the concepts in Clayton Christensen's latest book, "Seeing What's Next", which follows his earlier successes "The Innovator's Dilemma" and "The Innovator's Solution". I've only read "The Innovator's Dilemma", so I need to check out the other two.
Ken explores the efforts of the incumbent players, and I agree IBM is farthest along, but not only because of our "Storage Tank" architecture. For those not aware of Storage Tank, it was the code name of a project from IBM's Almaden Research Center, productized as the IBM System Storage SAN File System (SFS). Earlier this year, the advanced policy-based data placement, movement and expiration features of SFS were carried over to IBM's General Parallel File System (GPFS), which has wide adoption in the High-Performance Technical Computing (HPTC) community. As I've said before, switching from one file system to another is hard, so it makes sense to let HPTC clients who already run GPFS use these new features where they are, rather than trying to move them to SFS.
I also like Ken's analysis of "overshot" and "undershot" clients. Overshot clients are those who find what the marketplace delivers already "good enough" for their needs, and are price-sensitive about paying for features they don't think they need. Undershot clients are those for whom the current set of offerings is not yet good enough, and who are willing to pay a premium to the vendor or supplier that can get them closer to what they are looking for.
Changes are afoot, and it is an exciting time to be involved in the storage industry.
Our industry is full of acronyms, and sometimes spelling out what words an acronym stands for is not enough to explain it fully.
It reminds me of an old story within IBM. A customer engineer (or "CE" for short) was repairing an air-cooled server, and found the failing part to be a "FAN". Not knowing what this stood for, he looked up the acronym in the official "IBM list of acronyms" and found that it stood for "Forced Air Network". Apparently, so many people did not realize that a FAN was just a "fan" that an entry was needed to remind people what this little motorized propeller was for.
This brings me to Tony Asaro's Fun with FAN blog entry which mentions yet another definition for FAN, that of "File Area Network". The concept is not new, but some developments this year help make it more a reality.
To join the rest of the world, new types of data set were created for the z/OS operating system, known as HFS and zFS. These held file systems in the sense we know them today, comparable to the hierarchical organization of files on Windows, Linux and UNIX platforms. These could be linked and mounted together into larger hierarchical structures across the sysplex.
The concept of files and file systems is fairly new. Prior to this, applications read and wrote data directly in terms of blocks, typically in fixed-length multiples of 512 bytes. For a while, database management systems offered a choice: direct block access or file-level access. The former may have offered slightly better performance, but the latter was easier to administer. Without a file system, specialized tools were often required to diagnose and fix problems on block-oriented "raw logical" volumes.
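To illustrate the difference, here is a minimal sketch of block-style access: data is addressed by block number rather than by file name, with each block a fixed 512 bytes. An ordinary scratch file stands in for the raw volume (a real raw device path such as /dev/sdb is not assumed here).

```python
import os
import tempfile

BLOCK_SIZE = 512  # classic fixed-length block

# Create a scratch file to stand in for a raw logical volume.
fd, path = tempfile.mkstemp()
os.close(fd)

with open(path, "r+b") as vol:
    # Block-style write: address by block number, no file names,
    # no directories -- the application manages the layout itself.
    block_no = 3
    vol.seek(block_no * BLOCK_SIZE)
    vol.write(b"payload".ljust(BLOCK_SIZE, b"\x00"))

    # Block-style read: the application must know which block
    # holds the data; nothing in the volume records that for it.
    vol.seek(block_no * BLOCK_SIZE)
    data = vol.read(BLOCK_SIZE)

print(data.rstrip(b"\x00"))  # b'payload'
os.remove(path)
```

The bookkeeping a file system normally does for free (which blocks belong to what, and where free space is) falls entirely on the application here, which is why raw-volume databases needed those specialized diagnostic tools.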
This launched a "my file system is better than yours" war which continues today. The official standard is POSIX, but every file system tries to gain some proprietary advantage by offering unique features. Sun's file system offers support for "sparse" files, which is ideal for certain mathematical processing of tables. Microsoft's NTFS offers built-in compression, designed for the laptop user. IBM's JFS2 and Linux's EXT3 file systems support journaling, which tracks updates to file system structures in a separate journal to minimize data corruption in the event of a power outage, and thus speeds up the re-boot process. Anyone who has ever waited for a "Scan Disk" or "fsck" process to finish knows what I'm talking about. Of course, if an application deviates from the POSIX standard and exploits some unique feature of a file system, it limits its portability and market appeal.
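As a small sketch of what a "sparse" file is: seek far past the end of an empty file and write a single byte. On file systems that support sparse files, only the written region is allocated, so the apparent size can be far larger than the space actually consumed on disk (the exact allocation behavior depends on the file system underneath).

```python
import os
import tempfile

fd, path = tempfile.mkstemp()
with os.fdopen(fd, "wb") as f:
    f.seek(1024 * 1024)  # jump 1 MiB into the empty file, leaving a "hole"
    f.write(b"\x01")     # one real byte at the far end

st = os.stat(path)
print(st.st_size)          # 1048577 -- the apparent size
print(st.st_blocks * 512)  # usually far less than 1 MiB on a sparse-capable file system
os.remove(path)
```

A table stored this way pays disk space only for the cells actually written, which is why sparse files suit the kind of mathematical table processing mentioned above.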
The two competing NAS file systems are also different. Common Internet File System (CIFS) was developed initially by IBM and Microsoft to provide interoperability between DOS, Windows and OS/2. Meanwhile, Network File System (NFS) was the darling of nearly every UNIX and Linux distribution, and even has clients on operating platforms as diverse as MacOS, i5/OS, and z/OS. Today, nearly every platform supports one or both of these standards.
Bottom line, file systems are here to stay. Any slight advantage of using raw logical volumes for databases and applications is losing out to the robust set of file system utilities that can be used across a broad set of platforms and applications.
For those of you worried about my mysterious absence from the blogosphere, I am getting better. Sorry for not posting much lately; I have had more serious issues to worry about. I am awaiting results on whether I have dengue fever from Brazil, avian flu from Thailand, malaria from Kenya, or perhaps just food poisoning from the otherwise fabulous French cuisine I ate last week in the South Pacific. Well, I am back in town for a while, and hopefully will recover to full health and have some time to reflect on storage topics.
Speaking of which, a lot has happened while I was out. Let's take a quick look.