Safe Harbor Statement: The information on IBM products is intended to outline IBM's general product direction and it should not be relied on in making a purchasing decision. The information on the new products is for informational purposes only and may not be incorporated into any contract. The information on IBM products is not a commitment, promise, or legal obligation to deliver any material, code, or functionality. The development, release, and timing of any features or functionality described for IBM products remains at IBM's sole discretion.
Tony Pearson is an active participant in local, regional, and industry-specific interests, and does not receive any special payments to mention them on this blog.
Tony Pearson receives part of the revenue proceeds from sales of books he has authored listed in the side panel.
Tony Pearson is not a medical doctor, and this blog does not reference any IBM product or service that is intended for use in the diagnosis, treatment, cure, prevention or monitoring of a disease or medical condition, unless otherwise specified on individual posts.
Tony Pearson is a Master Inventor and Senior Software Engineer for the IBM Storage product line at the IBM Executive Briefing Center in Tucson, Arizona, and a featured contributor to IBM's developerWorks. In 2016, Tony celebrates his 30th anniversary with IBM Storage. He is the author of the Inside System Storage series of books. This blog is for the open exchange of ideas relating to storage and storage networking hardware, software and services. You can also follow him on Twitter @az990tony.
(Short URL for this blog: ibm.co/Pearson)
Well, it's Tuesday, and you know what that means... IBM announcements!
In today's environment, clients expect more from their storage, and from their storage provider. The announcements span the gamut, from helping to use Business Analytics to analyze Big Data for trends, insights and patterns, to managing private, public and hybrid cloud environments, all with systems that are optimized for their particular workloads.
There are over a dozen different announcements, so I will split these up into separate posts. Here is part 1.
IBM Scale Out Network Attached Storage (SONAS) R1.3
I have covered [IBM SONAS] for quite some time now. Based on IBM's General Parallel File System (GPFS), this integrated system combines servers, storage and software into a fully functional scale-out NAS solution that supports NFS, CIFS, FTP/SFTP, HTTP/HTTPS, and SCP protocols. IBM continues its technical leadership in the scale-out NAS marketplace with new hardware and software features.
The hardware adds new disk options, with 900GB SAS 15K RPM drives, and 3TB NL-SAS 7200 RPM drives. These come in 4U drawers of 60 drives each, six ranks of ten drives each. So, with the high-performance SAS drives that would be about 43TB usable capacity per drawer, and with the high-capacity NL-SAS drives about 144TB usable. You can have any mix of high-performance drawers and high-capacity drawers, up to 7200 drives, for a maximum capacity of 17PB usable (21PB for those who prefer it raw). This makes it the largest commercial scale-out NAS in the industry. This capacity can be made into one big file system, or divided into up to 256 smaller file systems.
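As a sanity check, the drawer capacities above can be reproduced with simple arithmetic. This sketch assumes each 60-drive drawer is carved into six 8+2P ranks, so 48 of the 60 drives hold data; the rank layout is my assumption, not taken from the announcement.

```python
# Rough sketch of the usable-capacity arithmetic for one SONAS drawer.
# Assumption (not from IBM documentation): six ranks of ten drives,
# each rank 8 data + 2 parity, so 48 of 60 drives hold data. Results
# are before file system overhead.

def usable_per_drawer_tb(drive_tb, data_drives=48):
    """Estimate usable TB for one 60-drive drawer."""
    return drive_tb * data_drives

# High-performance drawer: 60 x 900GB SAS, roughly 43TB usable
sas = usable_per_drawer_tb(0.9)
# High-capacity drawer: 60 x 3TB NL-SAS, roughly 144TB usable
nlsas = usable_per_drawer_tb(3.0)

print(f"SAS drawer: {sas:.1f} TB usable")      # 43.2 TB
print(f"NL-SAS drawer: {nlsas:.1f} TB usable") # 144.0 TB
```

Both results line up with the figures quoted above, which suggests the 8+2P assumption is in the right neighborhood.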
In addition to snapshots of each file system, you can divide the file system up into smaller tree branches and snapshot these independently as well. The tree branches are called fileset containers. Furthermore, you can now make writeable clones of individual files, which provides a space-efficient way to create copies for testing, training or whatever.
Performance is improved in many areas. The interface nodes can now support a second dual-port 10GbE adapter, and replication performance is improved by 10x.
SONAS supports access-based enumeration, which means that if there are 100 different subdirectories, but you only have authority to access five of them, then that's all you see, those five directories. You don't even know the other 95 directories exist.
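As a rough illustration of the idea (not how SONAS implements it internally, which happens in the CIFS server itself), here is what access-based enumeration looks like in a few lines of Python: list only the subdirectories the caller has permission to enter.

```python
# Illustrative sketch of access-based enumeration: filter a directory
# listing down to the entries the current user is allowed to access.
import os

def visible_subdirs(path):
    """Return only the subdirectories the current user may enter."""
    result = []
    for name in sorted(os.listdir(path)):
        full = os.path.join(path, name)
        # Keep a directory only if we can both read and traverse it.
        if os.path.isdir(full) and os.access(full, os.R_OK | os.X_OK):
            result.append(name)
    return result
```

A user with authority to five of 100 subdirectories would see a five-element list, with no hint that the other 95 exist.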
I saved the coolest feature for last: Active Cloud Engine™, which offers both local and global file management. Locally, Active Cloud Engine uses placement rules to decide what type of disk a new file should be placed on, and management rules to move files from one disk type to another, or even migrate data to tape or other externally-managed storage! A high-speed scan engine can rip through 10 million files per node to identify files that need to be moved, backed up or expired.
Globally, Active Cloud Engine makes the global namespace truly global, allowing the file system to span multiple geographic locations. Built-in intelligence moves individual files to where they are closest to the users that use them most. This includes an intelligent push-over-WAN write cache, on-demand pull-from-WAN cache for reads, and will even pre-fetch subsets of files.
No other scale-out NAS solution from any other storage vendor offers this amazing and awesome capability!
IBM® Storwize® V7000
Last year, we introduced the [IBM Storwize V7000], a midrange disk system with block-level access via FCP and iSCSI protocols. The 2U-high control enclosure held two canister nodes, a 12-drive or 24-drive bay, and a pair of power-supply/battery UPS modules. The controller could attach up to nine expansion enclosures for more capacity, as well as virtualize other storage systems. This has been one of our most successful products ever, selling over 100PB in the past 12 months to over 2,500 delighted customers.
The 12-drive enclosure now supports both 2TB and 3TB NL-SAS drives. The 24-drive enclosures support 200/300/400GB Solid-State Drives (SSD), 146 and 300GB 15K RPM drives, 300/450/600GB 10K RPM drives, and a new 1TB NL-SAS drive option. For those who want to set up "Flash-and-Stash" in a single 2U drawer, now you can combine SSD and NL-SAS in the 24-drive enclosure! This is the perfect platform for IBM's Easy Tier sub-LUN automated tiering. IBM's Easy Tier is substantially more powerful and easier to use than EMC's FAST-VP or HDS's Dynamic Tiering.
Last week, at Oracle OpenWorld, there were various vendors hawking their DRAM/SSD-only disk systems, including my friends at Texas Memory Systems, Pure Storage, and Violin Memory Systems. When people came to the IBM booth to ask what IBM offers, I explained that both the IBM DS8000 and the Storwize V7000 can be outfitted in this manner. With the Storwize V7000, you can buy as much or as little SSD as you like. You do not have to buy these drives in groups of 8 or 16 at a time.
The Storwize V7000 is the sister product of the IBM SAN Volume Controller, so you can replicate between one and the other. I see two use cases for this. First, you might have an SVC at a primary location, and decide to replicate just the subset of mission-critical production data to a remote location, using the Storwize V7000 as the target device. Secondly, you could have three remote or branch offices (ROBO) that replicate to a centralized data center SAN Volume Controller.
Lastly, like the SVC, the Storwize V7000 now supports clustering, so you can combine multiple control enclosures together to make a single system.
IBM® Storwize® V7000 Unified
Do you remember how IBM combined the best of SAN Volume Controller, XIV and DS8000 RAID into the Storwize V7000? Well, IBM did it again, combining the best of the Storwize V7000 with the common NAS software base developed for SONAS into the new "Storwize V7000 Unified".
You can upgrade your block-only Storwize V7000 into a file-and-block "Storwize V7000 Unified" storage system. This is a 6U-high system, consisting of a pair of 2U-high file modules connected to a standard 2U-high control enclosure. Like the block-only version, the control enclosure can attach up to nine expansion enclosures, as well as all the same support to virtualize external disk systems. The file modules combine the management node, interface node and storage node functionality that SONAS R1.3 offers.
What exactly does that mean for you? In addition to FCP and iSCSI for block-level LUNs, you can carve out file systems that support NFS, CIFS, FTP/SFTP, HTTP/HTTPS, and SCP protocols. All the same SONAS capabilities are included: anti-virus checking, access-based enumeration, integrated TSM backup, HSM functionality to migrate data to tape, NDMP backup support for other backup software, and Active Cloud Engine's local file management!
IBM SAN Volume Controller V6.3
The SAN Volume Controller [SVC] increases its stretched cluster support to distances up to 300km. This is 3x further than EMC's VPLEX offering. This allows identical copies of data to be kept in both locations, and allows Live Partition Mobility or VMware vMotion to move workloads seamlessly from one data center to another. Combining two data centers with an SVC stretched cluster is often referred to as "Data Center Federation".
The SVC also introduces a low-bandwidth option for Global Mirror. We actually borrowed this concept from our XIV disk system. Normally, SVC's Global Mirror will consume all the bandwidth it can to keep the destination copy of the data within a few seconds of currency behind the source copy. But do you always need to be that current? Can you afford the bandwidth requirements needed to keep up with that? If you answered "No!" to either of these, then the low-bandwidth option is for you. Basically, a FlashCopy is taken of the source copy, the changes are then sent over to the destination, and a FlashCopy is made of that. The process is then repeated on a scheduled basis, like every four hours. This greatly reduces the amount of bandwidth required, and for many workloads, having currency in hours, rather than seconds, is good enough.
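A quick back-of-the-envelope comparison shows why the cycle-based approach needs so much less bandwidth. All workload numbers below are invented for illustration.

```python
# Comparing link bandwidth for continuous Global Mirror versus the new
# cycle-based low-bandwidth mode. The workload figures are made up.

peak_write_mb_s = 200        # continuous mirroring must keep up with write peaks
changed_gb_per_cycle = 50    # unique data changed during each four-hour cycle
cycle_hours = 4

# Continuous mode: the link must sustain the peak write rate.
continuous_mb_s = peak_write_mb_s

# Cycle mode: one FlashCopy delta is shipped per cycle, and blocks
# rewritten several times within a cycle are sent only once.
cycle_mb_s = changed_gb_per_cycle * 1024 / (cycle_hours * 3600)

print(f"continuous: {continuous_mb_s} MB/s")
print(f"cycle mode: {cycle_mb_s:.1f} MB/s")   # about 3.6 MB/s
```

The trade-off, of course, is that the recovery point moves from seconds behind to one cycle behind.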
I am very excited about all these announcements! It is a good time to be working for IBM, and look forward to sharing these exciting enhancements with clients at the Tucson EBC.
I have gotten several emails expressing worry that I have fallen off the face of the earth. The last two weeks have been educational and eye-opening for me. I can't provide details in my blog, so I will just say that it involved government agencies that IBM refers to as "dark accounts", and that I am now back safely in the USA. Between adjusting to time zone differences, ridiculously long hours, and restricted access to the internet, I was unable to blog lately.
Instead, I will resume my coverage of the [IBM System Storage Technical University 2011]. The "Solutions Expo" runs Monday evening through Wednesday lunch. This is a chance for people to explore all the solutions that are part of IBM's large "eco-system" for IBM System Storage and System x products. There were several sponsors for this event.
As is often the case at these conferences, the various booths hand out fun items. The hot items this year were tie-dyed tee-shirts from Qlogic, and propeller beanies from the IBM rack and power systems team. Here is Amanda, one of the bartenders, showing off the latter.
After the expo on Tuesday night, my friends at [Texas Memory Systems] held an after-party. Unlike the pens, tee-shirts and keychains at the Expo, these guys had a raffle for real storage products. Here is Erik Eyberg handing out a RamSan PCIe card, valued at $14,000 or so. IBM recently certified the TMS RamSan as External SSD storage for the IBM SAN Volume Controller (SVC). The SVC can optimize performance using this for automated sub-LUN tiering with the IBM System Storage Easy Tier feature.
Normally, when EMC fails, it is worth a giggle. Companies are run by humans, and nobody is perfect. However, their latest one, failing to defend their RSA SecurID two-factor website, is no laughing matter. Breaches like this undermine the trust needed for business and commerce to be done with Information Technology, so it affects the entire IT industry.
(FTC Disclosure: I do not work or have any financial investments in either EMC nor ENC Security Systems. Neither EMC nor ENC Security Systems paid me to mention them on this blog. Their mention in this blog is not an endorsement of either company or their products. Information about EMC was based solely on publicly available information made available by EMC and others. My friends at ENC Security Systems provided me an evaluation license for their latest software release so that I could confirm the use cases posed in this post.)
Of course, EMC did the right thing by making this breach public in an [Open Letter to RSA Customers]. While this may affect their revenues, as clients question whether they should do business with EMC, or their stock price, as investors question whether they should invest in EMC, they were very clear and public that the breach occurred. As far as I know, none of the executives of the RSA security division have stepped down. The disclosure of the breach was the right thing to do, and is required by law under rules from the [US Securities and Exchange Commission]. These rules were created to prevent companies from trying to hide breaches that expose external client information.
The breach does not affect RSA public/private key pairs used by IBM and most every other large company. Rather, this breach was targeted to RSA SecurID two-factor authentication. I explained two-factor authentication in my blog post [Day 5 Grid, SOA and Cloud Computing - System x KVM solutions], but basically it is an added level of security, requiring something you know (your password) with something you have (such as a magnetic card or key fob). Both are required to gain access to the system.
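For the curious, here is a generic sketch of how a token-generated one-time code works, using the open HOTP algorithm from RFC 4226. To be clear, RSA SecurID uses its own proprietary, time-based algorithm; this is only to illustrate the general idea of the "something you have" factor producing a short code that changes on every use.

```python
# Event-based one-time password (HOTP) per RFC 4226. This is NOT the
# RSA SecurID algorithm, which is proprietary and time-based; it just
# demonstrates the token concept behind two-factor authentication.
import hashlib
import hmac
import struct

def hotp(secret: bytes, counter: int, digits: int = 6) -> str:
    """Derive a short one-time code from a shared secret and counter."""
    msg = struct.pack(">Q", counter)                  # 8-byte big-endian counter
    digest = hmac.new(secret, msg, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                        # dynamic truncation
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10**digits).zfill(digits)

# RFC 4226 Appendix D test vector: counter 0 yields 755224
print(hotp(b"12345678901234567890", 0))   # prints "755224"
```

The server holds the same secret and counter, so it can compute the expected code and compare; an attacker who steals only your password still cannot log in.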
Breaches happen. Recently, [Hackers found vulnerabilities in the McAfee.com website]. Last month, fellow blogger Chuck Hollis from EMC had a blog post on [Understanding Advanced Persistent Threats (APT)] in the week leading up to their RSA Conference. It was precisely an APT that hit RSA, so the irony of this breach was not lost on the blogosphere. Perhaps Chuck's blog post gave hackers the idea to do this, like saying "I hope terrorists don't bomb this building that holds all of our chemical weapons..." or "I hope bank robbers don't rob this repository where we keep all the cash..."
(The sinister counter-theory, that EMC staged this breach as a marketing stunt to undermine trust in hybrid or public cloud offerings, such as those offered by IBM, Amazon or Salesforce.com, offers an interesting twist. While computer breaches in general are fodder for [Luddites] to argue we should not use computers at all, this particular breach could be used by EMC salesmen to encourage their customers to choose private cloud over hybrid cloud or public cloud deployments. Given all the extra work that RSA SecurID customers have to now do to harden their environments, that would be in bad taste.)
Today, March 31, is World Backup Day. This is because many viruses are triggered to operate on April 1. Just like checking the batteries in your smoke alarms every year, you should ensure that your backup methodology remains valid.
Back in 2008, I was a volunteer for the One Laptop Per Child (OLPC) initiative, and built an XS server to be used for Uruguay. I shipped [this baby off to school] to be the central server that all the student and teacher laptops connected to. It was the gateway to the Internet, as well as the [repository for the blogs of each student]. The blogs were accessible to the public, so that parents could read what their students were writing.
Unfortunately, this public access resulted in my little XS server being attacked by hackers, with IP addresses in Russia and China. Why anyone from either of those two countries wanted to ruin the hopes and dreams of small school children in Uruguay was beyond me. Fortunately, I had planned for remote administration. Backups were taken by me weekly to a second drive that was only mounted when I was dialed in to take the backup. The rest of the time, it was offline, so as not to be written to by hackers.
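One simple way to gain confidence in a backup like this is to verify it with a checksum manifest. This is a generic sketch, not what I actually ran on the XS server; the helper names and the comparison approach are illustrative.

```python
# Sketch of backup verification: build a SHA-256 manifest of a
# directory tree, then compare the live tree with its backup to
# detect stale, corrupted or tampered files.
import hashlib
import os

def manifest(root):
    """Map each file's path (relative to root) to its SHA-256 digest."""
    result = {}
    for dirpath, _, filenames in os.walk(root):
        for name in filenames:
            full = os.path.join(dirpath, name)
            with open(full, "rb") as f:
                digest = hashlib.sha256(f.read()).hexdigest()
            result[os.path.relpath(full, root)] = digest
    return result

def differences(live_root, backup_root):
    """Files in the live tree that are missing or different in the backup."""
    live, backup = manifest(live_root), manifest(backup_root)
    return {path for path in live if backup.get(path) != live[path]}
```

Run it right after each backup window, while the backup drive is still mounted; an empty result set means the copy is good, and anything else tells you exactly which files to re-copy.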
I also shipped along with the server a bootable DVD that contained a modified version of [System Rescue CD], scripts to start up SSHD daemon, and pre-populated for use with public/private RSA keys for me and eight other administrators located in various countries. To effect repairs, the local operator would reboot to the DVD, and then I could login via "ssh" and restore the operating system, programs and data. Sadly, this meant that the students might have lost some of their most recent blog posts since the last backup.
Please consider reviewing your own backup strategies. If your security were compromised, data was corrupted or lost, would you be able to recover from your backups?
Use Encryption where Appropriate
If you plan to travel this summer, you may want to consider encryption to protect yourself. ENC Security Systems has just released their latest [Encrypt Stick], which is a USB memory stick pre-loaded with software that provides three features:
Encryption for your files
A secure web browser for accessing sensitive websites
Secure password manager
Many hotels now offer computers for use by the guests. These are typically running some flavor of Windows operating system. Encrypt Stick comes with an EXE file that you can run to browse the web securely, and have access to your encrypted files and passwords, leaving no trace on the hotel lobby computer.
Friends and Family
What if you are visiting friends and family, and they have a Mac instead? No problem, as Encrypt Stick has a DMG file to use on the Mac OS X operating system. While you may not be worried about your siblings hacking into your bank account, you may not necessarily want them seeing which sites you visited.
I have been to several airport lounges now that use Linux for their public computers. Makes sense to me, as there are fewer viruses for Linux, and updating Linux is relatively straightforward. However, Encrypt Stick does not support Linux. For my Linux-knowledgeable readers, you can build your own bootable USB memory stick with [Unetbootin] to launch your favorite Linux browser in memory on whatever system you are using. The [Gparted Magic] utility rescue tool includes [TrueCrypt] to encrypt your files. Lastly, you can use [MyPasswordSafe] to hold all of your passwords securely.
Several clients have asked if any of the IBM data-at-rest encrypted disks or tapes are affected by this breach. IBM uses AES encryption for the actual disk and tape media, but we do use RSA keys to encrypt the generated keys used on the TS1120 and TS1130 drives. However, these were not affected by the RSA SecurID breach, and your tapes are safely protected.
Advanced Persistent Threats, viruses and other malware are no laughing matter. If you are concerned about security, contact IBM to help you assess your current environment and help you plan a robust protection strategy.
My series last week on IBM Watson (which you can read [here], [here], [here], and [here]) brought attention to IBM's Scale-Out Network Attached Storage [SONAS]. IBM Watson used a customized version of SONAS technology for its internal storage, and like most of the components of IBM Watson, IBM SONAS is commercially available as a stand-alone product.
Like many IBM products, SONAS has gone through various name changes. First introduced by Linda Sanford at an IBM SHARE conference in 2000 under the IBM Research codename Storage Tank, it was then delivered as a software-only offering SAN File System, then as a services offering Scale-out File Services (SoFS), and now as an integrated system appliance, SONAS, in IBM's Cloud Services and Systems portfolio.
If you are not familiar with SONAS, here are a few of my previous posts that go into more detail:
This week, IBM announces that SONAS has set a world record benchmark for performance, [a whopping 403,326 IOPS for a single file system]. The results are based on comparisons of publicly available information from Standard Performance Evaluation Corporation [SPEC], a prominent performance standardization organization with more than 60 member companies. SPEC publishes hundreds of different performance results each quarter covering a wide range of system performance disciplines (CPU, memory, power, and many more). SPECsfs2008_nfs.v3 is the industry-standard benchmark for NAS systems using the NFS protocol.
(Disclaimer: Your mileage may vary. As with any performance benchmark, the SPECsfs benchmark does not replicate any single workload or particular application. Rather, it encapsulates scores of typical activities on a NAS storage system. SPECsfs is based on a compilation of workload data submitted to the SPEC organization, aggregated from tens of thousands of fileservers, using a wide variety of environments and applications. As a result, it is comprised of typical workloads and with typical proportions of data and metadata use as seen in real production environments.)
The configuration tested involves SONAS Release 1.2 with 10 Interface Nodes and 8 Storage Pods, resulting in a single file system with over 900TB of usable capacity.
10 Interface Nodes; each with:
Maximum 144 GB of memory
One active 10GbE port
8 Storage Pods; each with:
2 Storage nodes and 240 drives
Drive type: 15K RPM SAS hard drives
Data Protection using RAID-5 (8+P) ranks
Six spare drives per Storage Pod
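Working through the arithmetic of that configuration: subtracting the spares and the RAID-5 parity gives the data drive count, and assuming 600GB per 15K RPM SAS drive (the drive size is not stated in the benchmark summary) lands right at the "over 900TB" figure.

```python
# Capacity arithmetic for the benchmarked SONAS configuration.
# Assumption: 600GB per 15K RPM SAS drive, since the drive size is
# not given above. The result is before file system overhead.

pods = 8
drives_per_pod = 240
spares_per_pod = 6

active = drives_per_pod - spares_per_pod      # 234 active drives per pod
ranks_per_pod = active // 9                   # RAID-5 (8+P): 26 ranks per pod
data_drives = pods * ranks_per_pod * 8        # 1664 data drives overall

usable_tb = data_drives * 0.6                 # about 998 TB, i.e. "over 900TB"
print(f"{data_drives} data drives, about {usable_tb:.0f} TB usable")
```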
IBM wanted a realistic "no compromises" configuration to be tested, by choosing:
Regular 15K RPM SAS drives, rather than a silly configuration full of super-expensive Solid State Drives (SSD) to plump up the results.
Moderate size, typical of what clients are asking for today. The Goldilocks rule applies. This SONAS is not a small configuration under 100TB, and nowhere close to the maximum supported configuration of 7,200 disks across 30 Interface Nodes and 30 Storage Pods.
Single file system, often referred to as a global name space, rather than using an aggregate of smaller file systems added together that would be more complicated to manage. Having multiple file systems often requires changes to applications to take advantage of the aggregate performance. It is also more difficult to load-balance your performance and capacity across multiple file systems. Of course, SONAS can support up to 256 separate file systems if you have a business need for this complexity.
The results are stunning. IBM SONAS handled three times more workload for a single file system than the next leading contender. All of the major players are there as well, including NetApp, EMC and HP.
"When Watson is booted up, the 15TB of total RAM are loaded up, and thereafter the DeepQA processing is all done from memory. According to IBM Research, the actual size of the data (analyzed and indexed text, knowledge bases, etc.) used for candidate answer generation and evidence evaluation is under 1 Terabyte (TB). For performance reasons, various subsets of the data are replicated in RAM on different functional groups of cluster nodes. The entire system is self-contained, Watson is NOT going to the internet searching for answers."
I had several readers ask me to explain the significance of the "Terabyte". I'll work my way up.
A bit is simply a zero (0) or one (1). This could answer a Yes/No or True/False question.
Most computers have standardized a byte as a collection of 8 bits. There are 256 unique combinations of ones and zeros possible, so a byte could be used to store a 2-digit integer, or a single upper or lower case character in the English alphabet. In practical terms, a byte could store your age in years, or your middle initial.
The Kilobyte is a thousand bytes, enough to hold a few paragraphs of text. A typical written page could be held in 4 KB, for example.
The IBM Challenge to play on Jeopardy! is being compared to the historic 1969 moon landing. To land on the moon, Apollo 11 had the "Apollo Guidance Computer" (AGC) which had 74KB of fixed read-only memory, and 2KB of re-writeable memory. Over [3500 IBM employees were involved] to get the astronauts to the moon and safely back to earth again.
The importance of this computer was highlighted in a [lecture by astronaut David Scott] who said: "If you have a basketball and a baseball 14 feet apart, where the baseball represents the moon and the basketball represents the Earth, and you take a piece of paper sideways, the thinness of the paper would be the corridor you have to hit when you come back."
The Megabyte is a thousand KB, or a million bytes. The 3.5-inch floppy diskette, mentioned in my post [A Boxfull of Floppies] could hold 1.44MB, or about 360 pages of text.
In the article [Wikipedia as a printed book], the printing of a select 400 articles resulted in a book 29 inches thick. Those 5,000 pages would consume about 20 MB of space.
One of my favorite resources I use to search is the Internet Movie Data Base [IMDB]. Leaving out the photos and videos, the [text-only portion of the IMDB database is just over 600 MB], representing nearly all of the actors, awards, nominations, television shows and movies. A standard CD-ROM can hold 700MB, so the text portion of the IMDB could easily fit on a single CD.
The Gigabyte is a thousand MB, or a billion bytes. My Thinkpad T410 laptop has 4GB of RAM and 320GB of hard disk space. My laptop comes with a DVD burner, and each DVD can hold up to 4.7GB of information.
The popular Wikipedia now has some 17 million articles, of which 3.5 million are in the English language. It would only take [14GB of space to hold the entire English portion] of Wikipedia. That is small enough to fit on twenty CDs, three DVDs, an Apple iPad or my cellphone (a Samsung Galaxy S Vibrant).
Perhaps you are thinking, "Someone should offer Wikipedia pre-installed on a small handheld!" Too late. [The Humane Reader] is able to offer 5,000 books and Wikipedia in a small device that connects to your television. This would be great for people who do not have access to the internet, or for parents who want their kids to do their homework, but not be online while they are doing it.
In the latest 2009 report of [How Much Information?] from the University of California, San Diego, the average American consumes 34 GB of information per day. This includes all the information from radio, television, newspapers, magazines, books and the internet that a person might look at or listen to throughout the day. This project is sponsored by IBM and others to help people understand the nature of our information-consumption habits.
Back in 1992, I visited a client in Germany. Their 90 GB of disk storage attached to their mainframe was the size of three refrigerators, and took five full-time storage administrators to manage.
The Terabyte is a thousand GB, or a trillion bytes. It is now possible to buy an external USB drive for your laptop or personal computer that holds 1TB or more. However, at the 40MB/sec speeds that USB 2.0 is capable of, it would take seven hours to do a bulk transfer in or out of the device.
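The seven-hour figure is easy to verify:

```python
# Bulk transfer time for 1TB over USB 2.0 at a sustained 40MB/s.

tb = 10**12            # decimal terabyte, as drive makers count
rate = 40 * 10**6      # 40 MB/s in bytes per second

hours = tb / rate / 3600
print(f"{hours:.1f} hours")   # prints "6.9 hours"
```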
IBM offers 1TB and 2TB disk drives in many of our disk systems. In 2008, IBM was preparing to announce the first 1TB tape drive. However, Sun Microsystems announced their own 1TB drive the day before our big announcement, so IBM had to rephrase the TS1130 announcement to [The World's Fastest 1TB tape drive!]
A typical academic research library will hold about 2TB of information. For the [US Library of Congress] print collection is considered to be about 10TB, and their web capture team has collected 160TB of digital data. If you are ever in the Washington DC, I strongly recommend a visit to the Library of Congress. It is truly stunning!
Full-length computer animated movies, like [Happy Feet], consume about 100TB of disk storage during production. IBM offers disk systems that can hold this much data. For example, the IBM XIV can hold up to 151 TB of usable disk space in the size of one refrigerator.
A Key Performance Indicator (KPI) for some larger companies is the number of TB that can be managed by a full-time employee, referred to as TB/FTE. Discussions about TB/FTE are available from IT analysts including [Forrester Research] and [The Info Pro].
The website [Ancestry.com] claims to have over 540 million names in its genealogical database, with a storage of 600TB, with the inclusion of [US census data from 1790 to 1930]. The US government took nine years to process the 1880 census, so for the 1890 census, it rented equipment from Herman Hollerith's Tabulating Machine Company. This company would later merge with two others in 1911 to form what is now called IBM.
A Petabyte is a thousand TB, or a quadrillion bytes. It is estimated that all printed materials on Earth would represent approximately 200 PB of information.
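For readers who prefer code to prose, the whole ladder of units can be captured in a small helper that renders a byte count in the decimal (powers of 1000) units used in this post.

```python
# Render a byte count using the decimal unit ladder: bytes, KB, MB,
# GB, TB, PB, each step a factor of 1000.

def human(n_bytes):
    for unit in ("bytes", "KB", "MB", "GB", "TB", "PB"):
        if n_bytes < 1000 or unit == "PB":
            return f"{n_bytes:g} {unit}"
        n_bytes /= 1000

print(human(14 * 10**9))     # English Wikipedia text: prints "14 GB"
print(human(200 * 10**15))   # all printed material: prints "200 PB"
```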
IBM's largest disk system, the Scale Out Network Attached Storage (SONAS), comprises up to 7,200 disk drives and can hold over 11 PB of information. A smaller 10-frame model, the same size as IBM Watson, with six interface nodes and 19 storage pods, could hold over 7 PB of information.
For those of us in the IT industry, 1TB is small potatoes. I, for one, was expecting it to be much bigger. But for everyone else, the equivalent of 200 million pages of text that IBM Watson has loaded inside is an incredibly large repository of information. I suspect IBM Watson probably contains the complete works of Shakespeare as well as other fiction writers, the IMDB database, all 3.5 million articles of Wikipedia, religious texts like the Bible and the Quran, famous documents like the Magna Carta and the US Constitution, and reference books like a Dictionary, a Thesaurus, and "Gray's Anatomy". And, of course, lots and lots of lists.
For those on Twitter, follow [@ibmwatson] these next three days during the challenge.
The weather has warmed up here in Tucson so I started my Spring Cleaning early this year and unearthed from my garage a [Bankers Box] full of floppy diskettes.
IBM invented the floppy disk back in 1971, and continued to make improvements and enhancements through the 1980s and 1990s. It will be one of the many inventions celebrated as part of IBM's Centennial (100-year) anniversary. Here is an example [T-shirt].
IBM needed a way to send out small updates and patches for microcode of devices out in client locations. IBM had drives that could write information, and sent out "read-only" drives to the customer locations to receive these updates. These were flexible plastic circles with a magnetic coating, and placed inside a square paper sleeve. Imagine a floppy disk the size of a piece of standard paper. The 8-inch floppy fit conveniently in a manila envelope, sendable by standard mail, and could hold nearly 80KB of data.
I've been using floppies for the past thirty years. Here are some of my fondest memories:
While still in high school, my friend Franz Kurath and I formed "Pearson Kurath Systems", a software development firm. We wrote computer programs to run on UNIX and Personal Computers for small businesses here in Tucson. Whenever we developed a clever piece of code, a subroutine or procedure, we would save it on a floppy disk and re-use it for our next project. We wrote in the BASIC language, and our databases were simple Comma-Separated Values (CSV) flat files.
The 5.25-inch floppies we used could hold 360KB, and were flexible like the 8-inch models. Later versions of these 5.25-inch floppies would be able to hold as much as 1.2MB of data. We would convert single-sided floppies into double-sided ones by cutting out a notch in the outer sleeve. Covering up the notches would mark them as read-only.
The 3.5-inch floppies were introduced with a hard plastic shell, with the selling point that you could slap on a mailing label and postage and send one "as is" without the need for a separate envelope. These new 3.5-inch floppies initially held 720KB in double-density format, and later "HD" high-density versions could hold 1.44MB of data. The term "diskette" was used to associate these new floppies with [hard-shelled tape cassettes]. Sliding a plastic tab would allow floppies to be marked "read-only". IBM has the patent on this clever invention.
Continuing our computer programming business in college, Franz and I took out a bank loan to buy our first Personal Computer, for over $5,000 USD. Until then, we had to use equipment belonging to each client. The banks we went to didn't understand why we needed a computer, and suggested we just track our expenses on traditional green-and-white ledger paper. Back then, personal computers were for balancing your checkbook, playing games and organizing your collection of cooking recipes. But for us, it was a production machine. A computer with both 5.25-inch and 3.5-inch drives could copy files from one format to another as needed. The boost in productivity paid for itself within months.
Apple launched its Macintosh computer in 1984, with a built-in 3.5-inch disk drive as standard equipment. Here is a YouTube video of an [astronaut ejecting a floppy disk] from an Apple computer in space.
In my senior year at the University of Arizona, my roommate Dave had borrowed my backpack to hold his lunch for a bike ride. He thought he had taken everything out, but forgot to remove my 3.5-inch floppy diskette containing files for my senior project. By the time he got back, the diskette was covered in banana pulp. I was able to rescue my data by cracking open the plastic outer shell, cleaning the flexible magnetic media in soapy water, placing it back into the plastic shell of a second diskette, and then copying the data off to a third diskette.
After graduating from college, Franz and I went our separate ways. I went to work for IBM, and Franz went to work for [Chiat/Day], the advertising agency famous for the 1984 Macintosh commercial. We still keep in touch through Facebook.
At IBM, I was given a 3270 terminal to do my job, and would not be assigned a personal computer until years later. Once I had a personal computer at home and at work, the floppy diskette became my "briefcase". I could download a file or document at work, take it home, work on it til the wee hours of the morning, and then come back the next morning with the updated effort.
To help prepare me for client visits and public speaking at conferences, IBM loaned me out to local schools to teach. This included teaching Computer Science 101 at Pima Community College. When asked by a student whether to use "disc" or "disk", I wrote a big letter "C" on the left side of the chalkboard, and a big letter "K" on the right side. If it is round, I told the students while pointing at the letter "C", like a CD-ROM or DVD, use "disc". If it has corners, pointing to corners of the letter "K", like a floppy diskette or hard disk drive, use "disk".
On one of my business trips to visit a client, we discovered the client had experienced a problem that we had just recently fixed. Normally, this would have meant cutting a Program Temporary Fix (PTF) to a 3480 tape cartridge at an IBM facility and sending it to the client by mail. Unwilling to wait, I offered to download the PTF onto a floppy diskette on my laptop, upload it from a PC connected to their systems, and apply it there. This involved a bit of REXX programming to deal with the differences between the ASCII and EBCDIC character sets, but it worked, and a few hours later they were able to confirm the fix worked.
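The character-set translation above can be sketched in Python rather than REXX; this minimal example assumes the standard "cp037" (EBCDIC US/Canada) codec, and the PTF statement shown is invented for illustration:

```python
# A sketch of the ASCII <-> EBCDIC translation idea (the original fix used
# REXX); Python's standard "cp037" codec is EBCDIC (US/Canada). The PTF
# statement below is a made-up example, not an actual fix.
ascii_text = "APPLY PTF UQ12345"

# Translate ASCII text to EBCDIC bytes before uploading to the host...
ebcdic_bytes = ascii_text.encode("cp037")

# ...and back to ASCII after downloading from the host.
round_trip = ebcdic_bytes.decode("cp037")
assert round_trip == ascii_text
```

The same two calls handle either direction, which is all the original REXX glue code really needed to do.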
In 1998, Apple signaled the beginning of the end of the floppy disk era, announcing that its latest "iMac" would not come with a built-in floppy drive. David Adams has a great article on this titled [The iMac and the Floppy Drive: A Conspiracy Theory]. You can get external floppy drives that connect via USB, so not having an internal drive is no longer a big deal.
While teaching a Top Gun class to a mix of software and hardware sales reps, one of the students asked what a "U" was. He had noticed "2U" and "3U" next to various products and wondered what that was referring to. The "U" represents the [standard unit of measure for height of IT equipment in standard racks]. To help them visualize, I explained that a 5.25-inch floppy disk was "3U" in size, and a 3.5-inch floppy diskette was "2U". Thus, a "U" is 1.75 inches, the thinnest dimension on a two-by-four piece of lumber. Servers that were only 1U tall would be referred to as "pizza boxes" for having similar dimensions.
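The rack-unit arithmetic from that class can be sketched in a couple of lines of Python (the floppy comparisons are the same approximations used in the classroom):

```python
# A rack unit ("U") is 1.75 inches of vertical space in a standard rack.
RACK_UNIT_INCHES = 1.75

def height_in_inches(units: int) -> float:
    """Convert a height in rack units (U) to inches."""
    return units * RACK_UNIT_INCHES

# The classroom mnemonics: a 5.25-inch floppy is about 3U on its side,
# and a 3.5-inch diskette is about 2U.
print(height_in_inches(3))  # 5.25
print(height_in_inches(2))  # 3.5
```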
Every year, right around November or so, my friends and family bring me their old computers to wipe clean. Either I reload them with the latest Ubuntu Linux so that their kids can use them for homework, or I donate them to charity. Last November, I got a computer that could not boot from a CD-ROM, forcing me to build a bootable floppy. This gave me a chance to check out the various 1-disk and 2-disk versions of Linux and other rescue disks. I also have a 3-disk set of floppies for booting OS/2 in command line mode.
So while this unexpected box of nostalgia derailed my efforts to clean out my garage this weekend, it did inspire me to try to get some of the old files off them and onto my PC hard drive. I have already retrieved some low-res photographs, some emails I sent out, and trip reports I wrote. While floppy diskettes were notorious for being unreliable, and this box of floppies has been in the heat and cold for many Arizonan summers and winters, I am amazed that I was able to read the data off most of them so far, all the way back to data written in 1989. While the data is readable, in most cases I can't render it into useful information. This brings up a few valuable lessons:
Backups are not Archives
Some of the files are in proprietary formats, such as my backups for TurboTax software. I would need a PC running a correct level of Windows operating system, and that particular software, just to restore the data. TurboTax shipped new software every year, and I don't know how forward or backward-compatible each new release was.
Another set of floppies are labeled as being in "FDBACK" format. I have no idea what these are. Each floppy has just two files, "backup.001" and "control.001", for example.
Backups are intended solely to protect against unexpected loss from broken hardware or corrupted data. If you plan to keep data as archives for long-term retention, use archive formats that will last a long time, so that you can make sense of them later.
Operating System Compatibility
Windows 7 and all of my favorite flavors of Linux are able to recognize the standard "FAT" file system that nearly all of my floppies are written in. Sadly, I have some files that were compressed under the OS/2 operating system using software called "Stacker". I may have to stand up an OS/2 machine just to check out what is actually on those floppies.
You can't judge a book by its cover
Floppies were a convenient form of data interchange. Sometimes, I reused commercially-labeled floppies to hold personal files. So, just because a floppy says "America On-Line (AOL) version 2.5 Installation", I can't just toss it away. It might actually contain something else entirely. This means I need to mount each floppy to check on its actual contents.
So what will I do with the floppies I can't read, can't write, and can't format? I think I will convert them into a [retro set of coasters], to protect my new living room furniture from hot and cold beverages.
Each quarter since 2006, the [IBM Migration Factory] team has tallied the number of clients who have moved to IBM servers and storage systems from competitive hardware. Well, I've just seen the latest numbers, for the third quarter of 2010, and it looks like we set a new quarterly record with nearly 400 total migrations to IBM from Oracle/Sun and HP.
It's clear that companies and governments worldwide are seeing greater value in IBM systems, while Oracle and HP watch their customer bases erode. In just this past 3Q 2010, nearly 400 clients have moved over to IBM -- almost all of them from Oracle/Sun and HP. Of these, 286 clients migrated to IBM Power Systems, running AIX, Linux and IBM i operating systems -- nearly 175 from Oracle/Sun and nearly 100 from HP. The number of migrations to IBM Power Systems through the first three quarters of 2010 is nearly 800, already exceeding the total for all of last year by more than 200.
Let's do the math.... Since IBM established its Migration Factory program in 2006, more than 4,500 clients have switched to IBM. More than 1,000 from Oracle/Sun and HP joined the exodus this year alone. In less than five years, almost 3,000 of these clients -- including more than 1,500 from Oracle/Sun and more than 1,000 from HP -- have chosen to run their businesses on IBM's Power Systems. That's more than a client per day making the move to IBM!
And as the servers go, so goes the storage. Clients are re-discovering IBM as a server and storage powerhouse with a strong portfolio of servers, disk and tape systems, and learning how synergies between servers and storage can provide them real business benefits.
Adding it all up, it's clear that IBM's multi-billion dollar investment in helping to build a smarter planet with workload-optimized systems is paying off -- and that, more and more, clients are selecting IBM over the competition to help them meet their business needs.
A client asked me to explain "Nearline storage" to them. This was easy, I thought, as I started my IBM career on DFHSM, now known as DFSMShsm for z/OS, which was created in 1977 to support the IBM 3850 Mass Storage System (MSS), a virtual storage system that blended disk drives and tape cartridges with robotic automation. Here is a quick recap:
Online storage is immediately available for I/O. This includes DRAM memory, solid-state drives (SSD), and always-on spinning disk, regardless of rotational speed.
Nearline storage is not immediately available, but can be made online quickly without human intervention. This includes optical jukeboxes, automated tape libraries, as well as spin-down massive array of idle disk (MAID) technologies.
Offline storage is not immediately available, and requires some human intervention to bring online. This can include USB memory sticks, CD/DVD optical media, shelf-resident tape cartridges, or other removable media.
Sadly, it appears a few storage manufacturers and vendors have been misusing the term "Nearline" to refer to "slower online" spinning disk drives. See this [June 2005 technology paper from Seagate], and this [2002 NetApp Press Release]; the latter includes this contradiction about their "NearStore" disk array. Here is the excerpt:
"Providing online access to reference information—NetApp nearline storage solutions quickly retrieve and replicate reference and archive information maintained on cost-effective storage—medical images, financial models, energy exploration charts and graphs, and other data-intensive records can be stored economically and accessed in multiple locations more quickly than ever"
Which is it, "online access" or "nearline storage"?
If a client asks why slower drives consume less energy or generate less heat, I can explain that; but if they ask why slower drives must have SATA connections, that is a different discussion. The speed of a drive and its connection technology are for the most part independent. A 10K RPM drive can be made with an FC, SAS or SATA connection.
I am opposed to using "Nearline" just to distinguish four-digit speeds (such as 5400 or 7200 RPM) from "online" five-digit speeds (10,000 and 15,000 RPM). The difference in performance between 10K RPM and 7200 RPM spinning disks is minuscule compared to the difference between solid-state drives and any spinning disk, or the difference between spinning disk and tape.
I am also opposed to using the term "Nearline" for online storage systems just because they are targeted for the typical use cases like backup, archive or other reference information that were previously directed to nearline devices like automated tape libraries.
Can we all just agree to refer to drives as "fast" or "slow", or give them RPM rotational speed designations, rather than try to incorrectly imply that FC and SAS drives are always fast, and SATA drives are always slow? Certainly we don't need new terms like "NL-SAS" just to represent a slower SAS connected drive.
This week, July 26-30, 2010, I am in Washington DC for the annual [2010 System Storage Technical University]. As with last year, we have joined forces with the System x team. Since we are in Washington DC this time, IBM added a "Federal Track" to focus on government challenges and solutions. Basically, attendees get three conferences for one low price.
This conference was previously called the "Symposium", but IBM changed the name to "Technical University" to emphasize the technical nature of the conference. No marketing puffery like "Journey to the Private Cloud" here! Instead, this is bona fide technical training, qualifying attendees to count this towards their Continuing Professional Education (CPE).
(Note to my readers: The blogosphere is like a playground. In the center are four-year-olds throwing sand into each other's faces, while mature adults sit on benches watching the action, and only jumping in as needed. For example, fellow blogger Chuck Hollis (EMC) got sand in his face for promising to resign if EMC ever offered a tacky storage guarantee, and then [failed to follow through on his promise] when it happened.
Several of my readers asked me to respond to another EMC blogger's latest [fistful of sand].
A few months ago, fellow blogger Barry Burke (EMC) committed to [stick to facts] in posts on his Storage Anarchist blog. That didn't last long! BarryB apparently has fallen in line with EMC's over-promise-then-under-deliver approach. Unfortunately, I will be busy covering the conference and IBM's robust portfolio of offerings, so won't have time to address BarryB's stinking pile of rumor and hearsay until next week or later. I am sorry to disappoint.)
This conference is designed to help IT professionals make their business and IT infrastructure more dynamic and, in the process, help reduce costs, mitigate risks, and improve service. This technical conference event is geared to IT and Business Managers, Data Center Managers, Project Managers, System Programmers, Server and Storage Administrators, Database Administrators, Business Continuity and Capacity Planners, IBM Business Partners and other IT Professionals. This week will offer over 300 different sessions and hands-on labs, certification exams, and a Solutions Center.
For those who want a quick stroll through memory lane, here are my posts from past events:
In keeping with IBM's leadership in Social Media, the IBM Systems Lab Services and Training team running this event has its own [Facebook Fan Page] and [blog]. IBM Technical University has a Twitter account [@ibmtechconfs] and hashtag #ibmtechu. You can also follow me on Twitter [@az990tony].
Well, today's Tuesday, and you know what that means... IBM Announcements!
This week, IBM has its huge 3Q Launch. This comes on top of the [2Q results] IBM released yesterday. You can read how the rest of the company did, but it is good to see that IBM grew in both revenue and market share for storage!
As with any IBM launch of this magnitude, there are so many enhancements, I will spread them across several posts.
IBM System Storage TS7610 ProtecTIER® Deduplication Appliance Express
The TS7610 is a smaller appliance than the TS7650 we introduced last year, taking up only 3U of rack space (2U for the disk itself, and a 1U slide rail to simplify maintenance). This is designed for smaller deployments, such as midsized businesses between 100 and 1000 employees that back up 3TB of data per week or less. The unit relies on RAID-protected SATA drives. Thanks to the same ProtecTIER data deduplication we have on the TS7650, the TS7610 can hold up to 135TB of backup data on just 5.4TB of disk capacity, with in-line data ingest at 80 MB/sec. This little Virtual Tape Library (VTL) emulates up to four TS3500 libraries, with 64 LTO-3 drives and over 8000 virtual tapes. See the [Announcement letter] for details.
The [ProtecTIER Entry Edition] offers a hub-and-spoke approach to replication. You can have up to twelve (12) TS7610 boxes (the "spokes") replicate to a central VTL (the "hub"). This can be ideal for protecting remote office or branch office deployments.
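As a quick sanity check on those TS7610 numbers, here is a back-of-the-envelope sketch in Python (assuming decimal TB-to-MB conversion for simplicity; actual usable figures will vary):

```python
# Back-of-the-envelope math for the TS7610 figures quoted above.
# Assumes decimal units (1 TB = 1,000,000 MB) for simplicity.
physical_tb = 5.4     # usable disk capacity
logical_tb = 135.0    # backup data the appliance can represent
dedup_ratio = logical_tb / physical_tb
print(f"{dedup_ratio:.0f}:1 effective deduplication ratio")  # 25:1

# At 80 MB/sec in-line ingest, a full 3TB weekly backup takes roughly:
hours = (3 * 1_000_000) / 80 / 3600
print(f"about {hours:.1f} hours")  # about 10.4 hours
```

In other words, the 135TB figure implies a 25:1 deduplication ratio, and a full weekly 3TB backup fits comfortably inside an overnight-plus window.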
IBM doubles the storage capacity by using 2TB hard disk drives for the N3300 and N3400 series models, maximizes customer satisfaction through Partner Select Bundles (software bundles) for all of the N3000 series (N3300, N3400, N3600), and offers Application and Server Packs (software bundles) for N3400 models.
For the high-end, IBM introduces an enhanced Performance Acceleration Module (PAM II) bundle for N7900 Gateway. This bundle includes two 512GB Solid State Drive PAM II adapters, two dual-port 10GbE TOE network interface cards (NIC), and various software features.
The DS5020 and EXP520 join their bigger siblings, the DS5100 and DS5300, in supporting Solid State Drives (SSD), available in 73GB and 300GB capacities. A new air filter bezel is also available for these when used in dusty environments. See the [Announcement letter] for details.
For my friends down in Brazil, a new 2.8-meter power cord that supports 220-250 volts is now available for all DS4000 and DS5000 series disk systems. Thank you for your business! (Obrigado pelo seu negócio!)
IBM Tivoli Storage FlashCopy Manager v2.2
I covered this latest release in my post [FlashCopy Manager v2.2] but the marketing team felt we should include it with this launch to get added exposure and visibility.
I'll try to get to the rest in separate posts over the rest of this week.
Well, I'm back safely from my tour of Asia. I am glad to report that Tokyo, Beijing and Kuala Lumpur are pretty much how I remember them from the last time I was there in each city. I have since been fighting jet lag by watching the last thirteen episodes of LOST season 6 and the series finale.
Recently, I have started seeing a lot of buzz on the term "Storage Federation". The concept is not new, but rather based on the work in database federation, first introduced in 1985 in [A federated architecture for information management] by Heimbigner and McLeod. For those not familiar with database federation, you can take several independent autonomous databases, and treat them as one big federated system. For example, this would allow you to issue a single query and get results across all the databases in the federated system. The advantage is that it is often easier to federate several disparate heterogeneous databases than to merge them into a single database. [IBM Infosphere Federation Server] is a market leader in this space, with the capability to federate DB2, Oracle and SQL Server databases.
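For those who have never seen federation in action, here is a toy sketch in Python: two autonomous SQLite databases answered by a single query. SQLite's ATTACH is only a stand-in for a real federation server such as IBM InfoSphere Federation Server, and all of the table and database names are invented:

```python
import os
import sqlite3
import tempfile

# Toy sketch of the federation idea: two independent databases answered by
# one query. SQLite's ATTACH is only a stand-in for a real federation server
# such as IBM InfoSphere Federation Server; all names here are invented.
with tempfile.TemporaryDirectory() as tmp:
    east = os.path.join(tmp, "east.db")
    west = os.path.join(tmp, "west.db")

    # Populate two autonomous databases, each with its own "orders" table.
    for path, rows in [(east, [("disk", 5)]), (west, [("tape", 7)])]:
        con = sqlite3.connect(path)
        con.execute("CREATE TABLE orders (product TEXT, qty INTEGER)")
        con.executemany("INSERT INTO orders VALUES (?, ?)", rows)
        con.commit()
        con.close()

    # Federate: attach the second database and issue a single query
    # that spans both.
    fed = sqlite3.connect(east)
    fed.execute("ATTACH DATABASE ? AS west", (west,))
    total = fed.execute(
        "SELECT SUM(qty) FROM (SELECT qty FROM orders "
        "UNION ALL SELECT qty FROM west.orders)"
    ).fetchone()[0]
    fed.close()
    print(total)  # 12
```

The point is that neither database had to be merged into the other; the query layer makes them look like one system, which is exactly the appeal of federation over migration.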
Storage expansion: You want to increase the storage capacity of an existing storage system that cannot accommodate the total amount of capacity desired. Storage Federation allows you to add additional storage capacity by adding a whole new system.
Storage migration: You want to migrate from an aging storage system to a new one. Storage Federation allows the two systems to be joined, the data evacuated from the first onto the second, and then the first system removed.
Safe system upgrades: System upgrades can be problematic for a number of reasons. Storage Federation allows a system to be removed from the federation and be re-inserted again after the successful completion of the upgrade.
Load balancing: Similar to storage expansion, but on the performance axis, you might want to add additional storage systems to a Storage Federation in order to spread the workload across multiple systems.
Storage tiering: In a similar light, storage systems in a Storage Federation could have different capacity/performance ratios that you could use for tiering data. This is similar to the idea of dynamically re-striping data across the disk drives within a single storage system, such as with 3PAR's Dynamic Optimization software, but extends the concept to cross storage system boundaries.
To some extent, IBM SAN Volume Controller (SVC), XIV, Scale-Out NAS (SONAS), and Information Archive (IA) offer most, if not all, of these capabilities. EMC claims its VPLEX will be able to offer storage federation, but only with other VPLEX clusters, which brings up a good question. What about heterogeneous storage federation? Before anyone accuses me of throwing stones at glass houses, let's take a look at each IBM solution:
IBM SAN Volume Controller
The IBM SAN Volume Controller has been doing storage federation since 2003. Not only can the SAN Volume Controller bring together storage from a variety of heterogeneous storage systems, the SVC cluster itself can be a mix of different hardware models. You can have a 2145-8A4 node pair, 2145-8G4 node pair, and the new 2145-CF8 node pair, all combined into a single SVC cluster. Upgrading SVC hardware nodes in an SVC cluster is always non-disruptive.
IBM XIV storage system
The IBM XIV has two kinds of independent modules. Data modules have a processor, cache and 12 disks. Interface modules are data modules with additional processors and FC and Ethernet (iSCSI) adapters. Because these two modules play different roles in an XIV "colony", the number of each type is predetermined. Entry-level six-module systems have 2 interface and 4 data modules. Full 15-module systems have 6 interface and 9 data modules. Individual modules can be added or removed non-disruptively in an XIV.
IBM Scale-Out NAS
SONAS is composed of three kinds of nodes that work in concert: a management node, one or more interface nodes, and two or more storage nodes. The storage nodes are paired to manage up to 240 disk drives in a storage pod. Individual interface or storage nodes can be added or removed non-disruptively in the SONAS. The underlying technology, the General Parallel File System (GPFS), has been doing storage federation since 1996 for some of the largest top 500 supercomputers in the world.
IBM Information Archive (IA)
For the IA, there are 1, 2 or 3 nodes, which manage a set of collections. A collection can either be file-based, using industry-standard NAS protocols, or object-based, using the popular System Storage™ Archive Manager (SSAM) interface. Normally, you have as many collections as nodes, but each node is powerful enough to manage two collections, providing N-1 availability. This allows a node to be removed, and a new node added into the IA "colony", in a non-disruptive manner.
Even in an ant colony, there are only a few types of ants, with typically one queen, several males, and lots of workers. But all the ants are red. You don't see colonies that mix between different species of ants. For databases, federation was a way to avoid the much harder task of merging databases from different platforms. For storage, I am surprised people have latched on to the term "federation", given our mixed results in the other "federations" we have formed, which I have conveniently (IMHO) ranked from least effective to most effective:
The Union of Soviet Socialist Republics (USSR)
My father used to say, "If the Soviet Union were in charge of the Sahara desert, they would run out of sand in 50 years." The [Soviet Union] actually lasted 68 years, from 1922 to 1991.
The United Nations (UN)
After the previous League of Nations failed, the UN was formed in 1945 to facilitate cooperation in international law, international security, economic development, social progress, and human rights, to achieve world peace by stopping wars between countries, and to provide a platform for dialogue.
The European Union (EU)
With the collapse of the Greek economy, and the [rapid growth of debt] in the UK, Spain and France, there are concerns that the EU might not last past 2020.
The United States of America (USA)
My own country is a federation of states, each with its own government. California's financial crisis was compared to the one in Greece. My own state of Arizona is under boycott from other states because of its recent [immigration law]. However, I think the US has managed better than the EU because it has evolved over the past 200 years.
The Organization of the Petroleum Exporting Countries [OPEC]
Technically, OPEC is not a federation of cooperating countries, but rather a cartel of competing countries that have agreed on total industry output of oil to increase individual members' profits. Note that it was a non-OPEC company, BP, that could not "control their output" in what has now become the worst oil spill in US history. OPEC was formed in 1960, and is expected to collapse sometime around 2030 when the world's oil reserves run out. Matt Savinar has a nice article on [Life After the Oil Crash].
United Federation of Planets
The [Federation] fictitiously described in the Star Trek series appears to work well, an optimistic view of what federations could become if you let them evolve long enough.
Given the mixed results with "federation", I think I will avoid using the term for storage, and stick to the original term "scale-out architecture".
Now that the US Recession has been declared over, companies are looking to invest in IT again. To help you plan your upcoming investments, here are some upcoming events in April.
SNW Spring 2010, April 12-15
IBM is a Platinum Plus sponsor at this [Storage Networking World event], to be held April 12-15 at the Rosen Shingle Creek Resort in Orlando, Florida. If you are planning to go, here's what you can go look for:
IBM booth at the Solution Center featuring the DS8700 and XIV disk systems, SONAS and the Smart Business Storage Cloud (SBSC), and various Tivoli storage software
IBM kiosk at the Platinum Galleria focusing on storage solutions for SAP and Microsoft environments
IBM Senior Engineer Mark Fleming presenting "Understanding High Availability in the SAN"
IBM sponsored "Expo Lunch" on Tuesday, April 13, featuring Neville Yates, CTO of IBM ProtecTIER, presenting "Data Deduplication -- It's not Magic - It's Math!"
IBM CTO Vincent Hsu presenting "Intelligent Storage: High Performance and Hot Spot Elimination"
IBM Senior Technical Staff Member (STSM) Gordon Arnold presenting "Cloud Storage Security"
One-on-One meetings with IBM executives
I have personally worked with Mark, Neville, Vincent and Gordon, so I am sure they will do a great job in their presentations. Sadly, I won't be there myself, but fellow blogger [Rich Swain from IBM] will be at the event to blog about all the activities there.
Jim Stallings - General Manager, Global Markets, IBM Systems and Technology Group
Scott Handy - Vice President, WW Marketing, Power Systems, IBM Systems and Technology Group
Dan Galvan - Vice President, Marketing & Strategy, Storage and Networking Systems, IBM Systems and Technology Group
Inna Kuznetsova - Vice President, Marketing and Sales Enablement, Systems Software, IBM Systems and Technology Group
Jeanine Cotter - Vice President, Systems Services, IBM Global Technology Services
The webinar will include client testimonials from various companies as well.
Dynamic Infrastructure Executive Summit, April 27-29
I will be there at this 2-and-a-half-day [Executive Summit] in Scottsdale, Arizona, to talk to company executives. Discover how IBM can help you manage your ever-increasing amount of information with an end-to-end, innovative approach to building a dynamic infrastructure. You will learn about our innovative solutions and find out how you can effectively transform your enterprise for a smarter planet.
In addition to dominating the gaming world, producing chips for the Nintendo Wii, Sony PlayStation, and Microsoft Xbox 360, IBM also dominates the world of Linux and UNIX servers. Today, IBM announced its new POWER7 processor, and a line of servers that use this technology. Here is a quick [3-minute video] about the POWER7.
While others might be [Dancing on Sun's grave], IBM instead is focused on providing value to the marketplace. Here is another quick [2-minute video] about why thousands of companies have switched from Sun, HP and Dell over to IBM.
Am I dreaming? On his Storagezilla blog, fellow blogger Mark Twomey (EMC) brags about EMC's standard benchmark results, in his post titled [Love Life. Love CIFS.]. Here is my take:
A Full 180 degree reversal
For the past several years, EMC bloggers have argued, both in comments on this blog and on their own blogs, that standard benchmarks are useless and should not be used to influence purchase decisions. While we all agree that "your mileage may vary", I find standard benchmarks useful as part of an overall approach to comparing and selecting which vendors to work with, which architectures or solution approaches to adopt, and which products or services to deploy. I am glad to see that EMC has finally joined the rest of the planet on this. I find it funny that this reversal sounds a lot like their reversal from "Tape is Dead" to "What? We never said tape was dead!"
Impressive CIFS Results
The Standard Performance Evaluation Corporation (SPEC) has developed a series of NFS benchmarks; the latest, [SPECsfs2008], added support for CIFS. So, on the CIFS side, EMC's benchmarks compare favorably against previous CIFS tests from other vendors.
On the NFS side, however, EMC is still behind Avere, BlueArc, Exanet, and IBM/NetApp. For example, EMC's combination of Celerra gateways in front of V-Max disk systems resulted in 110,621 OPS with overall response time of 2.32 milliseconds. By comparison, the IBM N series N7900 (tested by NetApp under their own brand, FAS6080) was able to do 120,011 OPS with 1.95 msec response time.
Even though Sun invented the NFS protocol in the early 1980s, they take an EMC-like stance against the standard benchmarks that measure it. Last year, fellow blogger Bryan Cantrill (Sun) gave his [Eulogy for a Benchmark]. I was going to make points about this, but fellow blogger Mike Eisler (NetApp) [already took care of it]. We can all learn from this. Companies that don't believe in standard benchmarks can either reverse course (as EMC has done), or continue their downhill decline until they are acquired by someone else.
(My condolences to those at Sun getting laid off. Those of you who hire on with IBM can get re-united with your former StorageTek buddies! Back then, StorageTek people left Sun in droves, knowing that Sun didn't understand the mainframe tape marketplace that StorageTek focused on. Likewise, many question how well Oracle will understand Sun's hardware business in servers and storage.)
What's in a Protocol?
Both CIFS and NFS have been around for decades, and comparisons can sometimes sound like religious debates. Traditionally, CIFS was used to share files between Windows systems, and NFS for Linux and UNIX platforms. However, Windows can also handle NFS, while Linux and UNIX systems can use CIFS. If you are using a recent level of VMware, you can use NFS as an alternative to a Fibre Channel SAN to store your external VMDK files.
The Bigger Picture
There is a significant shift going on from traditional database repositories to unstructured file content. Today, as much as [80 percent of data is unstructured]. Shipments this year are expected to grow 60 percent for file-based storage, and only 15 percent for block-based storage. With the focus on private and public clouds, NAS solutions will be the battleground for 2010.
So, I am glad to see EMC starting to cite standard benchmarks. Hopefully, SPC-1 and SPC-2 benchmarks are forthcoming?
Last Tuesday, we had our official "Grand Opening" for the new Tucson Executive Briefing Center!
We sent out fancy invitations to all the IBM executives who supported this center, local dignitaries from the Tucson and State of Arizona level, and all of the IBM employees on the Tucson campus.
Since our new center is significantly cozier (5,700 square feet versus our previous 15,000 square feet), we split the day into two separate events: the first for the IBM executives and local VIPs, and the second for the rest of the IBM employees on campus.
Of course, there is no free lunch. The day started out with a series of speeches. My manager, Doug Davies, was the master of ceremonies to introduce each speaker.
Alistair Symon, IBM Vice President of Enterprise Storage, explained how storage affects everyone's lives. If you use an ATM to withdraw money, for example, you are most probably using IBM System Storage behind the scenes. Nearly all of the IBM disk and tape storage products are designed here in Tucson.
Bruce Wright (shown here) directs the University of Arizona's Office of University Research Parks, serves as CEO of the UA Tech Park, and is the founder and president of the Arizona Center for Innovation. Bruce said a few words on how pleased he was that IBM decided to reverse its July 2011 decision to leave Tucson. The UofA owns all the property, renting four of the eleven buildings back to IBM, so it is effectively our landlord. Next year will mark the 20th anniversary of IBM's sale of the technology park to the University.
Tucson Councilwoman Shirley Scott talked about the importance of high-paying jobs to the local economy. While IBMers in Tucson are paid less than our counterparts in San Jose, Austin, Raleigh or Poughkeepsie, we are certainly [paid more than the average Tucsonan], thus helping to raise the standard of living here.
Dr. Michael Varney, president and CEO of the local Tucson Metropolitan Chamber of Commerce, praised IBM for its strong reputation in ethics and diversity.
My new second-line manager, Karl Duvalsaint, and my new third-line manager, Doug Dreyer, emphasized the importance of co-locating Briefing Centers in sites that have Research and Development activity. It is important for clients to interact directly with developers, and it is also good for developers to understand directly from clients their needs, preferences and requirements. Worldwide, the IBM Systems and Technology Group has only twelve Executive Briefing Centers, and the Tucson EBC is one of them.
This is not to say that IBM does not have centers in other locations. Our newest client center in Singapore is a shining example. Of course, if they want experts to speak to clients there, they need to be flown in. Doug Dreyer mentioned that IBM plans to launch six such centers in Africa as well.
Next was the ribbon cutting. From left to right: Lee Olguin (our Gunnery Sergeant), Tucson Councilwoman Shirley Scott, UofA's Bruce Wright, IBM VP of Program Management Calline Sanchez, my second-line manager Karl Duvalsaint, IBM VP Alistair Symon, my first-line manager Doug Davies, Tucson Chamber of Commerce President Dr. Michael Varney, and my third-line manager Doug Dreyer. We had a member of the local high school band do the drum roll.
Once the ribbon was cut, the IBM executives and local VIPs were brought in to see the new facility, which has two large rooms, one common dining area, an 800-square-foot green data center to showcase our products, our own set of restrooms, a galley to stage the food and beverage service, and two smaller rooms for private conversations or conference calls. A local high school band provided live music throughout the day.