Well, this week I am in Maryland, just outside of Washington DC. It's a bit cold here.
Robin Harris over at StorageMojo put out this Open Letter to Seagate, Hitachi GST, EMC, HP, NetApp, IBM and Sun about the results of two academic papers, one from Google and another from Carnegie Mellon University (CMU). The papers imply that disk drive module (DDM) manufacturers have perhaps misrepresented their reliability estimates, and the letter asks the major vendors to respond. So far, NetApp and EMC have responded.
I will not repeat what others have said already, but will make just a few points. Robin, you are free to consider this "my" official response if you'd like to post it on your blog, or point to mine, whatever is easier for you. Given that IBM no longer manufactures the DDMs we use inside our disk systems, there may not be any reason for a more formal response.
- Coke and Pepsi buy sugar, Nutrasweet and Splenda from the same sources
Somehow, this doesn't surprise anyone. Coke and Pepsi don't own their own sugar cane fields, and even their bottlers are separate companies. Their job is to assemble the components using super-secret recipes to make something that tastes good.
IBM, EMC and NetApp don't make the DDMs mentioned in either academic study. Different IBM storage systems use one or more of the following DDM suppliers:
- Seagate (including Maxtor, which it acquired)
- Hitachi Global Storage Technologies, HGST (former IBM division sold off to Hitachi)
In the past, corporations like IBM were very "vertically integrated", making every component of every system delivered. IBM was the first to bring disk systems to market, and led the major enhancements that exist in nearly all disk drives manufactured today. Today, however, our value-add is to take standard components and use our super-secret recipe to make something that provides unique value to the marketplace. Not surprisingly, EMC, HP, Sun and NetApp also don't make their own DDMs. Hitachi is perhaps the last major disk systems vendor that also has a DDM manufacturing division.
So, my point is that disk systems are the next layer up. Everyone knows that individual components fail. Unlike CPUs or Memory, disks actually have moving parts, so you would expect them to fail more often compared to just "chips".
If you don't feel the MTBF or AFR estimates posted by these suppliers are valid, go after them, not the disk systems vendors that use their supplies. While IBM does qualify DDM suppliers for each purpose, we are basically purchasing them from the same major vendors as all of our competitors. I suspect you won't get much more than the responses you posted from Seagate and HGST.
- American car owners replace their cars every 59 months
According to a frequently cited auto market research firm, the average time before the original owner transfers their vehicle -- purchased or leased -- is currently 59 months. Both studies mention that customers have a different "definition" of failure than manufacturers, and often replace drives before they are completely kaput. The same is true for cars. Americans give various reasons why they trade in their less-than-five-year-old cars for newer models. Disk technologies advance at a faster pace, so it makes sense to change drives for other business reasons: speed and capacity improvements, lower power consumption, and so on.
The CMU study indicated that 43 percent of drives were replaced before they were completely dead. So, if General Motors estimated their cars lasted 9 years, and Toyota estimated 11 years, people would still replace them sooner, for other reasons.
At IBM, we remind people that "data outlives the media". True for disk, and true for tape. Neither is "permanent storage", but rather a temporary resting point until the data is transferred to the next media. For this reason, IBM is focused on solutions and disk systems that plan for this inevitable migration process. IBM System Storage SAN Volume Controller is able to move active data from one disk system to another; IBM Tivoli Storage Manager is able to move backup copies from one tape to another; and IBM System Storage DR550 is able to move archive copies from disk and tape to newer disk and tape.
If you had only one car, then having that one and only vehicle die could be quite disruptive. However, companies with fleets of cars, like Hertz Car Rentals, don't wait for their cars to completely stop running either; they replace them well before that happens. For a large company with a large fleet of cars, regularly scheduled replacement is just part of doing business.
This brings us to the subject of RAID. No question that RAID 5 provides better reliability than having just a bunch of disks (JBOD). Certainly, three copies of data across separate disks, a variation of RAID 1, will provide even more protection, but for a price.
Robin mentions the "Auto-correlation" effect. Disk failures bunch up, so one recent failure might mean another DDM, somewhere in the environment, will probably fail soon also. For it to make a difference, it would (a) have to be a DDM in the same RAID 5 rank, and (b) have to occur during the time the first drive is being rebuilt to a spare volume.
- The human body replaces skin cells every day
So there are individual DDMs, manufactured by the suppliers above; disk systems, manufactured by IBM and others, and then your entire IT infrastructure. Beyond the disk system, you probably have redundant fabrics, clustered servers and multiple data paths, because eventually hardware fails.
People might realize that the human body replaces skin cells every day. Other cells are replaced frequently, within seven days, and others less frequently, taking a year or so to be replaced. I'm over 40 years old, but most of my cells are less than 9 years old. This is possible because information, data in the form of DNA, is moved from old cells to new cells, keeping the infrastructure (my body) alive.
Our clients should take a more holistic view. You will replace disks within 3-5 years. While tape cartridges can retain their data for 20 years, most people change their tape drives every 7-9 years, so tape data needs to be moved from old to new cartridges. Focus on your information, not individual DDMs.
What does this mean for DDM failures? When one happens, the disk system re-routes requests to a spare disk, rebuilding the data from RAID 5 parity and giving storage admins time to replace the failed unit. During the few hours this process takes, you are either taking a backup or crossing your fingers. Note: for RAID 5, the time to rebuild is proportional to the number of disks in the rank, so smaller ranks can be rebuilt faster than larger ranks. To make matters worse, the slower RPM speeds and higher capacities of ATA disks mean that the rebuild process could take longer than for smaller-capacity, higher-speed FC/SCSI disks.
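To make the rebuild-window trade-off concrete, here is a rough back-of-the-envelope sketch. The drive capacities and rebuild throughput figures are illustrative assumptions on my part, not measured numbers for any particular product:

```python
def rebuild_hours(capacity_gb, rebuild_mb_per_sec):
    """Rough time to reconstruct one failed drive onto a spare,
    assuming the rebuild is limited by the spare's write speed.
    Host I/O contention and wider ranks only lengthen this."""
    seconds = (capacity_gb * 1024) / rebuild_mb_per_sec
    return seconds / 3600.0

# Illustrative (assumed) figures: a small, fast FC/SCSI drive
# versus a large, slower-RPM ATA drive.
fc_hours = rebuild_hours(146, 60)    # ~0.7 hours
ata_hours = rebuild_hours(750, 30)   # ~7 hours
print(f"FC 146GB: {fc_hours:.1f} h, ATA 750GB: {ata_hours:.1f} h")
```

Even with generous assumptions, the big ATA drive leaves you exposed roughly ten times longer than the small FC drive.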
According to the Google study, a large portion of the DDM replacements had no SMART errors to warn that it was going to happen. To protect your infrastructure, you need to make sure you have current backups of all your data. IBM TotalStorage Productivity Center can help identify all the data that is "at risk", those files that have no backup, no copy, and no current backup since the file was most recently changed. A well-run shop keeps their "at risk" files below 3 percent.
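The "at risk" idea is easy to express in code. This toy sketch is my own illustration of the concept, not how Productivity Center actually computes it:

```python
from datetime import datetime, timedelta

def at_risk_percent(files):
    """files: list of (last_modified, last_backup) pairs, where
    last_backup is None if the file has never been backed up.
    A file is 'at risk' when it has no backup at all, or was
    modified after its most recent backup."""
    risky = sum(1 for modified, backup in files
                if backup is None or modified > backup)
    return 100.0 * risky / len(files)

now = datetime(2007, 3, 1)
day = timedelta(days=1)
sample = [
    (now - 2 * day, now - 1 * day),  # safely backed up
    (now,           now - 1 * day),  # changed since last backup
    (now - 5 * day, None),           # never backed up
    (now - 9 * day, now - 3 * day),  # safely backed up
]
print(f"{at_risk_percent(sample):.0f}% of files at risk")  # 50%
```

A well-run shop would want that number below the 3 percent mentioned above.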
So, where does that leave us?
- ATA drives are probably as reliable as FC/SCSI disks. Customers should choose which to use based on performance and workload characteristics. FC/SCSI drives are more expensive because they are designed to run at faster speeds, required by some enterprises for some workloads. IBM offers both, and has tools to help estimate which products best match your requirements.
- RAID 5 is just one of the many choices of trade-offs between cost and protection of data. For some data, JBOD might be enough. For other data that is more mission critical, you might choose keeping two or three copies. Data protection is more than just using RAID, you need to also consider point-in-time copies, synchronous or asynchronous disk mirroring, continuous data protection (CDP), and backup to tape media. IBM can help show you how.
- Disk systems, and IT environments in general, are higher-level constructs that transcend the failures of individual components. DDM components will fail. Cache memory will fail. CPUs will fail. Choose a disk systems vendor that combines technologies in unique and innovative ways that take these possibilities into account, designed for no single point of failure and no single point of repair.
So, Robin, from IBM's perspective, our hands are clean. Thank you for bringing this to our attention and for giving me the opportunity to highlight IBM's superiority at the systems level.
technorati tags: IBM, Seagate, Hitachi, HGST, EMC, NetApp, HP, HDS, Sun, Google, CMU, DDM, Fujitsu, MTBF, MTTF, AFR, ARR, JBOD, RAID, Tivoli, SVC, DR550, CDP, FC, SCSI, disk, tape, SAN
Well, it's Tuesday again, and you know what that means? IBM Announcements! After a much-needed vacation in Cancun, Mexico, and in Lake Havasu and Sedona, Arizona, I am glad to be back at work! This week, I was visiting clients in the Los Angeles area.
- IBM FlashSystem 9100
IBM's latest addition to its lineup of All-Flash Arrays is the FlashSystem 9100.
There are actually two models: the 9110 (model AF7) has 8-core processors, and the 9150 (model AF8) has 14-core processors. Both models are 2U 19-inch shelves with 24 drives on the front, with two control node canisters in the back. The term "FlashSystem 9100" applies to both 9110 and 9150 models.
Each canister has two processors, 64GB to 768GB of cache memory, an on-board 1GbE port for management, four 10GbE ports for Ethernet, and three HIC slots for I/O adapters, which can be any mix of quad-port FC cards, dual-port 25GbE Ethernet cards, or 12Gb SAS cards for expansion drawers.
For drives, you can have any mix of FlashCore Modules (FCM) or Industry-Standard NVMe (ISN) drives. The FlashCore modules are similar to the FlashCore boards in the FlashSystem 900, including Variable-Striped RAID, advanced flash management, heat binning, health separation, hardware-embedded encryption and compression.
These FCM are packaged into standard NVMe SSD form-factor, with 4.8, 9.6 and 19.2 TB capacities. The Industry-Standard NVMe drives come in 1.92, 3.84, 7.68 and 15.36 TB capacities to offer additional price/capacity options to clients.
A fully maxed-out system with twenty-four 19.2TB FCM modules represents approximately 400TB of usable capacity which, combined with 5:1 data footprint reduction from deduplication and compression, can provide up to an effective 2PB in as little as 2U of rack space!
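The arithmetic behind that claim is simple enough to check. In the sketch below, the usable fraction after RAID parity and spare overhead is an assumption on my part, chosen so the numbers line up with the roughly 400TB usable quoted above:

```python
def effective_pb(drives, drive_tb, usable_fraction, reduction_ratio):
    """Raw -> usable -> effective capacity (TB, TB, PB)."""
    raw_tb = drives * drive_tb
    usable_tb = raw_tb * usable_fraction     # after RAID/spare overhead
    effective_tb = usable_tb * reduction_ratio
    return raw_tb, usable_tb, effective_tb / 1000.0

# 24 FCM modules at 19.2TB each; 0.87 usable fraction is assumed
raw, usable, pb = effective_pb(24, 19.2, 0.87, 5)
print(f"raw {raw:.0f} TB, usable {usable:.0f} TB, effective {pb:.1f} PB")
```

About 461TB raw, roughly 400TB usable, and 2PB effective at 5:1 reduction.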
The NVMe and FlashCore technology truly accelerates performance. Latencies as low as 100 microseconds are 2.5x lower than competitive offerings. Each control enclosure can deliver up to 2.5 Million IOPS, and a four-way cluster up to 10 million IOPS in just 8U!
You can mix and match FCM and ISN drives in the same controller, but FCM and ISN have to be in their own separate RAID groups. To use Distributed RAID6 (DRAID6), you need at least six drives.
IBM has made a "Statement of Direction" that these models are NVMe-OF hardware ready and will support both FC-NVMe and NVMe-OF over Ethernet by year end. Part of this involves changes to server-side software, including various operating systems, device drivers, and multi-pathing drivers.
The FlashSystem 9100 supports up to 40U of expansion drawers over 12Gb SAS, in two sizes: a 2U drawer for 24 SFF drives, and a 5U drawer for 92 SFF/LFF drives. Each FlashSystem 9100 can support up to 760 drives. These expansion drawers are not NVMe, so the Solid-State Drives (SSD) inside them use standard SAS. Consider using Easy Tier sub-LUN automated tiering to move fast data up to the FCM/ISN drives, and slower data down to these SAS-based SSD.
Even though it doesn't have a "V" in its name, the FlashSystem 9100 runs Spectrum Virtualize, so you can also virtualize other storage behind it. Over 400 different storage devices from leading storage vendors are supported. The FlashSystem 9100 can be virtualized behind SVC or FlashSystem V9000.
FlashSystem 9100 can also cluster with Gen2 and Gen2+ models of the Storwize V7000 and V7000F controllers. You can connect up to four of any of these into a single cluster, supporting up to 3,040 drives.
The FlashSystem 9100 offers all of the features you have come to love from the rest of the Spectrum Virtualize products: data deduplication and compression, encryption, high-availability guarantee, data footprint reduction guarantee, hardware refresh option after three years, storage utility pricing, and IBM Storage Insights support.
IBM has no plans to withdraw either the existing FlashSystem V9000 or the Storwize V7000/F models anytime soon. They continue to be available for purchase.
To learn more, see [IBM FlashSystem 9100] announcement letter, and fellow blogger Barry Whyte's post [Introducing the FlashSystem 9100 NVMe with FCM].
- IBM FlashSystem 9100 Multi-Cloud solutions
To complement the hardware features of the FlashSystem 9100, IBM has come up with three Multi-cloud solutions.
- Multi-Cloud Solution for Data Reuse, Protection and Efficiency - this combines Spectrum CDM with Spectrum Protect Plus to take snapshots of volumes on FlashSystem 9100. These snapshots are not just for data protection, but can also be "reused" for other purposes, like dev/test, DevOPS, or analytics.
- Multi-Cloud Solution for Business Continuity and Data Reuse - combines Spectrum CDM with Spectrum Virtualize in the Public Cloud, allowing you to take snapshots to the IBM Cloud for disaster recovery. The snapshots can be used in the cloud, or copied back to the same or different data center.
- Multi-Cloud Solution for Private Cloud Flexibility and Data Protection - combines IBM Cloud Private, Spectrum CDM, and Spectrum Connect to support client's efforts to re-factor their applications with Docker containers and Kubernetes. IBM FlashSystem 9100 can be used as persistent storage for containerized applications.
To learn more, see [IBM Multi-Cloud solutions] announcement letter.
- IBM Spectrum Virtualize 8.2 release
This release applies only to the Storwize V7000/F and the new FlashSystem 9100 models, and provides support for iSCSI Extensions over RDMA (iSER) on the 25GbE NIC cards. If you want to cluster existing Storwize V7000/F models to the new FlashSystem 9100 models, you need all of them to be at least v8.2.0 release.
Lower latencies and higher bandwidth requirements can be addressed by using RDMA to implement iSCSI. iSER is a new interconnect protocol that allows iSCSI to run on top of RDMA technology. RDMA can be implemented by using RoCE (RDMA over Converged Ethernet) or iWARP (Internet Wide-area RDMA Protocol). iSER enables iSCSI to run on top of it regardless of which of these technologies is used underneath.
To learn more, see [ IBM Spectrum Virtualize Software V8.2] announcement letter.
- IBM Storage Utility Pricing
The "Storage Utility" pricing available for many of IBM's other products has been extended to include the IBM FlashSystem 9100 and IBM Cloud Object Storage.
Basically, this is a variable-priced, usage-based lease. Say you lease 500TB of capacity but use only 150TB: for the first few months you pay for just 150TB. Later, as your usage grows to, say, 200TB, your monthly payment grows with it. The price can go up or down. At the end of the lease, typically 36 or 60 months, you have a choice: give the equipment back, or pay the difference.
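Here is a minimal sketch of how such usage-based billing works; the $40/TB rate and the optional base commitment are purely hypothetical numbers for illustration, not IBM's actual pricing:

```python
def monthly_bill(used_tb, rate_per_tb, minimum_tb=0):
    """Bill for the capacity actually used this month, never less
    than an (assumed) base commitment; the leased ceiling itself
    is not billed until you grow into it."""
    return max(used_tb, minimum_tb) * rate_per_tb

usage = [150, 150, 180, 200, 170]             # TB used each month
bills = [monthly_bill(u, 40) for u in usage]  # $40/TB assumed
print(bills)  # [6000, 6000, 7200, 8000, 6800]
```

Note the bill goes up and back down with usage, which is the whole point of the model.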
To learn more, see [IBM Storage Utility offerings for IBM Cloud Object Storage] announcement letter.
IBM is pleased to be on the leading edge of NVMe technology!
technorati tags: FlashSystem 9100, Multi-Cloud, Spectrum Virtualize
Well, it's Tuesday again, and you know what that means? IBM announcements!
Today's announcements are all about the Storwize family, IBM's market-leading Software Defined Storage offerings. Having sold over 55,000 systems, and managing over 1.6 Exabytes of data, IBM continues to be the #1 leader in storage virtualization solutions. The Storwize family consists of the SAN Volume Controller (SVC), Storwize V7000, Storwize V7000 Unified, Flex System V7000, Storwize V5000, Storwize V3700 and V3500.
SAN Volume Controller 2145-DH8
The new 2145-DH8 model is a complete repackaging of this popular storage system. The previous model, the 2145-CG8, was a 1U-high x86 server per node, and each node required a separate 1U-high UPS to provide battery protection for its cache. Nobody liked this. The new 2145-DH8 instead is a 2U-high node with two hot-swappable batteries, eliminating the need for a UPS altogether. Thus, an SVC node-pair using 2145-DH8 models takes up the same 4U space, but with fewer cables. The SVC can now also support standard office 110/240 voltage sources.
The new model sports an 8-core processor with 32GB RAM. Since these are 2-socket servers, IBM offers the option to add a second 8-core processor and an additional 32GB RAM to help boost Real-time Compression. Each node can optionally have one or two hardware-assisted compression cards, which use the Intel QuickAssist chip to boost compression performance.
Real-time Compression was always, in fact, real-time, performed in-line with the read/write I/O process at latency comparable to uncompressed data for applications. However, the compression process on older models was entirely software-based, consuming some of the CPU resources, which lowered the maximum IOPS of the solution. With the added cores, added RAM, and hardware-assisted compression chips, IBM resolves that concern. In fact, the new 2145-DH8 with compression can provide more IOPS than an older 2145-CG8 without compression.
The previous model 2145-CG8 allowed you to put up to 4 small SSD drives in the node itself, which were treated the same as external Flash drives for purposes of having a high-speed storage pool for select volumes, or automated sub-LUN tiering with Easy Tier. The new model 2145-DH8 allows you to attach up to 48 Solid-State Drives (SSD) via 12Gb SAS cables. These are housed in the new 2U-high 24F enclosures, which can offer up to 38.4 TB of Flash per SVC I/O group.
IBM also re-designed the host/device ports to use Hardware Interface Card (HIC) slots. On the 2145-CG8, you had four FCP ports and two 1GbE Ethernet ports, with options to add two 10GbE Ethernet ports or four additional FCP ports. If you had mostly an FCoE or iSCSI environment, you didn't need the FCP ports, and if you had mostly an FCP Storage Area Network (SAN) environment, then most of the Ethernet ports went unused. To solve this, the 2145-DH8 allows you to have up to six HIC cards that are either FCP, Ethernet, or SAS. There are also three fixed 1GbE Ethernet ports, which can be used for iSCSI and administration.
If you have SVC today, you can upgrade non-disruptively by either swapping out your current SVC engines with the new 2145-DH8 engines, or you can add the new 2145-DH8 engines to your existing SVC cluster. Either way, there is no outage to your applications!
To learn more, see the [Announcement letter: SAN Volume Controller Storage Engine DH8].
New Storwize V7000 hardware
This is the next generation of the popular Storwize V7000. The previous generation had a 4-core processor and 8GB RAM per canister. The new model has an 8-core processor with 32GB of RAM per canister, with the option to double these to boost Real-time compression. There are two canisters per control enclosure, which gives you 64GB to 128GB of RAM per Storwize V7000 I/O group.
The new Storwize V7000 comes with one hardware-assisted compression chip on the mother board of each canister, with the option to add a second chip per canister.
Each canister offers three HIC slots, which can be used for the additional hardware-assist compression chip, FCP or Ethernet ports.
To accommodate these HIC slots, new canisters were needed. Instead of the flat wide style top and bottom, we now have taller, thinner canisters that sit side to side. This side-to-side design is similar to our existing Storwize V5000 and V3700 models.
The previous generation could support up to 9 expansion enclosures per control enclosure. The new Storwize V7000 can have up to 24 drives in its control enclosure and attach up to 20 expansion enclosures, which allows up to 504 drives per control enclosure, up to a maximum of 1,056 drives per Storwize cluster.
If you have previous models of Storwize V7000, you can add the new Storwize V7000 into the same cluster, or virtualize the previous storage for migration purposes.
To learn more, see the [Announcement letter: New Storwize V7000].
IBM Storwize Family Software V7.3.0
The new software applies new capabilities to both new generation hardware as well as the older models, so people with existing gear can benefit as well.
In prior releases, the sub-LUN automated tiering was limited to two levels: Flash and HDD. This lumped all 15K, 10K and 7200 RPM drives into a common HDD category. In the new v7.3.0 code, you can now have three levels: Flash, Enterprise HDD, and Nearline HDD, or two HDD levels: Enterprise and Nearline. The Enterprise level combines 15K and 10K RPM drives, similar to what is done on the IBM System Storage DS8000 disk systems.
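Conceptually, sub-LUN tiering ranks extents by access "heat" and places the hottest on the fastest tier. This toy Python sketch illustrates the three-tier idea; the placement fractions are my own assumptions for illustration, not Easy Tier's actual algorithm:

```python
def assign_tiers(extent_heat, flash_frac=0.05, enterprise_frac=0.30):
    """extent_heat: {extent_id: I/O count}. The hottest ~5% of
    extents go to Flash, the next ~30% to Enterprise HDD, and the
    cold remainder to Nearline HDD (fractions are illustrative)."""
    ranked = sorted(extent_heat, key=extent_heat.get, reverse=True)
    n = len(ranked)
    placement = {}
    for i, ext in enumerate(ranked):
        if i < n * flash_frac:
            placement[ext] = "flash"
        elif i < n * (flash_frac + enterprise_frac):
            placement[ext] = "enterprise-hdd"
        else:
            placement[ext] = "nearline-hdd"
    return placement

# Synthetic heat data: extent0 is hottest, extent19 is coldest
heat = {f"extent{i}": 1000 // (i + 1) for i in range(20)}
tiers = assign_tiers(heat)
print(tiers["extent0"], tiers["extent19"])  # flash nearline-hdd
```

The real code works on measured I/O statistics and migrates extents gradually, but the ranking idea is the same.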
The new code is also able to balance your storage pools, and can be used with uniform or mixed storage pools to eliminate performance hot spots.
The new code has been enhanced to detect the hardware-assisted compression chip on the new SVC and Storwize V7000 models, and use those if available.
For the Storwize V3700 and V5000 models, the new code allows up to nine expansion enclosures per control enclosure. Previously, the V3700 allowed only four expansions, and the V5000 only six expansions per control enclosure. The V3700 can now support up to 240 drives, and the V5000 can support up to 480 drives.
To learn more, see the [Announcement letter: Storwize Family Software v7.3.0].
IBM Storwize V7000 Unified File Module software v1.5
For Storwize V7000 Unified clients, there is new software for the File Modules that provide NFS, CIFS, FTP, HTTPS and SCP protocol capability. The new v1.5 code now adds NFS v4 and SMB 2.1 levels of support. Most NFS users are still on NFSv3, but about 20 percent of NFS users are using NFS v4 which offers stateful access. The SMB 2.1 for CIFS was introduced by Microsoft in Windows 7 and Windows Server 2008 R2.
Deterministic ID mapping allows you to map Windows userids to UNIX/Linux group and owner id numbers. In the past, the problem was that this mapping was different on each machine, so people often had to stand up a Windows Services for UNIX (SFU) server to provide consistent ID mapping. Now, with the v1.5 code, you no longer have to: deterministic ID mapping can replicate the mapping to each machine without an SFU server.
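The general idea behind deterministic mapping is that every file module can derive the same UNIX id from a Windows identity independently, with no shared server. IBM has not published the v1.5 algorithm here, so the hash-based sketch below is only an illustration of the concept, not the product's actual scheme:

```python
import hashlib

def sid_to_uid(sid, base=10000, id_range=1_000_000):
    """Map a Windows SID into a fixed UID range deterministically:
    any node hashing the same SID derives the same UID. This is a
    hypothetical scheme; collisions are possible and ignored here."""
    digest = hashlib.sha1(sid.encode("ascii")).digest()
    return base + int.from_bytes(digest[:4], "big") % id_range

uid = sid_to_uid("S-1-5-21-1004-513-1105")
print(uid)  # same value on every machine, no SFU server needed
```

Because the function is pure, two file modules that have never spoken to each other still agree on the mapping.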
Active Cloud Engine allows up to ten Storwize V7000 Unified to be connected across distance to form a single global name space. WAN caching, however, was restricted to a single site having write capabilities, while the others were read-only. In v1.5 release, IBM now supports multiple independent writers at different locations on the same fileset.
Security enhancements include multi-tenancy, configurable password policies, session policies, and hardened boot and SSH configurations. With NFS v3/v4, you can now use [Kerberos] for security.
Finally, I am pleased to see that we now have Cinder support for files on the Storwize V7000 Unified with the OpenStack Havana release that just came out last month. The OpenStack Cinder interface can assign LUNs to virtual machines, and the new Havana release allows NAS systems to dole out files that act as LUNs, such as OVA or VMDK files. The advantage is that these files can be managed by Active Cloud Engine, cached locally across the global name space, placed on appropriate storage tiers by policy, and inactive virtual machine images can be migrated to less expensive disk or tape.
To learn more, see the [Announcement letter: Storwize Family Software v7.3.0].
You can learn more about the Storwize family at the [IBM Edge Conference], May 19-23, at Las Vegas. I'll be there!
technorati tags: IBM, Announcements, SAN Volume Controller, SVC, Storwize, Storwize V7000, Flex System V7000, Storwize V5000, Storwize V3700, 2145-DH8, hardware-assisted compression, Real-time Compression, Intel QuickAssist, New Storwize, HIC, Easy Tier, Storwize V7000 Unified, File Modules, OpenStack, OpenStack Havana, OpenStack Cinder, multiple-writer, independent-writer, Active Cloud Engine, Windows SFU, Kerberos, Storwize family, #ibmEdge, Las Vegas
Happy Winter Solstice everyone! The Mayan calendar flipped over yesterday, and everything continued as normal.
The next date to watch out for is ... drumroll please ... April 8, 2014. This is the date Microsoft has decided to [drop support for Windows XP].
While many large corporations are actively planning to get off Windows XP, there are still many homes and individuals that are running on this platform.
When [Windows XP] was introduced in 2001, it could support systems with as little as 64MB of RAM. Nowadays, the latest versions of Windows require a minimum of 1GB of RAM for 32-bit systems, with 2GB or 3GB recommended.
That leaves Windows XP users on older hardware few choices:
- Continue to run Windows XP, but without support (and hope for the best)
- Upgrade their hardware with more RAM (and possibly more disk space) needed to run a newer level of Windows
- Install a different operating system like Linux
- Put the hardware in the recycle bin, and buy a new computer
Here is a personal example. A long time ago, I gave my sister a Thinkpad R31 laptop so that she could work from home. When she got a newer one, she passed this down to her daughter for doing homework. When my niece got a newer one, she passed this old laptop to her grandma.
Grandma is fairly happy with her modern PC running Windows XP. She plays all kinds of games, scans photographs, sends emails, listens to music on iTunes, and even uses Skype to talk to relatives. Her problem is that this PC is located upstairs, in her bedroom, and she wanted something portable that she could play music downstairs when she is playing cards with her friends.
"Why not use the laptop you have?" I asked. Her response: "It runs very slow. Perhaps it has a virus. Can you fix that?" I was up for the challenge, so I agreed.
(The Challenge: Update the Thinkpad R31 so that grandma can simply turn it on, launch iTunes or similar application, and just press a "play" button to listen to her music. It will be plugged in to an electrical outlet wherever she takes it, and she already has her collection of MP3 music files. My hope is to have something that is (a) simple to use, (b) starts up quickly, and (c) will not require a lot of on-going maintenance issues.)
Here are the relevant specifications of the Thinkpad R31 laptop:
|CPU||Intel Celeron 1.13GHz (Pentium-III class)|
|Display||13.3-inch TFT, 1024x768 XGA|
|Memory (RAM)||384 MB @133MHz, upgradeable only to 1GB|
|Disk storage||20.0 GB|
|Optical Drive||CD-ROM drive|
|BIOS boot options||Hard drive or CD-ROM only|
|External attachment||2 USB ports, but no USB boot option|
|Network||Wired 10/100 Mbps Ethernet, 56 Kbps phone modem|
The system was pre-installed with Windows XP, but was terribly down-level. I updated to Windows XP SP3 level, downloaded the latest anti-virus signatures, and installed iTunes. A full scan found no viruses. All this software takes up 14GB, leaving less than 6GB for MP3 music files.
The time it took from hitting the "Power-on" button to hearing the first note of music was over 14 minutes! Unacceptable!
If you can suggest what my next steps should be, please comment below or send me an email!
technorati tags: IBM, Windows XP, Microsoft, Thinkpad
Have you ever noticed that sometimes two movies come out that seem eerily similar to each other, released by different studios within months or weeks of each other? My sister used to review film scripts for a living; she would read ten of them and have to pick her top three favorites, and she tells me that scripts for nearly identical concepts came in all the time. Here are a few of my favorite examples:
- 1993-1994: [Wyatt Earp] and [Tombstone] were Westerns recounting the famed gunfight at the O.K. Corral. Tombstone, Arizona is near Tucson, and the gunfight is recreated fairly often for tourists.
- 1998: [Armageddon] and [Deep Impact] were a pair of disaster movies dealing with a large rock heading to destroy all life on earth. I was in Mazatlan, Mexico to see the latter, dubbed in Spanish as "Impacto Profundo".
- 1998: [A Bug's Life] and [Antz] were computer-animated tales of the struggle of one individual ant in an ant colony.
- 2000: [Mission to Mars] and [Red Planet] were sci-fi pics exploring what a manned mission to our neighboring planet might entail.
- 2009: [Paul Blart: Mall Cop] and [Observe and Report] were comedies dealing with challenges of security at a shopping mall.
(I think I made my point with just a few examples. A more complete list can be found on [Sam Greenspan's 11 Points website].)
This is different from copy-cat movies that are re-made or re-imagined many years later based on the previous successes of an original. Ever since my blog post [VPLEX: EMC's Latest Wheel is Round] in 2010, comparing EMC's copy-cat product that came out seven years after IBM's SAN Volume Controller (SVC), I've noticed EMC doesn't talk about VPLEX that much anymore.
This week, IBM announced [XIV Gen3 Solid-State Drive support] and our friends over at EMC announced [VFCache SSD-based PCIe cards]. Neither of these should be a surprise to anyone who follows the IT industry, as IBM had announced its XIV Gen3 as "SSD-Ready" last year specifically for this purpose, and EMC has been touting its "Project Lightning" since last May.
Fellow blogger Chris Mellor from The Register has a series of articles to cover this, including [EMC crashes the server flash party], [NetApp slaps down Lightning with multi-card Flash flush], [HP may be going the server flash route], and [Now HDS joins the server flash party].
Fellow blogger Chuck Hollis from EMC has a blog post [VFCache means Very Fast Cache indeed] that provides additional detail. Chuck claims the VFCache is faster than popular [Fusion-IO PCIe cards] available for IBM servers. I haven't seen the performance spec sheets, but typically SSD is four to five times slower than the DRAM cache used in the XIV Gen3. The VFCache's SSD is probably similar in performance to the SSD supported in the IBM XIV Gen3, DS8000, DS5000, SVC, N series, and Storwize V7000 disk systems.
Nonetheless, I've been asked my opinions on the comparison between these two announcements, as they both deal with improving application performance through the use of Solid-State Drives as an added layer of read cache.
(FTC Disclosure: I am both a full-time employee and stockholder of the IBM Corporation. The U.S. Federal Trade Commission may consider this blog post as a paid celebrity endorsement of IBM servers and storage systems. This blog post is based on my interpretation and opinions of publicly-available information, as I have no hands-on access to any of these third-party PCIe cards. I have no financial interest in EMC, Fusion-IO, Texas Memory Systems, or any other third party vendor of PCIe cards designed to fit inside IBM servers, and I have not been paid by anyone to mention their name, brands or products on this blog post.)
The solutions are different in that in the IBM XIV Gen3 the SSD is "storage-side", inside the external storage device, while the EMC VFCache is "server-side", a PCI Express [PCIe] card. Aside from that, both implement SSD as an additional read-cache layer in front of spinning disk to boost performance. Neither is an industry first, as IBM has offered server-side SSD since 2007, and IBM and EMC have offered storage-side SSD in many of their other external storage devices. The use of SSD as read cache has already been available in the IBM N series using [Performance Accelerator Module (PAM)] cards.
IBM has offered cooperative caching synergy between its servers and its storage arrays for some time now. The predecessors to today's POWER7-based systems were the iSeries i5 servers that used PCI-X IOP cards with cache to connect i5/OS applications to IBM's external disk and tape systems. To compete in this space, EMC created their own PCI-X cards to attach their own disk systems. In 2006, IBM did the right thing for our clients and fostered competition by entering into a [Landmark agreement] with EMC to [license the i5 interfaces]. Today, VIOS on IBM POWER systems allows a much broader choice of disk options for IBM i clients, including the IBM SVC, Storwize V7000 and XIV storage systems.
EMC is not the first to manufacture an SSD-based PCIe card. Last summer, my friends at Texas Memory Systems [TMS] gave away a [RAMsan-70 PCIe card] at an after-party on [Day 2 of the IBM System Storage University].
Can a little SSD really help performance? Yes! An IBM client running a [DB2 Universal Database] cluster across eight System x servers was able to replace an 800-drive EMC Symmetrix by putting eight Fusion-IO SSD cards in each server, for a total of 64 solid-state devices, saving money and improving performance. DB2's Data Partitioning Feature lets multi-system DB2 configurations use a grid-like architecture, similar to how XIV is designed. Most IBM System x and BladeCenter servers support internal SSD storage options, and many offer PCIe slots for third-party SSD cards. Sadly, you can't do this with VFCache: only one VFCache card is supported per server, its data is unprotected, and it is intended only for ephemeral data like transaction logs or other temporary files. With multiple Fusion-IO cards in an IBM server, you can configure a RAID rank across the SSD and use it for persistent storage like DB2 databases.
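DB2's Data Partitioning Feature achieves its grid-like scaling by hashing each row's partitioning key to one of the member servers, so every node owns a slice of the data. Here is a minimal sketch of the idea (my own illustration; the function and names are not DB2's actual algorithm or API):

```python
import zlib

NUM_NODES = 8  # eight System x servers, as in the example above

def node_for_key(key: str, num_nodes: int = NUM_NODES) -> int:
    """Deterministically map a partitioning key to one node."""
    return zlib.crc32(key.encode("utf-8")) % num_nodes

# Every member computes the same mapping, so any node can route a
# query fragment straight to the node that owns the rows it needs.
rows = [("cust-%04d" % i, i * 10) for i in range(1000)]
placement = {}
for key, value in rows:
    placement.setdefault(node_for_key(key), []).append((key, value))

# The 1000 rows end up spread roughly evenly across the eight nodes.
for node in sorted(placement):
    print(node, len(placement[node]))
```

XIV spreads its 1MB partitions across modules with a similar hash-style distribution, which is why the paragraph above compares the two designs.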
Here then is my side-by-side comparison:
|Category||EMC VFCache||IBM XIV Gen3 SSD Caching|
|Servers supported||Selected x86-based models of Cisco UCS, Dell PowerEdge, HP ProLiant DL, and IBM xSeries and System x servers||All of these, plus any other blade or rack-optimized server currently supported by XIV Gen3, including Oracle SPARC, HP Integrity (Itanium), IBM POWER systems, and even IBM System z mainframes running Linux|
|Operating System support||Linux RHEL 5.6 and 5.7, VMware vSphere 4.1 and 5.0, and Windows 2008 x64 and R2.||All of these, plus all the other operating systems supported by XIV Gen3, including AIX, IBM i, Solaris, HP-UX, and Mac OS X|
|Protocol support||FCP||FCP and iSCSI|
|Vendor-supplied driver required on the server||Yes, the VFCache driver must be installed to use this feature.||No, IBM XIV Gen3 uses native OS-based multi-pathing drivers.|
|External disk storage systems required||None, it appears the VFCache has no direct interaction with the back-end disk array, so in theory the benefits are the same whether you use this VFCache card in front of EMC storage or IBM storage||XIV Gen3 is required, as the SSD slots are not available on older models of IBM XIV.|
|Shared disk support||No, VFCache has to be disabled and removed for vMotion to take place.||Yes! XIV Gen3 SSD caching supports shared disk for VMware vMotion and Live Partition Mobility.|
|Support for multiple servers||No||An advantage of the XIV Gen3 SSD caching approach is that the cache can be dynamically allocated to the busiest data from any server or servers.|
|Support for active/active server clusters||No||Yes!|
|Aware of changes made to back-end disk||No, it appears the VFCache has no direct interaction with the back-end disk array, so any changes to the data on the box itself are not communicated back to the VFCache card itself to invalidate the cache contents.||Yes!|
|Sequential-access detection||None identified. However, VFCache only caches blocks 64KB or smaller, so any sequential processing with larger blocks will bypass the VFCache.||Yes! XIV algorithms detect sequential access and avoid polluting the SSD with these blocks of data.|
|Number of SSD supported||One, which seems odd as IBM supports multiple Fusion-IO cards for its servers. However, this is not really a single point of failure (SPOF) as an application experiencing a VFCache failure merely drops down to external disk array speed, no data is lost since it is only read cache.||6 to 15 (one per XIV module) for high availability.|
|Pin data in SSD cache||Yes, using split-card mode, you can designate a portion of the 300GB to serve as Direct-attached storage (DAS). All data written to the DAS portion will be kept in SSD. However, since only one card is supported per server and the data is unprotected, this should only be used for ephemeral data like logs and temp files.||No, there is no option to designate an XIV Gen3 volume to be SSD-only. Consider using Fusion-IO PCIe card as a DAS alternative, or another IBM storage system for that requirement.|
|Pre-sales Estimating tools||None identified||Yes! CDF and Disk Magic tools are available to help cost-justify the purchase of SSD based on workload performance analysis.|
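Two of the rows above, back-end awareness and sequential-access detection, are easier to see with a toy read cache. This sketch is my own illustration of the general technique, not either vendor's actual algorithm:

```python
from collections import OrderedDict

class ReadCache:
    """Toy LRU read cache with write invalidation and sequential bypass."""

    def __init__(self, capacity_blocks, seq_bypass=True):
        self.capacity = capacity_blocks
        self.seq_bypass = seq_bypass
        self.cache = OrderedDict()   # block number -> data, in LRU order
        self.last_block = None

    def read(self, block, backend):
        if block in self.cache:
            self.cache.move_to_end(block)       # cache hit
            return self.cache[block]
        data = backend[block]                   # miss: go to spinning disk
        sequential = (self.last_block is not None
                      and block == self.last_block + 1)
        self.last_block = block
        # Sequential streams bypass the cache so one-pass data does not
        # evict the random-access working set ("cache pollution").
        if not (self.seq_bypass and sequential):
            self.cache[block] = data
            if len(self.cache) > self.capacity:
                self.cache.popitem(last=False)  # evict least-recently-used
        return data

    def invalidate(self, block):
        # A storage-side cache sees every write and can invalidate;
        # a server-side card with no back-end interaction cannot.
        self.cache.pop(block, None)

backend = {b: f"v1-{b}" for b in range(100)}
cache = ReadCache(capacity_blocks=8)
cache.read(5, backend)         # populate the cache
backend[5] = "v2-5"            # data changes on the back-end array
cache.invalidate(5)            # without this step, the next read is stale
print(cache.read(5, backend))  # -> v2-5
```

Without the `invalidate` call, that final read would return the stale `v1-5`, which is exactly the hazard the "aware of changes made to back-end disk" row describes.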
IBM has the advantage that it designs and manufactures both servers and storage, and can design optimal solutions for our clients in that regard.
technorati tags: IBM, XIV, Gen3, SSD, cache, EMC, VFCache, Project Lightning, SVC, Solid State Drives, Fusion-IO, Texas Memory Systems, RAMSan, System+x, POWER systems, VIOS, DRAM, VMware, Vmotion, Live Partition Mobility, AIX, IBM i, PCIe, PCI-X
Wrapping up my coverage of the annual [2010 System Storage Technical University], I attended what was perhaps the best session of the conference. Jim Nolting, IBM Semiconductor Manufacturing Engineer, presented the new IBM zEnterprise mainframe, "A New Dimension in Computing", under the Federal track.
The zEnterprise debunks the "one processor fits all" myth. For some I/O-intensive workloads, the mainframe continues to be the most cost-effective platform. However, there are other workloads where a memory-rich Intel or AMD x86 instance might be the best fit, and yet other workloads where the high number of parallel threads of reduced instruction set computing [RISC] such as IBM's POWER7 processor is more cost-effective. The IBM zEnterprise combines all three processor types into a single system, so that you can now run each workload on the processor that is optimized for that workload.
- IBM zEnterprise z196 Central Processing Complex (CPC)
Let's start with the new mainframe z196 central processing complex (CPC). Many thought this would be called the z11, but that didn't happen. Basically, the z196 machine has a maximum 96 cores versus z10's 64 core maximum, and each core runs 5.2GHz instead of z10's cores running at 4.7GHz. It is available in air-cooled and water-cooled models. The primary operating system that runs on this is called "z/OS", which when used with its integrated UNIX System Services subsystem, is fully UNIX-certified. The z196 server can also run z/VM, z/VSE, z/TPF and Linux on z, which is just Linux recompiled for the z/Architecture chip set. In my June 2008 post [Yes, Jon, there is a mainframe that can help replace 1500 servers], I mentioned the z10 mainframe had a top speed of nearly 30,000 MIPS (Million Instructions per Second). The new z196 machine can do 50,000 MIPS, a 60 percent increase!
(Update: Back in 2007, IBM and Sun mutually supported [OpenSolaris on an IBM System z mainframe]. Unfortunately, after Oracle acquired Sun, the OpenSolaris Governing Board has [grown uneasy over Oracle's silence] about the future of OpenSolaris on any platform. The OpenSolaris [download site] identifies 2009.06 as the latest release, but only for x86 and SPARC chip sets. Apparently, the 2010.03 release expected five months ago in March has slipped. Now it looks official that [OpenSolaris is Dead].)
The z196 runs a hypervisor called PR/SM that allows the box to be divided into dozens of logical partitions (LPAR), and the z/VM operating system can also act as a hypervisor running hundreds or thousands of guest OS images. Each core can be assigned a specialty engine "personality": GP for general processor, IFL for z/VM and Linux, zAAP for Java and XML processing, and zIIP for database, communications and remote disk mirroring. Like the z9 and z10, the z196 can attach to external disk and tape storage via ESCON, FICON or FCP protocols, and through NFS via 1GbE and 10GbE Ethernet.
- IBM zEnterprise BladeCenter Extension (zBX)
There is a new frame called the zBX that basically holds two IBM BladeCenter chassis, each capable of 14 blades, for a total of 28 blades per zBX frame. For now, only select blade servers are supported inside, but IBM plans to expand this to include more as testing continues. The POWER-based blades can run native AIX, IBM's other UNIX operating system, and the x86-based blades can run Linux-x86 workloads, for example. Each of these blade servers can run a single OS natively, or run a hypervisor to have multiple guest OS images. IBM plans to look into running other POWER and x86-based operating systems in the future.
If you are already familiar with IBM's BladeCenter, then you can skip this paragraph. Basically, you have a chassis that holds 14 blades connected to a "mid-plane". On the back of the chassis, you have hot-swappable modules that snap into the other side of the mid-plane. There are modules for FCP, FCoE and Ethernet connectivity, which allows blades to talk to each other, as well as external storage. BladeCenter Management modules serve as both the service processor as well as the keyboard, video and mouse Local Console Manager (LCM). All of the IBM storage options available to IBM BladeCenter apply to zBX as well.
Besides general purpose blades, IBM will offer "accelerator" blades that will offload work from the z196. For example, let's say an OLAP-style query is issued via SQL to DB2 on z/OS. In the process of parsing the complicated query, it creates a Materialized Query Table (MQT) to temporarily hold some data. This MQT contains just the columnar data required, which can then be transferred to a set of blade servers known as the Smart Analytics Optimizer (SAO), which processes the request and sends the results back. The Smart Analytics Optimizer comes in various sizes, from small (7 blades) to extra large (56 blades, 28 in each of two zBX frames). A 14-blade configuration can hold about 1TB of compressed DB2 data in memory for processing.
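The paragraph above gives one sizing data point: a 14-blade configuration holds about 1TB of compressed DB2 data in memory. Assuming capacity scales linearly with blade count (my assumption, not an IBM specification), the stated configuration sizes work out roughly as follows:

```python
# One stated data point: 14 blades hold ~1 TB of compressed DB2 data.
TB_PER_BLADE = 1.0 / 14

# Blade counts taken from the paragraph above.
configs = {"small": 7, "14-blade example": 14, "extra large": 56}

for name, blades in configs.items():
    est_tb = blades * TB_PER_BLADE  # linear-scaling assumption
    print(f"{name:16s} {blades:3d} blades ~ {est_tb:.1f} TB compressed")
```

So the extra-large, two-frame configuration would hold on the order of 4TB of compressed columnar data in memory, if the scaling really is linear.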
- IBM zEnterprise Unified Resource Manager
You can have up to eight z196 machines and up to four zBX frames connected together into a monstrously large system. There are two internal networks. The Inter-ensemble data network (IEDN) is a 10GbE network that connects all the OS images together, and can be further subdivided into separate virtual LANs (VLAN). The Inter-node management network (INMN) is a 1000BASE-T Ethernet that connects all the host servers together to be managed under a single pane of glass known as the Unified Resource Manager. It is based on IBM Systems Director.
By integrating service management, the Unified Resource Manager can handle Operations, Energy Management, Hypervisor Management, Virtual Server Lifecycle Management, Platform Performance Management, and Network Management, all from one place.
- IBM Rational Developer for System z Unit Test (RDz)
But what about developers and testers, such as the Independent Software Vendors (ISVs) that produce mainframe software? How can IBM make their lives easier?
Phil Smith on z/Journal provides a history of [IBM Mainframe Emulation]. Back in 2007, three emulation options were in use in various shops:
- Open Mainframe, from Platform Solutions, Inc. (PSI)
- FLEX-ES, from Fundamental Software, Inc.
- Hercules, which is an open source package
None of these are viable options today. Nobody wanted to pay IBM for its Intellectual Property on the z/Architecture or license the use of the z/OS operating system. To fill the void, IBM put out an officially-supported emulation environment called IBM System z Professional Development Tool (zPDT), available to IBM employees, IBM Business Partners and ISVs that register through IBM PartnerWorld. To help out developers and testers who work at clients that run mainframes, IBM now offers IBM Rational Developer for System z Unit Test, which is a modified version of zPDT that can run on an x86-based laptop or shared IBM System x server. Based on the open source [Eclipse IDE], RDz emulates GP, IFL, zAAP and zIIP engines on a Linux-x86 base. A four-core x86 server can emulate a 3-engine mainframe.
With RDz, a developer can write code, compile and unit test all without consuming any mainframe MIPS. The interface is similar to Rational Application Developer (RAD), and so similar skills, tools and interfaces used to write Java, C/C++ and Fortran code can also be used for JCL, CICS, IMS, COBOL and PL/I on the mainframe. An IBM study ["Benchmarking IDE Efficiency"] found that developers using RDz were 30 percent more productive than using native z/OS ISPF. (I mention the use of RAD in my post [Three Things to do on the IBM Cloud]).
What does this all mean for the IT industry? First, the zEnterprise is perfectly positioned for [three-tier architecture] applications. A typical example could be a client-facing web-server on x86, talking to business logic running on POWER7, which in turn talks to database on z/OS in the z196 mainframe. Second, the zEnterprise is well-positioned for government agencies looking to modernize their operations and significantly reduce costs, corporations looking to consolidate data centers, and service providers looking to deploy public cloud offerings. Third, IBM storage is a great fit for the zEnterprise, with the IBM DS8000 series, XIV, SONAS and Information Archive accessible from both z196 and zBX servers.
To learn more, see the [12-page brochure] or review the collection of [IBM Redbooks]. Check out the [IBM Conferences schedule] for an event near you. Next year, the IBM Storage University will be held July 18-22, 2011 in Orlando, Florida.
technorati tags: IBM, Technical University, zEnterprise, x86, POWER7, RISC, z/OS, Linux, AIX, OpenSolaris, Oracle, FICON, NFS, z196, zBX, DB2, SAO, IEDN, INMN, RDz, ISV, Eclipse, Cloud Computing
Jon Toigo has a funny cartoon on his post, [As I Listen to EMC Brag on “New” Functionality…]. Basically, it pokes fun at how many of us bloggers argue over which vendor was first to introduce some technology or another. We all do this, myself included.
Recently, Claus Mikkelsen, currently with HDS, [accurately brought up some past history from the 1990s], which predates many storage bloggers' tenures with their current employers. Claus and I worked together for IBM back then, so I recognized many of the events he mentions that I can't talk about either. In many cases, IBM or HDS delivered new features before EMC.
I've been reading with some amusement as fellow blogger Barry Burke asked Claus a series of questions about Hitachi's latest High Availability Manager (HAM) feature. Claus was too busy with his "day job" and chose to shut Barry down. Sadly, HDS set themselves up for ridicule this round, first by over-hyping a function before its announcement, and then by announcing a feature that IBM and EMC have offered for a while. The problem and confusion for many is that each vendor uses different terminology. Hitachi's HAM is similar to IBM's HyperSwap and EMC's AutoSwap. The implementations are different, of course, which is why vendors are often asked to compare and contrast one implementation with another.
In his latest response, [how to mind the future of a mission-critical world], Barry reports that several HDS bloggers now censor his comments. That's too bad. I don't censor comments, within reason, including Barry's inane questions about IBM's products, and am glad that he does not censor my inane questions to him about EMC products in return. The entire blogosphere benefits from these exchanges, even if they are a bit heated sometimes.
We all have day jobs, and often are just too busy, or too lazy, to read dozens or hundreds of pages of materials, if we can even find them in the first place. Not everyone has the luxury of a "competitive marketing" team to help do the research for you, so if we can get an accurate answer or clarification about a product that is generally available directly from a vendor's subject matter expert, I am all for that.
technorati tags: IBM, Jon Toigo, HDS, Claus Mikkelsen, EMC, Barry Burke, HAM
Looks like fellow blogger and arch nemesis BarryB from EMC is once again stirring up trouble. This time he focuses his attention on IBM's leadership in Solid State Disk (SSD) on the IBM System Storage DS8000 disk systems in his post [IBM's amazing splash dance, part deux], a follow-up to [IBM's amazing splash dance] and the multi-vendor tirade [don't miss the amazing vendor flash dance].
(Note: IBM [Guidelines] prevent me from picking blogfights, so this post is only to set the record straight on some misunderstandings, point to some positive press about IBM's leadership in this area, and for me to provide a different point of view.)
First, let's set the record straight on a few things. The [RedPaper is still in draft form] under review, and so some information has not yet been updated to reflect the current situation.
- You can have 16 or 32 SSD per DA pair. However, you can only have a maximum of 128 SSD drives total in any DS8100 or DS8300. In the case of the IBM DS8300 with 8 DA pairs, it makes more sense to spread the SSD out across all 8 pairs, and perhaps this is what confused BarryB.
- Yes, you can order an all-SSD model of the IBM DS8000 disk system. I don't see anywhere in the RedPaper that suggests otherwise, and I have confirmed with our offering manager that this is the case.
- The 73GB and 146GB drives are freshly manufactured by STEC. The 146GB and 200GB drives are actually the same drive, just formatted differently. The 200GB format does not offer as much spare capacity for wear-leveling, and is therefore intended only for read-intensive workloads. (Perhaps EMC wants you to find this out the hard way so that you replace them more often???) These reduced-spare-capacity formats may not be appropriate for some write-intensive workloads. Don't let anyone from EMC try to misrepresent the 73GB or 146GB drives from STEC as older, obsolete, collecting dust in a warehouse, or otherwise no longer manufactured by STEC.
- You can relocate data from HDD to SSD using "Data Set FlashCopy", a feature that does not involve host-based copy services, does not consume any MIPS on your System z mainframe, and is performed inside the DS8000 disk system. You can also use host-based copy services as well, but it is not the only way.
- You can use any supported level of z/OS with SSD in the IBM DS8000. There is ENHANCED support mentioned in the RedPaper that you get only with z/OS 1.8 and above, allowing you to create automation policies that place data sets onto SSD or non-SSD storage pools. This synergy makes SSD with IBM DS8000 superior to the initial offerings that EMC had offered without this OS support.
I find it amusing that BarryB's basic argument is that IBM's initial release of SSD on the DS8000 is less than what the architecture could potentially be extended to support. Actually, if you look at EMC's November release of Atmos, as well as their most recent announcement of V-Max, they basically say the same thing: "Stay tuned, this is just our initial release, with various restrictions and limitations, but more will follow." Architecturally, the IBM DS8000 could support a mix of SSD and non-SSD on the same DA pairs, could support RAID6 and RAID10 as well, and could support larger capacity drives or higher-capacity read-intensive formats. These could all be done via RPQ if needed, or in a follow-on release.
BarryB's second argument is that IBM is somehow "throwing cold water" on SSD technology, that IBM is somehow trying to discourage people from using SSD by offering disk systems with this technology. IBM offered SSD storage on BladeCenter servers LONG BEFORE any EMC disk system offering, and IBM continues to innovate in ways that deliver the best business value of this new technology. Take for example this 24-page IBM Technical Brief: [IBM System z® and System Storage DS8000: Accelerating the SAP® Deposits Management Workload With Solid State Drives]. It is full of example configurations that show that SSD on IBM DS8000 can help in practical business applications. IBM takes a solution view, and worked with DB2, DFSMS, z/OS, High Performance FICON (zHPF), and down the stack to optimize performance and provide real business value innovation. Thanks to this synergy, IBM can provide 90 percent of the performance improvement with only 10 percent of the SSD capacity of EMC offerings. Now that's innovative!
The price and performance differences between FC and SATA (what EMC was mostly used to) are only 30-50 percent. But the price and performance differences between SSD and HDD are more than an order of magnitude, in some cases 10-30x, similar to the differences between HDD and tape. Of course, if you want hybrid solutions that take best advantage of SSD+HDD, it makes more sense to go to IBM, the leading storage vendor that has been doing HDD+tape hybrid solutions for the past 30 years. IBM understands this better, and has more experience dealing with these orders of magnitude, than EMC.
But don't just take my word for it. Here is an excerpt from Jim Handy, from [Objective Analysis] market research firm, in a recent Weekly Review from [Pund-IT] (Volume 5, Issue 23--May 6, 2009):
"What about IBM? One thing that we are finding is that IBM really “Gets It” in the area of flash in the data center. Readers of the Pund-IT Review will not only recall that IBM Research pushed its SSD-based “Quicksilver” storage system to one million IOPS using Fusion-io flash-based storage, but they also may have noticed that the recent MySQL and memcached appliances recently introduced by Schooner Information Technology are both flash-enabled devices introduced in partnership with IBM. Ironically, while other OEMs are taking the cautious approach of introducing a standard SSD option to their systems first, IBM appears to have been working on several approaches simultaneously to bring flash to the data center not only in SSDs, but in innovative ways as well."
As for why STEC put out a press release on their own this week without a corresponding IBM press release, I can only say that IBM already announced all of this support back in February, and I blogged about it in my post [Dynamic Infrastructure - Disk Announcements 1Q09]. This is not the first time one of IBM's suppliers has tried to drum up business in this manner. Intel often funds promotions for IBM System x servers (the leading Intel-based servers in the industry) to help drive more business for their Xeon processor.
So, BarryB, perhaps it's time for you to take out your green pen and work up another one of your all-too-common retractions and corrections.
Well, it's Tuesday again, and that means more IBM announcements!
Today, IBM announced the enhanced IBM System Storage DS3200 disk system. It is part of our DS3000 series: the DS3200 is SAS-attached, the DS3300 is iSCSI-attached, and the DS3400 is FC-attached. All of them support up to 48 drives, which can be a mix of SAS and SATA drives.
The DS3200 supports the following operating environments (see IBM's [Interop Matrix] for details):
- Microsoft Windows
- Linux (both Linux-x86 and Linux on POWER)
- Sun Solaris
- Novell NetWare
With today's announcements, the DS3200 can be used to boot from, as well as contain data. This is ideal to combine with IBM BladeCenter. With the IBM BladeCenter you can have 14 blades, either x86 or POWER based processors, attached to a DS3200 via SAS switch modules in the back of the chassis.
Let's take an example of how this can be used for a Scale-Out File Services[SoFS] deployment.
First, we start with servers. We could use three [IBM System x3650] servers, but that would use up all six of the direct-attach ports. Instead, we'll choose the [BladeCenter H chassis] with three HS21 blades for SoFS, leaving eleven empty blade slots for a management node or other blades to run applications.
- SAS connectivity modules
The IBM BladeCenter [SAS Connectivity Module] allows the blade servers to connect to a DS3200. Two of them fit right in the back of the BladeCenter chassis, providing full redundancy without consuming additional rack space.
- DS3200 and EXP3000 expansion drawers
We'll have one DS3200 controller with twelve internal drives, and three expansion EXP3000 drawers with twelve drives each, for a total of 48 drives. Using 1TB SATA, this would be 48 TB raw capacity.
The end result? You get a 48TB scalable NAS storage solution, supporting up to 7,500 concurrent CIFS and NFS users, with up to 700 MB/sec on large-block transfers. By using BladeCenter, you can expand performance by adding more blades to the chassis, or give blades running SAP or Oracle RAC direct read/write access to the SoFS data.
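The capacity math behind this configuration is easy to check. The RAID layout below is an illustrative assumption on my part, not part of the DS3200 announcement:

```python
# Back-of-envelope capacity for the SoFS example above.
enclosures = 4            # one DS3200 controller plus three EXP3000 drawers
drives_per_enclosure = 12
drive_tb = 1.0            # 1 TB SATA drives

raw_tb = enclosures * drives_per_enclosure * drive_tb
print(f"raw capacity: {raw_tb:.0f} TB")   # 48 TB, matching the text

# Illustrative assumption: one 12-drive RAID-5 array (11 data + 1
# parity) per enclosure, ignoring hot spares.
usable_tb = enclosures * (drives_per_enclosure - 1) * drive_tb
print(f"usable capacity: {usable_tb:.0f} TB")
```

Reserving hot spares or stepping up to RAID-6 would reduce the usable number further, the usual trade-off between capacity and protection.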
Just another example on how IBM can bring together all the components of a solution to provide customer value!
technorati tags: IBM, DS3200, BladeCenter, Linux, AIX, Windows, Solaris, VMware, NetWare, POWER, SAS, EXP3000, SATA, CIFS, NFS, SoFS
Storage Networking World conference is over, and the buzz from the analysts appears to be focused on Xiotech's low-cost RAID brick (LCRB) called Intelligent Storage Element, or ISE.
(Full disclosure: I work for IBM, not Xiotech, in case there weren't enough IBM references on this blog page to remind you of that. I am writing this piece entirely from publicly available sources of information, and not from any internal working relationships between IBM and Xiotech. Xiotech is a member of the IBM BladeCenter alliance and our two companies collaborate in that regard.)
Fellow blogger Jon Toigo in his DrunkenData blog posted [I’m Humming “ISE ISE Baby” this Week] and then a follow-up post [ISE Launches]. I looked up Xiotech's SPC-1 benchmark numbers for the Emprise 5000 with both 73GB and 146GB drives, and at 8,202 IOPS per TB, it does not seem to be as fast as the IBM SAN Volume Controller's 11,354 IOPS per TB. Xiotech offers an impressive 5-year warranty (by comparison, IBM offers up to 4 years, and EMC I think is still only 90 days). Jon also wrote a review in [Enterprise Systems] that goes into more detail about the ISE.
Fellow blogger Robin Harris in his StorageMojo blog posted [SNW update - Xiotech’s ISE and the dilithium solution], feeling that Xiotech should win the "Best Announcement at SNW" prize. He points to the cool video on the [Xiotech website]. In that video, they claim 91,000 IOPS. Given that it took forty (40) 73GB drives (or 4 datapacs) in the previous example to get 8,202 IOPS for 1TB usable, I am guessing the 91,000 IOPS is probably 44 datapacs (440 drives) glommed together, representing 11TB usable. The ISE design appears very similar to the "data modules" used in IBM's XIV Nextra system.
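That guess can be reproduced in a couple of lines, assuming (as the paragraph above does) that IOPS scale linearly with the number of datapacs:

```python
# Back-of-envelope: how much usable capacity does 91,000 IOPS imply?
iops_per_tb = 8_202    # SPC-1 result cited above: 40 drives (4 datapacs) = 1 TB usable
claimed_iops = 91_000  # figure from the Xiotech video

estimated_tb = claimed_iops / iops_per_tb   # roughly 11 TB usable
datapacs = round(estimated_tb * 4)          # 4 datapacs per TB usable
drives = datapacs * 10                      # 10 drives per datapac

print(f"~{estimated_tb:.0f} TB usable, ~{datapacs} datapacs, ~{drives} drives")
```

which lands on the same ~11TB, 44-datapac, 440-drive estimate.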
Fellow blogger Mark Twomey from EMC in his StorageZilla blog posted [Xiotech: Industry second], correctly pointing out that Xiotech's 520-byte block (512 bytes plus extra for added integrity) was not the first in the industry. Mark explains that EMC CLARiiON has had this since the early 1990s, and implies in the title that EMC's must have been the first in the industry, making Xiotech an industry second. Sorry Mark, both EMC and Xiotech were late to the game. IBM had been using a 520-byte blocksize on its disk since 1980 with the System/38. This system morphed into the AS/400, where the blocksize was bumped up to 522 bytes in 1990, and is now called the System i, where the blocksize was bumped up yet again to 528 bytes in 2007.
While IBM was clever to do this, it actually means fewer choices for our System i clients, who can only choose external disk systems that explicitly support these non-standard blocksize values, such as the IBM System Storage DS8000 and DS6000 series. (Yes, BarryB, IBM still sells the DS6000!) The DS6000 was specifically designed with the System i and smaller System z mainframes in mind, and in that niche does very well. Fortunately, as I mentioned in my February post [Getting off the island - the new i5/OS V6R1], IBM has now used virtualization, in the form of the VIOS logical partition, to allow i5/OS systems to attach to standard 512-byte block devices, greatly expanding the storage choices for our clients.
(Side note: SNW happens twice per year, so the challenge is having something new and fresh to talk about each time. While Andy Monshaw, General Manager of IBM System Storage, highlighted some of the many emerging technologies in his keynote address, IBM had shipped many of them prior to his last appearance in October 2007: thin provisioning in the IBM System Storage N series, deduplication in the IBM System Storage N series Advanced Single Instance Storage (A-SIS) feature, and Solid State Disk (SSD) drives in the IBM BladeCenter HS21-XM models. Of course, not everyone buys IBM gear the first day it is available, and IBM is not the only vendor to offer these technologies. My point is that for many people, these are still not yet deployed in their own data center, and so they are still in the future for them. However, since these IBM deliveries happened more than six months ago, they're old news in the eyes of the SNW attendees. While those who follow IBM closely would know that, others like [Britney Spears] may not.)
Back in the 1990s, when IBM was developing the IBM SAN Volume Controller (SVC), we generically called the managed disk arrays that were being virtualized by the SVC as "low-cost RAID brick" or LCRB. The IBM DS3400 is a good example of this. However, as we learned, SVC is not just for LCRB, it adds value in front of all kinds of disk systems, including the not-so-low-cost EMC DMX and IBM DS8000 disk systems. ISE might make a reasonable back-end managed disk device for IBM SVC to virtualize. This gives you the new cool features of Xiotech's ISE, with IBM SVC's faster performance, more robust functionality and advanced copy services.
Next week, I'll be in South America in meetings with IBM Business Partners and storage sales reps.
technorati tags: SNW, LCRB, Xiotech, ISE, IBM, BladeCenter, Jon Toigo, DrunkenData, Robin Harris, StorageMojo, SPC, SPC-1, SPC-2, Emprise, SAN Volume Controller, SVC, XIV, Nextra, Mark Twomey, StorageZilla, EMC, CLARiiON, System/38, AS/400, System i, i5/OS, V6R1, VIOS, Andy Monshaw, thin provisioning, N series, deduplication, de-dupe, A-SIS, SSD, HS21 XM, BarryB, Britney Spears, DMX, DS3400