Safe Harbor Statement: The information on IBM products is intended to outline IBM's general product direction and it should not be relied on in making a purchasing decision. The information on the new products is for informational purposes only and may not be incorporated into any contract. The information on IBM products is not a commitment, promise, or legal obligation to deliver any material, code, or functionality. The development, release, and timing of any features or functionality described for IBM products remains at IBM's sole discretion.
Tony Pearson is an active participant in local, regional, and industry-specific interests, and does not receive any special payments to mention them on this blog.
Tony Pearson receives part of the revenue proceeds from sales of books he has authored listed in the side panel.
Tony Pearson is not a medical doctor, and this blog does not reference any IBM product or service that is intended for use in the diagnosis, treatment, cure, prevention or monitoring of a disease or medical condition, unless otherwise specified on individual posts.
Tony Pearson is a Master Inventor and Senior Software Engineer for the IBM Storage product line at the IBM Executive Briefing Center in Tucson, Arizona, and a featured contributor to IBM's developerWorks. In 2016, Tony celebrates his 30th anniversary with IBM Storage. He is the author of the Inside System Storage series of books. This blog is for the open exchange of ideas relating to storage and storage networking hardware, software and services. You can also follow him on Twitter @az990tony.
(Short URL for this blog: ibm.co/Pearson)
Wrapping up my coverage of the annual [2010 System Storage Technical University], I attended what was perhaps the best session of the conference. Jim Nolting, IBM Semiconductor Manufacturing Engineer, presented the new IBM zEnterprise mainframe, "A New Dimension in Computing", under the Federal track.
The zEnterprise debunks the "one processor fits all" myth. For some I/O-intensive workloads, the mainframe continues to be the most cost-effective platform. However, there are other workloads where a memory-rich Intel or AMD x86 instance might be the best fit, and yet other workloads where the high number of parallel threads of reduced instruction set computing [RISC] such as IBM's POWER7 processor is more cost-effective. The IBM zEnterprise combines all three processor types into a single system, so that you can now run each workload on the processor that is optimized for that workload.
IBM zEnterprise z196 Central Processing Complex (CPC)
Let's start with the new mainframe z196 central processing complex (CPC). Many thought this would be called the z11, but that didn't happen. Basically, the z196 machine has a maximum of 96 cores versus the z10's 64-core maximum, and each core runs at 5.2GHz instead of the z10's 4.7GHz. It is available in air-cooled and water-cooled models. The primary operating system that runs on this is called "z/OS", which, when used with its integrated UNIX System Services subsystem, is fully UNIX-certified. The z196 server can also run z/VM, z/VSE, z/TPF and Linux on z, which is just Linux recompiled for the z/Architecture chip set. In my June 2008 post [Yes, Jon, there is a mainframe that can help replace 1500 servers], I mentioned the z10 mainframe had a top speed of nearly 30,000 MIPS (Million Instructions per Second). The new z196 machine can do 50,000 MIPS, a 60 percent increase!
The z196 runs a hypervisor called PR/SM that allows the box to be divided into dozens of logical partitions (LPAR), and the z/VM operating system can also act as a hypervisor running hundreds or thousands of guest OS images. Each core can be assigned a specialty engine "personality": GP for general processor, IFL for z/VM and Linux, zAAP for Java and XML processing, and zIIP for database, communications and remote disk mirroring. Like the z9 and z10, the z196 can attach to external disk and tape storage via ESCON, FICON or FCP protocols, and through NFS via 1GbE and 10GbE Ethernet.
IBM zEnterprise BladeCenter Extension (zBX)
There is a new frame called the zBX that basically holds two IBM BladeCenter chassis, each capable of 14 blades, for a total of 28 blades per zBX frame. For now, only select blade servers are supported inside, but IBM plans to expand this to include more as testing continues. The POWER-based blades can run native AIX, IBM's other UNIX operating system, and the x86-based blades can run Linux-x86 workloads, for example. Each of these blade servers can run a single OS natively, or run a hypervisor to host multiple guest OS images. IBM plans to look into running other POWER and x86-based operating systems in the future.
If you are already familiar with IBM's BladeCenter, then you can skip this paragraph. Basically, you have a chassis that holds 14 blades connected to a "mid-plane". On the back of the chassis, you have hot-swappable modules that snap into the other side of the mid-plane. There are modules for FCP, FCoE and Ethernet connectivity, which allows blades to talk to each other, as well as external storage. BladeCenter Management modules serve as both the service processor as well as the keyboard, video and mouse Local Console Manager (LCM). All of the IBM storage options available to IBM BladeCenter apply to zBX as well.
Besides general purpose blades, IBM will offer "accelerator" blades that will offload work from the z196. For example, let's say an OLAP-style query is issued via SQL to DB2 on z/OS. In the process of parsing the complicated query, DB2 creates a Materialized Query Table (MQT) to temporarily hold some data. This MQT contains just the columnar data required, which can then be transferred to a set of blade servers known as the Smart Analytics Optimizer (SAO), which processes the request and sends the results back. The Smart Analytics Optimizer comes in various sizes, from small (7 blades) to extra large (56 blades, 28 in each of two zBX frames). A 14-blade configuration can hold about 1TB of compressed DB2 data in memory for processing.
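The data-reduction step described above can be sketched in a few lines of Python. This is purely illustrative and not DB2 internals; the table, column names, and sizes are all invented:

```python
# Hypothetical sketch: why shipping only the needed columns (as an MQT does)
# shrinks the data sent to an accelerator. All names and sizes are invented.
rows = [
    {"order_id": i, "region": "WEST", "amount": 19.99, "notes": "x" * 200}
    for i in range(1000)
]

# The full row set includes wide columns (e.g. "notes") the query never touches.
needed = ("region", "amount")  # columns referenced by the OLAP query

# Project just the needed columns into a column-oriented layout.
columns = {name: [row[name] for row in rows] for name in needed}

full_cells = len(rows) * len(rows[0])                    # all cells, row-wise
projected_cells = sum(len(v) for v in columns.values())  # only needed columns
print(projected_cells < full_cells)  # True: the accelerator sees far less data
```

The same idea scales up: the fewer columns the query touches, the less data crosses the wire to the accelerator blades.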
IBM zEnterprise Unified Resource Manager
You can have up to eight z196 machines and up to four zBX frames connected together into a monstrously large system. There are two internal networks. The Inter-ensemble data network (IEDN) is a 10GbE network that connects all the OS images together, and can be further subdivided into separate virtual LANs (VLAN). The Inter-node management network (INMN) is a 1000BASE-T Ethernet that connects all the host servers together to be managed under a single pane of glass known as the Unified Resource Manager. It is based on IBM Systems Director.
By integrating service management, the Unified Resource Manager can handle Operations, Energy Management, Hypervisor Management, Virtual Server Lifecycle Management, Platform Performance Management, and Network Management, all from one place.
IBM Rational Developer for System z Unit Test (RDz)
But what about developers and testers, such as the Independent Software Vendors (ISVs) that produce mainframe software? How can IBM make their lives easier?
Phil Smith on z/Journal provides a history of [IBM Mainframe Emulation]. Back in 2007, three emulation options were in use in various shops:
Open Mainframe, from Platform Solutions, Inc. (PSI)
FLEX-ES, from Fundamental Software, Inc.
Hercules, which is an open source package
None of these are viable options today. Nobody wanted to pay IBM for its intellectual property on the z/Architecture or license the use of the z/OS operating system. To fill the void, IBM put out an officially-supported emulation environment called IBM System z Professional Development Tool (zPDT), available to IBM employees, IBM Business Partners and ISVs that register through IBM PartnerWorld. To help out developers and testers who work at clients that run mainframes, IBM now offers IBM Rational Developer for System z Unit Test, a modified version of zPDT that can run on an x86-based laptop or shared IBM System x server. Based on the open source [Eclipse IDE], RDz emulates GP, IFL, zAAP and zIIP engines on a Linux-x86 base. A four-core x86 server can emulate a 3-engine mainframe.
With RDz, a developer can write code, compile and unit test all without consuming any mainframe MIPS. The interface is similar to Rational Application Developer (RAD), and so similar skills, tools and interfaces used to write Java, C/C++ and Fortran code can also be used for JCL, CICS, IMS, COBOL and PL/I on the mainframe. An IBM study ["Benchmarking IDE Efficiency"] found that developers using RDz were 30 percent more productive than using native z/OS ISPF. (I mention the use of RAD in my post [Three Things to do on the IBM Cloud]).
What does this all mean for the IT industry? First, the zEnterprise is perfectly positioned for [three-tier architecture] applications. A typical example could be a client-facing web-server on x86, talking to business logic running on POWER7, which in turn talks to database on z/OS in the z196 mainframe. Second, the zEnterprise is well-positioned for government agencies looking to modernize their operations and significantly reduce costs, corporations looking to consolidate data centers, and service providers looking to deploy public cloud offerings. Third, IBM storage is a great fit for the zEnterprise, with the IBM DS8000 series, XIV, SONAS and Information Archive accessible from both z196 and zBX servers.
Now an avid reader of my blog has brought this to my attention. Apparently, EMC has been showing customers a presentation [Accelerating Storage Transformation with VMAX and VPLEX] with false and misleading comparison claims between IBM DS8000, HDS VSP and EMC VMAX 40K disk system performance.
(FTC Disclosure: This would be a good time to remind my readers that I work for IBM and own IBM stock. I do not endorse any of the EMC or HDS products mentioned in this post, and have no financial affiliation or investments directly with either EMC or HDS. I am basing my information solely on the presentation posted on the internet and other sources publicly available, and not on any misrepresentations from EMC speakers at the various conferences where these charts might have been shown.)
The problem with misinformation is that it is not always obvious. The EMC presentation is quite pretty and professional-looking. It is the typical slick, attention-getting, low-content, over-simplified marketing puffery you have come to expect from EMC. There are two slides in particular that I take issue with.
This first graphic implies that IBM and HDS are nearly tied in performance, but that the EMC VMAX 40K has nearly triple that bandwidth. Overall, the slide has very little detail, which makes it difficult to determine what exactly is being claimed and whether a fair comparison is being made.
The title claims that VMAX 40K is "#1 in High Bandwidth Apps". Only three disk systems are shown, so the claim appears to be relative only to those three systems. The wording "High Bandwidth Apps" is confusing, considering the cited numbers are for disk systems and no application is identified. By comparison, IBM SONAS can drive up to 105 GB/sec of sequential bandwidth, nearly double what EMC claims for its VMAX 40K, so EMC is certainly not even close to #1.
Is the workload random or sequential? That is not easy to determine. The use of "GB/s" along with the large block size of 128KB implies the I/O workload is sequential, which is great for some workloads like high performance computing, technical computing and video broadcasts. Random workloads, on the other hand, are usually measured in I/Os per second (IOPS) with a block size ranging from 4KB to 64KB. (I am assuming the 128K blocks refers to 128KB block size, and not reading the same block of cache 128,000 times.)
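The connection between the two metrics is simple arithmetic: bandwidth is just IOPS multiplied by block size. A quick sketch in Python, with purely illustrative numbers that are not measurements of any product:

```python
# Back-of-envelope relation between IOPS and bandwidth at a fixed block size.
# The IOPS figures below are illustrative, not measurements of any product.
def bandwidth_gb_per_s(iops: int, block_size_kb: int) -> float:
    """Bandwidth in GB/s implied by an IOPS rate at a given block size."""
    return iops * block_size_kb * 1024 / 1e9

# At 128KB blocks, even a modest IOPS rate yields a large GB/s figure,
# which is why big-block numbers read like sequential bandwidth claims.
print(round(bandwidth_gb_per_s(100_000, 128), 1))  # 13.1 GB/s at 128KB blocks
print(round(bandwidth_gb_per_s(100_000, 4), 2))    # 0.41 GB/s at 4KB blocks
```

The same IOPS rate looks 32 times "faster" in GB/s terms simply by quoting it at the larger block size, which is why the block size matters so much in these claims.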
The slide states "Maximum Sustainable RRH Bandwidth 128K Blocks". The acronym "RRH" is not defined; but I suspect this refers to "random read hits". For random workloads, 100 percent random read hits from cache represents one corner of the infamous "four corners" test. Real-world workloads have a mix of reads and writes, and a mix of cache hits and cache misses. It is also unclear whether the hits are from standard data cache or from internal buffers in adapters (perhaps accessing the same blocks repeatedly) or something else. So is this really for a random workload, or a sequential workload?
(The term "Hitachi Math" was coined by an EMC blogger precisely to slam Hitachi Data Systems for their blatant use of four-corners results, claiming that spouting ridiculously large, but equally unrealistic, 100 percent random read hit results don't provide any useful information. I agree. There are much better industry-standard benchmarks available, such as SPC-1 for random workloads, SPC-2 for sequential workloads, and even benchmarks for specific applications, that represent real-world IT environments. To shame HDS for their use of four-corners results, only for EMC themselves to use similar figures in their own presentation is truly hypocritical of them!)
The IBM system is identified as "DS8000". DS8000 is a generic family name that applies to multiple generations of systems first introduced in 2004. The specific model is not identified, but that is critical information. Is this a first generation DS8100, or the latest DS8800, or something in between?
The slide says "Full System Configs", but that is not defined and configuration details are not identified. Configuration details, also critical information in assessing system performance capabilities, are not specified. If the EMC box costs seven times more than IBM or HDS, would you really buy it to get 3x more performance? Is the EMC packed with the maximum amount of SSD? Were there any SSD in the IBM or HDS boxes to match?
The source of the claimed IBM DS8000 performance numbers is not identified. Did they run their own tests? While I cannot tell, the VMAX may have been configured with 64 Fibre Channel 8Gbps host connections. In that case each channel is theoretically capable of supporting about 800 MB/s at 100% channel utilization. Multiplying 64 x 800MB/s = 51.2GB/s, so did EMC just do the performance comparison on the back of a napkin, assuming there are no other bottlenecks in the system? Even then, I would not round up 51.2 to 52!
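That napkin math can be written out explicitly. The 800 MB/s per-port figure is the theoretical best case assumed above, not a measured number:

```python
# The back-of-the-napkin check described above: 64 host ports at 8Gbps FC.
# Assumes ~800 MB/s usable per 8Gbps port at 100% utilization with no other
# bottlenecks anywhere in the system, which real configurations never achieve.
ports = 64
mb_per_s_per_port = 800
total_gb_per_s = ports * mb_per_s_per_port / 1000
print(total_gb_per_s)  # 51.2, which would round to 51, not 52
```

In other words, the claimed figure exceeds even the sum of the theoretical port maximums, which is a red flag for any performance slide.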
Response times were not identified. For random I/Os, response time is a very important metric. It is possible that the Symmetrix was operating with some resources at 100% utilization to get the highest GB/s result, but that would likely make I/O response times unacceptable for real-world random I/O workloads.
IBM and HDS have both published Storage Performance Council [SPC] industry-standard performance benchmarks. EMC has not published any SPC benchmarks for VMAX systems. If EMC is interested in providing customers with audited, detailed performance information along with detailed configuration information, all based on benchmarks designed to represent real-world workloads, EMC can always publish SPC benchmark results as IBM and other vendors have done. In past blog fights, EMC has resorted to the excuse that SPC isn't perfect, but can it really argue that the vague and unrealistic claims cited in its presentation are better?
The second graphic is so absurd, you would think it came directly from Larry Ellison at an Oracle OpenWorld keynote session. EMC is comparing a configuration of VMAX 40K plus an EMC VFCache host-side flash memory cache card against IBM and HDS disk systems configured without any host-side flash cache. The comparison is clearly apples-to-oranges. Other disk system configuration details are also omitted.
FAST VP is EMC's name for its sub-volume drive tiering feature, comparable to IBM Easy Tier and Hitachi Dynamic Tiering. The graph implies that IBM and HDS can achieve only a modest incremental improvement from their sub-volume tiering. I beg to differ. I have seen various cases where a small amount of SSD on the IBM DS8000 series can drastically improve performance by 200 to 400 percent.
The "DBClassify" shown on the graph is a tool run as part of an EMC professional services offering called Database Performance Tiering Assessment, makes recommendations for storing various database objects on different drive tiers based on object usage and importance. Do you really need to pay for professional services? With IBM Easy Tier, you just turn it on, and it works. No analysis required, no tools, no professional services, and no additional charge!
VFCache is an optional product from EMC that currently has no integration whatsoever with VMAX. A fair comparison would have included a host-side flash memory cache (from any vendor) when the IBM or HDS storage system was configured. Or leave it out altogether and just focus on the sub-volume tiering comparison.
Keep in mind that EMC's VFCache supports only selected x86-based hosts. IBM has published a [Statement of Direction] indicating that it will offer host-side flash memory cache integrated with DS8000 Easy Tier for Power Systems hosts running AIX and Linux as well.
I feel EMC's claims about IBM DS8000 performance are vague and misleading. EMC appears to lack the kind of technical marketing integrity that IBM strives to attain.
Since EMC is not able or willing to publish fair and meaningful performance comparisons, it is up to me to set the record straight and point out EMC's failings in this matter.
Reminder: It's not too late to register for my Webcast "Solving the Storage Capacity Crisis" on Tuesday, September 25. See my blog post [Upcoming events in September] to register!
Well, it's Tuesday again, and you know what that means? IBM announcements!
Today's announcements are all about the Storwize family, IBM's market-leading Software Defined Storage offerings. Having sold over 55,000 systems, and managing over 1.6 Exabytes of data, IBM continues to be the #1 leader in storage virtualization solutions. The Storwize family consists of the SAN Volume Controller (SVC), Storwize V7000, Storwize V7000 Unified, Flex System V7000, Storwize V5000, Storwize V3700 and V3500.
SAN Volume Controller 2145-DH8
The new 2145-DH8 model is a complete repackaging of this popular storage system. The previous model, the 2145-CG8, was a 1U-high x86 server per node, and each node required a separate 1U-high UPS to provide battery protection for its cache. Nobody liked this. The new 2145-DH8 instead is a 2U-high node with two hot-swappable batteries, eliminating the need for a UPS altogether. Thus, an SVC node-pair using the 2145-DH8 models takes up the same 4U of space, but with fewer cables. The SVC can now also run from standard office 110/240 volt power sources.
The new model sports an 8-core processor with 32GB RAM. Since these are 2-socket servers, IBM offers the option to add a second 8-core processor and an additional 32GB of RAM to help boost Real-time Compression. Each node can optionally have one or two hardware-assisted compression cards, which use the Intel QuickAssist chip to boost compression performance.
While Real-time Compression was indeed real-time, performed in-line with the read/write I/O process at latency comparable to uncompressed data for applications, the compression process on older models was entirely software-based, consuming some of the CPU resources and lowering the maximum IOPS of the solution. With the added cores, added RAM, and hardware-assisted compression chips, IBM resolves that concern. In fact, the new 2145-DH8 with compression can provide more IOPS than an older 2145-CG8 without compression.
The previous model 2145-CG8 allowed you to put up to 4 small SSD drives in the node itself, which were treated the same as external Flash drives for purposes of having a high-speed storage pool for select volumes, or automated sub-LUN tiering with Easy Tier. The new model 2145-DH8 allows you to attach up to 48 Solid State Drives (SSD) via 12Gb SAS cables. These are housed in the new 2U-high 24F enclosures, which can offer up to 38.4 TB of Flash per SVC I/O group.
IBM also re-designed the host/device ports to use Hardware Interface Card (HIC) slots. In the 2145-CG8, you had four FCP ports and two 1GbE Ethernet ports, with options to add two 10GbE Ethernet ports or four additional FCP ports. If you had mostly an FCoE or iSCSI environment, you didn't need the FCP ports, and if you had mostly an FCP Storage Area Network (SAN) environment, then most of the Ethernet ports went unused. To solve this, the 2145-DH8 allows up to six HIC cards that are either FCP, Ethernet, or SAS. There are three fixed 1GbE Ethernet ports, which can be used for iSCSI and administration.
If you have SVC today, you can upgrade non-disruptively by either swapping out your current SVC engines with the new 2145-DH8 engines, or you can add the new 2145-DH8 engines to your existing SVC cluster. Either way, there is no outage to your applications!
IBM Storwize V7000 Gen2
This is the next generation of the popular Storwize V7000. The previous generation had a 4-core processor and 8GB of RAM per canister. The new model has an 8-core processor with 32GB of RAM per canister, with the option to double these to boost Real-time Compression. There are two canisters per control enclosure, which gives you 64GB to 128GB of RAM per Storwize V7000 I/O group.
The new Storwize V7000 comes with one hardware-assisted compression chip on the mother board of each canister, with the option to add a second chip per canister.
Each canister offers three HIC slots, which can be used for the additional hardware-assist compression chip, FCP or Ethernet ports.
To accommodate these HIC slots, new canisters were needed. Instead of the flat, wide style stacked top and bottom, we now have taller, thinner canisters that sit side by side. This side-by-side design is similar to our existing Storwize V5000 and V3700 models.
The previous model could support up to 9 expansion enclosures per control enclosure. The new Storwize V7000 can have up to 24 drives in its control enclosure, and can now attach up to 20 expansion enclosures, which allows up to 504 drives per control enclosure, and up to a maximum of 1,056 drives per Storwize cluster.
If you have previous models of Storwize V7000, you can add the new Storwize V7000 into the same cluster, or virtualize the previous storage for migration purposes.
The new software applies new capabilities to both new generation hardware as well as the older models, so people with existing gear can benefit as well.
In prior releases, the sub-LUN automated tiering was limited to two levels: Flash and HDD. This lumped all 15K, 10K and 7200 RPM drives into a common HDD category. In the new v7.3.0 code, you can now have three levels: Flash, Enterprise HDD, and Nearline HDD, or two HDD levels: Enterprise and Nearline. The Enterprise level combines 15K and 10K RPM drives, similar to what is done on the IBM System Storage DS8000 disk systems.
The new code is also able to balance your storage pools, and can be used with uniform or mixed storage pools to eliminate performance hot spots.
The new code has been enhanced to detect the hardware-assisted compression chip on the new SVC and Storwize V7000 models, and use those if available.
For the Storwize V3700 and V5000 models, the new code allows up to nine expansion enclosures per control enclosure. In the previous release, the V3700 allowed only four expansions, and the V5000 only six expansions per control enclosure. The V3700 can now support up to 240 drives, and the V5000 can support up to 480 drives.
IBM Storwize V7000 Unified File Module software v1.5
For Storwize V7000 Unified clients, there is new software for the File Modules that provide NFS, CIFS, FTP, HTTPS and SCP protocol capability. The new v1.5 code now adds NFS v4 and SMB 2.1 levels of support. Most NFS users are still on NFSv3, but about 20 percent of NFS users are using NFS v4 which offers stateful access. The SMB 2.1 for CIFS was introduced by Microsoft in Windows 7 and Windows Server 2008 R2.
Deterministic ID mapping allows you to map Windows userids to UNIX/Linux group and owner ID numbers. In the past, the problem was that this mapping was different on each machine, so people often had to stand up a Windows Services for UNIX (SFU) server to provide consistent ID mapping. Now, with the v1.5 code, you no longer have to do this. Deterministic ID mapping can replicate the mapping to each machine without an SFU server.
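Conceptually, "deterministic" means every node derives the same UNIX uid from the same Windows identity by rule rather than by lookup. A minimal sketch in Python, assuming an invented site-wide base offset; this illustrates the general idea (similar to RID-based mapping schemes), not the actual Storwize algorithm:

```python
# Illustrative sketch of deterministic ID mapping. The base offset and the
# RID-based rule are assumptions for illustration, not the product's algorithm.
UID_BASE = 10_000_000  # hypothetical site-wide offset, configured identically everywhere

def sid_to_uid(sid: str, base: int = UID_BASE) -> int:
    """Map a Windows SID to a UNIX uid using its final RID component."""
    rid = int(sid.rsplit("-", 1)[1])  # the last hyphen-separated field is the RID
    return base + rid

# Any node applying the same rule gets the same uid for the same user,
# with no central mapping server involved.
print(sid_to_uid("S-1-5-21-3623811015-3361044348-30300820-1013"))  # 10001013
```

Because the mapping is a pure function of the SID, it can be recomputed (or replicated) anywhere and always agree.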
Active Cloud Engine allows up to ten Storwize V7000 Unified systems to be connected across distance to form a single global name space. WAN caching, however, was restricted so that only a single site had write capability, while the others were read-only. In the v1.5 release, IBM now supports multiple independent writers at different locations on the same fileset.
Security enhancements include multi-tenancy, configurable password policies, session policies, and hardened boot and SSH configurations. With NFS v3/v4, you can now use [Kerberos] for security.
Finally, I am pleased to see that we now have Cinder support for files on the Storwize V7000 Unified in the OpenStack Havana release that just came out last month. The OpenStack Cinder interface can assign LUNs to virtual machines, and the new Havana release allows NAS systems to dole out files that act as LUNs, such as OVA or VMDK files. The advantage is that these files can be managed by Active Cloud Engine, cached locally across the global name space, have policies place them on appropriate storage tiers, and inactive virtual machine images can be migrated to less expensive disk or tape.
Last week, I presented IBM's strategic initiative, the IBM Information Infrastructure, which is part of IBM's New Enterprise Data Center vision. This week, I will try to get around to talking about some of the products that support those solutions.
I was going to set the record straight on a variety of misunderstandings, rumors or speculations, but I think most have been taken care of already. IBM blogger BarryW covered the fact that SVC now supports XIV storage systems in his post [SVC and XIV], and addressed some of the FUD already. Here was my list:
Now that IBM has an IBM-branded model of XIV, IBM will discontinue (insert another product here)
I had seen speculation that XIV meant the demise of the N series, the DS8000 or IBM's partnership with LSI. However, the launch reminded people that IBM announced a new release of DS8000 features, new models of the N series N6000, and the new DS5000 disk, so that squashes those rumors.
IBM XIV is a (insert tier level here) product
While there seems to be no industry-standard or agreement for what a tier-1, tier-2 or tier-3 disk system is, there seemed to be a lot of argument over what pigeon-hole category to put IBM XIV in. No question many people want tier-1 performance and functionality at tier-2 prices, and perhaps IBM XIV is a good step at giving them this. In some circles, tier-1 means support for System z mainframes. The XIV does not have traditional z/OS CKD volume support, but Linux on System z partitions or guests can attach to XIV via SAN Volume Controller (SVC), or through NFS protocol as part of the Scale-Out File Services (SoFS) implementation.
Whenever any radical, game-changing technology comes along, competitors with last century's products and architectures want to frame the discussion as if it were just yet another storage system. IBM plans to update its Disk Magic and other planning/modeling tools to help people determine which workloads would be a good fit with XIV.
IBM XIV lacks (insert missing feature here) in the current release
I am glad to see that the accusations that XIV had unprotected, unmirrored cache were retracted. XIV mirrors all writes in the cache of two separate modules, with ECC protection. XIV allows concurrent code load for bug fixes to the software. XIV offers many of the features that people enjoy in other disk systems, such as thin provisioning, writeable snapshots, remote disk mirroring, and so on. IBM XIV can be part of a bigger solution, either through SVC, SoFS or GMAS, that provides the business value customers are looking for.
IBM XIV uses (insert block mirroring here) and is not as efficient for capacity utilization
It is interesting that this came from a competitor that still recommends RAID-1 or RAID-10 for its CLARiiON and DMX products. On the IBM XIV, each 1MB chunk is written on two different disks in different modules. When disks were expensive, how much usable space a given set of HDDs provided was worthy of argument. Today, we sell you a big black box, with 79TB usable, for (insert dollar figure here). For those who feel 79TB is too big to swallow all at once, IBM offers "capacity on demand" pricing, where you can pay initially for as little as 22TB, but get all the performance, usability, functionality and advanced availability of the full box.
IBM XIV consumes (insert number of Watts here) of energy
For every disk system, a portion of the energy is consumed by the hard disk drives (HDD), and the remainder by UPS, power conversion, processors and cache memory. Again, the XIV is a big black box, and you can compare the 8.4 KW of this high-performance, low-cost, one-frame storage system with the wattage consumed by competitive two-frame (sometimes called two-bay) systems, if you are willing to take some trade-offs. To get comparable performance and hot-spot avoidance, competitors may need to over-provision or use faster, energy-hungry FC drives, and offer additional software to monitor and re-balance workloads across RAID ranks. To get comparable availability, competitors may need to drop from RAID-5 down to either RAID-1 or RAID-6. To get comparable usability, competitors may need more storage infrastructure management software to hide the inherent complexity of their multi-RAID design.
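When comparing boxes of different sizes on energy, it helps to normalize to watts per usable terabyte. A quick sketch using the figures quoted above; any competitor numbers you plug in are your own:

```python
# Rough normalization of the energy claim above: watts per usable TB.
# The 8.4 KW and 79 TB figures come from the post; nothing else is assumed.
xiv_watts = 8400
xiv_usable_tb = 79
watts_per_tb = xiv_watts / xiv_usable_tb
print(round(watts_per_tb))  # ~106 W per usable TB
```

The same one-line division applied to a competitor's frame wattage and usable capacity gives an apples-to-apples comparison point.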
Of course, if energy consumption is a major concern for you, XIV can be part of IBM's many blended disk-and-tape solutions. When it comes to being green, you can't get any greener storage than tape! Blended disk-and-tape solutions help get the best of both worlds.
Well, I am glad I could help set the record straight. Let me know what other products you would like me to focus on next.
It's official! My "blook" Inside System Storage - Volume I is now available.
This blog-based book, or “blook”, comprises the first twelve months of posts from this Inside System Storage blog, 165 posts in all, from September 1, 2006 to August 31, 2007. Foreword by Jennifer Jones. 404 pages.
IT storage and storage networking concepts
IBM strategy, hardware, software and services
Disk systems, Tape systems, and storage networking
Storage and infrastructure management software
Second Life, Facebook, and other Web 2.0 platforms
IBM’s many alliances, partners and competitors
How IT storage impacts society and industry
You can choose between hardcover (with dust jacket) or paperback versions:
This is not the first time I've been published. I have authored articles for storage industry magazines, written large sections of IBM publications and manuals, submitted presentations and whitepapers to conference proceedings, and even had a short story published with illustrations by the famous cartoonist [Ted Rall].
But I can say this is my first blook, and as far as I can tell, the first blook from IBM's many bloggers on developerWorks, and the first blook about the IT storage industry. I got the idea when I saw [Lulu Publishing] run a "blook" contest. The Lulu Blooker Prize is the world's first literary prize devoted to "blooks"--books based on blogs or other websites, including webcomics. The [Lulu Blooker Blog] lists past years' winners. Lulu is one of the new innovative "print-on-demand" publishers. Rather than printing hundreds or thousands of books in advance, as other publishers require, Lulu doesn't print them until you order them.
I considered cute titles like A Year of Living Dangerously, or An Engineer in Marketing La-La Land, or Around the World in 165 Posts, but settled on a title that closely matched the name of the blog.
In addition to my blog posts, I provide additional insights and behind-the-scenes commentary. If you go to the Lulu website above, you can preview an entire chapter before purchase. I have added a hefty 56-page Glossary of Acronyms and Terms (GOAT) with over 900 storage-related terms defined, which also doubles as an index back to the post (or posts) that use or further explain each term.
So who might be interested in this blook?
Business Partners and Sales Reps looking to give a nice gift to their best clients and colleagues
Managers looking to reward early-tenure employees and retain the best talent
IT specialists and technicians wanting a marketing perspective of the storage industry
Mentors interested in providing motivation and encouragement to their protégés
Educators looking to provide books for their classroom or library collection
Authors looking to write a blook themselves, to see how to format and structure a finished product
Marketing personnel that want to better understand Web 2.0, Second Life and social networking
Analysts and journalists looking to understand how storage impacts the IT industry, and society overall
College graduates and others interested in a career as a storage administrator
And yes, according to Lulu, if you order soon, you can have it by December 25.
I am still wiping the coffee off my computer screen, inadvertently sprayed when I took a sip while reading HDS' uber-blogger Hu Yoshida's post on storage virtualization and vendor lock-in.
HDS is a major vendor for disk storage virtualization, and Hu Yoshida has been around for a while, so I felt it was fair to disagree with some of the generalizations he made and set the record straight. He's been more careful ever since.
However, his latest post [The Greening of IT: Oxymoron or Journey to a New Reality] mentions an expert panel at SNW that included Mark O’Gara, Vice President of Infrastructure Management at Highmark. I was not at the SNW conference last week in Orlando, so I will just give the excerpt from Hu's account of what happened:
"Later I had the opportunity to have lunch with Mark O’Gara. Mark is a West Point graduate so he takes a very disciplined approach to addressing the greening of IT. He emphasized the need for measurements and setting targets. When he started out he did an analysis of power consumption based on vendor specifications and came up with a number of 513 KW for his data center infrastructure....
The physical measurements showed that the biggest consumers of power were in order: Business Intelligence Servers, SAN Storage, Robotic tape Library, and Virtual tape servers....
Another surprise may be that tape libraries are such large consumers of power. Since tape is not spinning most of the time they should consume much less power than spinning disk - right? Apparently not if they are sitting in a robotic tape library with a lot of mechanical moving parts and tape drives that have to accelerate and decelerate at tremendous speeds. A Virtual Tape Library with de-duplication factor of 25:1 and large capacity disks may draw significantly less power than a robotic tape library for a given amount of capacity.
Obviously, I know better than to sip coffee whenever reading Hu's blog. I am down here in South America this week, where the coffee is very hot and very delicious, so I am glad I didn't waste any on my laptop screen this time, especially reading that last sentence!
In that report, a 5-year comparison found that a repository based on SATA disk was 23 times more expensive overall, and consumed 290 times more energy, than a tape library based on LTO-4 tape technology. The analysts even considered a disk-based Virtual Tape Library (VTL). Focusing just on backups, at a 20:1 deduplication ratio, the VTL solution was still 5 times more expensive than the tape library. At the 25:1 ratio that Hu Yoshida mentions in his post above, it would still be 4 times more expensive than a tape library.
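The arithmetic behind that last figure is simple enough to sketch. This assumes, as a simplification, that the VTL's cost multiple scales inversely with the deduplication ratio; the 5x-at-20:1 figure is from the report, the scaling is my own back-of-the-envelope reasoning:

```python
def vtl_cost_multiple(base_multiple, base_ratio, new_ratio):
    """Scale a VTL-vs-tape cost multiple by a different dedup ratio,
    assuming cost is proportional to physical capacity required."""
    return base_multiple * (base_ratio / new_ratio)

# At 20:1 the report pegged VTL at 5x the cost of tape;
# improving the ratio to 25:1 shrinks the multiple accordingly.
print(vtl_cost_multiple(5.0, 20, 25))  # 4.0 -- still 4x the cost of tape
```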
I am not disputing Mark O'Gara's disciplined approach. It is possible that Highmark is using a poorly written backup program, taking full backups every day, to an older non-IBM tape library, in a manner that causes no end of activity for the poor tape robotics inside. But rather than changing over to a VTL, perhaps Mark might be better off investigating the use of IBM Tivoli Storage Manager, using progressive backup techniques, with appropriate policies, parameters and settings, to a more energy-efficient IBM tape library. In well-tuned backup workloads, the robotics are not very busy. The robot mounts the tape, and then the backup runs for a long time filling up that tape, all the while the robot sits idle waiting for another request.
(Update: My apologies to Mark and his colleagues at Highmark. The above paragraph implied that Mark was using bad products or configured them incorrectly, and was inappropriate. Mark, my full apology [here])
If you do decide to go with a Virtual Tape Library, for reasons other than energy consumption, doesn't it make sense to buy it from a vendor that understands tape systems, rather than buying it from one that focuses on disk systems? Tape system vendors like IBM, HP or Sun understand tape workloads as well as related backup and archive software, and can provide better guidance and recommendations based on years of experience. Asking advice about tape systems, including Virtual Tape Libraries, from a disk vendor is like asking for advice on different types of bread from your butcher, or advice about various cuts of meat at the bakery.
The butchers and bakers might give you answers, but it may not be the best advice.
Continuing my rant from Monday's post [Time for a New Laptop], I got my new laptop Wednesday afternoon. I was hoping the transition would be quick, but that was not the case. Here were my initial steps prior to connecting my two laptops together for the big file transfer:
Document what my old workstation has
Back in 2007, I wrote a blog post on how to [Separate Programs from Data]. I have since added a Linux partition for dual-boot on my ThinkPad T60.
Windows XP SP3 operating system and programs
Red Hat Enterprise Linux 5.4
My Documents and other data
I also created a spreadsheet of all my tools, utilities and applications. I combined and deduplicated the list from the following sources:
Control Panel -> Add/Remove programs
Start -> Programs panels
Program taskbar at bottom of screen
The last one was critical. Over the years, I have gotten into the habit of saving the ZIP or EXE files that self-install programs into a separate directory, D:/Install-Files, so that if I had to uninstall an application, due to conflicts or compatibility issues, I could re-install it without having to download it again.
So, I have a total of 134 applications, which I have put into the following rough categories:
AV - editing and manipulating audio, video or graphics
Files - backup, copy or manipulate disks, files and file systems
Browser - Internet Explorer, Firefox, Opera and Google Chrome
Communications - Lotus Notes and Lotus Sametime
Connect - programs to connect to different Web and Wi-Fi services
Demo - programs I demonstrate to clients at briefings
Drivers - attach or sync to external devices, cell phones, PDAs
Games - not much here, just the basic Solitaire, Minesweeper and Pinball
Help Desk - programs to diagnose, test and gather system information
Projects - special projects like Second Life or Lego Mindstorms
Lookup - programs to lookup information, like American Airlines TravelDesk
Meeting - I have FIVE different webinar conferencing tools
Office - presentations, spreadsheets and documents
Platform - Java, Adobe Air and other application runtime environments
Player - do I really need SIXTEEN different audio/video players?
Printer - print drivers and printer management software
Scanners - programs that scan for viruses, malware and adware
Tools - calculators, configurators, sizing tools, and estimators
Uploaders - programs to upload photos or files to various Web services
Backup my new workstation
My new ThinkPad T410 has a dual-core i5 64-bit Intel processor, so I burned a 64-bit version of [Clonezilla LiveCD] and booted the new system with that. The new system has the following configuration:
Windows XP SP3 operating system, programs and data
There were only 14.4GB of data, so it took just 10 minutes to back up to an external USB disk. I ran it twice: first using the option to dump the entire disk, and then to dump just the selected partition. The results were roughly the same.
Run Workstation Setup Wizard
The Workstation Setup Wizard asks for all the pertinent location information, time zone, userid/password, needed to complete the installation.
I made two small changes to redirect data from the C: drive to the D: drive.
Changed "My Documents" to point to D:\Documents, which moves the files over from C: to their new location on D:. See [Microsoft procedure] for details.
Edited C:\notes\notes.ini to point to D:\notes\data to store all the local replicas of my email and databases.
Install Ubuntu Desktop 10.04 LTS
My plan is to run Windows and Linux guests through virtualization. I decided to try out Ubuntu Desktop 10.04 LTS, affectionately known as Lucid Lynx, which can support a variety of different virtualization tools, including KVM, VirtualBox-OSE and Xen. I have two identical 15GB partitions (sda2 and sda3) that I can use to hold two different systems, or one can be a subdirectory of the other. For now, I'll leave sda3 empty.
Take another backup of my new workstation
I took a fresh backup of partitions (sda1, sda2, sda6) with Clonezilla.
The next step involved a cross-over Ethernet cable, which I don't have. So that will have to wait until Thursday morning.
Wrapping up my week's theme of storage optimization, I thought I would help clarify the confusion between data reduction and storage efficiency. I have seen many articles and blog posts that either use these two terms interchangeably, as if they were synonyms for each other, or as if one is merely a subset of the other.
Data Reduction is LOSSY
By "Lossy", I mean that reducing data is an irreversible process. Details are lost, but insight is gained. In his paper [Data Reduction Techniques], Rajana Agarwal defines this simply:
"Data reduction techniques are applied where the goal is to aggregate or amalgamate the information contained in large data sets into manageable (smaller) information nuggets."
Data reduction has been around since the 18th century.
Take for example this histogram from [SearchSoftwareQuality.com]. We have taken ninety individual student scores and reduced them down to just five numbers: the counts in each range. This provides for easier comprehension and comparison with other distributions.
The process is lossy. I cannot determine or re-create an individual student's score from these five histogram values.
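For the curious, the reduction is easy to reproduce in a few lines. This is a minimal sketch; the random scores and bin ranges are made up for illustration, not taken from the actual chart:

```python
import random

random.seed(42)
scores = [random.randint(40, 100) for _ in range(90)]  # ninety student scores

# Reduce ninety scores to five numbers: the count in each range.
bins = [(40, 52), (52, 64), (64, 76), (76, 88), (88, 101)]
counts = [sum(lo <= s < hi for s in scores) for lo, hi in bins]

print(counts)       # five counts now stand in for ninety scores
print(sum(counts))  # 90 -- every score falls in exactly one bin
```

Going from `counts` back to `scores` is impossible; that is what makes the reduction lossy.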
This next example, courtesy of [Michael Hardy], represents another form of data reduction known as ["linear regression analysis"]. The idea is to take a large set of data points between two variables, x along the horizontal axis and y along the vertical, and find the line that best fits them. The data is thus reduced from many points to just two numbers, slope (a) and intercept (b), resulting in an equation of y=ax+b.
The process is lossy. I cannot determine or re-create any original data point from this slope and intercept equation.
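Here is a minimal least-squares sketch of that reduction, using made-up points. Note that the fit returns only the two numbers (a, b):

```python
def fit_line(points):
    """Least-squares fit: reduce many (x, y) points to slope a and intercept b."""
    n = len(points)
    sx = sum(x for x, _ in points)
    sy = sum(y for _, y in points)
    sxx = sum(x * x for x, _ in points)
    sxy = sum(x * y for x, y in points)
    a = (n * sxy - sx * sy) / (n * sxx - sx * sx)
    b = (sy - a * sx) / n
    return a, b

# Points that lie exactly on y = 2x + 1 recover a=2, b=1; but the
# individual points cannot be recovered from (a, b) alone.
print(fit_line([(0, 1), (1, 3), (2, 5), (3, 7)]))  # (2.0, 1.0)
```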
This last example, from [Yahoo Finance], reduces millions of stock trades to a single point per day, typically the closing price, to show the overall growth trend over the course of the past year.
The process is lossy. Even if I knew the low, high and closing price of a particular stock on a particular day, I would not be able to determine or re-create the actual price paid for individual trades that occurred.
Storage Efficiency is LOSSLESS
By contrast, there are many IT methods that can be used to store data in ways that are more efficient, without losing any of the fine detail. Here are some examples:
Thin Provisioning: Instead of storing 30GB of data on 100GB of disk capacity, you store it on 30GB of capacity. All of the data is still there, just none of the wasteful empty space.
Space-efficient Copy: Instead of copying every block of data from source to destination, you copy over only those blocks that have changed since the copy began. The blocks not copied are still available on the source volume, so there is no need to duplicate this data.
Archiving and Space Management: Data can be moved out of production databases and stored elsewhere on disk or tape. Enough XML metadata is carried along so that there is no loss in the fine detail of what each row and column represent.
Data Deduplication: The idea is simple. Find large chunks of data that contain the same exact information as an existing chunk already stored, and merely set a pointer to avoid storing the duplicate copy. This can be done in-line as data is written, or as a post-process task when things are otherwise slow and idle.
When data deduplication first came out, some lawyers were concerned that this was a "lossy" approach, that somehow documents were coming back without some of their original contents. How else can you explain storing 25PB of data on only 1PB of disk?
(In some countries, companies must retain data in their original file formats, as there is concern that converting business documents to PDF or HTML would lose some critical "metadata" information such as modification dates, authorship information, underlying formulae, and so on.)
Well, the concern applies only to those data deduplication methods that rely solely on a calculated hash code or fingerprint, such as EMC Centera or EMC Data Domain. If the hash code of new incoming data matches the hash code of existing data, the new data is assumed to be identical and discarded. Such hash collisions are rare; I have read of only a few occurrences of unique data being discarded in the past five years. To ensure full integrity, the IBM ProtecTIER data deduplication solution and IBM N series disk systems chose instead to do full byte-for-byte comparisons.
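To illustrate the difference, here is a toy sketch of hash-based deduplication with a byte-for-byte verification step. This is just an illustration of the idea, not how ProtecTIER or any other product actually implements it:

```python
import hashlib

class DedupStore:
    """Toy chunk store: hash lookup plus byte-for-byte verification.

    Trusting a hash match alone risks (rare) collisions silently
    discarding unique data; comparing the actual bytes before
    discarding a chunk guarantees the process stays lossless.
    """
    def __init__(self):
        self.chunks = {}  # hash -> chunk bytes

    def put(self, chunk: bytes) -> str:
        key = hashlib.sha256(chunk).hexdigest()
        existing = self.chunks.get(key)
        if existing is not None and existing == chunk:  # byte-for-byte check
            return key            # true duplicate: keep only a pointer
        self.chunks[key] = chunk  # new data: store it
        return key

store = DedupStore()
k1 = store.put(b"same block of data")
k2 = store.put(b"same block of data")  # deduplicated, nothing stored twice
k3 = store.put(b"different block")
print(k1 == k2, len(store.chunks))     # True 2
```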
Compression: There are both lossy and lossless compression techniques. The lossless Lempel-Ziv algorithm is the basis for the LTO-DC algorithm used in IBM's Linear Tape Open [LTO] tape drives, the Streaming Lossless Data Compression (SLDC) algorithm used in IBM's [Enterprise-class TS1130] tape drives, and the Adaptive Lossless Data Compression (ALDC) used by the IBM Information Archive for its disk pool collections.
Last month, IBM announced that it was [acquiring Storwize]. Its Random Access Compression Engine (RACE) is also a lossless compression algorithm based on Lempel-Ziv. As servers write files, Storwize compresses them and passes them on to the destination NAS device. When files are read back, Storwize retrieves and decompresses the data to its original form.
As with tape, the savings from compression can vary, typically from 20 to 80 percent. In other words, 10TB of primary data could take up from 2TB to 8TB of physical space. To estimate what savings you might achieve for your mix of data types, try out the free [Storwize Predictive Modeling Tool].
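You can get a feel for lossless compression savings with a few lines of Python. The standard zlib module implements DEFLATE, a Lempel-Ziv derivative, so this is only a rough analogue of the tape algorithms mentioned above, and the repetitive sample text is of course a best-case input:

```python
import zlib

def savings_percent(data: bytes) -> float:
    """Percent of space saved by lossless DEFLATE compression."""
    compressed = zlib.compress(data, level=9)
    return 100.0 * (1 - len(compressed) / len(data))

text = b"the quick brown fox jumps over the lazy dog " * 200
saved = savings_percent(text)
print(round(saved, 1))  # highly repetitive data compresses very well

# Lossless: decompressing recovers every original byte.
assert zlib.decompress(zlib.compress(text)) == text
```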
So why am I making a distinction on terminology here?
Data reduction is already a well-known concept in specific industries, like High-Performance Computing (HPC) and Business Analytics. IBM has the largest market share in supercomputers that do data reduction for all kinds of use cases: scientific research, weather prediction, financial projections, and decision support systems. IBM has also recently acquired a lot of companies related to Business Analytics, such as Cognos, SPSS, CoreMetrics and Unica Corp. These use data reduction on large amounts of business and marketing data to help drive new sources of revenue, provide insight for new products and services, create more focused advertising campaigns, and help understand the marketplace better.
There are certainly enough methods of reducing the quantity of storage capacity consumed, like thin provisioning, data deduplication and compression, to warrant an "umbrella term" that refers to all of them generically. I would prefer we do not "overload" the existing phrase "data reduction" but rather come up with a new phrase, such as "storage efficiency" or "capacity optimization" to refer to this category of features.
IBM is certainly quite involved in both data reduction as well as storage efficiency. If any of my readers can suggest a better phrase, please comment below.
Are you tired of hearing about Cloud Computing without having any hands-on experience? Here's your chance. IBM has recently launched its IBM Development and Test Cloud beta. This gives you a "sandbox" to play in. Here's a few steps to get started:
Generate a "key pair". There are two keys. A "public" key that will reside in the cloud, and a "private" key that you download to your personal computer. Don't lose this key.
Request an IP address. This step is optional, but I went ahead and got a static IP, so I don't have to type in long hostnames like "vm353.developer.ihost.com".
Request storage space. Again, this step is optional, but you can request a 50GB, 100GB, or 200GB LUN. I picked a 200GB LUN. Note that each instance already comes with some 10 to 30GB of storage. The advantage of a storage LUN is that it is persistent, and you can mount it to different instances.
Start an "instance". An "instance" is a virtual machine, pre-installed with whatever software you chose from the "asset catalog". These are Linux images running under Red Hat Enterprise Virtualization (RHEV) which is based on Linux's kernel virtual machine (KVM). When you start an instance, you get to decide its size (small, medium, or large), whether to use your static IP address, and where to mount your storage LUN. On the examples below, I had each instance with a static IP and mounted the storage LUN to /media/storage subdirectory. The process takes a few minutes.
So, now that you are ready to go, what instance should you pick from the catalog? Here are three examples to get you started:
IBM WebSphere sMASH Application Builder
Base OS server to run LAMP stack
Next, I decided to try out one of the base OS images. There are a lot of books on Linux, Apache, MySQL and PHP (LAMP), a stack that powers nearly 70 percent of the web sites on the Internet. This instance lets you install all the software from scratch. Between the Red Hat and Novell SUSE distributions of Linux, Red Hat is focused on being the hypervisor of choice, and SUSE on being the guest OS of choice. Most of the images in the "asset catalog" are based on SLES 10 SP2. However, there was a base OS image of Red Hat Enterprise Linux (RHEL) 5.4, so I chose that.
To install software, you either have to find the appropriate RPM package, or download a tarball and compile from source. To try both methods out, I downloaded tarballs of Apache Web Server and PHP, and got the RPM packages for MySQL. If you just want to learn SQL, there are instances on the asset catalog with DB2 and DB2 Express-C already pre-installed. However, if you are already an expert in MySQL, or are following a tutorial or examples based on MySQL from a classroom textbook, or just want a development and test environment that matches what your company uses in production, then by all means install MySQL.
This is where my SSH client comes in handy. I am able to login to my instance and use "wget" to fetch the appropriate files. An alternative is to use "SCP" (also part of PuTTY) to do a secure copy from your personal computer up to the instance. You will need to do everything via command line interface, including editing files, so I found this [VI cheat sheet] useful. I copied all of the tarballs and RPMs on my storage LUN ( /media/storage ) so as not to have to download them again.
Compiling and configuring them is a different matter. By default, you log in as an end user, "idcuser" (which stands for IBM Developer Cloud user). However, sometimes you need "root" level access. Use "sudo bash" to get root-level access, which allows you to put the files where they need to be. If you haven't done a configure/make/make install in a while, here's your chance to relive those "glory days".
In the end, I was able to confirm that Apache, MySQL and PHP were all running correctly. I wrote a simple index.php that invoked phpinfo() to show all the settings were set correctly. I rebooted the instance to ensure that all of the services started at boot time.
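If you would rather script that final check than eyeball it, something like this sketch works. The hostname is the example from earlier in the post and the ports are the usual Apache and MySQL defaults, so substitute your own values:

```python
import socket
import urllib.request

def port_open(host: str, port: int, timeout: float = 3.0) -> bool:
    """Check whether a TCP service is accepting connections."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

host = "vm353.developer.ihost.com"  # substitute your instance's hostname

# Is Apache answering on 80, and MySQL listening on 3306?
for name, port in [("Apache", 80), ("MySQL", 3306)]:
    print(name, "up" if port_open(host, port) else "down")

# And does PHP render? Fetch the index.php that calls phpinfo().
try:
    page = urllib.request.urlopen(f"http://{host}/index.php", timeout=5).read()
    print("PHP OK" if b"phpinfo" in page.lower() else "unexpected page")
except OSError as e:
    print("fetch failed:", e)
```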
Rational Application Developer over VDI
This last example, I started an instance pre-installed with Rational Application Developer (RAD), which is a full Integrated Development Environment (IDE) for Java and J2EE applications. I used the "NX Client" to launch a virtual desktop image (VDI) which in this case was Gnome on SLES 10 SP2. You might want to increase the screen resolution on your personal computer so that the VDI does not take up the entire screen.
From this VDI, you can launch any of the programs, just as if it were your own personal computer. Launch RAD, and you get the familiar environment. I created a short Java program and launched it on the internal WebSphere Application Server test image to confirm it was working correctly.
If you are thinking, "This is too good to be true!" there is a small catch. The instances are only up and running for 7 days. After that, they go away, and you have to start up another one. This includes any files you had on the local disk drive. You have a few options to save your work:
Copy the files you want to save to your storage LUN. This storage LUN appears persistent, and continues to exist after the instance goes away.
Take an "image" of your "instance", a function provided in the IBM Developer and Test Cloud. If you start a project Monday morning and work on it all week, then on Friday afternoon take an "image". This will shut down your instance and back up all of the files to your own personal "asset catalog", so that the next time you request an instance, you can choose that "image" as the starting point.
Another option is to request an "extension" which gives you another 7 days for that instance. You can request up to five unique instances running at the same time, so if you wanted to develop and test a multi-host application, perhaps one host that acts as the front-end web server, another host that does some kind of processing, and a third host that manages the database, this is all possible. As far as I can tell, you can do all the above from either a Windows, Mac or Linux personal computer.
Getting hands-on access to Cloud Computing really helps to understand this technology!
Continuing on the [IBM Storage Launch of February 9], John Sing has offered to write the following guest post about the [announcement] of IBM Scale Out Network Attached Storage [IBM SONAS]. John and I have known each other for a while, traveled the world to work with clients and speak at conferences. He is an Executive IT Consultant on the SONAS team.
Guest Post written by John Sing, IBM San Jose, California
What is IBM SONAS? It’s many things, so let’s start with this list:
It’s IBM’s delivery of a productized, pre-packaged Scale Out NAS global virtual file server, delivered in an easy-to-use appliance
IBM’s solution for large enterprise file-based storage requirements, where massive scale in capacity and extreme performance is required, especially for today’s modern analytics-based Competitive Advantage IT applications
Scales to many petabytes of usable storage and billions of files in a single global namespace
Provides integrated central management, central deployment of petabyte levels of storage
Modular commercial-off-the-shelf [COTS] building blocks. I/O, storage, network capacity scale independently of each other. Up to 30 interface nodes and 60 storage nodes, in an IBM General Parallel File System [GPFS]-based cluster. Each 10Gb CEE interface node port is capable of streaming at 900 MB/sec
Files are written in block-sized chunks, striped over many disk drives in parallel – aggregating throughput on a massive scale (both read and write), as well as providing auto-tuning and auto-balancing
Functionality delivered via one program product, IBM SONAS Software, which provides all of above functions, along with clustered CIFS, NFS v2/v3 with session auto-failover, FTP, high availability, and more
IBM SONAS makes automated tiered storage achievable and realistic at petabyte levels:
Integrated high-performance parallel scan engine capable of scanning over 10 million files per minute per node
Integrated parallel data movement engine to physically relocate the data within tiered storage
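The striping idea described above can be sketched in a few lines. This toy round-robin layout is only an illustration of why striping aggregates throughput, not GPFS's actual placement algorithm:

```python
def stripe(data: bytes, block_size: int, num_disks: int):
    """Deal block-sized chunks round-robin across disks, so a large
    sequential read or write engages every spindle in parallel."""
    disks = [[] for _ in range(num_disks)]
    blocks = [data[i:i + block_size] for i in range(0, len(data), block_size)]
    for i, block in enumerate(blocks):
        disks[i % num_disks].append(block)
    return disks, len(blocks)

def reassemble(disks, num_blocks):
    """Invert the striping: block i lives at disks[i % n][i // n]."""
    n = len(disks)
    return b"".join(disks[i % n][i // n] for i in range(num_blocks))

data = bytes(range(256)) * 10            # a 2,560-byte "file"
disks, nblocks = stripe(data, block_size=64, num_disks=4)
print([len(d) for d in disks])           # 40 blocks dealt evenly: [10, 10, 10, 10]
assert reassemble(disks, nblocks) == data  # striping itself is lossless
```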
And we’re just scratching the surface. IBM has plans to deploy additional protocols, storage hardware options, and software features.
However, the real question of interest should be, “who really needs that much storage capacity and throughput horsepower?”
The answer may surprise you. IMHO, the answer is: almost any modern enterprise that intends to stay competitive. Hmmm…… Consider this: the reason that IT exists today is no longer to simply save cost (that may have been true 10 years ago). Everyone is reducing cost… but how much competitive advantage is purchased through “let’s cut our IT budget by 10% this year”?
Notice that in today’s world, there are (many) bright people out there, changing our world every day through New Intelligence Competitive Advantage analytics-based IT applications such as real time GPS traffic data, real time energy monitoring and redirection, real time video feed with analytics, text analytics, entity analytics, real time stream computing, image recognition applications, HDTV video on demand, etc. Think of how GPS industry, cell phone / Twitter / Facebook, iPhone and iPad applications, as examples, are creating whole new industries and markets almost overnight.
Then start asking yourself, “What's behind these Competitive Advantage IT applications – as they are the ones that are driving all my storage growth? Why do they need so much storage? What do those applications mean for my storage requirements?”
To be “real-time”, long-held IT paradigms are being broken every day. Things like “data proximity”: we can no longer extract terabytes of data from production databases and load them into a data warehouse – where’s the “real-time” in that? Instead, today’s modern analytics-based applications demand:
Multiple processes and servers (sometimes numbering in the 100s) simultaneously ….
Running against hundreds of terabytes of live production data, streaming in from an expanding number of smarter sensors, input devices and users
Producing digital image-intensive results that must be programmatically sent to an ever-increasing number of mobile devices in geographically dispersed locations
Requiring parallel performance levels, that used to be the domain only of High Performance Computing (HPC)
This is a major paradigm shift in storage – and that is the solution and storage capabilities that IBM SONAS is designed to address. And of course, you should be able to save significant cost through the SONAS global virtual file server consolidation and virtualization as well.
Certainly, this topic warrants more discussion. If you found it interesting, contact me, your local IBM Business Partner or IBM Storage rep to discuss Competitive Advantage IT applications and SONAS further.
Last week, I got the following comment from Bob Swann:
I am looking for the IBM VM Poster or a picture of the IBM VM "Catch the Wave"
Do you know where I might find it?
Well, Bob, I made some phone calls. The company that published these posters no longer exists, but I found a coworker at the Poughkeepsie Briefing Center who still had the poster on his wall, and he was kind enough to take a picture of it for you.
VM: The Wave of the Future (click thumbnail at left to see larger image)
Some may recognize this as a [mash-up] using as a base the famous Japanese 10-inch by 15-inch block print [The Great Wave off Kanagawa] by artist [Katsushika Hokusai]. I had this as my laptop's wallpaper image until last year when I was presenting in Kuala Lumpur, Malaysia. I was told that it reminded people of the horrible tsunami caused by the [Indian Ocean earthquake] back in 2004. I was actually scheduled to fly the last week of December 2004 to Jakarta, Indonesia, but at the last minute our client team changed plans. I would have been en route over the Pacific Ocean when the tsunami hit, and probably stranded over there for weeks or months until the airports re-opened.
The Wave theme was in part to honor the IBM users group called World Alliance VSE VM and Linux (WAVV), which is having their next meeting [April 18-22, 2008] in Chattanooga, Tennessee. I presented at this conference back in 1996 in Green Bay, Wisconsin, as part of the IBM Linux for S/390 team. It started on the Sunday that Wisconsin switched their clocks for [Daylight Saving Time], and the few of us from Arizona or other places that don't bother with this all showed up for breakfast an hour early.
When I was in Australia last year, I was told that the wave sports fans do, by raising their hands in coordinated sequence, is called the [Mexican Wave] in most other countries. When I was there, Melbourne was trying to outlaw this practice at their cricket matches.
The "wave" represents a powerful metaphor, from the z/VM operating system on System z mainframes to VMware and Xen on Intel-based machines, as the direction of virtualization that we are heading toward for future data centers. The Mexican wave represents a glimpse of what humans can accomplish with collaboration on a global scale. It can also represent the tidal wave of data arising from nearly 60 percent annual growth in storage capacity. (I had to mention storage eventually, to avoid being completely off-topic on this post!)
I hope this is the graphic you were looking for Bob. If anyone else has wave-themed posters they would like to contribute, please post a comment below.
My series last week on IBM Watson (which you can read [here], [here], [here], and [here]) brought attention to IBM's Scale-Out Network Attached Storage [SONAS]. IBM Watson used a customized version of SONAS technology for its internal storage, and like most of the components of IBM Watson, IBM SONAS is commercially available as a stand-alone product.
Like many IBM products, SONAS has gone through various name changes. First introduced by Linda Sanford at an IBM SHARE conference in 2000 under the IBM Research codename Storage Tank, it was then delivered as a software-only offering SAN File System, then as a services offering Scale-out File Services (SoFS), and now as an integrated system appliance, SONAS, in IBM's Cloud Services and Systems portfolio.
If you are not familiar with SONAS, here are a few of my previous posts that go into more detail:
This week, IBM announces that SONAS has set a world record benchmark for performance, [a whopping 403,326 IOPS for a single file system]. The results are based on comparisons of publicly available information from Standard Performance Evaluation Corporation [SPEC], a prominent performance standardization organization with more than 60 member companies. SPEC publishes hundreds of different performance results each quarter covering a wide range of system performance disciplines (CPU, memory, power, and many more). SPECsfs2008_nfs.v3 is the industry-standard benchmark for NAS systems using the NFS protocol.
(Disclaimer: Your mileage may vary. As with any performance benchmark, the SPECsfs benchmark does not replicate any single workload or particular application. Rather, it encapsulates scores of typical activities on a NAS storage system. SPECsfs is based on a compilation of workload data submitted to the SPEC organization, aggregated from tens of thousands of fileservers, using a wide variety of environments and applications. As a result, it is comprised of typical workloads and with typical proportions of data and metadata use as seen in real production environments.)
The configuration tested involves SONAS Release 1.2 with 10 Interface Nodes and 8 Storage Pods, resulting in a single file system with over 900TB of usable capacity.
10 Interface Nodes; each with:
Maximum 144 GB of memory
One active 10GbE port
8 Storage Pods; each with:
2 Storage nodes and 240 drives
Drive type: 15K RPM SAS hard drives
Data Protection using RAID-5 (8+P) ranks
Six spare drives per Storage Pod
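As a sanity check on the capacity quoted above, here is a quick back-of-the-envelope calculation. The announcement does not state the drive size, so the 600GB-per-drive figure below is my illustrative assumption:

```python
# Rough sanity check of the "over 900TB usable" figure from the tested
# configuration. Assumption: 600GB usable per 15K RPM SAS drive (the
# drive size is not stated in the announcement; this is my guess).
DRIVE_TB = 0.6          # assumed usable capacity per drive, in TB
PODS = 8
DRIVES_PER_POD = 240
SPARES_PER_POD = 6
RANK_WIDTH = 9          # RAID-5 (8+P): 8 data drives + 1 parity drive

ranks_per_pod = (DRIVES_PER_POD - SPARES_PER_POD) // RANK_WIDTH  # 26 ranks
data_drives_per_pod = ranks_per_pod * 8                          # 208 data drives
usable_tb = PODS * data_drives_per_pod * DRIVE_TB

print(f"{ranks_per_pod} ranks per pod, roughly {usable_tb:.0f} TB "
      "of data capacity before filesystem overhead")
```

With those assumptions the math lands just under 1,000TB of raw data capacity, which is consistent with "over 900TB usable" once filesystem overhead is subtracted.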
IBM wanted to test a realistic, "no compromises" configuration, choosing:
Regular 15K RPM SAS drives, rather than a silly configuration full of super-expensive Solid State Drives (SSD) to plump up the results.
Moderate size, typical of what clients are asking for today. The Goldilocks rule applies. This SONAS is not a small configuration under 100TB, and nowhere close to the maximum supported configuration of 7,200 disks across 30 Interface Nodes and 30 Storage Pods.
Single file system, often referred to as a global name space, rather than using an aggregate of smaller file systems added together that would be more complicated to manage. Having multiple file systems often requires changes to applications to take advantage of the aggregate performance. It is also more difficult to load-balance your performance and capacity across multiple file systems. Of course, SONAS can support up to 256 separate file systems if you have a business need for this complexity.
The results are stunning. IBM SONAS handled three times more workload for a single file system than the next leading contender. All of the major players are there as well, including NetApp, EMC and HP.
Well, it's Tuesday, and that means IBM announcements!
IBM kicks EMC in the teeth with the announcement of System Storage Easy Tier, a new feature available at no additional charge on the DS8700 with the R5.1 level microcode. Barry Whyte introduces the concept in his [post this morning]. I will use SLAM (sub-LUN automatic movement) to refer generically to IBM Easy Tier and EMC FAST v2. EMC has yet to deliver FAST v2, and given that they just recently got full-LUN FAST v1 working a few months ago, it might be next year before you see EMC sub-LUN FAST v2.
Here are the key features of Easy Tier on the DS8700:
Sub-LUN Automatic Movement
IBM made it really easy to implement this on the DS8700. Today, you have "extent pools" that can be either SSD-only or HDD-only. With this new announcement, we introduce "mixed" SSD+HDD extent pools. The hottest extents are moved to SSD, and cooler extents are moved down to HDD. The support applies to both Fixed block architecture (FBA) LUNs as well as Count-Key-Data (CKD) volumes. In other words, an individual LUN or CKD volume can have some of its 1GB extents on SSD and other extents on FC or SATA disk.
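The core idea of sub-LUN automatic movement can be sketched in a few lines: rank the 1GB extents by recent I/O activity ("heat") and keep the hottest ones on SSD. This is my simplified illustration of the concept, not IBM's actual Easy Tier placement algorithm:

```python
# Illustrative sketch of sub-LUN tiering: rank 1GB extents by their
# observed I/O activity and place the hottest ones on SSD until the SSD
# tier is full. A simplification for explanation only -- not the real
# Easy Tier algorithm.
def place_extents(extent_heat, ssd_capacity_extents):
    """extent_heat: dict of extent_id -> I/O count over the monitoring window.
    Returns (extents destined for SSD, extents destined for HDD)."""
    ranked = sorted(extent_heat, key=extent_heat.get, reverse=True)
    ssd = set(ranked[:ssd_capacity_extents])
    hdd = set(ranked[ssd_capacity_extents:])
    return ssd, hdd

heat = {"e1": 9500, "e2": 120, "e3": 4800, "e4": 15}
ssd, hdd = place_extents(heat, ssd_capacity_extents=2)
print(ssd)  # the two hottest extents (e1 and e3) go to SSD
```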
Entire-LUN Manual Relocation
Entire-LUN Manual Relocation (ELMR, pronounced "Elmer"?) is similar to what EMC offers now with FAST v1. With this feature, you can now relocate an entire LUN non-disruptively from any extent pool to any other extent pool. You can relocate LUNs from an SSD-only or HDD-only pool over to a new Easy Tier-managed "mixed" pool, or take a LUN out of Easy Tier management by moving it to an SSD-only or HDD-only pool. Of course, this support also applies to both Fixed block architecture (FBA) LUNs as well as Count-Key-Data (CKD) volumes.
This feature also can be used to relocate LUNs and CKD volumes from FC to SATA pools, from RAID-10 to RAID-5 pools, and so on.
What if you already have SSD-only and HDD-only pools and want to use Easy Tier? You can now merge pools to create a "mixed" pool.
Before this announcement, you had to buy 16 solid-state drives at a time, called Mega-packs. Now, you can choose to buy just 8 SSD at a time, called Mini-packs. It turns out that just moving as little as 10 percent of your data from Fibre Channel disk over to Solid-State with Easy Tier can result in up to 300 to 400 percent performance improvement. IBM plans to publish formal SPC-1 benchmark results using Easy Tier-managed mixed extent pool in a few weeks.
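Why does moving only 10 percent of the data help so much? Because disk I/O is highly skewed: a small fraction of the data typically receives most of the I/O. The skew fraction and service times below are my assumptions for illustration, not measured values from the DS8700:

```python
# Back-of-the-envelope model of the benefit of moving hot data to SSD.
# All figures are illustrative assumptions, not DS8700 measurements.
HDD_MS = 8.0    # assumed average HDD service time, ms
SSD_MS = 0.5    # assumed average SSD service time, ms
HOT_FRACTION_OF_IO = 0.70   # assume 10% of extents receive 70% of the I/O

before = HDD_MS  # all I/O served by HDD
after = HOT_FRACTION_OF_IO * SSD_MS + (1 - HOT_FRACTION_OF_IO) * HDD_MS

print(f"average service time {before:.1f} ms -> {after:.2f} ms "
      f"({before / after:.1f}x improvement)")
```

Under those assumptions, average service time improves roughly threefold, which is in the same ballpark as the 300 to 400 percent figure above.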
Storage Tier Advisor Tool (STAT)
Don't have SSD yet, or not sure how awesome Easy Tier will be for your data center? The IBM Storage Tier Advisor Tool will analyze your extents and estimate how much benefit you will derive if you implement Easy Tier with various amounts of SSD. Those clients with R5.1 microcode on their DS8700 can download it from the [DS8700 FTP site].
Yesterday, I promised I would cover other products from the Feb 12 announcement. Today I will focus on the IBM SAN768B director. Some people are confused about the differences between switches and directors. I find there are three key differences:
Directors are designed for 24x7 operation, highly available with no single points of failure or repair. Generally, all components in directors are redundant and hot-swappable, including Control Processors. In switches, some components are redundant and hot-swappable (such as fans and power supplies), but not the "motherboard" or controller. Often you have to take down a switch to make firmware or major hardware changes or upgrades.
Directors are designed to take in "blades" with different features, port counts, or protocol capabilities. You can add or remove blades while the system is up and running. Switches have a fixed number of ports. (A Small Form-factor Pluggable optical transceiver [SFP] is the component that turns electrical pulses into light pulses, and vice versa. You plug the SFP into the switch, and then plug the fiber optic cable into the SFP.)
With switches, you often start with a base number of active ports, and then can enable the rest of the ports as you need them.
Directors have hundreds of ports. Switches tend to have 64 ports or less.
Last year, Brocade acquired McDATA. Both were OEM suppliers to IBM, and IBM distinguished them in the naming convention. The IBM SAN***B name denoted products manufactured for IBM by Brocade, and the SAN***M name denoted products manufactured by McDATA.
At that time, Brocade and McDATA equipment did not mix very well on the same fabric, so IBM retained the naming convention so that you as a customer knew what worked with what.
Brocade has now released new levels of both operating systems--Brocade's FOS and McDATA's EOS--and their respective fabric managers--Brocade Fabric Manager (FM) and McDATA's Enterprise Fabric Connectivity Manager (EFCM)--so that they have full interoperability.
Brocade's goal is to enhance EFCM to be a common software management platform for all of their products going forward.
IBM used the maximum port count in the name to provide some clue as to the size of the switch or director. The SAN16B-2 and the SAN32B-3 are switches that have a maximum of 16 and 32 ports, respectively. The SAN256B supports a maximum of eight blades of your choosing. Two different types were supported for FC ports, a 16-port blade and a 32-port blade. If all eight were 32-port blades, then the maximum was 256 ports, hence the name. But then Brocade began offering 48-port blades. Should IBM change the name? No, it decided to leave it the SAN256B, even though it can now have a maximum of 384 ports.
Not to confuse anyone, the SAN768B also has a maximum of 384 ports, in the same 14U dimensions, but with a special twist. Normally, to connect two directors together you use up ports from each, in what are called "inter-switch links" (ISL). These are ports you are taking away from availability for the servers and storage controllers. The SAN768B offers a new alternative called "inter-chassis links" (ICL). Each SAN768B has two processing blades, and each blade has two ICL ports, so with just four two-meter (2m) cables, you get the equivalent of 128 FC 8 Gbps ISL links without using 128 individual ports on each side. That is like giving you 256 ports back for use with servers and storage!
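The port arithmetic behind that claim, as I understand it: an ISL consumes one port on each chassis it connects, so replacing 128 ISLs with ICL cables frees ports on both sides:

```python
# Port math behind the ICL benefit: each ISL burns one port on EACH of
# the two connected chassis, so replacing 128 ISL-equivalents with four
# ICL cables frees ports on both sides.
ISL_EQUIVALENT_LINKS = 128
PORTS_PER_ISL_PER_CHASSIS = 1
CHASSIS = 2

ports_freed = ISL_EQUIVALENT_LINKS * PORTS_PER_ISL_PER_CHASSIS * CHASSIS
print(f"{ports_freed} ports freed for servers and storage")  # 256 ports
```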
Since IBM directors require 240-volt power, the IBM TotalStorage SAN Cabinet C36 includes power distribution units (PDUs). PDUs are just glorified power strips, but a new intelligent PDU (iPDU) option introduces additional intelligence to monitor energy consumption, for customers looking to measure, and perhaps charge back, energy consumption to the rest of the business. You can stack two SAN768B in one cabinet, one on top of the other, and connected via ICLs they would look like one huge 768-port backbone.
As a backbone for your data center, the SAN768B is positioned for two emerging technologies:
8 Gbps Fibre Channel (FC)
The SAN768B is powerful enough to have 32-port blades run full speed on all ports off-blade without oversubscription. Oversubscription is an emotional topic.
Normally, blades (like switches) can handle all traffic at full speed without delays provided the in-bound and out-bound ports involved are all on the same blade. In a director, however, if you need to communicate from a port on one blade to a port on a different blade, it is possible that off-blade traffic might be constrained or delayed in its transit across the backplane.
On the SAN768B, both the 16-port and 32-port blades can run at full 8 Gbps speed, and the 48-port is exposed to oversubscription only if you have more than 32-ports running at full 8 Gbps transferring data off-blade concurrently.
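To put a number on the worst case for the 48-port blade, here is the oversubscription arithmetic implied by the description above (off-blade bandwidth matching 32 ports at full 8 Gbps):

```python
# Oversubscription arithmetic for the 48-port blade, per the description
# above: the backplane can carry the traffic of 32 ports at 8 Gbps, but
# demand can reach 48 ports at 8 Gbps if everything goes off-blade at once.
PORT_GBPS = 8
PORTS_ON_BLADE = 48
offblade_capacity_gbps = 32 * PORT_GBPS            # 256 Gbps off-blade
worst_case_demand_gbps = PORTS_ON_BLADE * PORT_GBPS  # 384 Gbps demand

ratio = worst_case_demand_gbps / offblade_capacity_gbps
print(f"worst-case oversubscription {ratio:.1f}:1")  # 1.5:1
```

In practice you only hit that ratio when more than 32 of the 48 ports are simultaneously streaming off-blade at full speed, which is rare for typical server workloads.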
The new 8 Gbps SFPs support auto-negotiation at N-1 and N-2 generation link speeds. This means that they will automatically slow down when communicating with 4 Gbps and 2 Gbps devices, but they cannot communicate with 1 Gbps devices. If you are still using 1 Gbps devices in your data center, you will need to use 4 Gbps SFPs (which also support 2 Gbps and 1 Gbps link speeds) to communicate with those older devices.
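The N-1/N-2 rule above can be sketched as a small lookup: an SFP supports its own generation's speed plus the two previous generations. This is just my restatement of the rule in code, using the FC speed ladder described here:

```python
# The N-1/N-2 auto-negotiation rule described above, as a small lookup:
# an SFP negotiates its own generation's speed plus the two prior speeds.
FC_SPEEDS_GBPS = [1, 2, 4, 8]  # successive Fibre Channel generations

def supported_speeds(sfp_gbps):
    """Speeds (in Gbps) an SFP of the given generation can negotiate."""
    idx = FC_SPEEDS_GBPS.index(sfp_gbps)
    return FC_SPEEDS_GBPS[max(0, idx - 2): idx + 1]

print(supported_speeds(8))  # [2, 4, 8] -> cannot reach 1 Gbps devices
print(supported_speeds(4))  # [1, 2, 4] -> can still reach 1 Gbps devices
```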
Basically, this new technology enables transport of Fibre Channel packets over 10 Gbps Ethernet links. This 10 Gbps Ethernet can also be used to carry traditional iSCSI and TCP/IP traffic. FCoE introduces new extensions to provide Fibre Channel characteristics, like being lossless, and offering consistent performance. The ANSI T11 team is driving FCoE as an open standard, and at the moment it is not fully baked. I suggest you don't buy any FCoE equipment prematurely, as pre-standard devices or host bus adapters could get you burned later when the standard is finalized.
The idea is that FCoE blades can be installed in a SAN768B along with traditional FC blades, allowing routing of traffic between traditional FC and new FCoE ports. Those who have invested in FCIP for long distance replication will be able to continue using either FC or FCoE inputs.
One of the big drivers of FCoE is IBM BladeCenter. Currently, most BladeCenter blades support both Ethernet and FC connectivity and are connected to both Ethernet and FC switches on the back of each BladeCenter chassis. With FCoE, we have the potential to run both FC and IP traffic across simpler all-Ethernet blades, connecting through all-Ethernet switches on the backs of each chassis.
For more information on the IBM SAN768B, see the [IBM Press Release]. For more details on Brocade's strategy, here is an 8-page white paper on their [Data Center Fabric] vision.
I've gotten suggestions to upgrade the memory and disk storage, and how to fine-tune the Microsoft Windows XP operating system. Others suggested replacing the OS with Linux, and to use the Cloud to avoid some of the storage space limitations.
But first, I have to mention the latest in our series of "Enterprise Systems" videos. The first was being [Data Ready]. The second was being [Security Ready]. And now the third in the series: the 3-minute [Cloud Ready] video.
So I decided to try different Cloud-oriented Operating Systems, to see if any would be a good fit. Here is what I found:
(FTC Disclosure: I work for IBM and own IBM stock. This blog post is not meant to endorse one OS over another. I have financial interests in, and/or have friends and family who work at some of the various companies mentioned in this post. Some of these companies also have business relationships with IBM.)
Jolicloud and Joli OS 1.2
I gave this OS a try. It is based on Linux, but with an interesting approach. First, you have to be online all the time; this OS is designed for 15-to-25-year-olds who live on social media websites like Facebook. With a Jolicloud account, you can access your desktop from any browser on any system, run the Joli OS operating system itself, or buy the Jolibook netbook computer with it pre-installed.
The Joli OS 1.2 LiveCD ran fine on my T410 with 4GB of RAM, giving me a chance to check it out, but sadly did not run on grandma's Thinkpad R31 with 384MB of RAM. According to the [Jolicloud specifications], Joli OS should run in as little as 384MB of RAM and 2GB of disk storage space, but it didn't for me.
Google Chrome and Chromium OS Vanilla
Like the Jolibook, Google has come out with a $249 Chromebook laptop that runs its "Chrome OS". Chrome OS is only available via OEM install on designated hardware, but there is an open source version called Chromium OS. Both are also based on Linux.
Rather than compiling from source, Hexxeh has made nightly builds available. You can download [Chromium OS Vanilla] zip file, unzip the image file, and copy it to a 4GB USB memory stick. The compressed image is about 300MB, but uncompressed about 2.5GB, so too big to fit on a CD. The image on the USB stick is actually two partitions, and cannot be run from DVD either.
If you don't have a 4GB USB stick handy, and want to see what all the fuss is about, just install the Google Chrome browser on your Windows or Linux system, and then maximize the browser window. That's it. That is basically what Chromium OS is all about.
Files can be stored locally, or out on your Google Drive. Documents can be edited using "Google Docs" in the Cloud. You can run in "off-line" mode, for example, read your Gmail notes when not connected to the Internet. Music and video files can be played using the "Files" app.
If you really need to get out of the browser, you can hit the right combination of keys to get to the "crosh" command line shell.
As with Joli OS, I was able to run this from my Thinkpad T410 with 4GB of RAM, but not on grandma's Thinkpad R31. It appears that Chromium OS requires at least 1GB of RAM to run properly.
Android for x86
While researching the Chromium OS, I found that there is an open source community porting [Android to the x86] platform. Android is based on Linux, and would allow your laptop or netbook to run very much like a smartphone or tablet. Most of the apps available to Android should work here as well.
Unfortunately, the project has focused only on selected hardware:
ASUS Eee PCs/Laptops
Viewsonic Viewpad 10
Dell Inspiron Mini Duo
Lenovo ThinkPad x61 Tablet
I tried running the Thinkpad x61 version on both my Thinkpad T410 and grandma's Thinkpad R31, but with no success.
Peppermint OS Three
Next up was Peppermint OS, which claims to be a blend of Linux Mint, Lubuntu, and Xfce, but with a "twist" of aspiring to be a Cloud-oriented OS.
Rather than traditional apps to write documents or maintain a calendar, this OS offers a "Single-Site Browser" (SSB) experience, where you can configure "apps" by pointing to their respective URL. For documents, launch GWoffice, the client for Google Docs. For calendar, launch Google Calendar.
Most Linux distros have both a number and a project name associated with them. For example, Ubuntu 10.04 LTS is known as "Lucid Lynx". The Peppermint OS team avoided this by just calling their latest version "Three" which serves as both its number and its name.
The browser is Chromium, similar to Google Chrome OS above, and uses the "DuckDuckGo" search engine. This is how the Peppermint OS folks make their money to defray the costs of this effort.
Peppermint OS claims to run on systems with as little as 192MB of RAM and only 4GB of disk space. The LiveCD ran well on both my Thinkpad T410 and grandma's Thinkpad R31. More importantly, when I installed it on the hard drive, it ran well.
The music app "Guayadeque" that came pre-installed was awful. It couldn't play MP3 music out of the box; I had to install codec plugins from the various "ubuntu-restricted-extras" libraries. I then installed the music app "Rhythmbox", and that worked great. Time from power-on to first note was less than 2 minutes! However, the problems with Guayadeque gave me the impression this OS might not be ready for primetime.
I contacted grandma to ask if she has Wi-Fi in her home, and sure enough, she doesn't. Her PC upstairs is direct attached to the cable modem. So, while the Cloud suggestion was worthy of investigation, I will continue to pursue other options that do not require being connected. I certainly do not want to spend any time and effort getting Wi-Fi installed there.