Better to Buy From a Single Integrated Vendor
In his blog post, [The Lure of Kit-Cars], fellow blogger Chuck Hollis (EMC) uses an excellent analogy delineating the differences between kit-cars you build from parts and fully-integrated systems that you can drive off the car dealership showroom lot. The analogy holds up relatively well: IT departments can likewise build their infrastructure from parts, or get fully-integrated systems from a variety of vendors.
Certainly, this debate is not new. In my now infamous 2007 post [Supermarkets and Specialty Shops], I explained that some clients preferred to get their infrastructure from a single IT supermarket, like IBM or HP, while others were lured into thinking that buying separate parts from butchers, bakers, candlestick makers and other specialty shops was somehow a better idea. Chuck correctly explains that in the early years of the automobile industry, before major car manufacturers had mass-production assembly lines, putting a car together from parts was the only way cars were made. Today, only the most avid enthusiasts build cars this way. The majority get their cars from a single seller and drive away.

In my post [Resolving the Identity Crisis], I postulated that EMC appeared to be trying to shed its "disk-only specialty shop" image and become more like IBM. Not quite a full IT supermarket, but perhaps more like a [Trader Joe's] premium-priced retailer. (If you find that EMC's focus on integrated systems appears to be a 180-degree about-face from their historical focus on selling individual best-of-breed products, see my previous discussion of Chuck's contradictions in my blog post: [Is Storage the Next Confusopoly].)

While companies like EMC might be making this transition, there is a lot of resistance and inertia in the customer marketplace. I agree with Chuck: companies should not be building kit-cars, or IT infrastructures from parts, certainly not from parts sold by different vendors. In my post [Talking about Solutions not Products], I explained how difficult it is to change behavior. CIOs, IT directors and managers need to think differently about their infrastructure. Let's take a quick look at some choices:
Before he earned his PhD in Mechanical Engineering, my father was a car mechanic. I spent much of my teenage years covered in grease, helping my father assemble cars, lift engines, and rebuild carburetors. It was good father-son time, and I certainly learned something in the process. Like the automobile industry, the IT industry has matured, and it makes no financial sense to build your own IT infrastructure from parts from different vendors. For a test drive of the industry's leading integrated IT systems, see your IBM sales rep or IBM Business Partner.
Tags:  smart+analytics kit-cars dell netapp oracle cloudburst sun acadia vmware ibm chuck+hollis supermarkets infosphere+balanced+wareh... cisco emc specialty-shops hp |
IBM high-end disk outsold HDS and HP combined
This week, Hitachi Ltd. announced its next-generation disk storage virtualization array, the Virtual Storage Platform (VSP), following on the success of its USP V line. It didn't take long for fellow blogger Chuck Hollis (EMC) to comment on this in his blog post [Hitachi's New VSP: Separating The Wheat From The Chaff]. Here are some excerpts:
Chuck implies that neither Hewlett-Packard (HP) nor Hitachi Data Systems (HDS) provides any value-add over the box manufactured by Hitachi Ltd., so he combines them into a single category. I suspect the HP and HDS folks might disagree with that opinion. When I reminded Chuck that IBM was also a major player in the high-end disk space, his response included the following gem: "Many of us in the storage industry believe that IBM currently does not field a competitive high-end storage platform. IDC market share numbers bear out this assertion, as you probably know."

While Chuck is certainly entitled to his own beliefs and opinions, believing the world is flat does not make it so. I doubt IDC or any other market research firm has put out a survey asking "Do you think IBM offers a competitive high-end disk storage platform?" Of course, if Chuck is basing his opinion on anecdotal conversations with existing EMC customers, I can see how he might have formed this misperception. However, IDC market share numbers don't support Chuck's assertion at all.

There is no industry-standard definition of what a "high-end" or "enterprise-class" disk system is. Some define high-end as having the option for mainframe attachment via the ESCON and/or FICON protocols. Others focus on features, functionality, scalability and 99.999+ percent availability. Others insist high-end requires block-oriented protocols like FC and iSCSI, rather than file-based protocols like NFS and CIFS. For the most demanding mission-critical mix of random and sequential workloads, IBM offers the [IBM System Storage DS8000 series] high-end disk system, which connects to mainframes and distributed servers via FCP and FICON attachment, and supports a variety of drive types and RAID levels. The features that HP and HDS are touting today for the VSP are already available on the IBM DS8000, including sub-LUN automatic tiering between solid-state drives and spinning disk, called [Easy Tier], as well as thin provisioning, wide striping, point-in-time copies, and long-distance synchronous and asynchronous replication.

There are lots of analysts that track market share for the IT storage industry, but since Chuck mentions [IDC] specifically, I reviewed the most recent IDC data, published a few weeks ago in their "IDC Worldwide Quarterly Disk Storage Systems Tracker" for 2Q 2010, representing April 1 to June 30, 2010 sales. Just in case any of the rankings have changed over time, I also looked at the previous four quarters: 2Q 2009, 3Q 2009, 4Q 2009 and 1Q 2010. (Note: IDC considers its analysis proprietary, so out of respect for their business model I will not publish any of the actual facts and figures they have collected. If you would like to get any of the IDC data to form your own opinion, contact them directly.) IDC divides disk systems into three storage classes: entry-level, midrange and high-end. Their definition of "high-end" is external RAID-protected disk storage that sells for $250,000 USD or more, representing roughly 25 to 30 percent of the external disk storage market overall. Here are IDC's rankings of the four major players for high-end disk systems:
By either measure of market share, units (disk systems) or revenue (US dollars), IDC reports that IBM high-end disk outsold both HDS and HP combined. This has been true for the past five quarters. If a smaller start-up vendor had single-digit market share, I could accept it being counted as part of Chuck's "occasional sideshow from other vendors trying to claim relevance", but IBM high-end disk has consistently held 20 to 30 percent market share over the past five quarters!

Not all of these high-end disk systems are connected to mainframes. According to IDC data, only about 15 to 25 percent of these boxes are counted under their "Mainframe" topology. Chuck further writes: "It's reasonable to expect IBM to sell a respectable amount of storage with their mainframes using a protocol of their own design -- although IBM's two competitors in this rather proprietary space (notably EMC and Hitachi) sell more together than does IBM."

The IDC data doesn't support that claim either, Chuck. By either measure of market share, units (disk systems) or revenue (US dollars), IDC reports that IBM disk for mainframes outsold all other vendors (including EMC, HDS, and HP) combined. Again, this has been true for the past five quarters. Here is the IDC ranking for mainframe disk storage:
IBM has over 50 percent market share in this case, primarily because the IBM System Storage DS8000 is the industry leader in mainframe-related features and functions, and offers synergy with the rest of the z/Architecture stack.

So Chuck, I am not picking a fight with you or asking you to retract or correct your blog post. Your main theme, that the new VSP presents serious competition to EMC's VMAX high-end disk arrays, is certainly something I can agree with. Congratulations to HDS and HP for putting forth what looks like a viable alternative to EMC's VMAX.
Tags:  high-end p9500 hds virtual+storage+platform ds8000 easy+tier enterprise-class usp-v hp chuck+hollis vsp idc emc marketshare ibm hitachi |
IBM Launches new Storage Solutions October 2010
Well, it's Thursday, and today IBM is having a major launch for storage. We have lots of exciting announcements today, so here are the major highlights:
These are just a subset of today's announcements. To see the rest, read [What's New].
Tags:  ibm svc easy+tier ds8800 lulu announcements storwize+v7000 #ibmstorage sas |
IBM ProtecTIER and the Systems Director Storage Control plug-in
It's Tuesday, and you know what that means... IBM Announcements!
To learn more about IBM storage hardware, software or services, see the updated [IBM System Storage] landing page.
Tags:  tsrm deduplication protectier ibm ts7650 ts7650g tsanm symantec ts7610 sspc trellisoft gui tpc openstorage netbackup ost storage+control api systems+director |
IBM Storwize Product Name Decoder Ring
IBM had its big launch yesterday of the [IBM Storwize V7000 midrange disk system], and some have already discussed IBM's choice of name. Fellow blogger Stephen Foskett has an excellent post titled [IBM’s Storwize V7000: 100% SVC; 0% Storwize]. On The Register, Chris Mellor writes [IBM's Midrange Storage Blast - Storwize. But Without Compression]. In his latest [Friday Rant], fellow blogger Chuck Hollis (EMC) feels "the new name is cool, if a bit misleading." In the spirit of the [HP Product Line Decoder Ring] and [Microsoft Codename Tracker], here is your quick IBM product name decoder ring:
If you think this is the first time a company like IBM has pulled shenanigans with product names like this, think again. Here are a few posts that might refresh your memory:
But what about acquisitions? When [IBM acquired Lotus Development Corporation], it kept the "Lotus" brand. New products that fit the "collaboration" function were put under the Lotus brand. I think most people can accept this approach. But have we ever seen an existing product renamed to an acquired name? In my January 2009 post [Congratulations to Ken on your QCC Milestone], I mentioned that my colleague Ken Hannigan worked on an internal project initially called "Workstation Data Save Facility" (WDSF), which was changed to "Data Facility Distributed Storage Manager" (DFDSM), then renamed to "ADSTAR Distributed Storage Manager" (ADSM), and finally renamed to the name it has today: IBM Tivoli Storage Manager (TSM). Readers reminded me that [IBM acquired Tivoli Systems, Inc.] in 1996, so TSM could not have been an internally developed product. Ha! Wrong! Let's take a quick history lesson on how this came about:
I participated in five months of painful meetings to figure out what to name our new integrated midrange offering, and in the end the Storwize name from the acquisition was chosen. However, the new IBM Storwize V7000 midrange product had nothing in common with the appliances acquired from Storwize, the company, so to avoid confusion, the latter products were renamed to [IBM Real-time Compression]. Fellow blogger Steven Kenniston, the Storage Alchemist of Storwize fame and now part of IBM through the acquisition, gives his perspective on this in his post [Storwize – What is in a Name, Really?].

While I am often critical of the names and terms IBM uses, I have to say this last set of naming decisions makes a lot of sense to me, and I support it wholeheartedly. To learn more about the IBM Storwize V7000 midrange disk system, watch the latest videos on the IBM Virtual Briefing Center (VBC). We have a [short summary version for CFO executives] as well as a [longer version for IT technical professionals].
Tags:  ibm svc lou+gerstner decoder+ring storwize codename stephen+foskett microsoft storwize+v7000 real-time+compression tsm adsm |
One of the Faces of the Smarter Planet campaign
Making true advances in any industry or field requires forward thinking, as well as industry insight and experience. It can't be done just by packaging a bag of piece parts and putting a new label on it. But forward thinkers are putting smarter, more powerful technology to uses that were once unimaginable, either in scale or in scope.

I am pleased that IBM has honored me with recognition as a "forward thinker" on the corporate-wide [IBM Smarter Planet for Smarter IT systems and Infrastructure] page. This is quite an honor, being one of the "faces" of IBM's Smarter Planet campaign. I am joined by my esteemed colleagues: [Brian Sanders], [Steve Will], [Willie Favero], and [Kathleen Holm]. Ironically, I didn't even know I made the final cut until I got three, yes three, separate requests for interviews about it. I have already reached the "million hits" milestone. Other people track these things for me, so it will be interesting to see how much additional traffic my latest [15 minutes of fame] will generate. To learn more, visit the [Smarter Planet overview] landing page. Together, we can build a smarter planet!
Tags:  infrastructure smarter+planet ibm forward+thinker kathleen+holm steve+will brian+sanders willie+favero |
The Correct Use of the term Nearline
A client asked me to explain "Nearline storage" to them. This was easy, I thought, as I started my IBM career on DFHSM, now known as DFSMShsm for z/OS, which was created in 1977 to support the IBM 3850 Mass Storage System (MSS), a virtual storage system that blended disk drives and tape cartridges with robotic automation. Here is a quick recap:

- Online storage: data sits on devices that are immediately and continuously accessible, such as spinning disk or solid-state drives.
- Nearline storage: data sits on removable media in an automated library, such as tape cartridges in an automated tape library, and can be mounted and accessed in seconds to minutes without human intervention.
- Offline storage: data sits on media that requires manual intervention to retrieve and mount, such as tape cartridges stored on shelves or in an off-site vault.
These terms and their definitions have been used for decades, and are consistent with, or at least similar to, the definitions I found on [Wikipedia], [Webopedia], [WiseGEEK], and [SearchStorage]. Sadly, it appears a few storage manufacturers and vendors have been misusing the term "Nearline" to refer to "slower online" spinning disk drives. For example, I found this [June 2005 technology paper from Seagate] and this [2002 NetApp Press Release], the latter of which includes this contradiction for their "NearStore" disk array. Here is the excerpt:
Which is it, "online access" or "nearline storage"? If a client asked why slower drives consume less energy or generate less heat, I could explain that, but if they ask why slower drives must have SATA connections, that is a different discussion. The speed of a drive and its connection technology are for the most part independent. A 10K RPM drive can be made with an FC, SAS or SATA connection.

I am opposed to using "Nearline" just to distinguish between four-digit speeds (such as 5400 or 7200 RPM) versus "online" for five-digit speeds (10,000 and 15,000 RPM). The difference in performance between 10K RPM and 7200 RPM spinning disks is minuscule compared to the difference between solid-state drives and any spinning disk, or the difference between spinning disk and tape. I am also opposed to using the term "Nearline" for online storage systems just because they are targeted at typical use cases, like backup, archive or other reference information, that were previously directed to nearline devices like automated tape libraries.

Can we all just agree to refer to drives as "fast" or "slow", or give them RPM rotational speed designations, rather than try to incorrectly imply that FC and SAS drives are always fast, and SATA drives are always slow? Certainly we don't need new terms like "NL-SAS" just to represent a slower SAS-connected drive.
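To put rough numbers behind that claim, here is a back-of-the-envelope sketch. It only computes average rotational latency (half a revolution) for a few common spindle speeds and compares them with an assumed, ballpark SSD access time; the 0.1 ms SSD figure is a simplification I chose for illustration, not a measurement, and seek time and transfer rate are ignored entirely.

```python
# Back-of-the-envelope comparison of average rotational latency by spindle speed.
# Average rotational latency is half a revolution: 0.5 / (RPM / 60) seconds.
# The SSD figure below is an assumed ballpark value for illustration only.

def avg_rotational_latency_ms(rpm: int) -> float:
    """Average rotational latency in milliseconds for a drive spinning at `rpm`."""
    return 0.5 / (rpm / 60.0) * 1000.0

if __name__ == "__main__":
    for rpm in (5400, 7200, 10000, 15000):
        print(f"{rpm:>6} RPM drive: ~{avg_rotational_latency_ms(rpm):.1f} ms rotational latency")
    print("   SSD (assumed): ~0.1 ms access latency, no rotation at all")
```

The gap between 7200 RPM and 10K RPM works out to roughly a millisecond, while the gap between any spinning disk and solid-state is an order of magnitude or more, which is exactly the point above: rotational speed classes are a poor excuse for redefining "Nearline".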
Tags:  ibm nl-sas sas maid offline dvd optical online nearline sata netapp ssd seagate fc |
Was SAN File System really five years ahead of its time?
Fellow master inventor and blogger Barry Whyte (IBM) recounts the past 20 years of history in IT storage from his perspective in a series of blog posts. They are certainly worth a read:
In his last post in this series, he mentions that the amazingly successful IBM SAN Volume Controller was part of a set of projects: "IBM was looking for 'new horizon' projects to fund at the time, and three such projects were proposed and created the 'Storage Software Group'. Those three projects became known externally as TPC (TotalStorage Productivity Center), SanFS (SAN File System - oh how this was just 5 years too early) and SVC (SAN Volume Controller). The fact that two out of the three of them still exist today is actually pretty good. All of these products came out of research, and it's a sad state of affairs when research teams are measured against the percentage of the projects they work on, versus those that turn into revenue generating streams."

But this raises the question: was SAN File System really just five years too early? IBM classifies products into three "horizons": Horizon-1 for well-established mature products, Horizon-2 for recently launched products, and Horizon-3 for emerging business opportunities (EBO). Since I had some involvement with these other projects, I thought I would help fill out some of this history from my perspective.

Back in 2000, IBM executive [Linda Sanford] was in charge of IBM's storage business and explained that IBM Research was working on the concept of "Storage Tank", which would hold petabytes of data accessible to mainframes and distributed servers. In 2001, I was the lead architect of DFSMS for the IBM z/OS operating system for mainframes, and was asked to be lead architect for the new Horizon-3 project to be called IBM TotalStorage Productivity Center (TPC), which has since been renamed to IBM Tivoli Storage Productivity Center. In 2002, I was asked to lead a team to port the SANfs client for SAN File System from Linux-x86 over to Linux on System z. How easy or difficult it is to port any code depends on how well it was written with the intent to be ported, and porting the proof-of-concept level code proved a bit too challenging for my team of relative new-hires. Once code written by research scientists is sufficiently complete to demonstrate a proof of concept, it should be entirely discarded and rewritten from scratch by professional software engineers who follow proper development and documentation procedures. We reminded management of this, and they decided not to make the necessary investment to add Linux on System z as a supported operating system for SAN File System.
In 2003, IBM launched Productivity Center, SAN File System and SAN Volume Controller. These were lumped together with the Horizon-1 product IBM Tivoli Storage Manager, and the four products were promoted together as a single software family.

The SAN File System was the productized version of the "Storage Tank" research project. While the SAN Volume Controller used the industry-standard Fibre Channel Protocol (FCP) to support a variety of operating system clients, the SAN File System required an installed "client" that was initially available only on AIX and Linux-x86. In keeping with the "open" concept, an "open source reference client" was made available so that the folks at Hewlett-Packard, Sun Microsystems and Microsoft could port it over to their respective HP-UX, Solaris and Windows operating systems. Not surprisingly, none were willing to voluntarily add yet another file system to their testing efforts.

Barry argues that SANfs was five years ahead of its time. SAN File System tried to bring policy-based management for information, which has been part of DFSMS for z/OS since the 1980s, over to distributed operating systems. The problem is that mainframe people who understand and appreciate the benefits of policy-based management already had it, and non-mainframe people couldn't understand the benefits of something they had managed to survive without. (Every time I see VMware presented as a new or clever idea, I have to remind people that this x86-based hypervisor basically implements the mainframe concept of server virtualization introduced by IBM in the 1970s. IBM is the leading reseller of VMware, and supports other server virtualization solutions including Linux KVM, Xen, Hyper-V and PowerVM.)

To address the various concerns about SAN File System, the proof-of-concept code from IBM Research was withdrawn from marketing, and fresh code implementing these concepts was integrated into IBM's existing General Parallel File System (GPFS). This software was then packaged with a server hardware cluster, exporting global file spaces with broad operating system reach. Initially offered as the IBM Scale-out File Services (SoFS) service offering, this was later re-packaged as an appliance, the IBM Scale-Out Network Attached Storage (SONAS) product, and as the IBM Smart Business Storage Cloud (SBSC) offering. These now offer clustered NAS storage using the industry-standard NFS and CIFS clients that nearly all operating systems already have.
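For readers who have never worked with policy-based management, here is a minimal, hypothetical sketch of the idea that SAN File System (and DFSMS long before it) promoted: instead of placing files by hand, an administrator writes declarative rules and the system assigns each file to a storage pool. The rules, pool names and file attributes below are invented for illustration; they are not DFSMS, SAN File System or GPFS syntax.

```python
# Minimal, hypothetical illustration of policy-based data placement:
# declarative rules decide which storage pool a file lands in, so
# administrators manage policies rather than placing files by hand.
# Rules and pool names are invented for this example.

from dataclasses import dataclass
from typing import Callable, List

@dataclass
class FileInfo:
    name: str
    size_bytes: int
    days_since_access: int

@dataclass
class Rule:
    description: str
    matches: Callable[[FileInfo], bool]
    pool: str

RULES: List[Rule] = [
    Rule("database files stay on fast disk", lambda f: f.name.endswith(".db"), "gold"),
    Rule("untouched for 90+ days goes to nearline", lambda f: f.days_since_access >= 90, "nearline"),
    Rule("everything else on standard disk", lambda f: True, "silver"),
]

def place(f: FileInfo) -> str:
    """Return the pool chosen by the first matching rule."""
    for rule in RULES:
        if rule.matches(f):
            return rule.pool
    return "silver"  # unreachable given the catch-all rule, kept as a safety net

if __name__ == "__main__":
    for f in (FileInfo("orders.db", 5 << 30, 1),
              FileInfo("q1_report.pdf", 2 << 20, 200),
              FileInfo("notes.txt", 4 << 10, 3)):
        print(f"{f.name:>14} -> {place(f)}")
```

The appeal is that the policy is the single place where placement decisions live: change a rule and every new file follows it, which is the administrative model mainframe shops had already enjoyed for years.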
Today, these former Horizon-3 projects have become Horizon-2 and Horizon-1 products. They have evolved. Tivoli Storage Productivity Center, GPFS and SAN Volume Controller are all market leaders in their respective areas.
Tags:  storage sanfs storage+tank barry+whyte nfs sbsc cloud+storage cifs tpc sonas svc ebo sofs ibm http nas |