Tony Pearson is a Master Inventor and Senior IT Architect for the IBM Storage product line at the
IBM Systems Client Experience Center in Tucson, Arizona, and a featured contributor
to IBM's developerWorks. In 2016, Tony celebrates his 30th anniversary with IBM Storage. He is
the author of the Inside System Storage series of books. This blog is for the open exchange of ideas relating to storage and storage networking hardware, software and services.
(Short URL for this blog: ibm.co/Pearson)
My books are available on Lulu.com! Order your copies today!
"The postings on this site solely reflect the personal views of each author and do not necessarily represent the views, positions, strategies or opinions of IBM or IBM management."
(c) Copyright Tony Pearson and IBM Corporation.
All postings are written by Tony Pearson unless noted otherwise.
Tony Pearson is employed by IBM. Mentions of IBM products, solutions or services might be deemed "paid
endorsements" or "celebrity endorsements" by the US Federal Trade Commission.
Safe Harbor Statement: The information on IBM products is intended to outline IBM's general product direction and it should not be relied on in making a purchasing decision. The information on the new products is for informational purposes only and may not be incorporated into any contract. The information on IBM products is not a commitment, promise, or legal obligation to deliver any material, code, or functionality. The development, release, and timing of any features or functionality described for IBM products remains at IBM's sole discretion.
Tony Pearson is an active participant in local, regional, and industry-specific interests, and does not receive any special payments to mention them on this blog.
Tony Pearson receives part of the revenue proceeds from sales of books he has authored listed in the side panel.
Tony Pearson is not a medical doctor, and this blog does not reference any IBM product or service that is intended for use in the diagnosis, treatment, cure, prevention or monitoring of a disease or medical condition, unless otherwise specified on individual posts.
Well, it's Tuesday again, and today we had our third big storage launch of 2009! A lot got announced as part of IBM's big "Dynamic Infrastructure" marketing campaign. I will focus just on the
disk-related announcements today:
IBM System Storage DS8700
IBM adds a new model to its DS8000 series with the
[IBM System Storage DS8700]. Earlier this month, fellow blogger and arch-nemesis Barry Burke from EMC posted [R.I.P DS8300] on the mistaken assumption that the new DS8700 meant the DS8300 was going away, or that anyone who bought a DS8300 recently would be out of luck. Obviously, I could not respond until today's announcement, as the last thing I want to do is lose my job by disclosing confidential information. BarryB is wrong on both counts:
IBM will continue to sell the DS8100 and DS8300, in addition to the new DS8700.
Clients can upgrade their existing DS8100 or DS8300 systems to DS8700.
BarryB's latest post [What's In a Name - DS8700] is fair game, given all the fun and ridicule everyone had at his expense over EMC's "V-Max" name.
So the DS8700 is new hardware with only 4 percent new software. On the hardware side, it uses faster POWER6 processors instead of POWER5+, has faster PCI-e buses instead of the RIO-G loops, and faster four-port device adapters (DAs) for added bandwidth between cache and drives. The DS8700 can be ordered as a single-frame dual 2-way that supports up to 128 drives and 128GB of cache, or as a dual 4-way, consisting of one primary frame, and up to four expansion frames, with up to 384GB of cache and 1024 drives.
Not mentioned explicitly in the announcements were the things the DS8700 does not support:
ESCON attachment - Now that FICON is well-established for the mainframe market, there is no need to support the slower, bulkier ESCON options. This greatly reduced testing effort. The 2-way DS8700 can support up to 16 four-port FICON/FCP host adapters, and the 4-way can support up to 32 host adapters, for a maximum of 128 ports. The FICON/FCP host adapter ports can auto-negotiate between 4Gbps, 2Gbps and 1Gbps as needed.
LPAR mode - When IBM and HDS introduced LPAR mode back in 2004, it sounded like a great idea the engineers came up with. Most other major vendors followed our lead to offer similar "partitioning". However, it turned out to be what we call in the storage biz a "selling apple" not a "buying apple". In other words, something the salesman can offer as a differentiating feature, but that few clients actually use. It turned out that supporting both LPAR and non-LPAR modes merely doubled the testing effort, so IBM got rid of it for the DS8700.
Update: I have been reminded that IBM and HDS delivered LPAR mode within a month of each other back in 2004, so it was wrong of me to imply that HDS followed IBM's lead; development clearly happened largely in parallel at both companies before then. EMC was late to the "partition" party, but who's keeping track?
Initial performance tests show up to 50 percent improvement for random workloads, up to 150 percent improvement for sequential workloads, and up to 60 percent improvement in background data movement for FlashCopy functions. The results varied slightly between Fixed Block (FB) LUNs and Count-Key-Data (CKD) volumes, and I hope to see some SPC-1 and SPC-2 benchmark numbers published soon.
The DS8700 supports Metro Mirror, Global Mirror, and Metro/Global Mirror with the rest of the DS8000 series, as well as the ESS model 750, ESS model 800 and DS6000 series.
New 600GB FC and FDE drives
IBM now offers [600GB drives] for the DS4700 and DS5020 disk systems, as well as the EXP520 and EXP810 expansion drawers. In each case, we are able to pack up to 16 drives into a 3U enclosure.
Personally, I think the DS5020 should have been given a DS4xxx designation, as it resembles the DS4700
more than the other models of the DS5000 series. Back in 2006-2007, I was the marketing strategist for the IBM System Storage product line, and part of my job involved all of the meetings to name or rename products. Mostly I gave reasons why products should NOT be renamed, and why it was important to name products correctly at the beginning.
IBM System Storage SAN Volume Controller hardware and software
Fellow IBM Master Inventor Barry Whyte has been covering the latest on the [SVC 2145-CF8 hardware]. IBM put out a press release last week on this, and today is the formal announcement with prices and details. Barry's latest post
[SVC CF8 hardware and SSD in depth] covers just part of the entire
announcement.
The other part of the announcement was the [SVC 5.1 software] which can be loaded
on earlier SVC models 8F2, 8F4, and 8G4 to gain better performance and functionality.
To avoid confusion on what is hardware machine type/model (2145-CF8 or 2145-8A4) and what is software program (5639-VC5 or 5639-VW2), IBM has introduced two new [Solution Offering Identifiers]:
5465-028 Standard SAN Volume Controller
5465-029 Entry Edition SAN Volume Controller
The latter is designed for smaller deployments; it supports only a single SVC node-pair managing up to
150 disk drives, and is available in Raven Black or Flamingo Pink.
EXN3000 and EXP5060 Expansion Drawers
IBM offers the [EXN3000 for the IBM N series]. These expansion drawers can pack 24 drives into a 4U enclosure. The drives can be either all SAS or all SATA, in 300GB, 450GB, 500GB and 1TB capacities.
The [EXP5060 for the IBM DS5000 series] is a high-density expansion drawer that can pack up to 60 drives into a 4U enclosure. A DS5100 or DS5300
can handle up to eight of these expansion drawers, for a total of 480 drives.
IBM System Storage Productivity Center v1.4
The latest [System Storage Productivity Center (SSPC) v1.4] can manage all of your DS3000, DS4000, DS5000, DS6000 and DS8000 series disk systems, as well as SAN Volume Controller. You can get the SSPC built in two modes:
Pre-installed with Tivoli Storage Productivity Center Basic Edition. Basic Edition can be upgraded with license keys to the Data, Disk and Standard Editions, extending reporting and management to XIV, N series, and non-IBM disk systems.
Pre-installed with Tivoli Key Lifecycle Manager (TKLM). This can be used to manage the Full Disk Encryption (FDE) encryption-capable disk drives in the DS8000 and DS5000, as well as LTO and TS1100 series tape drives.
IBM Tivoli Storage FlashCopy Manager v2.1
The [IBM Tivoli Storage FlashCopy Manager V2.1] replaces two products with one. IBM used
to offer IBM Tivoli Storage Manager for Copy Services (TSM for CS), which protected Windows application data, and IBM Tivoli Storage Manager for Advanced Copy Services (TSM for ACS), which protected AIX application data.
The new product has some excellent advantages. FlashCopy Manager offers application-aware backup of LUNs containing SAP, Oracle, DB2, SQL Server and Microsoft Exchange data. It can use the point-in-time copy functions of the IBM DS8000, SVC and XIV, as well as the Volume Shadow Copy Services (VSS) interfaces of the IBM DS5000, DS4000 and DS3000 series disk systems. It is priced by the number of TB you copy, not by the speed or number of processors inside the server.
Don't let the name fool you. IBM FlashCopy Manager does not require that you use Tivoli Storage Manager (TSM) as your backup product. You can run IBM FlashCopy Manager on its own; it will manage your FlashCopy target versions on disk, and these can be backed up to tape or another disk using any backup product. However, if you are lucky enough to also be using TSM, then there is optional integration that allows TSM to manage the target copies, move them to tape, inventory them in its DB2 database, and provide complete reporting.
Yup, that's a lot to announce in one day. And this was just the disk-related portion of the launch!
In his Backup Blog, fellow blogger Scott Waterhouse from EMC has yet another post about Tivoli Storage Manager (TSM) titled [TSM and the Elephant]. He argues that more than just the cost of new TSM servers should be considered in any comparison, on the assumption that every time you deploy another server you have to attach fresh new disk storage and a brand new tape library to it, and hire an independent group of backup administrators to manage it. Of course, that is bull; people reuse much of their existing infrastructure and existing skilled labor pool every time new servers are added, as I tried to point out in my post [TSM Economies of Scale].
However, Scott does suggest that we should look at all the costs, not just the cost of a new server, which we in the industry call Total Cost of Ownership (TCO). Here is an excerpt:
Final point: there is actually a really important secondary point here--what is the TCO of your backup infrastructure. In some ways, TSM is one of the most expensive (number of servers and tape drives, for example), relative to other backup applications. However, I think it would be a really interesting exercise to critically examine the TCO of the various backup applications at different scales to evaluate if there is any genuine cost differentiation between them.
Fortunately, I have a recent TCO/ROI analysis for a large customer in the Eastern United States that compares their existing EMC Legato deployment to a new proposed TSM deployment. The assessment was performed by our IBM Tivoli ROI Analyst team, using a tool developed by Alinean. The process compares the TCO of the currently deployed solution (in this case EMC Legato) with the TCO of the proposed replacement solution (in this case IBM TSM) for 55,000 client nodes at expected growth rates over a three year period, and determines the amount of investment, cost savings and other benefits, and return on investment (ROI).
Here are the results:
"A risk adjusted analysis of the proposed solution's impact was conducted and it was projected that implementing the proposed solutions resulted in $16,174,919 of 3 year cumulative benefits. Of these projected benefits, $8,015,692 are direct benefits and $8,159,227 are indirect benefits.
Top cumulative benefits for the project include:
Backup Coverage Risk Avoidance - $6,749,796
Reduction in Maintenance of Competitive Products - $1,576,000
Reduction in Existing Tivoli Maintenance (Storage and Monitoring) - $1,490,000
IT Operations Labor Savings - Storage Management - $982,919
Network Bandwidth Savings - $575,196
Standardization - $366,667
Future cost avoidance of additional competitive licenses - $350,000
These benefits can be grouped regarding business impact as:
$6,456,025 in IT cost reductions
$1,559,667 in business operating efficiency improvements
$8,159,227 in business strategic advantage benefits
The proposed project is expected to help the company meet the following goals and drive the following benefits:
Reduce Business Risks $6,749,796
Consolidate and Standardize IT Infrastructure $4,975,667
Reduce IT Infrastructure Costs $2,057,107
Improve IT System Availability / Service Levels $1,409,431
Improve IT Staff Efficiency / Productivity $982,919
To implement the proposed project will require a 3 year cumulative investment of $5,760,094 including:
$0 in initial expenses
$4,650,000 in capital expenditures
$1,110,094 in operating expenditures
Comparing the costs and benefits of the proposed project using discounted cash flow analysis and factoring in a risk-adjusted discount rate of 9.5%, the proposed business case predicts:
Risk Adjusted Return on Investment (RA ROI) of 172%
Return on Investment (ROI) of 181%
Net Present Value (NPV) savings of $8,425,014
Payback period of 9.0 month(s)
Note: The project has been risk-adjusted for an overall deployment schedule of 5 months."
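For those who like to check the math, here is a minimal Python sketch (my own, not part of the Alinean report) that reproduces the headline ROI figure from the totals quoted above. The NPV and payback period depend on the year-by-year cash flows, which the excerpt does not break out, so only the simple ROI is recomputed here:

# Totals quoted in the risk-adjusted analysis above
direct_benefits     = 8_015_692
indirect_benefits   = 8_159_227
cumulative_benefits = direct_benefits + indirect_benefits      # $16,174,919

capital_expenditures   = 4_650_000
operating_expenditures = 1_110_094
investment = capital_expenditures + operating_expenditures     # $5,760,094

# Simple (non-discounted) 3-year ROI = net benefit / investment
roi = (cumulative_benefits - investment) / investment
print(f"3-year ROI: {roi:.0%}")    # prints "3-year ROI: 181%", matching the excerpt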
IBM Tivoli Storage Manager uses less bandwidth and fewer disk and tape storage resources than EMC Legato. Even for a large deployment of this kind, the payback period is only NINE MONTHS. Generally, if a new proposed investment has a payback period of less than 24 months, you have enough to get both the CFO and the CIO excited, so this one is a no-brainer.
Perhaps this helps explain why TSM enjoys a much larger market share than EMC Legato in the backup software marketplace. No doubt Scott might be able to come up with a counter-example, a very small business with fewer than 10 employees where an EMC Legato deployment might be less expensive than a comparable TSM deployment. However, when it comes to scalability, TSM is king. The majority of the Fortune 1000 companies use Tivoli Storage Manager, and IBM uses TSM internally for its own IT, managed storage services, and cloud computing facilities.
Well, it's Tuesday, and that means IBM announcements! Today's launch is bigger than usual, as there are a lot of Dynamic Infrastructure announcements throughout the company with a common theme: cloud computing and smart business systems that support the new way of doing things. Today, IBM announced its new "IBM Smart Archive" strategy, which integrates software, storage, servers and services into solutions that help meet the challenges of today and tomorrow. IBM has spent the past few years working across its various divisions and acquisitions to ensure that our clients have complete end-to-end solutions.
IBM is introducing new "Smart Business Systems" that can be used on-premises for private-cloud configurations, as well as by cloud-computing companies to offer IT as a service.
IBM [Information Archive] is the first to be unveiled, a disk-only or blended disk-and-tape Information Infrastructure solution that offers a "unified storage" approach with amazing flexibility for dealing with various archive requirements:
For those with applications using the IBM Tivoli Storage Manager (TSM) or IBM System Storage Archive Manager (SSAM) API of the IBM System Storage DR550 data retention solution, the Information Archive will provide a direct migration, supporting this API for existing applications.
For those with IBM N series using SnapLock or the File System Gateway of the DR550, the Information Archive will support various NAS protocols, deployed in stages, including NFS, CIFS, HTTP and FTP access, with Non-Erasable, Non-Rewriteable (NENR) enforcement that is compatible with current IBM N series SnapLock usage.
For those using NAS devices with PACS applications to store X-rays and other medical images, the Information Archive will provide similar NAS protocol interfaces. Information Archive will support both read-only data such as X-rays, as well as read/write data such as Electronic Medical Records.
Information Archive is not just for compliance data that was previously sent to WORM optical media. Instead, it can handle all kinds of data: rewriteable data, read-only data, and data that needs to be locked down for tamper protection. It can handle structured databases, emails, videos and unstructured files, as well as objects stored through the SSAM API.
The Information Archive has all the server, storage and software integrated together into a single machine type/model number. It is based on IBM's General Parallel File System (GPFS) to provide incredible scalability, the same clustered file system used by many of the top 500 supercomputers. Initially, Information Archive will support up to 304TB raw capacity of disk and Petabytes of tape. You can read the [Spec Sheet] for other technical details.
For those who prefer a more "customized" approach, similar to IBM Scale-Out File Services (SoFS), IBM has [Smart Business Storage Cloud]. IBM Global Services can customize a solution that is best for you, using many of the same technologies. In fact, IBM Global Services announced a variety of new cloud-computing services to help enterprises determine the best approach.
In a related announcement, IBM announced [LotusLive iNotes], which you can think of as a "business-ready" version of Google's GoogleApps, Gmail and GoogleCalendar. IBM is focused on security and reliability but leaves out the advertising and data mining that people have been forced to tolerate from consumer-oriented Web 2.0-based solutions. IBM's clients that are already familiar with the on-premises version of Lotus Notes will have no trouble using LotusLive iNotes.
There was actually a lot more announced today, which I will try to get to in later posts.
Well, I had a pleasant vacation. I took a trip up to beautiful Lake Powell in Northern Arizona as part of a "Murder Mystery Dinner" weekend. This trip was organized by AAA and Lake Powell in association with the professionals at [Murder Ink Productions] out of Phoenix.
The trip involved two busloads of people from Tucson and Phoenix driving up to Lake Powell, with a series of meals that introduced all the characters and gave out clues to solve a murder. At the end of the dinner on the last evening, we had to guess who dunnit, how, and why. I solved it, and got this lovely tee-shirt.
More importantly, the trip gave me a chance to read
[The Numerati] by Stephen Baker. The author explains all the different ways that "analysts" are able to crunch through large volumes of data to gain insight. He has chapters on how this is done for shoppers in retail sales, voters in upcoming elections, patients in medical care, and even matchmaking services like chemistry.com. As in the Murder Mystery Dinner, there are too many suspects and too many clues, but these number-crunchers, whom Mr. Baker calls The Numerati, are able to figure it all out through advanced business analytics.
FTC Notice: I recommend this book. I did not receive any compensation to mention this book on this blog, I did not receive a free copy of the book for this review, and I do not know the author. Everyone on my staff is reading this book, and I borrowed a copy from a co-worker.
If you don't understand how this all works, here is a quick 6-minute [video] on YouTube.
Mr. Baker mentions IBM's leadership in this area several times throughout his book. This week,
IBM unveiled
[New Offerings to Help Clients Better Manage Content With Analytics]. Information is like mighty rivers flowing, and it is necessary to get a sense of what is going on around us if we are to make this a smarter planet.
I am proud to announce that we have yet another IBM blogger for the storosphere. Rich Swain, from IBM's Research Triangle Park in Raleigh, North Carolina, will blog about
[News and Information on IBM’s N series].
Rich is a Field Technical Sales Specialist with deep-dive knowledge and experience.
He's already posted a dozen or so entries, to give you a feel for the level of technical detail he will provide.
Please welcome Rich by following his blog and posting comments on his posts.