This blog is for the open exchange of ideas relating to IBM Systems, storage and storage networking hardware, software and services.
(Short URL for this blog: ibm.co/Pearson)
Tony Pearson is a Master Inventor, Senior IT Architect and Event Content Manager for [IBM Systems Technical University] events. With over 30 years at IBM, Tony is a frequent traveler, speaking to clients at events throughout the world.
Lloyd Dean is an IBM Senior Certified Executive IT Architect in Infrastructure Architecture. Lloyd has held numerous senior technical roles during his 19-plus years at IBM. Most recently, he has been leading efforts across the Communication/CSI market as a senior Storage Solution Architect/CTS covering the Kansas City territory. In prior years, Lloyd supported industry accounts as a Storage Solution Architect, and before that as a Storage Software Solutions specialist in the ATS organization.
Lloyd currently supports North America storage sales teams in his Storage Software Solution Architecture SME role on the Washington Systems Center team. His current focus is IBM Cloud Private, and he will be delivering and supporting sessions at Think 2019 and Storage Technical University on the value of IBM storage in this solution, a key part of the IBM Cloud strategy. Lloyd maintains Subject Matter Expert status across the IBM Spectrum Storage software solutions. You can follow Lloyd on Twitter @ldean0558 and on LinkedIn as Lloyd Dean.
Tony Pearson's books are available on Lulu.com! Order your copies today!
Safe Harbor Statement: The information on IBM products is intended to outline IBM's general product direction and it should not be relied on in making a purchasing decision. The information on the new products is for informational purposes only and may not be incorporated into any contract. The information on IBM products is not a commitment, promise, or legal obligation to deliver any material, code, or functionality. The development, release, and timing of any features or functionality described for IBM products remains at IBM's sole discretion.
Tony Pearson is an active participant in local, regional, and industry-specific interests, and does not receive any special payments to mention them on this blog.
Tony Pearson receives part of the revenue proceeds from sales of books he has authored listed in the side panel.
Tony Pearson is not a medical doctor, and this blog does not reference any IBM product or service that is intended for use in the diagnosis, treatment, cure, prevention or monitoring of a disease or medical condition, unless otherwise specified on individual posts.
Continuing my week in Las Vegas for the Data Center Conference 2009, I attended a keynote session on Service Management, co-presented by two analysts.
One analyst was the wife of a CEO, and the other the wife of a CIO, so the two explained that there is a language gap between IT and business. They used the analogy of a clock: the business cares that the time shown on the front face is correct and ticking along properly, while behind the scenes the gears of the clock represent IT, finance, supply chain and other operations.
Based on recent surveys, there is only a 45 percent "alignment" between the goals of a CEO and those of a CIO. CEOs are concerned about decision making, workforce productivity, and customer satisfaction. CIOs, on the other hand, are worried about costs, operations and change initiatives. Both CEOs and CIOs are focused on innovations that can improve business process. Service management strives to bridge the language gap between business and IT by helping to drive operational excellence that benefits both CEO and CIO goals. Recent surveys found the key drivers for this are controlling costs, improving customer satisfaction, availability, agility and making better business decisions.
Unfortunately, in this economy, the idea of "transformation" is out, and "restructuring" is in. In much the same way that employees have abandoned career development in favor of simple job preservation, companies are focused on tactical solutions to get through this financial meltdown, rather than launching transformation projects like deploying Service Management tools.
How much influence does the CIO have on running the rest of the business? Various surveys have found the following, ranked from most influential to least:
5-9 percent, Enterprise Leader
15-18 percent, Trusted Ally
25-32 percent, Partner
27-35 percent, Transactional
7-20 percent, At Risk
The bottom rank not only has little or no influence, but is at risk of losing its job. Evaluations based on a maturity model find many I&O organizations in trouble: only 11 percent are taking proactive measures, 59 percent are committed to improvement, and 30 percent are merely aware of the problems.
IT Service Management tries to bring a similar discipline as Portfolio Management and Application Lifecycle Management. Why can't IT be treated like any other part of the business portfolio? What is the business value of IT? IT can help a business run, grow and even transform. IT can help consolidate and centralize shared services to help leverage resources and offer cost optimizations not just for itself, but for the business as a whole.
CIOs that can adopt IT Service Management can have a "Jacks or Better" chance for a seat at the executive table to help drive the business forward.
This week I am at the Data Center Conference 2009 in Las Vegas. There are some 1700 people registered this year for this conference, representing a variety of industries like the public sector, services, finance, healthcare and manufacturing. A survey of the attendees found:
55 percent are at this conference for the first time.
18 percent once before, like me
15 percent two or three times before
12 percent four or more times before
Plans for 2010 IT budgets were split evenly, one third planning to spend more, one third planning to spend about the same, and the final third looking to cut their IT budgets even further than in 2009. The biggest challenges were Power/Cooling/Floorspace issues, aligning IT with Business goals, and modernizing applications. The top three areas of IT spend will be for Data Center facilities, modernizing infrastructure, and storage.
There are six keynote sessions scheduled, and 66 breakout sessions for the week. A "Hot Topic" was added on "Why the marketplace prefers one-stop shopping" which plays to the strengths of IT supermarkets like IBM, encourages HP to acquire EDS and 3Com, and forces specialty shops like Cisco and EMC to form alliances.
Day 2 began with a series of keynote sessions. Normally when I see "IO" or "I/O", I immediately think of input/output, but here "I&O" refers to Infrastructure and Operations.
Business Sensitivity Analysis leads to better I&O Solutions
The analyst gave examples from Alan Greenspan's biography to emphasize his point that what this financial meltdown has caused is a decline in trust. Nobody trusts anyone else. This is true between people, companies, and entire countries. While the GDP declined 2 percent in 2009 worldwide, it is expected to grow 2 percent in 2010, with some emerging markets expected to grow faster, such as India (7 percent) and China (10 percent). Industries like Healthcare, Utilities and Public sector are expected to lead the IT spend by 2011.
While IT spend is expected to grow only 1 to 5 percent in 2010, there is a significant shift from Capital Expenditures (CapEx) to Operational Expenses (OpEx). OpEx represented only 64 percent of the IT budget in 2004, but today represents 76 percent and growing. Many companies are keeping their aging IT hardware in service longer, beyond traditional depreciation schedules. The analyst estimated over 1 million servers were kept longer than planned in 2009, and another 2 million will be kept longer in 2010.
An example of hardware kept too long was the November 17 delay of some 2,000 flights in the United States, caused by a failed router card in Utah that was part of the air traffic control system. Modernizing this system is estimated to cost $40 billion US dollars.
Top 10 priorities for the CIO were Virtualization, Cloud Computing, Business Intelligence (BI), Networking, Web 2.0, ERP applications, Security, Data Management, Mobile, and Collaboration. There is a growth in context-aware computing, connecting operational technologies with sensors and monitors to feed back into IT, with an opportunity for pattern-based strategy. Borrowing a concept from the military, "OpTempo" allows a CIO to speed up or slow down various projects as needed. By seeking out patterns, developing models to understand those patterns, and then adapting the business to fit those patterns, a strategy can be developed to address new opportunities.
Infrastructure and Operations: Charting the course for the coming decade
This analyst felt that strategies should not focus only on looking forward, but also left and right, at what IBM calls "adjacent spaces". He covered a variety of hot topics:
65 percent of the energy used to run x86 servers accomplishes nothing; the average x86 server runs at only 7 to 12 percent CPU utilization.
Virtualization of servers, networks and storage is transforming IT into one big logical system image, which plays well with Green IT initiatives. He joked that this is what IBM offered 20 years ago with mainframe "Single System Image" sysplexes, and that we have come full circle.
One area of virtualization is desktop images (VDI). This goes back to the benefits of green-screen 3270 terminals of the mainframe era, eliminating the headaches of managing thousands of PCs and instead having thin clients rely heavily on centralized services.
The deluge of data continues, as more convenient access drives demand for more data. The analyst estimates storage capacity will increase 650 percent over the next five years, with over 80 percent of it unstructured data. Automated storage tiering, a la Hierarchical Storage Manager (HSM) from the mainframe era, is once again popular, along with new technologies like thin provisioning and data deduplication.
IT is also being asked to do complex resource tracking, such as power consumption. In the past, IT and Facilities had separate budgets, but that is beginning to change.
The fastest growing social network was Twitter, with 1382 percent growth in 2009; 69 percent of the new users who joined this year were 39 to 51 years old. By comparison, Facebook grew only 249 percent. Social media is a big factor both inside and outside a company, and management should be aware of what Tweets, Blogs, and others in the collective are saying about you and your company.
The average 18 to 25 year old sends out 4000 text messages per month. In any 24-hour period, more text messages are sent than there are people on the planet (6.7 billion). Unified Communications is also getting attention: the idea that all forms of communication, from email to texts to voice over IP (VoIP), can be managed centrally.
Smart phones and other mobile devices are changing the way people view laptops. Many business tasks can be handled by these smaller devices.
It costs more in energy to run an x86 server for three years than it costs to buy it. The idea of blade servers and componentization can help address that.
Mashups and Portals are an unrecognized opportunity. An example of a Mashup is mapping a list of real estate listings to Google Maps so that you can see all the listings arranged geographically.
Lastly, Cloud Computing will change the way people deliver IT services. Amusingly, the conference was playing "Both Sides Now" by Joni Mitchell, which has [lyrics about clouds].
Unlike other conferences that clump all the keynotes at the beginning, this one spreads the "Keynote" sessions out across several days, so I will cover the rest over separate posts.
This week I am blogging from beautiful Caesars Palace hotel in Las Vegas, Nevada to report on what I see and hear at the 28th annual Data Center Conference. Today was simply registration, which opened at 4pm, and I was able to get my conference backpack, badge, and details of the week.
Already, I can tell there will be more people here, and it looks like the economy is on the rebound versus last year. Here are my posts from 12 months ago when I attended this conference in 2008:
This year, we will have the IBM Portable Modular Data Center (PMDC) with XIV and iDataPlex inside, as well as several subject matter experts joining me at the solution center. Look for us in the "Hunter Green" shirts.
I almost sprayed coffee all over my screen when I read the post [Dead End] from fellow EMC blogger Mark Twomey on his StorageZilla blog. In it he implies that you should only consider storage technologies based on x86 technologies such as those from Intel, not other CPU technologies like POWER or MIPS.
When IBM first came out with the SAN Volume Controller in 2003, we were able to show that adding Intel-based SVC nodes can improve the performance and functionality of POWER-based DMX boxes from EMC. EMC salesmen often retorted with "Yes, but do you really want to risk your mission-critical data going through an Intel-based processor solution?" This FUD implied that Intel had a bad reputation for quality and reliability. The original Symmetrix boxes were based on Motorola 68000s, but EMC modernized to use IBM's POWER chips in later models. EMC's previous attempt to use Intel technology was the EMC Invista, a commercial failure. It is no surprise, then, that EMC DMX customers are scared to death to move their mission-critical data over to the Intel-based V-Max.
I have found the primary reason people fear Intel-based solutions is their experience with poorly-written Windows programs. There were enough of these that everyone either has personal experience with them or knows someone who has, and that was enough.
It reminds me of the time I was in Vác, Hungary, where we manufacture the DS8000 series and SAN Volume Controller, giving a lab tour to a set of prospective clients. Rows and rows of beautiful Hungarian women sliding disk drives into place, and big hefty Hungarian beefcake moving the finished units to their appropriate places. The head of the facility explained all about the hardware technology, and how we check and double-check all of the equipment, individually and together as a system. One client stated, "Yes, but how often are problems from the hardware? We find nearly all of our problems on disk systems, from whichever storage vendor we buy, are in the microcode." It's true.
Both Intel-based processors and POWER-based processors have all the technological functions needed to run storage systems. The difference is all in the microcode. So, if you are looking for safe and stable microcode, the IBM System Storage DS8700 continues its POWER-based tradition for compatibility with previous models. For those that demand x86-based units, the IBM SAN Volume Controller has been around since 2003, the XIV Storage System has been in production since 2005, and our IBM N series are also Intel-based, running Version 7 of the ONTAP operating system.
For those who want to meet me in person, there are two opportunities coming up in December.
Data Center Conference, December 1-4
Once again, I will be blogging from Caesars Palace Las Vegas at this year's [Data Center Conference 2009]! Last year's conference was a blast, and this one looks to be quite exciting. IBM is again a premier sponsor. Scheduled to speak are the following IBM executives:
Helene Armitage, the new General Manager of System Software, on "IT-Wide Virtualization, A Prerequisite for a Truly Dynamic Infrastructure"
Steve Sams, the VP of Sites and Facilities, on "Data Center Actions Your CFO Will Love"
Barry Rudolph, the VP of System Storage, on "Meeting the Information Infrastructure Challenge"
We will also have an IBM booth at the Solutions Showcase, showing off the latest in Cloud Computing, Service Management, Information Infrastructure, and Workload-optimized systems. You will be able to schedule one-on-one sessions with IBM executives and subject matter experts. Best of all, we will have on display a Portable Modular Data Center [PMDC] that can hold a fully operational data center in a standard [20 foot shipping container].
IBM Virtualization and Consolidation Briefing, December 15
This is being done "open house" style. If you can get yourself to the IBM Tucson Executive Briefing Center, IBM will provide you breakfast, a series of presentations, lunch, and then even more presentations. Your stomach and brain will be full by the end of the day. Here is a list of the presentations:
Well, it's Tuesday again, and we have more IBM announcements.
XIV asynchronous mirror
For those not using XIV behind SAN Volume Controller, [XIV now offers native asynchronous mirroring] support to another XIV far, far away. Unlike other disk systems that are limited to two or three sites, an XIV can mirror to up to 15 other sites. The mirroring can be at the individual volume, or a consistency group of multiple volumes. Each mirror pair can have its own recovery point objective (RPO). For example, a consistency group of mission critical application data might be given an RPO of 30 seconds, but less important data might be given an RPO of 20 minutes. This allows the XIV to prioritize packets it sends across the network.
As with XIV synchronous mirror, this new asynchronous mirror feature can send the data over either its Fibre Channel ports (via FCIP) or its Ethernet ports.
The IBM System Storage SAN384B and SAN768B directors now offer [two new blades!]
A 24-port FCoCEE blade, where each port can handle 10Gb Converged Enhanced Ethernet (CEE). CEE can be used to transmit Fibre Channel, TCP/IP, iSCSI and other Ethernet protocols. These ports connect directly to a server's converged network adapter (CNA) cards.
A 24-port mixed blade, with 12 FC ports (1Gbps, 2Gbps, 4Gbps, 8Gbps), 10 Ethernet ports (1GbE) and 2 Ethernet ports (10GbE). This would connect to traditional server NIC, TOE and HBA cards, as well as traditional NAS, iSCSI and FC based storage devices.
IBM also announced the IBM System Storage [SAN06B-R Fibre Channel router]. This has 16 FC ports (1Gbps up to 8Gbps) and six Ethernet ports (1GbE), with support for both FC routing as well as FCIP extended distance support.
With the holiday season coming up at the end of the year, now is a great time to ask Santa for a new shiny pair of XIV systems, and some extra networking gear to connect them.
Well, I had a pleasant vacation. I took a trip up to beautiful Lake Powell in Northern Arizona as part of a "Murder Mystery Dinner" weekend. This trip was organized by AAA and Lake Powell in association with the professionals at [Murder Ink Productions] out of Phoenix.
The trip involved two busloads of people from Tucson and Phoenix driving up to Lake Powell, with a series of meals that introduced all the characters and gave out clues to solve a murder. At the end of the dinner on the last evening, we had to guess who dunnit, how, and why. I solved it, and got this lovely tee-shirt.
More importantly, the trip gave me a chance to read [The Numerati] by Stephen Baker. The author explains all the different ways that "analysts" are able to crunch through large volumes of data to gain insight. He has chapters on how this is done for shoppers in retail sales, voters in upcoming elections, patients receiving medical care, and even matchmaking services like chemistry.com. Like the Murder Mystery Dinner, there are too many suspects and too many clues, but these number-crunchers, which Mr. Baker calls The Numerati, are able to sort it all out through advanced business analytics.
FTC Notice: I recommend this book. I did not receive any compensation to mention this book on this blog, I did not receive a free copy of the book for this review, and I do not know the author. Everyone on my staff is reading this book, and I borrowed a copy from a co-worker.
If you don't understand how this all works, here is a quick 6-minute [video] on YouTube.
In his Backup Blog, fellow blogger Scott Waterhouse from EMC has yet another post about Tivoli Storage Manager (TSM), titled [TSM and the Elephant]. He argues that only the cost of new TSM servers should be considered in any comparison, on the assumption that every time you deploy another server, you have to attach fresh new disk storage and a brand new tape library to it, and hire an independent group of backup administrators to manage it. Of course, that is bull; people reuse much of their existing infrastructure and existing skilled labor pool every time new servers are added, as I tried to point out in my post [TSM Economies of Scale].
However, Scott does suggest that we should look at all the costs, not just the cost of a new server, which we in the industry call Total Cost of Ownership (TCO). Here is an excerpt:
Final point: there is actually a really important secondary point here--what is the TCO of your backup infrastructure. In some ways, TSM is one of the most expensive (number of servers and tape drives, for example), relative to other backup applications. However, I think it would be a really interesting exercise to critically examine the TCO of the various backup applications at different scales to evaluate if there is any genuine cost differentiation between them.
Fortunately, I have a recent TCO/ROI analysis for a large customer in the Eastern United States that compares their existing EMC Legato deployment to a new proposed TSM deployment. The assessment was performed by our IBM Tivoli ROI Analyst team, using a tool developed by Alinean. The process compares the TCO of the currently deployed solution (in this case EMC Legato) with the TCO of the proposed replacement solution (in this case IBM TSM) for 55,000 client nodes at expected growth rates over a three year period, and determines the amount of investment, cost savings and other benefits, and return on investment (ROI).
Here are the results:
"A risk adjusted analysis of the proposed solution's impact was conducted and it was projected that implementing the proposed solutions resulted in $16,174,919 of 3 year cumulative benefits. Of these projected benefits, $8,015,692 are direct benefits and $8,159,227 are indirect benefits.
Top cumulative benefits for the project include:
Backup Coverage Risk Avoidance - $6,749,796
Reduction in Maintenance of Competitive Products - $1,576,000
Reduction in Existing Tivoli Maintenance (Storage and Monitoring) - $1,490,000
IT Operations Labor Savings - Storage Management - $982,919
Network Bandwidth Savings - $575,196
Standardization - $366,667
Future cost avoidance of additional competitive licenses - $350,000
These benefits can be grouped regarding business impact as:
$6,456,025 in IT cost reductions
$1,559,667 in business operating efficiency improvements
$8,159,227 in business strategic advantage benefits
The proposed project is expected to help the company meet the following goals and drive the following benefits:
Reduce Business Risks $6,749,796
Consolidate and Standardize IT Infrastructure $4,975,667
Reduce IT Infrastructure Costs $2,057,107
Improve IT System Availability / Service Levels $1,409,431
Improve IT Staff Efficiency / Productivity $982,919
To implement the proposed project will require a 3 year cumulative investment of $5,760,094 including:
$0 in initial expenses
$4,650,000 in capital expenditures
$1,110,094 in operating expenditures
Comparing the costs and benefits of the proposed project using discounted cash flow analysis and factoring in a risk-adjusted discount rate of 9.5%, the proposed business case predicts:
Risk Adjusted Return on Investment (RA ROI) of 172%
Return on Investment (ROI) of 181%
Net Present Value (NPV) savings of $8,425,014
Payback period of 9.0 month(s)
Note: The project has been risk-adjusted for an overall deployment schedule of 5 months."
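(A quick sanity check on those figures, using my own arithmetic rather than the analyst's: the ROI figure is cumulative benefits net of the investment, divided by the investment:

    ($16,174,919 - $5,760,094) / $5,760,094 = $10,414,825 / $5,760,094 ≈ 1.81, or 181%

This matches the 181% quoted above; the lower risk-adjusted figure of 172% presumably reflects the 9.5% discount rate and risk adjustments applied to the same cash flows.)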
IBM Tivoli Storage Manager uses less bandwidth and fewer disk and tape storage resources than EMC Legato. Even for a large deployment of this kind, the payback period is only NINE MONTHS. Generally, if a proposed investment has a payback period under 24 months, you have enough to get both the CFO and the CIO excited, so this one is a no-brainer.
Perhaps this helps explain why TSM enjoys a much larger market share than EMC Legato in the backup software marketplace. No doubt Scott might be able to come up with a counter-example, a very small business with fewer than 10 employees where an EMC Legato deployment might be less expensive than a comparable TSM deployment. However, when it comes to scalability, TSM is king. The majority of the Fortune 1000 companies use Tivoli Storage Manager, and IBM uses TSM internally for its own IT, managed storage services, and cloud computing facilities.
Please welcome new IBM blogger Keith Stevenson; his new blog is called [Infovore]. He gives his take on the big October 20 announcement we had this week, and will continue to cover topics related to storage of information.
Well, it's Tuesday again, and today we had our third big storage launch of 2009! A lot got announced today as part of IBM's big "Dynamic Infrastructure" marketing campaign. I will just focus on the disk-related announcements today:
IBM System Storage DS8700
IBM adds a new model to its DS8000 series with the [IBM System Storage DS8700]. Earlier this month, fellow blogger and arch-nemesis Barry Burke from EMC posted [R.I.P DS8300], based on the mistaken assumption that the new DS8700 meant the DS8300 was going away, or that anyone who bought a DS8300 recently would be out of luck. Obviously, I could not respond until today's announcement, as the last thing I want to do is lose my job disclosing confidential information. BarryB is wrong on both counts:
IBM will continue to sell the DS8100 and DS8300, in addition to the new DS8700.
Clients can upgrade their existing DS8100 or DS8300 systems to DS8700.
BarryB's latest post [What's In a Name - DS8700] is fair game, given all the fun and ridicule everyone had at his expense over EMC's "V-Max" name.
So the DS8700 is new hardware with only 4 percent new software. On the hardware side, it uses faster POWER6 processors instead of POWER5+, has faster PCI-e buses instead of the RIO-G loops, and faster four-port device adapters (DAs) for added bandwidth between cache and drives. The DS8700 can be ordered as a single-frame dual 2-way that supports up to 128 drives and 128GB of cache, or as a dual 4-way, consisting of one primary frame, and up to four expansion frames, with up to 384GB of cache and 1024 drives.
Not mentioned explicitly in the announcements were the things the DS8700 does not support:
ESCON attachment - Now that FICON is well-established for the mainframe market, there is no need to support the slower, bulkier ESCON options. This greatly reduced testing effort. The 2-way DS8700 can support up to 16 four-port FICON/FCP host adapters, and the 4-way can support up to 32 host adapters, for a maximum of 128 ports. The FICON/FCP host adapter ports can auto-negotiate between 4Gbps, 2Gbps and 1Gbps as needed.
LPAR mode - When IBM and HDS introduced LPAR mode back in 2004, it sounded like a great idea the engineers came up with. Most other major vendors followed our lead to offer similar "partitioning". However, it turned out to be what we call in the storage biz a "selling apple" not a "buying apple". In other words, something the salesman can offer as a differentiating feature, but that few clients actually use. It turned out that supporting both LPAR and non-LPAR modes merely doubled the testing effort, so IBM got rid of it for the DS8700.
Update: I have been reminded that both IBM and HDS delivered LPAR mode within a month of each other back in 2004, so it was wrong for me to imply that HDS followed IBM's lead when obviously development happened in both companies for the most part concurrently prior to that. EMC was late to the "partition" party, but who's keeping track?
Initial performance tests show up to 50 percent improvement for random workloads, up to 150 percent improvement for sequential workloads, and up to 60 percent improvement in background data movement for FlashCopy functions. The results varied slightly between Fixed Block (FB) LUNs and Count-Key-Data (CKD) volumes, and I hope to see some SPC-1 and SPC-2 benchmark numbers published soon.
The DS8700 supports Metro Mirror, Global Mirror, and Metro/Global Mirror with the rest of the DS8000 series, as well as the ESS model 750, ESS model 800 and DS6000 series.
New 600GB FC and FDE drives
IBM now offers [600GB drives] for the DS4700 and DS5020 disk systems, as well as the EXP520 and EXP810 expansion drawers. In each case, we are able to pack up to 16 drives into a 3U enclosure.
Personally, I think the DS5020 should have been given a DS4xxx designation, as it resembles the DS4700 more than the other models of the DS5000 series. Back in 2006-2007, I was the marketing strategist for the IBM System Storage product line, and part of my job involved all of the meetings to name or rename products. Mostly I gave reasons why products should NOT be renamed, and why it was important to name the products correctly at the beginning.
IBM System Storage SAN Volume Controller hardware and software
Fellow IBM Master Inventor Barry Whyte has been covering the latest on the [SVC 2145-CF8 hardware]. IBM put out a press release last week on this, and today is the formal announcement with prices and details. Barry's latest post, [SVC CF8 hardware and SSD in depth], covers just part of the entire announcement.
The other part of the announcement was the [SVC 5.1 software], which can be loaded on earlier SVC models 8F2, 8F4, and 8G4 to gain better performance and functionality.
To avoid confusion on what is hardware machine type/model (2145-CF8 or 2145-8A4) and what is software program (5639-VC5 or 5639-VW2), IBM has introduced two new [Solution Offering Identifiers]:
5465-028 Standard SAN Volume Controller
5465-029 Entry Edition SAN Volume Controller
The latter is designed for smaller deployments; it supports only a single SVC node-pair managing up to 150 disk drives, and is available in Raven Black or Flamingo Pink.
EXN3000 and EXP5060 Expansion Drawers
IBM offers the [EXN3000 for the IBM N series]. These expansion drawers can pack 24 drives into a 4U enclosure. The drives can be either all SAS or all SATA, in 300GB, 450GB, 500GB and 1TB capacities.
The [EXP5060 for the IBM DS5000 series] is a high-density expansion drawer that can pack up to 60 drives into a 4U enclosure. A DS5100 or DS5300 can handle up to eight of these expansion drawers, for a total of 480 drives.
Pre-installed with Tivoli Storage Productivity Center Basic Edition. Basic Edition can be upgraded with license keys to the Data, Disk and Standard Editions, extending reporting and management to XIV, N series, and non-IBM disk systems.
Pre-installed with Tivoli Key Lifecycle Manager (TKLM). This can be used to manage the Full Disk Encryption (FDE) encryption-capable disk drives in the DS8000 and DS5000, as well as LTO and TS1100 series tape drives.
IBM Tivoli Storage FlashCopy Manager v2.1
The [IBM Tivoli Storage FlashCopy Manager V2.1] replaces two products with one. IBM used to offer IBM Tivoli Storage Manager for Copy Services (TSM for CS), which protected Windows application data, and IBM Tivoli Storage Manager for Advanced Copy Services (TSM for ACS), which protected AIX application data.
The new product has some excellent advantages. FlashCopy Manager offers application-aware backup of LUNs containing SAP, Oracle, DB2, SQL Server and Microsoft Exchange data. It can support IBM DS8000, SVC and XIV point-in-time copy functions, as well as the Volume Shadow Copy Services (VSS) interfaces of the IBM DS5000, DS4000 and DS3000 series disk systems. It is priced by the number of TB you copy, not by the speed or number of CPU processors inside the server.
Don't let the name fool you. IBM FlashCopy Manager does not require that you use Tivoli Storage Manager (TSM) as your backup product. You can run IBM FlashCopy Manager on its own, and it will manage your FlashCopy target versions on disk, and these can be backed up to tape or another disk using any backup product. However, if you are lucky enough to also be using TSM, then there is optional integration that allows TSM to manage the target copies, move them to tape, inventory them in its DB2 database, and provide complete reporting.
Yup, that's a lot to announce in one day. And this was just the disk-related portion of the launch!
I am proud to announce we have yet another IBM blogger for the storosphere: Rich Swain from IBM's Research Triangle Park in Raleigh, North Carolina will blog about [News and Information on IBM’s N series].
Rich is a Field Technical Sales Specialist with deep-dive knowledge and experience.
He's already posted a dozen or so entries, to give you a feel for the level of technical detail he will provide.
Please welcome Rich by following his blog and posting comments on his posts.
Well, it's Wednesday! Normally, IBM makes its announcements on Tuesdays, but this week that landed on the 13th, and some people are superstitious, so we pushed it back to today. Fellow IBM Master Inventor Barry Whyte starts the first in a series of posts with: New SVC v5 CF8 node with native SSD support.
There are really two separate items being announced for the IBM System Storage SAN Volume Controller (SVC):
SVC v5 software
The software moves from a 32-bit kernel to a 64-bit kernel. Fortunately, IBM had the foresight back in 2005 to know this would happen, so models 8F2, 8F4 and 8G4 can be upgraded to this new software level and gain new functionality, because these models have 64-bit capable processors. Those with six-year-old 4F2 nodes will continue to run SVC 4.3.1, but should consider that it's about time to upgrade.
New 2145-CF8 model
The CF8 is based on the IBM System x 3550M2. Each node can have up to 4 Solid-State Drives (SSD) that can be treated as SVC Managed Disk Groups. Virtual disks can easily be migrated from hard disk drives (HDD) over to SSD, processed, and then moved back to HDD. By treating the SSD as managed disks, rather than as an extension of the cache, we are able to support all of the features and functions in a seamless manner.
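For the command-line inclined, here is roughly what that migration looks like. This is a hedged sketch from memory of the SVC CLI, using hypothetical virtual disk and managed disk group names; consult the SVC 5.1 command reference for the exact syntax:

    svctask migratevdisk -vdisk db_vdisk01 -mdiskgrp SSD_GRP -threads 4
    ... run the I/O-intensive workload while the volume lives on SSD ...
    svctask migratevdisk -vdisk db_vdisk01 -mdiskgrp HDD_GRP

The migration is transparent to the host, which is the payoff of treating SSD as just another tier of managed disk.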
As Barry says, IBM has been working on this for quite a while, and based on initial responses looks to be quite successful in the market!
(Note: I have been informed that this week the U.S. Federal Trade Commission has [announced an update] to its [16 CFR Part 255: Guides Concerning the Use of Endorsements and Testimonials in Advertising]. As if it were not obvious enough already, I must emphasize that I work for IBM, IBM provides me all the equipment and related documentation that I need to blog about IBM solutions, and I am paid to blog as part of my job description. Both my boss and I agree I am not paid enough, but that is another matter. Beginning December 1, 2009, all positive mentions of IBM products, solutions and services on this blog might be considered a "celebrity endorsement" by the FTC and others under these new guidelines. Negative mentions of IBM products are probably typos.)
At a conference once, a presenter discussing tips and techniques for public speaking told everyone to be aware that everyone in the audience is "tuned into radio station WIIFM" (What's In It For Me). If a member of the audience cannot figure out why the information being presented is relevant to them individually, they may not pay attention for long. Likewise, when it comes to archiving data for long term retention, I think many people are tuned into KEFM (the Keep Everything Forever methodology). Two classic articles from Drew Robb on the subject are [Can Data Ever Be Deleted?] and [Experts Question 'Keep Everything' Philosophy].
(Note: For those of my readers who do not live in the US, most radio stations start with the letter "K" if they are on the left half of the country, and "W" if they are on the right half. See Thomas H. White's [Early Radio History] to learn more.)
Contrary to popular belief, IBM would rather have its clients implement a viable archive strategy than just mindlessly buy more disk and tape for a "Keep Everything Forever" methodology. Keeping all information around forever can be a liability, as data that you store can be used against you in a court of law. It can also make it difficult to find the information you do need, because the sheer volume of information to sort through makes the process more time consuming.
The problem with most archive storage solutions is that they are inflexible, treating all data the same under a common set of rules. The IBM Information Archive is different. You can have up to three separate "collections". Each collection can have its own set of policies and rules. You can have a collection that is locked down for compliance with full Non-Erasable, Non-Rewriteable (NENR) enforcement, and another collection that allows full read/write/delete capability.
Each collection can consist of either files or objects, unlike other storage devices that force you to convert files into objects, or objects into files, for their own benefit.
IBM Information Archive is scalable enough to support up to a billion files or objects per collection.
Each collection can span storage tiers, even across disk and tape resources.
Object collections are accessed using IBM System Storage Archive Manager (SSAM) application programming interface (API). People who use IBM Tivoli Storage Manager (TSM) archive or IBM System Storage DR550 are already familiar with this interface. An object can represent the archived slice of a repository, a set of rows from a database, a collection of emails from an individual mailbox user, etc.
File collections can be used for any type of data you would store on a NAS device. This includes databases, email repositories, static Web pages, seismic data, user documents, spreadsheets, presentations, medical images, photos, videos, and so on.
The IBM Information Archive solution was designed to work with a variety of Enterprise Content Management (ECM) software, and is part of the overall IBM Smart Archive strategy.
Well, it's Tuesday, and that means IBM announcements! Today's batch is bigger than usual, as there are a lot of Dynamic Infrastructure announcements throughout the company with a common theme: cloud computing and smart business systems that support the new way of doing things. Today, IBM announced its new "IBM Smart Archive" strategy, which integrates software, storage, servers and services into solutions that help meet the challenges of today and tomorrow. IBM has spent the past few years working across its various divisions and acquisitions to ensure that our clients have complete end-to-end solutions.
IBM is introducing new "Smart Business Systems" that can be used on-premises for private-cloud configurations, as well as by cloud-computing companies to offer IT as a service.
IBM [Information Archive] is the first to be unveiled, a disk-only or blended disk-and-tape Information Infrastructure solution that offers a "unified storage" approach with amazing flexibility for dealing with various archive requirements:
For those with applications using the IBM Tivoli Storage Manager (TSM) or IBM System Storage Archive Manager (SSAM) API of the IBM System Storage DR550 data retention solution, the Information Archive will provide a direct migration, supporting this API for existing applications.
For those with IBM N series using SnapLock, or the File System Gateway of the DR550, the Information Archive will support various NAS protocols, deployed in stages, including NFS, CIFS, HTTP and FTP access, with Non-Erasable, Non-Rewriteable (NENR) enforcement that is compatible with current IBM N series SnapLock usage.
For those using NAS devices with PACS applications to store X-rays and other medical images, the Information Archive will provide similar NAS protocol interfaces. Information Archive will support both read-only data such as X-rays, as well as read/write data such as Electronic Medical Records.
Information Archive is not just for compliance data that was previously sent to WORM optical media. Instead, it can handle all kinds of data, rewriteable data, read-only data, and data that needs to be locked down for tamper protection. It can handle structured databases, emails, videos and unstructured files, as well as objects stored through the SSAM API.
The Information Archive has all the server, storage and software integrated together under a single machine type/model number. It is based on IBM's General Parallel File System (GPFS), the same clustered file system used by many of the top 500 supercomputers, to provide incredible scalability. Initially, Information Archive will support up to 304TB raw capacity of disk and petabytes of tape. You can read the [Spec Sheet] for other technical details.
For those who prefer a more "customized" approach, similar to IBM Scale-Out File Services (SoFS), IBM has [Smart Business Storage Cloud]. IBM Global Services can customize a solution that is best for you, using many of the same technologies. In fact, IBM Global Services announced a variety of new cloud-computing services to help enterprises determine the best approach.
In a related announcement, IBM announced [LotusLive iNotes], which you can think of as a "business-ready" version of Google's GoogleApps, Gmail and GoogleCalendar. IBM is focused on security and reliability, but leaves out the advertising and data mining that people have been forced to tolerate from consumer-oriented Web 2.0-based solutions. IBM clients already familiar with the on-premises version of Lotus Notes will have no trouble using LotusLive iNotes.
There was actually a lot more announced today, which I will try to get to in later posts.
Bruce Allen from BR Allen Associates LLC, an IT technology strategy and consulting firm, has written an excellent 9-page White Paper contrasting IBM and EMC's latest strategies. Here are some key excerpts:
"The term “information infrastructure” is over 40 years old, but its characteristics and requirements in today’s world are quite new indeed. Specifically, federating all storage enterprise wide, consolidating and standardizing onto virtualized, high-capacity media, and enabling dynamic, cloud-ready provisioning are among the major new IT challenges. Moreover, continued explosive storage growth demands that a systematic approach be crafted to address the full spectrum of current and future (information) compliance, availability, retention and security goals. For many customers, this transformation must occur amidst a storage growth rate of 50%-70% CAGR.
...IBM’s Information Infrastructure focus is a core element and foundational pillar in its Dynamic Infrastructure and New Intelligence initiatives, both well defined and tightly coupled to an umbrella vision and strategy called “Smarter Planet.” It is also important to remember that IBM has its own vast, internal infrastructure, and is transforming it in the same manner prescribed to customers. IBM’s increased investment in solution centers and expertise to develop and test drive customer solutions demonstrates its resolve in this area.
...In contrast, storage vendor EMC references information infrastructure as half of its bifurcated strategy, with virtualization being the other half. The two are represented by slightly overlapping circles, and interestingly, these two circles essentially mirror the EMC organization. ...Analysis of both the strategy and the organization indicates a continued strong product focus, a stark contrast to IBM’s strategy that puts solutions first and products second.
...IBM’s Information Infrastructure strategy and portfolio takes a more holistic approach and appears to be shifting its own organizations and partners from pure product focus to a true solution orientation that more directly addresses customer needs. ...IBM views these elements as integral to any information-led transformation, but its competitors fall well short in this arena.
...As a system vendor, IBM clearly has a more in-depth set of offerings and a more elegant strategy and vision for providing a dynamic information environment than its competitors. None of the other system vendors have made the strides, or the investments, that IBM has.
...Because of its size and breadth, IBM uniquely has all of the pieces, and also has a vast information infrastructure of its own to build and manage. IBM often uses its internal systems to showcase new capabilities, as shown in these examples:
In an early cloud computing production pilot, IBM was able to reduce costs by managing more than 92,000 worldwide users with one storage cloud and one delivery team. Lessons learned from this deployment helped IBM establish cloud computing requirements for today’s products and services.
In 2009, IBM deployed a unified, centralized customer support portal for all technical support tools and information. The portal unifies all IBM systems, software, and services support sites, including those from recent acquisitions. By leveraging its own portal, database, and storage technology, IBM was able to consolidate multiple support sites into a single portal. The new portal dramatically simplifies the user experience for clients with multiple IBM products, while helping IBM control infrastructure costs.
...a key difference between IBM and EMC is IBM’s orientation to total-solution provisioning, not just for one application at a time, but for the entire set of infrastructure needs that customers have. To ensure this, a clearly articulated strategy and vision keeps IBM’s focus on the bigger picture as it addresses each customer’s requirements.
...Efforts tied to cloud computing have helped vendor organizations to work together better toward composite and integrated solutions, but the vague specifications and lack of immediate revenue keep most vendor sales organizations focused on their respective products. The only other way to address the challenges of integrating people and technology as described above is to put a clear strategy in place with specific tactical goals and objectives. This is where IBM leads the industry in making demonstrable progress in building solutions that achieve the goals of its dynamic infrastructure model and strategy.
...IBM is in a unique position to deliver and support the full information infrastructure “stack” and address all of its clients’ information-centric challenges. The combination of IBM’s storage technology, information management products, aggressive financing, and best-of-breed integrated services supported by world-class expertise and proven experience, provide the building blocks for the world’s strongest information infrastructure portfolio.
Mr. Allen also discusses the successes of two real client examples, Virginia Commonwealth University Health Systems (VCUHS), and INTTRA, the largest multi-carrier e-commerce platform for the ocean shipping industry.
Well, it's Tuesday, which means IBM Announcements!
We have both disk and tape related announcements today.
2 TB Drives
Yes, they are finally here. IBM now offers [2 TB SATA drives for its IBM System Storage DCS9900 series] disk systems. These are 5400 RPM drives, slower than traditional 7200 RPM SATA drives. This increases the maximum capacity of a single DCS9900 from 1200 TB to 2400 TB. The DCS9900 is IBM's MAID (Massive Array of Idle Disks) system, which allows for drive spin-down to reduce energy costs, and is ideal for long-term retention of archive data that must remain on disk for High Performance Computing or video streaming.
TS3000 System Console
The TS3000 System Console [provides improved features for service and support] of up to 24 tape library frames or 43 unique tape systems. Tape frames include those of the TS7740, TS7720 and TS7650. Tape systems include TS3500, TS3400 or 3494 libraries as well as stand-alone TS1120 and TS1130 drives. Having the TS3000 System Console in place is a benefit to both IBM and the customer, as it improves IBM's ability to provide service in a more timely manner.
Both announcements are part of IBM's strategy to provide cost-effective, energy-efficient, long-term retention storage for archive data.
I saw this as an opportunity to promote the new IBM Tivoli Storage Manager v6.1 which offers a variety of new scalability features, and continues to provide excellent economies of scale for large deployments, in my post [IBM has scalable backup solutions].
"So does TSM scale? Sure! Just add more servers. But this is not an economy of scale. Nothing gets less expensive as the capacity grows. You get a more or less linear growth of costs that is directly correlated to the growth of primary storage capacity. (Technically, it costs will jump at regular and predictable intervals, by regular and predictable and equal amounts, as you add TSM servers to the infrastructure--but on average it is a direct linear growth. Assuming you are right sized right now, if you were to double your primary storage capacity, you would double the size of the TSM infrastructure, and double your associated costs.)"
I talked about inaccurate vendor FUD in my post [The murals in restaurants], and recently, I saw StorageBod's piece, [FUDdy Waters]. So what would "economies of scale" look like? Using Scott's own words:
Without Economies of Scale
"If it costs you $5 to backup a given amount of data, it probably costs you $50 to back up 10 times that amount of data, and $500 to back up 100 times that amount of data."
With Economies of Scale
"If anybody can figure out how to get costs down to $40 for 10 times the amount of data, and $300 for 100 times the amount of data, they will have an irrefutable advantage over anybody that has not been able to leverage economies of scale."
So, let's do some simple examples. I'll focus on a backup solution just for employee workstations; each employee has 100GB of personal data to back up on their laptop or PC. We'll look at a one-person company, a ten-person company, and a hundred-person company.
Case 1: The one-person company
Here the sole owner needs a backup solution. Here are all the steps she might perform:
Spend hours of time evaluating different backup products available, and make sure her operating system, file system and applications are supported
Spend hours shopping for external media, this could be an external USB disk drive, optical DVD drive, or tape drive, and confirm it is supported by the selected backup software.
Purchase the backup software, external drive, and if optical or tape, blank media cartridges.
Spend time learning the product, purchase "Backup for Dummies" or similar book, and/or taking a training class.
Install and configure the software
Operate the software, or set it up to run automatically, and take the media offsite at the end of the day, and back each morning
Case 2: The ten-person company
I guess if each of the ten employees went off and performed all of the same steps as above, there would be no economies of scale.
Fortunately, co-workers are amazingly efficient in avoiding unnecessary work.
Rather than have all ten people evaluate backup solutions, have one person do it. If everyone runs the same or similar operating system, file systems and applications, this can be done about the same as the one-person case.
Ditto on the storage media. Why should ten people go off and evaluate their own storage media? One person can do it for all ten in about the same time as it takes for one.
Purchasing the software and hardware. Ok, here is where some costs may be linear, depending on your choices. Some software vendors give bulk discounts, so purchasing 10 seats of the same software could be less than 10 times the cost of one license. As for storage hardware, it might be possible to share drives and even media. Perhaps one or two storage systems can be shared by the entire team.
For a lot of backup software, most of the work is in the initial setup; it runs automatically afterwards. That is the case for TSM. You create a "dsm.opt" file, which can list all of the include/exclude files and other rules and policies (see the sample after this list). Once the first person sets this up, they share it with their co-workers.
If storage hardware is consolidated so that you have fewer drives than people, you can also have fewer people responsible for operations. For example, have the first five employees share one drive managed by Joe, and the second five share a second drive managed by Sally. Only two people need to spend time taking media offsite and bringing it back.
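Here is what such a shared options file might look like. This is an illustrative sketch only; the node name, server address and management class are hypothetical, and the full list of options is in the TSM Backup-Archive Client manual:

    * dsm.opt - shared client options (illustrative example)
    NODENAME          WORKSTATION01
    TCPSERVERADDRESS  tsm.example.com
    TCPPORT           1500
    PASSWORDACCESS    GENERATE
    DOMAIN            C:
    EXCLUDE           C:\...\*.tmp
    EXCLUDE.DIR       C:\Windows\Temp
    INCLUDE           C:\Users\...\* DAILYMC

Each co-worker changes only the NODENAME line; the include/exclude rules and policy bindings stay common across the team.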
Case 3: The hundred-person company
Again, it is possible that a hundred-person company consists of 10 departments of 10 people each, and they all follow the above approach independently, resulting in no economies of scale. But again, that is not likely.
Here one or a few people can invest time to evaluate backup solutions. Certainly far less than 100 times the effort for a one-person company.
Same with storage media. With 100 employees, you can now invest in a tape library with robotic automation.
Purchase of software and hardware. Again, discounts will probably apply for large deployments. Purchasing 1 tape library for all one hundred people is less than 10 times the cost and effort of 10 departments all making independent purchases.
With a hundred employees, you may have some differences in operating system, file systems and applications. Still, this might mean two to five versions of dsm.opt, and not 10 or 100 independent configurations.
Operations is where the big savings happen. TSM has "progressive incremental backup," so it only backs up changed data. Other backup schemes involve taking periodic full backups, which tie up the network and consume a lot of back-end resources (see the sketch after this list). In head-to-head comparisons between IBM Tivoli Storage Manager and Symantec's NetBackup, IBM TSM was shown to use significantly less network LAN bandwidth, less disk storage capacity, and fewer tape cartridges than NetBackup.
The savings are even greater with data deduplication. Whether using hardware, like the IBM TS7650 ProtecTIER data deduplication solution, or software, like the data deduplication capability built into IBM TSM v6.1, you can take advantage of the fact that 100 employees have a lot of common data between them.
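To put rough numbers on those last two points, here is a toy model in Python. The 2 percent daily change rate and the 40GB of common operating system data per workstation are assumptions for illustration, not measurements:

    # Toy model: 100 workstations of 100 GB each (assumed numbers, illustration only)
    workstations = 100
    data_gb = 100            # primary data per workstation
    daily_change = 0.02      # assumed fraction of data changed per day
    common_gb = 40           # assumed OS/application data identical on every machine

    # Scheme A: weekly full backup plus six daily incrementals
    scheme_full = workstations * data_gb + workstations * data_gb * daily_change * 6

    # Scheme B: progressive incremental, only changed files, every day
    scheme_progressive = workstations * data_gb * daily_change * 7

    # Deduplication: the common data is stored once, not 100 times
    stored_without_dedup = workstations * data_gb
    stored_with_dedup = common_gb + workstations * (data_gb - common_gb)

    print(f"Full + incrementals:     {scheme_full:,.0f} GB moved per week")
    print(f"Progressive incremental: {scheme_progressive:,.0f} GB moved per week")
    print(f"Backup pool, no dedup:   {stored_without_dedup:,.0f} GB")
    print(f"Backup pool with dedup:  {stored_with_dedup:,.0f} GB")

With these made-up inputs, the weekly-full scheme moves about 11,200GB per week while progressive incremental moves about 1,400GB, and deduplication cuts the stored pool from 10,000GB to roughly 6,040GB. The exact figures will differ in any real shop; the shape of the savings is the point.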
So, I have demonstrated how savings through economies of scale are achieved using IBM Tivoli Storage Manager. Adding one more person in each case is cheaper than the first person; the growth is not linear, as Scott suggests. But what about larger deployments? The IBM TS3500 Tape Library can hold one PB of data in only 10 square feet of data center floorspace. The IBM TS7650G gateway can manage up to 1 PB of disk, holding as much as 25 PB of backup copies. IT analysts Tony Palmer, Brian Garrett and Lauren Whitehouse from Enterprise Strategy Group tried IBM TSM v6.1 out for themselves and wrote up a ["Lab Validation"] report. Here is an excerpt:
"Backup/recovery software that embeds data reduction technology can address all three of these factors handily. IBM TSM 6.1 now has native deduplication capabilities built into its Extended Edition (EE) as a no-cost option. After data is written to the primary disk pool, a deduplication operation can be scheduled to eliminate redundancy at the sub-file level. Data deduplication, as its name implies, identifies and eliminates redundant data.
TSM 6.1 also includes features that optimize TSM scalability and manageability to meet increasingly demanding service levels resulting from relentless data growth. The move from a proprietary back-end database to IBM DB2 improves scalability, availability, and performance without adding complexity; the DB2 database is automatically maintained and managed by TSM. IBM upgraded the monitoring and reporting capabilities to near real-time and completely redesigned the dashboard that provides visibility into the system. TSM and TSM EE include these enhanced monitoring and reporting capabilities at no cost."
The majority of Fortune 1000 customers use IBM Tivoli Storage Manager, and it is the backup software that IBM uses itself in its own huge data centers, including the cloud computing facilities. In combination with IBM Tivoli FastBack for remote office/branch office (ROBO) situations, and complemented with point-in-time and disk mirroring hardware capabilities such as IBM FlashCopy, Metro Mirror, and Global Mirror, IBM Tivoli Storage Manager can be an effective, scalable part of a complete Unified Recovery Management solution.
This week, some of my coworkers are out at
[VMworld 2009] in San Francisco. IBM is a platinum sponsor, and is the leading reseller of VMware software. Here is the floor plan for our IBM booth there:
Virtual Data Center in a Box & Virtual Networking on
IBM & VMware Joint Collaboration on Power Monitoring
“Always on IT” Business Continuity Solution
IBM System Storage™ XIV®
[IBM XIV Storage System] is a revolutionary, easily managed, open disk system, designed to meet today’s ongoing IT challenges. This system now supports VMware 4.0 and extends the benefits of virtualization to your storage system, enabling easy provisioning and self-tuning after hardware changes. Its unique grid-based architecture represents the next generation of high-end storage and delivers outstanding performance, scalability, reliability and features, along with management simplicity and exceptional TCO.
IBM Storage Solutions with VMware
Featured products include: the new IBM System Storage DS5020, Virtual Disk solutions with IBM System Storage SAN Volume Controller, IBM Tivoli Storage Productivity Center, and IBM System Storage ProtecTIER Data Deduplication solutions.
Server virtualization with VMware vSphere offers significant benefits to an organization, including increased asset utilization, simplified management and faster server provisioning. In addition to these benefits, VMware enables business agility and business continuity with more advanced features such as VMotion, high availability, fault tolerance, and Site Recovery Manager that all require dependable high-performance shared storage. Adding storage solutions --including virtualized storage-- from IBM delivers complementary benefits to your information infrastructure that extend and enhance the benefits of VMware vSphere while increasing overall reliability, availability and performance to help you transform into a dynamic infrastructure. IBM can provide the right storage solution for your environment and requirements. Our solutions help maximize efficiency with lower costs and provide affordable, scalable storage solutions that help you solve your particular needs.
Stop by to learn how our exciting new storage solutions can help optimize VMware, including self-encrypting storage; automated, affordable disaster recovery with VMware SRM; easier and faster provisioning of storage for virtual machines; dramatically improved storage utilization with ProtecTIER deduplication; and how the DS5000 has a lower Total Cost of Acquisition (TCA) than typical competitors.
IBM Smart Business Desktop Cloud
IBM System x® iDataPlex™: Get More on the Floor
Virtual Client Solutions from IBM
IBM also is sponsoring some breakout sessions:
Leverage Storage Solutions for a Smarter Infrastructure
Simplify and Optimize with IBM N series
IBM SAN Volume Controller: Virtualized Storage for Virtual Servers
XIV: Storage Reinvented for today's dynamic world
Wish I was there, looks like a lot of good information!
This week, the SHARE conference is being held at the Colorado Convention Center in Denver, Colorado. I covered this conference for 10 years earlier in my career. Now, my colleague Curtis Neal covers these on a regular basis, and is giving the following presentations this week:
IBM Virtual Tape Products: DILIGENT ProtecTIER and TS7700 Update
Wednesday (1:30-2:30pm), Curtis will present IBM's premier virtual tape libraries. The TS7650G ProtecTIER Data Deduplication solution supports distributed systems like Windows, UNIX and Linux. The TS7700 supports the IBM System z mainframe.
SAN Volume Controller Update
Thursday (8:00-9:00am), Curtis will cover the latest features of the IBM System Storage SAN Volume Controller (SVC). SVC has features like thin provisioning, vDisk mirroring, and cascaded FlashCopy support.
IBM System Storage DS5000 Update
Friday (8:00-9:00am), Curtis will cover the DS5100 and DS5300, and probably the new DS5020 model. These are midrange disk systems that provide excellent performance for distributed systems, full disk encryption, and intermix of FC and SATA drives.
Unlike other conferences where people just go once and are never seen again, SHARE brings the same people back year after year, so that you can maintain relationships across organizations and carry on forward-looking strategic discussions.
Well, it's Tuesday again, and that means IBM announcements!
We've got a variety of storage-related items today, so here's my quick recap:
DS5020 and EXP520 disk systems
[IBM System Storage DS5020] provides the functional replacement for DS4700 disk systems. These combine controllers and 16 drives in a compact 3U package. The EXP520 expansion drawer provides an additional 16 drives per 3U drawer. A DS5020 can support up to six additional EXP520 drawers, for a total of 112 drives per system. The DS5020 supports both 8 Gbps FC as well as 1GbE iSCSI.
New Remote Support Manager (DS-RSM model RS2)
The [IBM System Storage DS-RSM Model RS2] supports up to 50 disk systems, any mix of DS3000, DS4000 and DS5000 series. It includes "call home" support, which is really "email home", sending error alerts to IBM if there are any problems. The RSM also allows IBM to dial in to perform diagnostics before arrival, reducing the time needed to resolve a problem. The model RS2 is a beefier model with more processing power than the prior-generation RS1.
New Ethernet Switches
With the increased interest in the iSCSI protocol, and the new upcoming Fibre Channel over Convergence Enhanced Ethernet (FCoCEE), IBM's re-entrance into the Ethernet switch market has drawn a lot of interest.
The [IBM Ethernet Switch r-series] offers 4-slot, 8-slot, 16-slot, and 32-slot models. Each slot can handle either 16 10GbE ports or 48 1GbE ports, which means up to 1,536 ports in the largest model.
The [c-series] now offers a 24-port model, with either 24 copper and 4 fiber optic ports, or 24 fiber optic ports. The "hybrid fiber" SFP optics can handle either single-mode or multi-mode fiber, eliminating the need to commit to one or the other and providing greater data center flexibility.
The [IBM Ethernet Switch B24X] offers 24 fiber optic ports (that can handle 10GbE or 1GbE) and 4 copper ports (10/100/1000 Mbps RJ45).
Storage Optimization and Integration Services
[IBM Storage Optimization and
Integration Services] are available. IBM service consultants use IBM's own
Storage Enterprise Resource Planner (SERP) software to evaluate your environment and provide
recommendations on how to improve your information infrastructure. This can be especially
helpful if you are looking at deploying server virtualization like VMware or Hyper-V.
As people look towards deploying a dynamic infrastructure, these new offerings can be a good place to start.
Eventually, there comes a time to drop support for older, outdated programs that don't meet the latest standards. Several people complained that they could not read my last post using Internet Explorer 6. The post reads fine on more modern browsers like Firefox 3 and even Google's Chrome browser, but not on IE6.
Google confirms that warnings are appearing:
[Official: YouTube to stop IE6 support].
My choice is to either stop embedding YouTube videos, some of which are created by my own marketing team specifically on my behalf, or drop support for IE6. I choose the latter. If you are still using IE6, please consider switching to Firefox 3 or Google Chrome instead.
This week, scientists at IBM Research and the California Institute of
Technology announced a scientific advancement that could be a major
breakthrough in enabling the semiconductor industry to pack more power
and speed into tiny computer chips, while making them more energy
efficient and less expensive to manufacture. IBM is a leader in
solid-state technology, and this scientific breakthrough shows promise.
But first, a discussion of how solid-state chips are made in the first place. Basically, a round thin wafer is etched using [photolithography]
with lots of tiny transistor circuits. The same chip is repeated over
and over on a single wafer, and once the wafer is complete, it is
chopped up into little individual squares. Wikipedia has a nice article
on [semiconductor device fabrication], but I found this
[YouTube video] more illuminating.
Up until now, the industry was able to get features down to 22 nanometers, and was hitting physical limitations to get down to anything smaller. The new development from IBM and Caltech is to use self-assembling DNA strands, folded into specific shapes using other strands that act as staples, and then to use these folded structures as scaffolding for placing nanotubes. The result? Features as small as 6 nanometers. How cool is that?
While NAND Flash Solid-State Drives are available today, this new technique can help develop newer, better technologies like Phase Change Memory.
Over on his Backup Blog, fellow blogger Scott Waterhouse from EMC has a post titled
[Backup Sucks: Reason #38]. Here is an excerpt:
Unfortunately, we have not been able to successfully leverage economies of scale in the world of backup and recovery. If it costs you $5 to backup a given amount of data, it probably costs you $50 to back up 10 times that amount of data, and $500 to back up 100 times that amount of data.
If anybody can figure out how to get costs down to $40 for 10 times the amount of data, and $300 for 100 times the amount of data, they will have an irrefutable advantage over anybody that has not been able to leverage economies of scale.
I suspect that where Scott mentions we in the above excerpt, he is referring to EMC in general, with products like
Legato. Fortunately, IBM has scalable backup solutions, using either a hardware approach, or one purely with software.
The hardware approach involves using deduplication hardware technology as the storage pool for IBM Tivoli Storage Manager (TSM). Using this approach, IBM Tivoli Storage Manager would receive data from dozens, hundreds or even thousands
of client nodes, and the backup copies would be sent to an IBM TS7650 ProtecTIER data deduplication appliance, IBM TS7650G gateway, or IBM N series with A-SIS. In most cases, companies have standardized on the operating systems and applications used on these nodes, and multiple copies of data reside across employee laptops. As a result, as you have more nodes backing up, you are able to achieve benefits of scale.
Perhaps your budget isn't big enough to handle new hardware purchases at this time, in this economy. Have no fear,
IBM also offers deduplication built right into the IBM Tivoli Storage Manager v6 software itself. You can use a sequential-access disk (FILE) storage pool for this. TSM scans backup copies, as well as archive and HSM data, identifies duplicate chunks, and reclaims the space when found.
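As a rough sketch of how this looks on a TSM v6.1 server (the pool, device class and directory names here are made up), you define a FILE-type device class, create a storage pool with deduplication enabled, and let the IDENTIFY DUPLICATES process find redundant chunks; the space itself is reclaimed later during reclamation:

   define devclass filededup devtype=file directory=/tsmpool maxcapacity=50G
   define stgpool dedupepool filededup maxscratch=200 deduplicate=yes
   identify duplicates dedupepool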
If your company is using a backup software product that doesn't scale well, perhaps now is a good time to switch over to IBM Tivoli Storage Manager. TSM is perhaps the most scalable backup software product in the marketplace, giving IBM an "irrefutable advantage" over the competition.
This week, I was in the Phoenix area presenting at TechData's TechSelect University. TechData is one of IBM's IT distributors,
and TechSelect is their community of 440 resellers and 20 vendors. This year they celebrate the 10th anniversary of this event. I covered three particular topics, and I was videotaped for those who were not able to attend my sessions. (There were very few empty seats at my sessions.)
IBM Business Partners now realize that the "killer app" for storage is combining the IBM System Storage SAN Volume Controller with entry-level or midrange disk storage systems for an awesome solution. Solutions based on either the Entry Edition or the standard hardware models can compete well with a variety of robust features, including thin provisioning, vDisk mirroring, FlashCopy, Metro and Global Mirror. This has the advantage that the SVC can extend these functions not just to newly purchased disk capacity, but also existing storage capacity. The newly purchased capacity can be DS3400, DS4700 or the new DS5000 models. This is great "investment protection" for small and medium sized businesses.
LTO-4 drives and automation
The Linear Tape Open (LTO) consortium--consisting of IBM, HP and Quantum--has proven wildly successful, ending the vendor lock-in from SDLT tape. I presented the latest LTO-4 offerings, including the TS2240, TS2340, TS2900, TS3100
and TS3200. The LTO consortium has already worked out a technology roadmap for LTO-5 and LTO-6. The LTO-4 drives
support WORM cartridges and on-board hardware-based encryption. The encryption keys can be managed with IBM Tivoli Key Lifecycle Manager (TKLM).
SAN and FCoCEE switches
IBM has agreements with Brocade, Cisco and Juniper Networks for various networking gear. I focused on entry-level switches for SAN fabrics, the SAN24B-4 and Cisco 9124, as well as new equipment for Convergence Enhanced Ethernet (CEE),
including IBM's Converged Network Adapter (CNA) for System x servers, and the SAN32B switch that has 24 10GbE CEE ports and 8 FC ports supporting 8/4/2 and 4/2/1 Gbps SFP transceivers. Clients that want to deploy Fibre Channel over CEE (FCoCEE) today have everything they need to get started.
The venue was the
[Sheraton Wild Horse Pass Resort and Spa] in Chandler, just south of Phoenix. This compound includes [Rawhide], an 1800's era Western Town attraction, a rodeo arena, and a casino still under construction.
Dinners were held nearby at the infamous
[Rustler's Rooste] Steakhouse on South Mountain.
Back in June, I mentioned this blog was [Moving to MyDeveloperWorks] which is based on IBM Lotus Connections.
Finally, the move is complete for all bloggers. If you are having problems with the redirects, you might need to unsubscribe and re-subscribe in your RSS feed reader. Here are the new links for several IBM bloggers that have moved over:
Continuing my week in Chicago, for the IBM Storage Symposium 2008, we had sessions that focused on individual products. IBM System Storage SAN Volume Controller (SVC) was a popular topic.
SVC - Everything you wanted to know, but were afraid to ask!
Bill Wiegand, IBM ATS, who has been working with SAN Volume Controller since it was first introduced in 2003, answered some frequently asked questions about IBM System Storage SAN Volume Controller.
Do you have to upgrade all of your HBAs, switches and disk arrays to the recommended firmware levels before upgrading SVC? No. These are recommended levels, but not required. If you do plan to update firmware levels, focus on the host end first, switches next, and disk arrays last.
How do we request special support for stuff not yet listed on the Interop Matrix?
Submit an RPQ/SCORE, same as for any other IBM hardware.
How do we sign up for SVC hints and tips? Go to the IBM
[SVC Support Site] and select the "My Notifications" under the "Stay Informed" box on the right panel.
When we call IBM for SVC support, do we select "Hardware" or "Software"?
While the SVC is a piece of hardware, there are very few mechanical parts involved. Unless there are sparks,
smoke, or front bezel buttons dangling from springs, select "Software". Most of the questions are
related to the software components of SVC.
When we have SVC virtualizing non-IBM disk arrays, who should we call first?
IBM has world-renowned service, with some of IT's smartest people working the queues. All of the major storage vendors play nice as part of the [TSAnet Agreement] when a mutual customer is impacted.
When in doubt, call IBM first, and if necessary, IBM will contact other vendors on your behalf to resolve.
What is the difference between livedump and a Full System Dump?
Most problems can be resolved with a livedump. While not complete information, it is generally enough, and is completely non-disruptive. Other times, the full state of the machine is required, so a Full System Dump is requested. This involves rebooting one of the two nodes, so virtual disks may temporarily run slower on that I/O group.
What does "svc_snap -c" do?The "svc_snap" command on the CLI generates a snap file, which includes the cluster error log and trace files from all nodes. The "-c" parameter includes the configuration and virtual-to-physical mapping that can be useful for
disaster recovery and problem determination.
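For example, from an SSH session to the cluster CLI, a config-inclusive snap might be gathered like this (a sketch; exact file naming can vary by release):

   svc_snap -c        # gather error log, traces and configuration data
   # the resulting snap file lands in /dumps on the config node;
   # retrieve it with pscp/scp before sending it to IBM support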
I just sent IBM a check to upgrade my TB-based license on my SVC, how long should I wait for IBM to send me a software license key?
IBM trusts its clients. No software license key will be sent. Once the check clears, you are good to go.
During migration from old disk arrays to new disk arrays, I will temporarily have 79TB more disk under SVC management, do I need to get a temporary TB-based license upgrade during the brief migration period?
Nope. Again, we trust you. However, if you are concerned about this at all, contact IBM and they will print out
a nice "Conformance Letter" in case you need to show your boss.
How should I maintain my Windows-based SVC Master Console or SSPC server?
Treat this like any other Windows-based server in your shop, install Microsoft-recommended Windows updates,
run Anti-virus scans, and so on.
Where can I find useful "How To" information on SVC?
Specify "SAN Volume Controller" in the search field of the
[IBM Redbooks vast library of helpful books.
I just added more managed disks to my managed disk group (MDG), can I get help writing a script to redistribute the extents to improve wide-striping performance?
Yes, IBM has scripting tools available for download on
[AlphaWorks]. For example, svctools will take the output of the "lsinfo" command, and generate the appropriate SVC CLI to re-migrate the disks around to optimize performance. Of course, if you prefer, you can use IBM Tivoli Storage Productivity Center instead for a more automated approach.
Any rules of thumb for sizing SVC deployments?
IBM's Disk Magic tool includes support for SVC deployments. Plan for 250 IOPS/TB for light workloads,
500 IOPS/TB for average workloads, and 750 IOPS/TB for heavy workloads.
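For example, under these rules of thumb, a 40TB deployment with average workloads would be sized for roughly 40 x 500 = 20,000 IOPS, a figure you can then feed into Disk Magic to pick node counts and back-end arrays.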
Can I migrate virtual disks from one managed disk group (MDG) to another with a different extent size?
Yes, the new Vdisk Mirroring capability can be used to do this. Create the mirror for your Vdisk between the
two MDGs, wait for the copy to complete, and then split the mirror.
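A sketch of the CLI sequence (the vdisk ID and MDG name here are hypothetical):

   svctask addvdiskcopy -mdiskgrp MDG_NEW -vtype striped 12   # add a second copy in the target MDG
   svcinfo lsvdiskcopy 12                                     # wait until the new copy reports in-sync
   svctask splitvdiskcopy -copy 1 12                          # split off the synchronized copy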
Can I add or replace SVC nodes non-disruptively? Absolutely, see the Technotes [SVC Node Replacement] page.
Can I really order an SVC EE in Flamingo Pink? Yes. While my blog post that started all
this [Pink It and Shrink It] was initially just some Photoshop humor, the IBM product manager for SVC accepted this color choice as an RPQ option.
The default color remains Raven Black.
Continuing my week in Chicago, for the IBM Storage Symposium 2008, I attended two presentations on XIV.
XIV Storage - Best Practices
Izhar Sharon, IBM Technical Sales Specialist for XIV, presented best practices using XIV in various environments. He started out explaining the innovative XIV architecture: a SATA-based disk system from IBM can outperform FC-based disk systems from other vendors using massive parallelism. He used a sports analogy:
"The men's world record for running 800 meters was set in 1997 by Wilson Kipketer of Denmark in a time of 1:41.11.
However, if you have eight men running, 100 meters each, they will all cross the finish line in about 10 seconds."
Since XIV is already self-tuning, what kind of best practices are left to present? Izhar presented best practices for software, hosts, switches and storage virtualization products that attach to the XIV. Here are some quick points:
Use as many paths as possible.
IBM does not require you to purchase and install multipathing software as other competitors might. Instead, the XIV relies on multipathing capabilities inherent to each operating system. For multipathing preference, choose Round-Robin, which is now available on AIX and VMware vSphere 4.0, for example. Otherwise, fixed-path is preferred over most-recently-used (MRU).
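On VMware vSphere 4.0, for instance, Round-Robin can be selected per device from the service console (the device identifier below is made up):

   esxcli nmp device setpolicy --device naa.0017380000cb0000 --psp VMW_PSP_RR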
Encourage parallel I/O requests.
XIV architecture does not subscribe to the outdated notion of a "global cache". Instead, the cache is distributed across the modules, to reduce performance bottlenecks. Each HBA on the XIV can handle about 1400 requests. If you have fewer than 1400 hosts attached to the XIV, you can further increase parallel I/O requests by specifying a large queue depth in the host bus adapter (HBA). An HBA queue depth of 64 is a good start. Additional settings might be required in the BIOS, operating system or application for multiple threads and processes.
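How you set the queue depth depends on the HBA driver. On Linux with QLogic adapters, for instance, it is a module parameter (a sketch, not XIV-specific guidance):

   # /etc/modprobe.conf, or a file under /etc/modprobe.d/
   options qla2xxx ql2xmaxqdepth=64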
For sequential workloads, select a host stripe size less than 1MB. For random workloads, select a host stripe size larger than 1MB. Set rr_min_io between ten (10) and the queue depth (typically 64); setting it to half of the queue depth is a good starting point.
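With Linux device-mapper multipathing, for example, rr_min_io lives in /etc/multipath.conf; a minimal sketch following the half-of-queue-depth starting point:

   defaults {
       path_selector   "round-robin 0"
       rr_min_io       32    # half of a 64 queue depth
   }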
If you have long-running batch jobs, consider breaking them up into smaller steps and run in parallel.
Define fewer, larger LUNs
Generally, you no longer need to define many small LUNs, a practice that was often required on traditional disk systems. This means that you can now define just 1 or 2 LUNs per application, and greatly simplify management. If your application must have multiple LUNs in order to do multiple threads or concurrent I/O requests, then, by all means, define multiple LUNs.
Modern Database Management Systems (DBMS) like DB2 and Oracle already parallelize their I/O requests, so there is no need for host-based striping across many logical volumes. XIV already stripes the data for you. If you use Oracle Automated Storage Management (ASM), use 8MB to 16MB extent sizes for optimal performance.
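With Oracle ASM, the allocation unit size is set when the disk group is created; a hypothetical example (the disk path and names are made up, and the au_size attribute assumes 11.1 ASM compatibility):

   CREATE DISKGROUP xivdata EXTERNAL REDUNDANCY
     DISK '/dev/mapper/xiv_lun1'
     ATTRIBUTE 'au_size' = '8M', 'compatible.asm' = '11.1';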
For those virtualizing XIV with SAN Volume Controller (SVC), define managed disks as 1632 GB LUNs, in multiples of six LUNs per managed disk group (MDG), to balance across the six interface modules. Define the SVC extent size as 1GB.
XIV is ideal for VMware. Create big LUNs for your VMFS that you can access via FCP or iSCSI.
Organize data to simplify Snapshots.
You no longer need to separate logs from databases for performance reasons. However, for some backup products like IBM Tivoli Storage Manager (TSM) for Advanced Copy Services (ACS), you might want to keep them separate for snapshot reasons. Generally, putting all data for an application on one big LUN greatly simplifies administration and snapshot processing, without losing performance. If you define multiple LUNs for an application, simply put them into the same "consistency group" so that they are all snapshot together.
OS boot image disks can be snapshot before applying any patches, updates or application software, so that if there are any problems, you can reboot to the previous image.
Employ sizing tools to plan for capacity and performance.
The SAP Quicksizer tool can be used for new SAP deployments, employing either the user-based or throughput-based sizing model approach. The result is in a mythical unit called "SAPS", which represents 0.4 IOPS for ERP/OLTP workloads, and 0.6 IOPS for BI/BW and OLAP workloads.
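For example, an ERP system sized at 10,000 SAPS would translate to roughly 10,000 x 0.4 = 4,000 IOPS, a figure you can then compare against measured XIV capabilities.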
If you already have SAP or other applications running, use actual I/O measurements. IBM Business Partners and field technical sales specialists have an updated version of Disk Magic that can help size XIV configurations from PERFMON and iostat figures.
Lee La Frese, IBM STSM for Enterprise Storage Performance Engineering, presented internal lab test results for the XIV under various workloads, based on the latest hardware/software levels [announced two weeks ago]. Three workloads were tested:
Web 2.0 (80/20/40) - 80 percent READ, 20 percent WRITE, 40 percent cache hits for READ. YouTube, FlickR, and the growing list at [GoWeb20] are applications with heavy read activity, but because of [long-tail effects], may not be as cache friendly.
Social Networking (50/50/50) - 50 percent READ, 50 percent WRITE, 50 percent cache hits for READ. Lotus Connections, Microsoft Sharepoint, and many other [social networking] applications are more write-intensive.
Database (70/30/50) - 70 percent READ, 30 percent WRITE, 50 percent cache hits for READ. The traditional workload characteristics for most business applications, especially databases like DB2 and Oracle on Linux, UNIX and Windows servers.
The results were quite impressive. There was more than enough performance for tier 2 application workloads, and most tier 1 applications. The performance was nearly linear from the smallest 6-module to the largest 15-module configuration. Some key points:
A full 15-module XIV overwhelms a single SVC 8F4 node-pair. For a full XIV, consider 4 to 8 nodes of the 8F4 model, or 2 to 4 nodes of the 8G4. For read-intensive cache-friendly workloads, an SVC in front of XIV was able to deliver over 300,000 IOPS.
A single-node TS7650G ProtecTIER can handle 6 to 9 XIV modules. Two nodes of TS7650G were needed to drive a full 15-module XIV. A single-node TS7650G in front of XIV was able to ingest 680 MB/sec on the seventh day of a test workload with a 17 percent per-day change rate, using 64 virtual drives. Reading the data back achieved over 950 MB/sec.
For SAP environments where response times of 20-30 msec are acceptable, the 15-module XIV delivered over 60,000 IOPS. Reducing the load to 25,000-30,000 IOPS cut the response time to a faster 10-15 msec.
These were all done as internal lab tests. Your mileage may vary.
Not surprisingly, XIV was quite the popular topic here this week at the Storage Symposium. There were many more sessions, but these were the only two that I attended.
Continuing my week in Chicago, at the IBM System x and BladeCenter Technical Conference, I attended an
awesome session that summarized IBM's Linux directions. Pat Byers presented the global forces that are
forcing customers to re-evaluate the TCO of their operating system choices, the need for rapid integration
in an ever-changing business climate, government stimulus packages, and technology that has enabled much
better solutions than we had during the last economic downturn in 2001-2003.
IBM has been committed to Linux for over 10 years now. I was part of the initial IBM team in the 1990s to work on Linux for the mainframe. In various roles, I helped get Linux attachment tested for disk and tape systems, and helped get Linux selected as an operating system platform of choice for our storage management software.
Today, Linux-based servers generate $7 billion (US) in revenues. For UNIX customers, Linux provides greater hardware platform flexibility. For Windows customers, Linux provides better security and reliability.
Initially, Linux was used for simple infrastructure applications, edge-of-the-network and Web-based workloads.
This evolved to Application and Data serving, Enterprise applications like ERP, CRM and SCM. Today,
Linux is well positioned to help IBM make our world a smarter planet, able to handle business-critical applications. It is the only operating system to scale to the full capability of the biggest IBM System x3950M2 server.
Pat gave examples of IBM's work with Linux helping clients.
City of Stockholm
The city of Stockholm, Sweden introduced congestion pricing to reduce traffic.
IBM helped them deploy systems to collect tariffs from 300,000 vehicles a day, with real-time scanning and recognition of vehicle license plates, Web-accessible payment processing, and analytics for metrics and reporting. This configuration was able to
[reduce traffic by 25 percent in the first month].
IBM helped [ConAgra Foods] switch their SAP environment from a monolithic Solaris on SPARC deployment, to a more distributed one using Novell SUSE Linux on x86. The result? Six times faster performance at 75 percent lower total cost of ownership!
IBM's strategy has been to focus on working with two of the major Linux distributors: Red Hat and Novell. It also works with [Asianux], which is like UnitedLinux for Asia, internationalized for Japan, Korea, and China. It handles special requests for other distributions, from CentOS to Ubuntu, as needed on a case-by-case basis.
IBM's Linux Technology Center of 600 employees helps enable IBM products for Linux, make Linux a better operating system, expand Linux's reach, and drive collaboration and innovation. In fact, IBM is the #3 corporate contributor to the open source Linux kernel, behind Red Hat (#1) and Novell (#2). For most IBM products, IBM tests with Linux as rigorously as it does with Microsoft Windows. IBM offers complete RTS/ServicePac and SupportLine service and support contracts for Red Hat and Novell Linux.
At the IBM Solutions Center this week, several booths used Linux bootable USB sticks to run their software.
[Novell SUSE Studio] was developed to help
customize Linux to the specific needs of independent vendors.
Both Red Hat and Novell offer distributions in four categories:
Standard - for small entry-level servers, with support for a few virtual guests
Advanced Platform - for bigger servers, and support for many or unlimited number of virtual guests
High Performance Computing - HPC and Analytics for large grid deployments
Real Time - for real time processing, such as with
[IBM WebSphere Real Time], where
sub-second response time is critical.
A key difference between Red Hat and Novell appears to be on their strategy towards server virtualization.
Red Hat wants to position itself as the hypervisor of choice, for both server and desktop virtualization, announcing Kernel-based Virtual Machine
[KVM] in their Red Hat Enterprise Linux (RHEL) 5.4 release, and their new upcoming
RHEV-H, a tight 128MB hypervisor to compete against VMware ESXi. Meanwhile, Novell is focusing SUSE to be
the perfect virtual guest OS, being hypervisor-aware and having consistent terms and licensing when run under any hypervisor, including VMware, Hyper-V, Citrix Xen, KVM or others.
IBM has tons of solutions that are based on Linux, including the IBM Information Server blade, the InfoSphere Balanced Warehouse, SAN Volume Controller (SVC), TS7650 ProtecTIER data deduplication virtual tape library, Grid Medical Archive Solution (GMAS), Scale-out File Services (SoFS), Lotus Foundations, and the IBM Smart Cube.
If you are interested in trying out Linux, IBM offers evaluation copies at no charge for 30 to 90 days. For
more on how to deploy Linux successfully on IBM servers, see the
[IBM Linux Blueprints] landing page.