This blog is for the open exchange of ideas relating to IBM Systems, storage and storage networking hardware, software and services.
(Short URL for this blog: ibm.co/Pearson )
Tony Pearson is a Master Inventor, Senior IT Architect and Event Content Manager for [IBM Systems Technical University] events. With over 30 years at IBM, Tony is a frequent traveler, speaking to clients at events throughout the world.
Lloyd Dean is an IBM Senior Certified Executive IT Architect in Infrastructure Architecture. Lloyd has held numerous senior technical roles at IBM during his 19 plus years at IBM. Lloyd most recently has been leading efforts across the Communication/CSI Market as a senior Storage Solution Architect/CTS covering the Kansas City territory. In prior years Lloyd supported the industry accounts as a Storage Solution architect and prior to that as a Storage Software Solutions specialist during his time in the ATS organization.
Lloyd currently supports North America storage sales teams in his Storage Software Solution Architecture SME role on the Washington Systems Center team. His current focus is IBM Cloud Private, and he will be delivering and supporting sessions at Think 2019 and Storage Technical University on the value of IBM storage in this solution, a key part of the IBM Cloud strategy. Lloyd maintains Subject Matter Expert status across the IBM Spectrum Storage software solutions. You can follow Lloyd on Twitter (@ldean0558) and on LinkedIn (Lloyd Dean).
Tony Pearson's books are available on Lulu.com! Order your copies today!
Safe Harbor Statement: The information on IBM products is intended to outline IBM's general product direction and it should not be relied on in making a purchasing decision. The information on the new products is for informational purposes only and may not be incorporated into any contract. The information on IBM products is not a commitment, promise, or legal obligation to deliver any material, code, or functionality. The development, release, and timing of any features or functionality described for IBM products remains at IBM's sole discretion.
Tony Pearson is an active participant in local, regional, and industry-specific interests, and does not receive any special payments to mention them on this blog.
Tony Pearson receives part of the revenue proceeds from sales of books he has authored listed in the side panel.
Tony Pearson is not a medical doctor, and this blog does not reference any IBM product or service that is intended for use in the diagnosis, treatment, cure, prevention or monitoring of a disease or medical condition, unless otherwise specified on individual posts.
The "Storage Resource Agent" introduced for Linux, AIX and Windows in v4.1 is a lightweight agent, written in native "C" language instead of Java, to avoid all the resources that Java consumes. In this release, it is now supported for HP-UX and Solaris, and adds file level and database level storage resource management (SRM) reporting for all five platforms.
For new customer deployments, this eliminates all the pain of setting up a "Common Agent Manager". The Productivity Center server sends out the agent, the agent collects the data, and can then optionally uninstall itself. In this manner, you always have the latest version of the code collecting the data. For those with Common Agent Manager already installed, you can continue running as is, or slowly transition over to the new lightweight agent methodology.
Full support for IBM XIV Storage System
IBM XIV® Storage System support updated to include provisioning, data path explorer and performance management reporting. Before this release, Productivity Center could only discover and provide rudimentary capacity information for XIV systems. Now you can carve LUNs and monitor XIV disk performance just like you can with most other disk systems.
Storage Area Network (SAN) configuration planning
For those who have both Productivity Center Standard Edition (SE) and Productivity Center for Replication, the SAN Config Planner is now "replication-aware" and will add LUNs to existing copy sessions, or create new copy sessions, and ensure that the devices chosen meet the appropriate criteria.
HyperSwap™ for the IBM AIX® environment
On z/OS mainframes, if you experience an outage on a storage system, Productivity Center for Replication (TPC-R) can automatically swap to the synchronous mirror copy without disruption to the operating system or application. Now, IBM has extended this awesome feature to the AIX platform for high availability in POWER-based server environments.
Detailed Session Reporting for Global Mirror
Before, TPC-R enforced the notion of only one Global Mirror master per storage system. TPC-R v4.2 can now support multiple Global Mirror sessions, and provides more detailed session reporting for these environments. This can be useful if for some unknown reason the bits are not being shoveled from point A to point B, and you need to do some "problem determination".
SVC Incremental FlashCopy
Productivity Center for Replication now adds support for the "Incremental" feature of SVC FlashCopy. While FlashCopy requests are processed instantaneously, there is background processing required that can consume cycles. Incremental processing keeps track of what changed since the last FlashCopy, and minimizes this behind-the-scenes overhead.
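To see why the incremental feature saves so much background work, here is a conceptual sketch (my own toy model, not SVC's actual grain-tracking implementation): a bitmap records which grains changed since the last FlashCopy, so the next flash only has to refresh those grains.

```python
# Toy model of incremental FlashCopy: a "dirty" set stands in for the
# change-tracking bitmap; flash() refreshes only grains written since
# the previous flash. Purely illustrative, not the SVC implementation.

class Volume:
    """A toy volume: a list of fixed-size 'grains'."""
    def __init__(self, grains):
        self.grains = list(grains)

class IncrementalFlashCopy:
    def __init__(self, source, target):
        self.source = source
        self.target = target
        # Before the first flash, every grain still needs copying.
        self.dirty = set(range(len(source.grains)))

    def write(self, grain, data):
        """Host write to the source: mark the grain changed since last flash."""
        self.source.grains[grain] = data
        self.dirty.add(grain)

    def flash(self):
        """Refresh the copy, moving only grains changed since the last flash."""
        copied = len(self.dirty)
        for g in self.dirty:
            self.target.grains[g] = self.source.grains[g]
        self.dirty.clear()
        return copied  # the background work actually performed
```

The first flash copies every grain; after that, a flash following two host writes moves only two grains, which is exactly the "behind-the-scenes overhead" the feature minimizes.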
Integrated Distributed Disaster Recovery manager
IBM Tivoli System Automation Application Manager [TSA-AM] can now integrate with TPC-R to provide application-aware disaster recovery capability. This can coordinate between IBM Tivoli System Automation for Multiplatforms [TSA-for-MP], IBM HACMP/PowerHA, as well as other clustering products like Microsoft Cluster Services (MSCS) and Veritas Cluster Services on Solaris. When TSA-AM detects an outage, it can notify Geographically Dispersed Parallel Sysplex Distributed Cluster Management (GDPS-DCM) to take action. This integration was actually completed with TPC v4.1 back in April, but got buried deep inside our big storage launch, so I bring it up again as a gentle reminder that IBM offers the best end-to-end management on the planet.
At last month's Storage University, I presented an overview of [Tivoli Storage Productivity Center v4.1]. Many of the questions were along the lines of "When will TPC do xyz?" and all I could answer was "Soon" since I knew they would be delivered with this TPC v4.2 release, but I couldn't provide any more details than that at the time.
IBM Tivoli Storage Productivity Center version 4.1 Overview
In conferences like these, there are two types of product-level presentations. An "Overview" explains how a product works today for those who are not familiar with it. An "Update" explains what's new in this version of the product for those who are already familiar with previous releases. This session was an Overview of [Tivoli Storage Productivity Center], plus some information on IBM's Storage Enterprise Resource Planner [SERP] from IBM's acquisition of NovusCG.
I was one of the original lead architects of Productivity Center many years ago, and was able to share many personal experiences about its evolution in development and in the field at client facilities. Analysts have repeatedly rated IBM Productivity Center as one of the top Storage Resource Management (SRM) tools available in the marketplace.
I would like to thank my colleague Harley Puckett for his assistance in putting the finishing touches on this presentation. This was my best attended session of the week, indicating there is a lot of interest in this product in particular, and managing a heterogeneous mix of storage devices in general. To hear a quick video introduction, see Harley Puckett's presentation at the [IBM Virtual Briefing Center].
Information Lifecycle Management (ILM) Overview
Can you believe I have been doing ILM since 1986? I was the lead architect for DFSMS which provides ILM support for z/OS mainframes. In 2003-2005, I spent 18 months in the field performing ILM assessments for clients, and now there are dozens of IBM practitioners in Global Services and Lab Services that do this full time. This is a topic I cover frequently at the IBM Executive Briefing Center [EBC], because it addresses several top business challenges:
Reducing costs and simplifying management
Improving efficiency of personnel and application workloads
Managing risks and regulatory compliance
IBM has a solution based on five "entry points". The advantage of this approach is that it allows our consultants to craft the right solution to meet the specific requirements of each client situation. These entry points are:
Tiered Information Infrastructure - we don't limit ourselves to just "Tiered Storage", as storage is only part of a complete [information infrastructure] of servers, networks and storage
Storage Optimization and Virtualization - including virtual disk, virtual tape and virtual file solutions
Process Enhancement and Automation - an important part of ILM are the policies and procedures, such as IT Infrastructure Library [ITIL] best practices
Archive and Retention - space management and data retention solutions for email, database and file systems
When I presented ILM last year, I did not get many attendees. This time I had more, perhaps because the recent announcement of ILM and HSM support in IBM SONAS and our April announcement of IBM DS8700 Easy Tier have renewed interest in this area.
I have safely returned back to Tucson, but have still a lot of notes of the other sessions I attended, so will cover them this week.
Continuing my coverage of the 30th annual [Data Center Conference]. Here is a recap of more of the Tuesday afternoon sessions:
IBM CIOs and Storage
Barry Becker, IBM Manager of Global Strategic Outsourcing Enablement for Data Center Services, presented this session on Storage Infrastructure Optimization (SIO).
A bit of context might help. I started my career in DFHSM, which moved data from disk to tape to reduce storage costs. Over the years, I would visit clients, analyze their disk and tape environment, and provide a set of recommendations on how to run their operations better. In 2004, this was formalized into week-long "Information Lifecycle Management (ILM) Assessments", and I spent 18 months in the field training a group of folks on how to perform them. The IBM Global Technology Services team have taken a cross-brand approach, expanding this ILM approach to include evaluations of the application workloads and data types. These SIO studies take 3-4 weeks to complete.
Over the next decade, there will only be 50 percent more IT professionals than we have today, so new approaches will be needed for governance and automation to deal with the explosive growth of information.
SIO deals with both the demand and supply of data growth in five specific areas:
Data reclamation, rationalization and planning
Virtualization and tiering
Backup, business continuity and disaster recovery
Storage process and governance
Archive, Retention and Compliance
The process involves gathering data and interviewing business, financial and technical stakeholders such as storage administrators and application owners. The interviews take less than one hour per person.
Over the past two years, the SIO team has uncovered disturbing trends. A big part of the problem is that 70 percent of data stored on disk has not been accessed in the past 90 days, and is unlikely to be accessed at all in the near future, so would probably be better to store on lower cost storage tiers.
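You can get a rough feel for this "70 percent cold" figure on your own file servers. Here is a minimal sketch that totals up bytes not read in the last 90 days by walking a directory tree; it assumes access times are reliable, which filesystems mounted with `noatime` would break.

```python
# Walk a file tree and report how many bytes have not been accessed
# in the last N days -- a crude stand-in for an SIO-style cold-data scan.

import os
import time

def cold_bytes(root, days=90):
    """Return (cold_bytes, total_bytes) under root, using last-access time."""
    cutoff = time.time() - days * 86400
    cold = total = 0
    for dirpath, _, files in os.walk(root):
        for name in files:
            try:
                st = os.stat(os.path.join(dirpath, name))
            except OSError:
                continue  # file vanished or unreadable; skip it
            total += st.st_size
            if st.st_atime < cutoff:  # not read since the cutoff
                cold += st.st_size
    return cold, total
```

If the cold fraction comes back anywhere near 70 percent, that is data a lower-cost tier could be holding.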
Storage Resource Management (SRM) is also a mess, with over 85 percent of clients having serious reporting issues. Even rudimentary "Showback" systems that report back what every individual, group or department was using resulted in significant improvement.
Archive is not universally implemented mostly because retention requirements are often misunderstood. Barry attributed this to lack of collaboration between storage IT personnel, compliance officers, and application owners. A "service catalog" that identifies specific storage and data types can help address many of these concerns.
The results were impressive. Clients that follow SIO recommendations save on average 20 to 25 percent after one year, and 50 percent after three to five years. Implementing storage virtualization averaged 22 percent lower CAPEX costs. Those that implemented a "service catalog" saved on average $1.9 million US. Internally, IBM's own operations have saved $13 million implementing these recommendations over the past three years.
Reshaping Storage for Virtualization and Big Data
The two analysts presenting this topic acknowledged there is no downturn on the demand for storage. To address this, they recommend companies identify storage inefficiencies, develop better forecasting methodologies, implement ILM, and follow vendor management best practices during acquisition and outsourcing.
To deal with new challenges like virtualization and Big Data, companies must decide to keep, replace or supplement their SRM tools, and build a scalable infrastructure.
One suggestion to get upper management to accept new technologies like data deduplication, thin provisioning, and compression is to refer to them as "Green" technologies, as they help reduce energy costs as well. Thin provisioning can help drive storage utilization up as high as you dare; most people are comfortable with 60 to 70 percent.
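The arithmetic behind those comfort-zone numbers is worth making explicit. A quick back-of-the-envelope sketch (hypothetical figures): utilization is bytes actually written divided by physical capacity installed, while the overcommit ratio is capacity promised to hosts divided by capacity installed.

```python
# Thin-provisioning arithmetic with made-up numbers: how "promised",
# "written", and "installed" capacity relate.

def thin_stats(provisioned_gb, written_gb, physical_gb):
    """Return utilization (%) and overcommit ratio for a thin pool."""
    return {
        "utilization_pct": round(100.0 * written_gb / physical_gb, 1),
        "overcommit_ratio": round(provisioned_gb / physical_gb, 2),
    }
```

For example, promising hosts 200 TB against 100 TB installed with 65 TB actually written gives 65 percent utilization, right in the comfort zone above, at a 2.0x overcommit; the risk you are "daring" is that hosts write toward their promised 200 TB faster than you can install disk.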
A poll of the audience found that top three initiatives for 2012 are to implement data deduplication, 10Gb Ethernet, and Solid-State drives (SSD).
The analysts explained that there are two different types of cloud storage. The first kind is storage "for" the cloud, used for cloud compute instances (aka Virtual Machines), such as Amazon EBS for EC2. The second kind is storage "as" the cloud, storage as a data service, such as Amazon S3, Azure Blob and AT&T Synaptic.
The analysts feel that cloud storage deployments will be mostly private clouds, bursting as needed to public cloud storage. This creates the need for a concept called "Cloud Storage Gateways" that manage this hybrid of some local storage and some remote storage. IBM's SONAS Active Cloud Engine provides long-distance caching in this manner. Other smaller startups include cTera, Nasuni, Panzura, Riverbed, StorSimple, and TwinStrata.
A variation of this is the "storage gateway" for backup and archive providers, which acts as a staging area for data subsequently sent on to the remote location.
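The gateway idea above boils down to a read-through cache in front of a remote object store. Here is a toy sketch of that pattern (my own illustration, not how SONAS Active Cloud Engine or any vendor's gateway is actually built): reads check local storage first and fetch from the cloud only on a miss, while writes land locally and are replicated out.

```python
# Toy cloud storage gateway: local dict = on-premise cache,
# remote dict = cloud object store. Illustrates the read-through /
# write-through hybrid pattern, not any vendor's implementation.

class CloudGateway:
    def __init__(self, remote):
        self.local = {}        # on-premise cache
        self.remote = remote   # stand-in for the cloud object store

    def read(self, key):
        if key not in self.local:          # cache miss: fetch over the WAN
            self.local[key] = self.remote[key]
        return self.local[key]             # subsequent reads stay local

    def write(self, key, data):
        self.local[key] = data             # absorb the write locally...
        self.remote[key] = data            # ...and replicate to the cloud
```

Real gateways add eviction, write coalescing, and WAN optimization on top, but the local-hit/remote-miss split is the core of the "some local, some remote" hybrid the analysts describe.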
New projects like virtualization, Cloud computing and Big Data are giving companies a new opportunity to re-evaluate their strategies for storage, process and governance.