Continuing my coverage of the [IBM System Storage Technical University 2011], I participated in the storage free-for-all, a long-time tradition that started at the SHARE user group conferences and has carried forward to other IT conferences. The free-for-all is a Q&A panel of experts where anyone can ask any question. These are sometimes called "Birds of a Feather" (BOF) sessions. Last year, we had two: one focused on Tivoli Storage software, and the second covering storage hardware. This year, we again had two: one for System x called "Ask the eXperts", and one for System Storage called "Storage Free-for-All". This post covers the latter.
(Disclaimer: Do not shoot the messenger! We had a dozen or more experts on the panel, representing System Storage hardware, Tivoli Storage software, and Storage services. I took notes, trying to capture the essence of the questions, and the answers given by the various IBM experts. I have spelled out acronyms and provided links to relevant materials. The answers from individual IBMers may not reflect the official position of IBM management. Where appropriate, my own commentary will be in italics.)
You are in the wrong session! Go to "Ask the eXperts" session next door!
The TSM GUI sucks! Are there any plans to improve it?
Yes, we are aware that products like IBM XIV have raised the bar for what people expect from graphical user interfaces. We have plans to improve the TSM GUI. IBM's new GUI for the SAN Volume Controller and Storwize V7000 has been well-received, and will be used as a template for the GUIs of other storage hardware and software products. The GUI uses the latest HTML5, Dojo widgets and AJAX technologies, eliminating Java dependencies on the client browser.
Can we run the TSM Admin GUI from a non-Windows host?
IBM has plans to offer this. Most likely, this will be browser-based, so that any OS with a modern browser can be used.
As hard disk drives grow larger in capacity, RAID-5 becomes less viable. What is IBM doing to address this?
IBM is aware of this problem. IBM offers RAID-DP on the IBM N series, RAID-X on the IBM XIV, and RAID-6 on its other disk systems.
TPC licensing is outrageous! What is IBM going to do about it?
About 25 percent of DS8000 disk systems have SSD installed. Now that IBM DS8000 Easy Tier supports "any two" tiers, roughly 50 percent of DS8000 now have Easy Tier activated. No idea on how Easy Tier has been adopted on SVC or Storwize V7000.
We have an 8-node SVC cluster, should we put 8 SSD drives into a single node-pair, or spread them out?
We recommend putting a separate Solid-State Drive in each SVC node, with RAID-1 between nodes of a node-pair. By separating the SSD across I/O groups, you can reduce node-to-node traffic.
How well has SVC 6.2 been adopted?
The inventory call-home data is not yet available. The only SVC hardware model that does not support this level of software was the 2145-4F2 introduced in 2003. Every other model since then can be updated to this level.
Will IBM offer 600GB FDE drives for the IBM DS8700?
Currently, IBM offers 300GB and 450GB 15K RPM drives with the Full-Disk Encryption (FDE) capability for the DS8700, and 450GB and 600GB 10K RPM drives with FDE for the IBM DS8800. IBM is working with its disk suppliers to offer FDE on other disk capacities, and on SSD and NL-SAS drives as well, so that all can be used with IBM Easy Tier.
Is there a reason for the feature lag between the Easy Tier capabilities of the DS8000, and that of the SVC/Storwize V7000?
We have one team for Easy Tier, so they implement it first on DS8000, then port it over to SVC/Storwize V7000.
Does it even make sense to have separate storage tiers, especially when you factor in the cost of SVC and TPC to make it manageable?
It depends! We understand this is a trade-off between cost and complexity. Most data centers have three or more storage tiers already, so products like SVC can help simplify interoperability.
Are there best practices for combining SVC with DS8000? Can we share one DS8000 system across two or more SVC clusters?
Yes, you can share one DS8000 across multiple SVC clusters. DS8000 has auto-restripe, so consider having two big extent pools. The queue depth is 3 to 60, so aim to have up to 60 managed disks on your DS8000 assigned to SVC. The more managed disks the better.
The IBM System Storage Interoperation Center (SSIC) site does not seem to be designed well for the SAN Volume Controller.
Yes, we are aware of that. It was designed based on traditional Hardware Compatibility Lists (HCL), but storage virtualization presents unique challenges.
How does the 24-hour learning period work for IBM Easy Tier? We have batch processing that runs from 2am to 8am on Sundays.
You can have Easy Tier monitor across this batch job window, and turn Easy Tier management between tiers on and off as needed.
Now that NetApp has acquired LSI, is the DS3000 still viable?
Yes, IBM has a strong OEM relationship with both NetApp and LSI, and this continues after the acquisition.
If we have managed disks from a DS8000 multi-rank extent pool assigned to multiple SVC clusters, won't this affect performance?
Yes, possibly. Keep managed disks on separate extent pools if this is a big concern. A PERL script is available to re-balance SVC striped volumes as needed after these changes.
Is the IBM [TPC Reporter] a replacement for IBM Tivoli Storage Productivity Center?
No, it is software, available at no additional charge, that provides additional reporting to those who have already licensed Tivoli Storage Productivity Center 4.1 and above. It will be updated as needed when new versions of Productivity Center are released.
We are experiencing lots of stability issues with SDD, SDD-PCM and SDD-DSM multipathing drivers. Are these getting the development attention they deserve?
IBM's direction is to shift toward native OS-based multipathing drivers.
Is anyone actually thinking of deploying public cloud storage in the near-term?
A few hands in the audience were raised.
None of the IBM storage devices seem to have [REST API]. Cloud storage providers are demanding this. What are IBM plans?
IBM plans to offer REST on SONAS. IBM uses SONAS internally for its own cloud storage offerings.
If you ask a DB2 specialist, an AIX specialist, and a System Storage specialist, on how to configure System p and System Storage for optimal performance, you get three different answers. Are there any IBMers who are cross-functional that can help?
Yes, for example, Earl Jew is an IBM Field Technical Support Specialist (FTSS) for both System p and Storage, and can help you with that.
Both Oracle and Microsoft recommend RAID-10 for their applications.
Don't listen to them. Feel free to use RAID-5, RAID-6 or RAID-X instead.
Resizing SVC source volumes forces ongoing FlashCopy or Metro Mirror relationships to be stopped. Does IBM plan to address this?
Currently, you have to stop, resize both source and target, then start the relationship again. Consider getting IBM Tivoli Storage Productivity Center for Replication (TPC-R).
IBM continues to support this for existing clients. For new deployments, IBM offers SONAS and the Information Archive (IA).
When will I be able to move SVC volumes between I/O groups?
You can today, but it is disruptive to the operating system. IBM is investigating making this less disruptive.
Will XIV ever support the mainframe?
It does already, with support for both Linux and z/VM today. For VSE support, use SVC with XIV. For those with the new zBX extension, XIV storage can be used with all of the POWER and x86-based operating systems supported. IBM has no plans to offer direct FICON attachment for z/OS or z/TPF.
Not a question - Kudos to the TSM and ProtecTIER team in supporting native IP-based replication!
Thanks!
When will IBM offer POWER-based models of the XIV, SVC and other storage devices?
IBM's decision to use industry-standard x86 technology has proven quite successful. However, IBM revisits this decision every few years, and the most recent review again determined that a change was not worth making. A POWER-based model might not beat the price/performance of the current x86 models, and maintaining two separate code bases would hinder development of new innovations.
We have both System i and System z, what is IBM doing to address the fact that PowerHA and GDPS are different?
IBM TPC-R has a service offering extension to support "IBM i" environments. GDPS plans to support multi-platform environments as well.
This was a great interactive session. I am glad everyone stayed late Thursday evening to participate in this discussion.
Over on the Tivoli Storage Blog, there is an exchange over the concept of a "Storage Hypervisor". This started with fellow IBMer Ron Riffe's blog post [Enabling Private IT for Storage Cloud -- Part I], with a promise to provide parts 2 and 3 in the next few weeks. Here's an excerpt:
"Storage resources are virtualized. Do you remember back when applications ran on machines that really were physical servers (all that “physical” stuff that kept everything in one place and slowed all your processes down)? Most folks are rapidly putting those days behind them.
In August, Gartner published a paper [Use Heterogeneous Storage Virtualization as a Bridge to the Cloud] that observed “Heterogeneous storage virtualization devices can consolidate a diverse storage infrastructure around a common access, management and provisioning point, and offer a bridge from traditional storage infrastructures to a private cloud storage environment” (there’s that “cloud” language). So, if I’m going to use a storage hypervisor as a first step toward cloud enabling my private storage environment, what differences should I expect? (good question, we get that one all the time!)
The basic idea behind hypervisors (server or storage) is that they allow you to gather up physical resources into a pool, and then consume virtual slices of that pool until it’s all gone (this is how you get the really high utilization). The kicker comes from being able to non-disruptively move those slices around. In the case of a storage hypervisor, you can move a slice (or virtual volume) from tier to tier, from vendor to vendor, and now, from site to site all while the applications are online and accessing the data. This opens up all kinds of use cases that have been described as “cloud”. One of the coolest is inter-site application migration.
A good storage hypervisor helps you be smart.
Application owners come to you for storage capacity because you’re responsible for the storage at your company. In the old days, if they requested 500GB of capacity, you allocated 500GB off of some tier-1 physical array – and there it sat. But then you discovered storage hypervisors! Now you tell that application owner he has 500GB of capacity… What he really has is a 500GB virtual volume that is thin provisioned, compressed, and backed by lower-tier disks. When he has a few data blocks that get really hot, the storage hypervisor dynamically moves just those blocks to higher tier storage like SSD’s. His virtual disk can be accessed anywhere across vendors, tiers and even datacenters. And in the background you have changed the vendor storage he is actually sitting on twice because you found a better supplier. But he doesn’t know any of this because he only sees the 500GB virtual volume you gave him. It’s 'in the cloud'."
"Let’s start with a quick walk down memory lane. Do you remember what your data protection environment looked like before virtualization? There was a server with an operating system and an application… and that thing had a backup agent on it to capture backup copies and send them someplace (most likely over an IP network) for safe keeping. It worked, but it took a lot of time to deploy and maintain all the agents, a lot of bandwidth to transmit the data, and a lot of disk or tapes to store it all. The topic of data protection has modernized quite a bit since then.
Fast forward to today. Modernization has come from three different sources – the server hypervisor, the storage hypervisor and the unified recovery manager. The end result is a data protection environment that captures all the data it needs in one coordinated snapshot action, efficiently stores those snapshots, and provides for recovery of just about any slice of data you could want. It’s quite the beautiful thing."
At this point, you might scratch your head and ask "Does this Storage Hypervisor exist, or is this just a theoretical exercise?" The answer of course is "Yes, it does exist!" Just like VMware offers vSphere and vCenter, IBM offers block-level disk virtualization through the SAN Volume Controller (SVC) and Storwize V7000 products, with full management support from Tivoli Storage Productivity Center Standard Edition.
SVC has supported every release of VMware since version 2.5. IBM is the leading reseller of VMware, so it makes sense for IBM and VMware development to collaborate and make sure all the products run smoothly together. SVC presents volumes that can be formatted with the VMFS file system to hold your VMDK files, accessible via the FCP protocol. IBM and VMware have some key synergies:
Management integration with Tivoli Storage Productivity Center and VMware vCenter plug-in
VAAI support: Hardware-assisted locking, hardware-assisted zeroing, and hardware-assisted copying. Some of the competitors, like EMC VPLEX, don't have this!
Space-efficient FlashCopy. Let's say you need 250 VM images, all running a particular level of Windows. Boot volumes of 20GB each would consume 5000GB (5 TB) of capacity. Instead, create a Golden Master volume. Then, take 249 copies with space-efficient FlashCopy, which only consumes space for the modified portions of the new volumes. For each copy, make the necessary changes like unique hostname and IP address, changing only a few blocks of data each. The end result? 250 unique VM boot volumes in less than 25GB of space, a 200:1 reduction! (See the capacity sketch after this list.)
Support for VMware's Site Recovery Manager using SVC's Metro Mirror or Global Mirror features for remote-distance replication.
Data center federation. SVC allows you to seamlessly do vMotion from one datacenter to another using its "stretched cluster" capability. Basically, SVC makes a single image of the volume available to both locations, and stores two physical copies, one in each location. You can lose either datacenter and still have uninterrupted access to your data. VMware's HA or Fault Tolerance features can kick in, same as usual.
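To put rough numbers behind the space-efficient FlashCopy item above, here is a minimal back-of-the-envelope sketch in Python. The 0.1 percent change rate per clone is an assumption chosen to reflect the "changing only a few blocks of data each" scenario, not a measured figure.

    # Back-of-the-envelope capacity math for the space-efficient FlashCopy example.
    # Assumption: each clone rewrites only ~0.1% of its blocks (hostname, IP address).

    vm_images   = 250      # VM boot volumes needed
    volume_gb   = 20       # size of each boot volume in GB
    change_rate = 0.001    # assumed fraction of each clone that gets rewritten

    fully_provisioned_gb = vm_images * volume_gb                      # 5,000 GB (5 TB)
    golden_master_gb     = volume_gb                                  # one full copy
    clone_overhead_gb    = (vm_images - 1) * volume_gb * change_rate  # ~5 GB total
    space_efficient_gb   = golden_master_gb + clone_overhead_gb       # ~25 GB

    print(f"Fully provisioned: {fully_provisioned_gb:,.0f} GB")
    print(f"Space-efficient:   {space_efficient_gb:,.1f} GB")
    print(f"Reduction:         {fully_provisioned_gb / space_efficient_gb:,.0f}:1")

Run as-is, this prints roughly a 200:1 reduction, matching the example; the actual savings depend entirely on how far each clone diverges from the Golden Master over time.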
But unlike tools that work only with VMware, IBM's storage hypervisor works with a variety of server virtualization technologies, including Microsoft Hyper-V, Xen, OracleVM, Linux KVM, PowerVM, z/VM and PR/SM. This is important, as a recent poll on the Hot Aisle blog indicates that [44 percent run 2 or more server hypervisors]!
Join the conversation! The virtual dialogue on this topic will continue in a [live group chat] this Friday, September 23, 2011 from 12 noon to 1pm EDT. Join me and about 20 other top storage bloggers, key industry analysts and IBM Storage subject matter experts to discuss storage hypervisors and get questions answered about improving your private storage environment.
IBM Storage Strategy for the Smarter Computing Era
I presented this session on Thursday morning. It is a session I give frequently at the IBM Tucson Executive Briefing Center (EBC). IBM launched [Smarter Computing initiative at IBM Pulse conference]. My presentation covered the role of storage in Business Analytics, Workload Optimized Systems, and Cloud Computing.
Layer 8: Cloud Computing and the new IT Delivery Model
Ed Batewell, IBM Field Technical Support Specialist, presented this overview on Cloud Computing. The "Layer 8" is a subtle reference to the [7-layer OSI Model] for networking protocols. Ed cited insights from the [2011 IBM Global CIO Survey]. Of the 3,000 companies surveyed, 60 percent plan to use or deploy clouds. In the USA, 70 percent of CIOs have significant plans for cloud within the next 3-5 years. These numbers are double the statistics gleaned from the 2009 Global CIO survey. Clouds are one of IBM's big four initiatives, expected to generate $7 Billion USD in annual revenues by 2015.
IBM is recognized in the industry as one of the "Big 5" cloud vendors (Google, Yahoo, Microsoft, and Amazon round out the rest). As such, IBM has contributed to the industry a set of best practices known as the [Cloud Computing Reference Architecture (36-page document)]. As is typical for IBM, this architecture is end-to-end complete, covering the three main participants for successful cloud deployments:
Consumers: the people and systems that use cloud computing services
Providers: the people, infrastructure and business operations needed to deliver IT services to consumers
Developers: the people and their development tools that create apps and platforms for cloud computing
IBM is working hard to eliminate all barriers to adoption for Cloud Computing. [Mirage image management] can patch VM images offline to address "Day 0" viruses. [Hybrid Cloud Integrator] can help integrate new Cloud technologies with legacy applications. [IBM Systems Director VMcontrol] can manage VM images from z/VM on the mainframe, to PowerVM on UNIX servers, to VMware, Microsoft, Xen and KVM for x86 servers. IBM's [Cloud Service Provider Platform (CSP2)] is designed for Telecoms to offer Cloud Computing services. IBM CloudBurst is a "Cloud-in-a-Can" optimized stack of servers, storage and switches that can be installed in five days and comes in various "tee-shirt sizes" (Small, Medium, Large and Extra Large), depending on how many VMs you want to run.
Ed mentioned that companies trying to build their own traditional IT applications and environments, in an effort to compete against the cost-effective Clouds, reminded him of Thomas Thwaites' project of building a toaster from scratch. You can watch the [TED video, 11 minutes]:
An interesting project is [Reservoir], in which IBM is working with other industry leaders to develop a way to seamlessly migrate VMs from one location to another, globally, without requiring shared storage, SAN zones or Ethernet subnets. This is similar to how energy companies buy and sell electricity to each other as needed, or the way telecommunications companies allow roaming across each other's networks.
IBM System Networking - Convergence
Jeff Currier, IBM Executive Consultant for the new IBM System Networking group, presented this session on Network Convergence. Storage is expected to grow 44x, from 0.8 [Zettabytes] in 2009 to 35 Zettabytes by the year 2020, so the role of the network is growing in importance. IBM refers to this converged, lossless Ethernet network as "Converged Enhanced Ethernet" (CEE), while Cisco uses the term "Data Center Ethernet" (DCE), and the rest of the industry uses "Data Center Bridging" (DCB).
To make this happen, we need to replace the Spanning Tree Protocol [STP], which prevents loops in a multi-hop network configuration, with a new Layer 2 Multipathing (L2MP) protocol. The two competing proposals are Shortest Path Bridging (IEEE 802.1aq) and Transparent Interconnection of Lots of Links (IETF TRILL).
All roads lead to Ethernet. While FCoE has not caught on as fast as everyone hoped, iSCSI has benefited from all the enhancements to the Ethernet standard. iSCSI works in both lossy and lossless versions of Ethernet, and seems to be the preferred choice for new greenfield deployments for Small and Medium sized Businesses (SMB). Larger enterprises continue to use Fibre Channel (FCP and FICON), but might use single-hop FCoE from the servers to top-of-rack switches. Both iSCSI and FCoE scale well, but FCoE is considered more efficient.
IBM has a strategy, and is investing heavily in these standards, technologies, and core competencies.
Can you believe it has been five years since I started blogging?
(If you absolutely abhor the navel-gazing associated with blogging-about-blogging posts, then by all means stop reading now!)
Back in July 2005, IBM decided to merge two brands, IBM eServer and IBM TotalStorage, into a single all-encompassing "IBM Systems" brand. Thus the TotalStorage brand became the "IBM System Storage" product line of the "IBM Systems" brand. The next six months were spent renaming some (not all) of the products. The following January, I was named the Marketing Strategist for this new product line, with the mission to help promote the new naming convention.
We looked at possibly doing a regularly scheduled podcast, but nobody back then, myself included, was familiar with audio editing tools. Instead, we chose a blog. Most blogs at IBM are internal, safely hidden behind the firewall, accessible only to IBM employees. I wanted mine to be different: accessible to the public, clients, prospects, IBM Business Partners, and yes, even those working for IBM's various competitors. One thing I like about blogs is that if you have a typo, or make a mistake, you can go back and correct it after it has been posted.
Marketing through social media is quite different from traditional marketing techniques. Management was supportive, but legal wanted to review and approve everything I wrote before I posted it to my blog. Official IBM Press Releases, for example, go through a dozen reviews before they are finally made public. I refused; this kind of review and approval would ruin the blogging process.
Fortunately, this blog was not my first attempt at technical writing. Our legal counsel reviewed my past trip reports from various conferences, and decided to let me blog without prior review. Occasionally, someone will review a post after it has been published and ask me to make some corrections. It reminds me of my favorite saying used heavily within IBM:
Despite these delays, we managed to launch this blog in September 2006, just in time to celebrate the 50th anniversary of disk systems. IBM introduced the industry's first commercial disk system on September 13, 1956.
Over the years, this blog has helped sales reps and IBM Business Partners close deals, and address the FUD their prospects heard from competition. I have helped my readers get in touch with the right people within IBM. And, I have "sent the elevator back down", helping other IBMers launch their own blogs, including [Barry Whyte], [Elisabeth Stahl], and [Anthony Vandewerdt].
Today, bloggers have a profound impact on the world. Not everyone has a positive view on this. Bloggers and other users of social media have been seen as whistle-blowers for fraudulent corporations, as activists against corrupt governments and dictators, and as subject matter experts and fact checkers referenced during television and radio newscasts. In a recent movie, one of the major characters was a trouble-making blogger, and another character describes his blogging as nothing more than "graffiti with punctuation."
I want to thank all of my readers for making this the #1 most influential blog on IBM DeveloperWorks in 2011! This blog has been [published in a series of books], Inside System Storage Volume I and Volume II. And yes, before you all ask in the comments below, I am actively working on Volume III.
For a bit of nostalgia, I invite you to read my first 21 blog posts that I posted back in [September 2006].
I have been working on Information Lifecycle Management (ILM) since before they coined the phrase. There were several break-out sessions on the third day at the [IBM System Storage Technical University 2011] related to new twists to ILM.
The Intelligent Storage Service Catalog (ISSC) and Smarter ILM
Hans Ammitzboll, Solution Rep for IBM Global Technology Services (GTS), presented an approach to ILM focused on using different storage products for different tiers. Is this new? Not at all! The original use of the phrase "Information Lifecycle Management" was coined in the early 1990s by StorageTek to help sell automated tape libraries.
Unfortunately, disk-only vendors started using the term ILM to refer to disk-to-disk tiering inside the disk array. Hans feels it does not make sense to put the least expensive penny-per-GB 7200 RPM disk inside the most expensive enterprise-class high-end disk arrays.
IBM GTS manages not only IBM's internal operations, but the IT operations of hundreds of other clients. To help manage all this storage, they developed software to supplement reporting, monitoring and movement of data from one tier to another.
The Intelligent Storage Service Catalog (ISSC) can save up to 80 percent of the planning time for managing storage. What did people use before? Hans poked fun at chargeback and showback systems that "offer savings" but don't actually "impose savings". He referred to these as Name-and-Shame reports, which simply list the top 10 offenders of storage usage.
His storage pyramid involves a variety of devices, with IBM DS8000, SVC and XIV at the high end, midrange disk like the Storwize V7000 in the middle, and blended disk-and-tape solutions like SONAS and the Information Archive (IA) for the lower tiers.
To help people understand these concepts, IBM developed the [Thinking Worlds Player] game.
Managing your Data with Policies on SONAS
Mark Taylor, IBM Advanced Technical Services, presented the policy-driven automation of IBM's Scale-Out NAS (SONAS). A SONAS system can hold 1 to 256 file systems, and each file system is further divided into fileset containers. Think of fileset containers like 'tree branches' of the file system.
SONAS supports policies for file placement, file movement, and file deletion. These are SQL-like statements that are then applied to specific file systems in the SONAS. Input variables include date last modified, date last accessed, file name, file size, fileset container name, user id and group id. You can choose to have the rules be case-sensitive or case-insensitive. The rules support macros. A macro pre-processor can help simplify calculations and other definitions that are used repeatedly.
Each file system in SONAS consists of one or more storage pools. For file systems with multiple pools, file placement policies determine in which pool each file is placed. Normally, when a set of files is in a specific sub-directory on other NAS systems, all the files will be on the same type of disk. With SONAS, some files can be placed on 15K RPM drives, and other files on slower 7200 RPM drives. This file virtualization separates the logical grouping of files from their physical placement.
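The flavor of these placement rules can be illustrated with a small Python sketch. To be clear, the rules and file attributes below are hypothetical examples written in ordinary Python, not actual SONAS policy statements; the real policies are SQL-like and use the input variables described above.

    from datetime import date, timedelta

    # Hypothetical file metadata, mirroring the policy input variables listed above.
    files = [
        {"name": "video1.mp4", "size_gb": 4.0, "fileset": "media",
         "last_accessed": date(2011, 3, 1)},
        {"name": "report.doc", "size_gb": 0.1, "fileset": "finance",
         "last_accessed": date(2011, 9, 1)},
    ]

    # Illustrative placement/movement logic; first matching rule wins.
    def choose_pool(f, today=date(2011, 9, 15)):
        age = today - f["last_accessed"]
        if f["name"].lower().endswith(".mp4") and f["size_gb"] > 1:
            return "nearline_7200rpm"    # large media files go straight to cheap disk
        if age > timedelta(days=90):
            return "nearline_7200rpm"    # cold data belongs on the slower tier
        return "performance_15krpm"      # everything else stays on 15K RPM drives

    for f in files:
        print(f["name"], "->", choose_pool(f))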
Once files are placed, other policies can be written to migrate from one disk pool to another, migrate from disk to tape, or delete the file. Migrating from one disk pool to another is done by relocation. The next time the file is accessed, it will be accessed directly from the new pool. When migrating from disk to tape, a stub is left in the directory structure metadata, so that subsequent access will cause the file to be recalled automatically from tape, back to disk. Policies can determine which storage pool files are recalled to when this happens.
Migrating from disk to tape involves sending the data from SONAS to an external storage pool manager, such as an IBM Tivoli Storage Manager (TSM) server connected to a tape library. SONAS supports pre-migration, which allows the data to be copied to tape but left on disk until space needs to be freed up. For example, a policy with THRESHOLD(90,70,50) will kick in when the file system is 90 percent full: files will be migrated (moved) to tape until occupancy drops to 70 percent, and then files will be pre-migrated (copied) to tape until it reaches 50 percent.
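As a rough illustration of how such a threshold policy behaves, here is a conceptual sketch in Python; it is not SONAS or TSM code, just the decision logic spelled out.

    # Conceptual sketch of a THRESHOLD(high, low, premig) policy such as THRESHOLD(90,70,50).
    # At `high`% occupancy, files are migrated (moved) to tape until occupancy falls to
    # `low`%, then additional files are pre-migrated (copied, but left on disk) until
    # only `premig`% of the file system holds disk-only data.

    def run_threshold_policy(occupancy_pct, high=90, low=70, premig=50):
        actions = []
        if occupancy_pct >= high:
            actions.append(f"migrate files to tape until occupancy <= {low}%")
            actions.append(f"pre-migrate (copy) files to tape until only {premig}% "
                           "of the file system remains disk-only")
        return actions or ["no action"]

    for pct in (85, 92):
        print(f"{pct}% full ->", run_threshold_policy(pct))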
Policies to delete files can apply to both disk and tape pools. Files deleted on tape remove the stub from the directory structure metadata and notify the external storage pool manager to clean up its records for the tape data.
If this all sounds like a radically new way of managing data, it isn't. Many of these functions are based on IBM's Data Facility Storage Management Subsystem (DFSMS) for the mainframe. In effect, SONAS brings mainframe-class functionality to distributed systems.
Understanding IBM SONAS Use Cases
For many, the concept of a scale-out NAS is new. Stephen Edel, IBM SONAS product offering manager, presented a variety of use cases where SONAS has been successful.
First, let's consider backup. IBM SONAS has built-in support for Tivoli Storage Manager (TSM), as well as support for the NDMP industry-standard protocol, for use with Symantec NetBackup, CommVault Simpana, and EMC Legato NetWorker. While many NAS solutions support NDMP, IBM SONAS can support up to 128 sessions per interface node, and up to 30 interface nodes, for parallel processing. SONAS has a high-speed file scan to identify files to be backed up, and will pre-fetch the small files into cache to speed up the backup process. A SONAS system can support up to 256 file systems, and each file system can be backed up on its own unique schedule if you like. Different file systems can be backed up to different backup servers.
SONAS also has anti-virus support, with your choice of Symantec or McAfee. An anti-virus scan can be run on demand, as needed, or as files are individually accessed. When a Windows client reads a file, SONAS will determine whether it has already been scanned with the most recent anti-virus signatures, and if not, will scan it before allowing the file to be read. SONAS will also scan newly created files.
Successful SONAS deployments addressed the following workloads:
content capture including video capture
content distribution
file production/rendering
high performance computing, research and business analytics
"Cheap and Deep" archive
worldwide information exchange and geographically distant collaboration
cloud hosting
SONAS is selling well in Government, Universities, Healthcare, and Media/Entertainment, but is not limited to these industries. It can be used for private cloud deployments and public cloud deployments. Having centralized management for Petabytes of data can be cost-effective either way.
IBM SONAS applies the latest technologies to bring Smarter ILM to a variety of workloads and use cases.
Wrapping up my coverage of the [IBM System Storage Technical University 2011], I attended a few sessions on Friday morning. The last session was Glenn Anderson's "IT Game Changers: the IT Professional's Guide to Becoming a Technology Trailblazer." Glenn used to run the Storage University events, but now is the conference manager for the System z mainframe events.
Glenn organized this talk around lessons drawn from several books.
Glenn suggested that IT professionals should understand the dissatisfaction with IT that is driving companies to switch over to Cloud Computing. IT professionals should adopt a service-oriented approach, realize the full potential of new disruptive technologies, and know when to "jump the curve" to the next generation of technology. For example, IT professionals should lead the movement to Cloud. If you build your own private cloud, or purchase some time for instances on a public cloud, you will be in a better position to be the "trusted advisor" to IT management.
CIOs should encourage IT to be part of the corporate strategy, but may have to fix the broken IT funding model. The IT department should be a "value center", not a "cost center" as it has traditionally been treated. When treated as a "cost center", IT departments focus only on cost reductions, rather than looking at ways the IT department can help drive revenues, improve customer service, or enhance employee productivity. A well-organized IT department can be a competitive advantage.
Taking a "service-oriented" approach allows IT and Business Process to come together. Often times, IT and business professionals don't communicate well, and this new service-oriented approach can bridge the gap. Service Oriented Architecture [SOA] can help connect existing legacy applications to the new Cloud Computing environment.
IT budgets should consist of two parts: strategic funding for new IT projects, and an operational budget for keeping current applications running. Roughly 45 percent of capital investment in the USA goes toward IT. Too often, the IT department is focused on itself, on technology and reducing costs, and not enough on aligning IT with business transformation. When IT is used in conjunction with a sound business strategy, there can be a significant payoff.
After 550 years, the printing press and printed materials are being pushed from center stage. While other electronic media like radio and television have been around for a while, the internet and digital publishing are constantly available, and represent a shift away from traditional printed materials.
When evaluating new technologies, IT professionals should ask themselves a few questions. Is it easy to use? Does it enable people to connect in new ways? Is it more cost-effective, or tap new sources of revenue? Does it shift power from one player to another? A new intellectual ethic is taking hold. Becoming an IT Game Changer can help stay one step ahead as Cloud Computing and other new IT platforms are adopted.
We had our first "Future of IT Storage" Lunch-and-Learn here in Indianapolis, IN. We held it at [Harry & Izzy's Restaurant], which looks like it has been in business for quite a while, but actually opened only four years ago. It is the sister restaurant to St. Elmo's next door, which has been running since 1902, so it maintains a sense of that heritage, but with a more casual atmosphere.
Please note that in the wake of Hurricane Irene, the [Burlington, MA (Boston Area) event] has been postponed, probably to October or November. We have already notified all the people who signed up, but in case you planned just to show up, I wanted to let you know here in this blog.
Special thanks to Karen Harrison and Kerry Ingram for their help in setting up this event! Also a shout-out to Leanna and Amy, our two waitresses who served us today!
Last week, fellow IBMer Ron Riffe started his three-part series on the Storage Hypervisor. I discussed Part I already in my previous post [Storage Hypervisor Integration with VMware]. We wrapped up the week with a Live Chat with over 30 IT managers, industry analysts, independent bloggers, and IBM storage experts.
"The idea of shopping from a catalog isn’t new and the cost efficiency it offers to the supplier isn’t new either. Public storage cloud service providers seized on the catalog idea quickly as both a means of providing a clear description of available services to their clients, and of controlling costs. Here’s the idea… I can go to a public cloud storage provider like Amazon S3, Nirvanix, Google Storage for Developers, or any of a host of other providers, give them my credit card, and get some storage capacity. Now, the “kind” of storage capacity I get depends on the service level I choose from their catalog.
Most of today’s private IT environments represent the complete other end of the pendulum swing – total customization. Every application owner, every business unit, every department wants to have complete flexibility to customize their storage services in any way they want. This expectation is one of the reasons so many private IT environments have such a heavy mix of tier-1 storage. Since there is no structure around the kind of requests that are coming in, the only way to be prepared is to have a disk array that could service anything that shows up. Not very efficient… There has to be a middle ground.
Private storage clouds are a little different. Administrators we talk to aren’t generally ready to let all their application owners and departments have the freedom to provision new storage on their own without any control. In most cases, new capacity requests still need to stop off at the IT administration group. But once the request gets there, life for the IT administrator is sweet!
Here comes the request from an application owner for 500GB of new “Database” capacity (one of the options available in the storage service catalog) to be attached to some server. After appropriate approvals, the administrator can simply enter the three important pieces of information (type of storage = “Database”, quantity = 500GB, name of the system authorized to access the storage) and click the “Go” button (in TPC SE it’s actually a “Run now” button) to automatically provision and attach the storage. No more complicated checklists or time consuming manual procedures.
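To make those "three important pieces of information" concrete, here is a hypothetical sketch of what a catalog-driven request boils down to. None of the names or structures below come from Tivoli Storage Productivity Center; they are invented for illustration only.

    # Hypothetical illustration of a catalog-driven provisioning request.
    # The point is how little the administrator has to supply once a storage
    # service catalog defines everything else.

    SERVICE_CATALOG = {
        "Database": {"raid": "RAID-10", "tier": "SSD+15K", "thin_provisioned": False},
        "General":  {"raid": "RAID-6",  "tier": "NL-SAS",  "thin_provisioned": True},
    }

    def provision(service_class: str, size_gb: int, host: str) -> dict:
        """Expand a three-field request into the full set of attributes the
        storage layer needs, using the service catalog defaults."""
        attrs = dict(SERVICE_CATALOG[service_class])
        attrs.update({"size_gb": size_gb, "mapped_to_host": host})
        return attrs

    # The request from the example: 500GB of "Database" capacity for one server.
    print(provision("Database", 500, "dbserver01"))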
A storage hypervisor increases the utilization of storage resources, and optimizes what is most scarce in your environment. For Linux, UNIX and Windows servers, you typically see utilization rates of 20 to 35 percent, and this can be raised to 55 to 80 percent with a storage hypervisor. But what is most scarce in your environment? Time! In a competitive world, it is not big animals eating smaller ones as much as fast ones eating the slow.
Want faster time-to-market? A storage hypervisor can help reduce the time it takes to provision storage, from weeks down to minutes. If your business needs to react quickly to changes in the marketplace, you certainly don't want your IT infrastructure to slow you down like a boat anchor.
Want more time with your friends and family? A storage hypervisor can migrate data non-disruptively, during the week, during the day, during normal operating hours, instead of requiring scheduled down-time on evenings and weekends. As companies adopt a 24-by-7 approach to operations, there are fewer and fewer opportunities in the year for scheduled outages. Some companies get stuck paying maintenance after their warranty expires, because they were not able to move the data off in time.
Want to take advantage of the new Solid-State Drives? Most admins don't have time to figure out which applications, workloads or indexes would benefit most from this new technology. Let your storage hypervisor's automated tiering do this for you! In fact, a storage hypervisor can gather enough performance and usage statistics to determine the characteristics of your workload in advance, so that you can predict whether solid-state drives are right for you, and how much benefit you would get from them.
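That prediction is essentially a skew analysis of I/O density across extents: how much of the total I/O would the busiest slice of capacity absorb if it were moved to SSD? A minimal sketch of the idea follows; the I/O counts are made-up numbers for illustration, not output from any IBM tool.

    # Sketch of a skew analysis: sort extents by I/O density and see what fraction
    # of total I/O the hottest slice of capacity would capture on SSD.

    extent_iops = [1200, 950, 800, 40, 35, 30, 25, 20, 15, 10]   # one value per extent

    extent_iops.sort(reverse=True)
    total = sum(extent_iops)
    ssd_fraction = 0.10                     # suppose SSD can hold 10% of the extents
    hot = extent_iops[: max(1, int(len(extent_iops) * ssd_fraction))]

    print(f"Moving the hottest {ssd_fraction:.0%} of capacity to SSD would capture "
          f"{sum(hot) / total:.0%} of the I/O.")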
Want more time spent on strategic projects? A storage hypervisor allows any server to connect to any storage. This eliminates the time wasted determining when and how, and lets you focus on the what and why of your more strategic, transformational projects.
If this all sounds familiar, these are similar to the benefits one gets from a server hypervisor: better utilization of CPU resources, optimized management and administration time, and the agility and flexibility to deploy new technologies and decommission older ones.
"Server virtualization is a fairly easy concept to understand: Add a layer of software that allows processing capability to work across multiple operating environments. It drives both efficiency and performance because it puts to good use resources that would otherwise sit idle.
Storage virtualization is a different animal. It doesn't free up capacity that you didn't know you had. Rather, it allows existing storage resources to be combined and reconfigured to more closely match shifting data requirements. It's a subtle distinction, but one that makes a lot of difference between what many enterprises expect to gain from the technology and what it actually delivers."
Jon Toigo on his DrunkenData blog brings back the sanity with his post [Once More Into the Fray]. Here is an excerpt:
"What enables me to turn off certain value-add functionality is that it is smarter and more efficient to do these functions at a storage hypervisor layer, where services can be deployed and made available to all disk, not to just one stand bearing a vendor’s three letter acronym on its bezel. Doesn’t that make sense?
I think of an abstraction layer. We abstract away software components from commodity hardware components so that we can be more flexible in the delivery of services provided by software rather than isolating their functionality on specific hardware boxes. The latter creates islands of functionality, increasing the number of widgets that must be managed and requiring the constant inflation of the labor force required to manage an ever expanding kit. This is true for servers, for networks and for storage.
Can we please get past the BS discussion of what qualifies as a hypervisor in some guy’s opinion and instead focus on how we are going to deal with the reality of cutting budgets by 20% while increasing service levels by 10%. That, my friends, is the real challenge of our times."
Did you miss out on last Friday's Live Chat? We are doing it again this Friday, covering parts I and II of Ron's posts, so please join the conversation! The virtual dialogue on this topic will continue in another [Live Chat] on September 30, 2011 from 12 noon to 1pm Eastern Time.
Last February, IBM introduced Watson on the Jeopardy! game show. These three shows were re-aired this week in the United States. I wrote a series of blog posts back then:
This last one on how to build your own Watson, Jr. has gotten over 69,000 hits! While several people told me they plan to build their own, I have not heard back from anyone yet, so perhaps it is taking longer than expected.
IBM and WellPoint announced this week that they will be [putting Watson to work] in healthcare. [WellPoint] is one of the largest health benefits companies in the United States, with over 70 million people served through its affiliate plans and its various subsidiaries. I am one of the development lab advocates for WellPoint, and have been proud to work with the account team to help WellPoint achieve their goals.
This marks the first commercial deployment of IBM Watson. This is a joint effort. IBM will develop the base IBM Watson for healthcare platform, and Wellpoint will then develop healthcare-specific solutions to run on this platform. Watson's ability to analyze the meaning and context of human language, and quickly process vast amounts of information to suggest options targeted to a patient's circumstances, can assist decision makers, such as physicians and nurses, in identifying the most likely diagnosis and treatment options for their patients.
Is this going to put doctors out of business? No. Physicians find it challenging to read and understand hundreds or thousands of pages of text, and to put that knowledge into practice. IBM Watson, on the other hand, can scan through hundreds of millions of pages in just a few seconds to help answer a question or provide recommendations. Together, doctors with access to IBM Watson will be able to improve the quality and effectiveness of medical care.
From an insurance point of view, improving the quality of care will help reduce medical mistakes and malpractice lawsuits. This is a win-win for everyone except ambulance-chasing lawyers!
Every September, IBM Tucson spends a Wednesday or Saturday helping out local non-profit charities. The event is organized by the local United Way. My first one was packing boxes of food for the [Community Food Bank of Southern Arizona] on September 12, 2001, the day after the [tragic events in New York and Washington DC]. The mindless activity of putting a bottle, bag or can into one box after another helped us cope with the shock and awe that week.
So, it seemed fitting on the 10th anniversary of that event to go back to the Community Food Bank and help pack boxes of food. The facility received nearly $200,000 in donations in response to the [shooting of US Congresswoman Gabrielle Giffords]. Her husband, astronaut Mark Kelly, suggested that donations go in part to the Tucson Community Food Bank, and with the money they were able to expand operations, dedicating a portion as the [Gabrielle Giffords Family Assistance Center] to bring together food handouts with the [Supplemental Nutrition Assistance Program] (SNAP) for food stamps and the Women, Infants, and Children (WIC) program. One-stop assistance!
This year, nearly 500 Tucson IBMers turned out to complete 22 projects at 17 nonprofit agencies. We were not alone; we were joined by volunteers from Bank of America, Texas Instruments, Tucson Medical Center, Geico Insurance, University of Arizona, Cox Cable TV, Desert Diamond Casinos, The Westin La Paloma Resort and Spa, the Arizona Lottery, Community Partnership of Southern Arizona (CPSA), Pizza Hut, Arizona Daily Star, 94.9 MixFM Radio, BizTucson, and News 4 Tucson (our local NBC affiliate).
In a bit of competition, our team, Team B, of 14 IBMers, competed against another team, Team A, of 20 people. Despite having fewer people, we were able to pack 746 boxes, representing 20,000 pounds of food, beating out Team A which only packed 18,000 pounds. (I have chosen not to identify anyone on Team A, no need to rub their noses in it. This was all for a good cause.)
Each box contained cereal, canned evaporated milk, canned vegetables and fruits, fruit juice, rice, and dry beans. My job on the assembly line was to put two half-gallon jugs of grape juice in the box and move it down the line.
What lessons can a team of people learn from an activity like this?
When you put a bunch of efficiency experts from IBM on a task, they will self-organize and self-manage for optimum performance, just as we do in our regular day jobs.
No matter what you plan in advance, individual personalities and strengths surface, prompting minor adjustments to processes and procedures to improve efficiency.
In an assembly line process, where each person has to wait for the person before them to finish their assigned task, it becomes obvious who is not pulling their fair share of the work. In this manner, everyone holds everyone else accountable for their output.
This was a great day for a good cause. The Community Food Bank qualifies for the Arizona [Working Poor Tax Credit] program. For every dollar the Community Food Bank receives, they can give 10 dollars of food to someone in need.
Special thanks to Greg Kishi for being our team leader for this event, and to Carol Tribble for taking these photographs.
Last week, US President Barack Obama declared September 2011 as "National Preparedness Month". Here is an excerpt of the press release:
Whenever our Nation has been challenged, the American people have responded with faith, courage, and strength. This year, natural disasters have tested our response ability across all levels of government. Our thoughts and prayers are with those whose lives have been impacted by recent storms, and we will continue to stand with them in their time of need. This September also marks the 10th anniversary of the tragic events of September 11, 2001, which united our country both in our shared grief and in our determination to prevent future generations from experiencing similar devastation. Our Nation has weathered many hardships, but we have always pulled together as one Nation to help our neighbors prepare for, respond to, and recover from these extraordinary challenges.
In April of this year, a devastating series of tornadoes challenged our resilience and tested our resolve. In the weeks that followed, people from all walks of life throughout the Midwest and the South joined together to help affected towns recover and rebuild. In Joplin, Missouri, pickup trucks became ambulances, doors served as stretchers, and a university transformed itself into a hospital. Local businesses contributed by using trucks to ship donations, or by rushing food to those in need. Disability community leaders worked side-by-side with emergency managers to ensure that survivors with disabilities were fully included in relief and recovery efforts. These stories reveal what we can accomplish through readiness and collaboration, and underscore that in America, no problem is too hard and no challenge is too great.
Preparedness is a shared responsibility, and my Administration is dedicated to implementing a "whole community" approach to disaster response. This requires collaboration at all levels of government, and with America's private and nonprofit sectors. Individuals also play a vital role in securing our country. The National Preparedness Month Coalition gives everyone the chance to join together and share information across the United States. Americans can also support volunteer programs through www.Serve.gov, or find tools to prepare for any emergency by visiting the Federal Emergency Management Agency's Ready Campaign website at [www.Ready.gov] or [www.Listo.gov].
In the last few days, we have been tested once again by Hurricane Irene. While affected communities in many States rebuild, we remember that preparedness is essential. Although we cannot always know when and where a disaster will hit, we can ensure we are ready to respond. Together, we can equip our families and communities to be resilient through times of hardship and to respond to adversity in the same way America always has -- by picking ourselves up and continuing the task of keeping our country strong and safe.
NOW, THEREFORE, I, BARACK OBAMA, President of the United States of America, by virtue of the authority vested in me by the Constitution and the laws of the United States, do hereby proclaim September 2011 as National Preparedness Month. I encourage all Americans to recognize the importance of preparedness and observe this month by working together to enhance our national security, resilience, and readiness.
IBM has several webinars to help you prepare for upcoming disasters.
Today, September 8, at 4pm EDT, IBM is hosting a [CloudChat on Business Resilience] that will focus on resiliency and continuity in the cloud, a timely topic considering the recent weather events on the East Coast of the U.S. This chat will include Richard Cocchiara, IBM Distinguished Engineer and CTO, IBM Business Continuity and Resiliency Services (@RichCocchiara1), and Patrick Corcoran, Global Business Development, IBM Business Continuity and Resiliency Services (@PatCorcoranIBM).
Don't think you can afford Disaster Recovery planning? Next week, September 13, I will join a few other experts to discuss freeing up much-needed funds from your tight IT budget by being more efficient. The webinar [Taming Data Growth Made Easy] is part of IBM's "IT Budget Killer" series.
Lastly, on September 21, IBM will host the webinar [Planning for Disaster Recovery in a Power Environment: Best Practices to Protect Your Data]. This will cover principal lessons learned from disasters like Hurricane Katrina and the World Trade Center, local and regional considerations for Disaster Recovery planning, planning Recovery Time Objectives (RTOs), and best practices for automation, mirroring and multi-site operational efficiencies. A customer case study from the University of Rochester Medical Center (URMC) will help reinforce the concepts, with a discussion on how a major hospital ensures Business Continuity via Contingency Planning using IBM Power Systems. The speakers include Steve Finnes, Worldwide Offering Manager for IBM Power Systems; Vic Peltz, Consulting IT Architect for WW Business Continuance Technical Marketing; and Rick Haverty, Director of IT Infrastructure at the University of Rochester Medical Center.
Hopefully, you will find these webinars useful and informative!