This blog is for the open exchange of ideas relating to IBM Systems, storage and storage networking hardware, software and services.
(Short URL for this blog: ibm.co/Pearson )
Tony Pearson is a Master Inventor, Senior IT Architect and Event Content Manager for [IBM Systems Technical University] events. With over 30 years at IBM Systems, Tony is a frequent traveler, speaking to clients at events throughout the world.
Lloyd Dean is an IBM Senior Certified Executive IT Architect in Infrastructure Architecture. Lloyd has held numerous senior technical roles at IBM during his 19 plus years at IBM. Lloyd most recently has been leading efforts across the Communication/CSI Market as a senior Storage Solution Architect/CTS covering the Kansas City territory. In prior years Lloyd supported the industry accounts as a Storage Solution architect and prior to that as a Storage Software Solutions specialist during his time in the ATS organization.
Lloyd currently supports North America storage sales teams in his Storage Software Solution Architecture SME role on the Washington Systems Center team. His current focus is IBM Cloud Private, and he will be delivering and supporting sessions at Think 2019 and Storage Technical University on the value of IBM storage in this high-value solution, part of the IBM Cloud strategy. Lloyd maintains Subject Matter Expert status across the IBM Spectrum Storage software solutions. You can follow Lloyd on Twitter @ldean0558 and on LinkedIn as Lloyd Dean.
Tony Pearson's books are available on Lulu.com! Order your copies today!
Safe Harbor Statement: The information on IBM products is intended to outline IBM's general product direction and it should not be relied on in making a purchasing decision. The information on the new products is for informational purposes only and may not be incorporated into any contract. The information on IBM products is not a commitment, promise, or legal obligation to deliver any material, code, or functionality. The development, release, and timing of any features or functionality described for IBM products remains at IBM's sole discretion.
Tony Pearson is an active participant in local, regional, and industry-specific interests, and does not receive any special payments to mention them on this blog.
Tony Pearson receives part of the revenue proceeds from sales of books he has authored listed in the side panel.
Tony Pearson is not a medical doctor, and this blog does not reference any IBM product or service that is intended for use in the diagnosis, treatment, cure, prevention or monitoring of a disease or medical condition, unless otherwise specified on individual posts.
This week I was aboard the Queen Mary in Long Beach, California! This was a business event organized by [Key Info Systems], a valued IBM Business Partner. Key Info resells IBM servers, storage and switches.
The Queen Mary retired in 1967, and has been converted into a hotel and events venue. The locals just parked their car and walked on board, but I got to stay Tuesday through Thursday in one of the cabins. It was long and narrow, with round windows! There were four dials for the bathtub: Cold Salt, Hot Fresh, Cold Fresh, and Hot Salt.
Stepping on the boat was like walking back in time through history! If you decide to go see it, check out the [Art Deco bar] at the front of the Promenade deck. The ship is still in the water, but is permanently docked, and it is sectioned off to prevent the ocean waves from affecting it, so we did not feel the nauseating rocking motion normally associated with cruise ships.
(It is with a bit of irony that we were on the Queen Mary just days after the tragedy of the [Costa Concordia], the largest Italian cruise ship, which ran aground near Isola del Giglio. The captain will have to explain how he [fell into a lifeboat] before waiting for everyone else to get safely off the shipwreck. He was certainly no [Captain Sully]! I am thankful that most of the 4,200 people aboard survived the incident.)
Lief Morin, Founder and Chief Executive for Key Info Systems, kicked off the meeting with highlights of 2011 successes. I have known Lief for years, as Key Info comes to the Tucson EBC on a frequent basis. This event was designed to give his sellers an update of what is the latest for each product line, and what to look forward to in the next 12-18 months.
The next speaker was from Vision Solutions, which provides High Availability solutions for IBM i on Power Systems. In 2010, their company nearly doubled in size with the acquisition of Double-Take, which provides data replication for x86 servers running Windows, Linux, VMware, Hyper-V and other hypervisors. The capabilities of Double-Take sounded similar to what IBM offers with [Tivoli Storage Manager FastBack] and [Tivoli Storage Manager for Virtual Environments].
Dinner at Sir Winston's
Rather than take the "Ghosts and Legends" tour, I opted for dinner at the Queen Mary's signature restaurant, Sir Winston's. This is a fancy place, so dress accordingly. If you want the Raspberry soufflé, order it early as it takes 30 minutes to prepare!
[Storwize V7000], including the new Storwize V7000 Unified configuration
Storage is an important part of the Key Info Systems revenue stream, so I was glad to have lots of questions and interactions from the audience.
Murder Mystery Dinner
The acting troupe from [Dinner Detective] put on quite the show for us! With all that is going on in the world, it is good to laugh out loud every now and then.
In other murder mystery dinners I have participated in, each person is assigned a "character" and given a script of what to say and when to say it. This was different, we got to pick our own characters. I chose "Doctor Watson", from the Sherlock Holmes series. Several attendees thought it was a double meaning with [IBM Watson], the computer that figured out the clues on Jeopardy! television game show, and has since been [put to work at Wellpoint] to help out the Healthcare industry.
After the "murder" happened, two actors portraying policemen selected members of the audience to answer questions. We didn't get a script of what to say, so everyone had to "ad lib". I was singled out as a suspect, and had fun playing along in character. One of the attendees afterwards said he was impressed that I was able to fabricate such amusing and elaborate responses to their personal and embarrassing questions. As a public speaker for IBM, I have had a lot of practice thinking quickly on my feet.
Fibre Channel and Ethernet Switches
The next two speakers gave us an update on Fibre Channel and Ethernet switches, and their thoughts on the inevitability of Fibre Channel over Ethernet (FCoE). One of the exciting new developments is the [Brocade Network Subscription] which creates a flexible pay-per-use Ethernet port rental model for customers. This is especially timely given the Financial Accounting Standards Board proposed [FASB Change 13] that affects operating leases in the balance sheet.
With the Brocade Network Subscription, you pay monthly for the ports you are using. Need more ports? Brocade will install the added gear. Using fewer ports? Brocade will take the equipment back. There is no term endpoint or residual value as with traditional leasing, so when you are done using the equipment, you can give it back at any time. This is ideal for companies that may need a lot of Ethernet ports for the next 2-3 years, but then plan to taper down, and don't want to get stuck with a long-term commitment or capital depreciation.
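The taper-down economics can be illustrated with some quick arithmetic. This is my own back-of-envelope sketch with made-up port counts and prices, not Brocade's actual pricing:

```python
# Illustrative comparison of pay-per-use ports versus a fixed-term commitment
# when port needs taper off after a few years. All numbers are hypothetical.

def subscription_cost(ports_by_month, price_per_port):
    """Total cost when you pay each month only for the ports in use."""
    return sum(ports * price_per_port for ports in ports_by_month)

# Scenario: 48 ports needed for 24 months, then taper down to 16 for 12 more.
usage = [48] * 24 + [16] * 12

pay_per_use = subscription_cost(usage, 10)  # pay only for ports in use
fixed_term = 48 * 36 * 10                   # locked in at peak for 36 months

print(pay_per_use, fixed_term)
```

Under these assumed numbers, the subscription avoids paying for the 32 idle ports during the final year.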
The last speaker was from VMware. IBM is the #1 reseller of VMware, and VMware commands an impressive 81 percent marketshare in the x86 virtualization space. The speaker presented VMware's strategy going forward, which aligns well with IBM's own strategy, to help companies Cloud-enable their existing IT infrastructures, in preparation for eventual moves to Hybrid or Public cloud deployments.
Special thanks to Lief Morin for sponsoring this event, Raquel Hernandez from IBM for coordinating my travel, and Pete, Christina and Kendrell from Key Info Systems for organizing the activities!
This is my final post on my coverage of the 30th annual [Data Center Conference]. IBM was a Platinum sponsor, and there were over 2,600 attendees, of which 27 percent were IT Directors or higher. Two thirds of the companies have 5000 employees or more. Here is a recap of the last few sessions I attended.
Best Practices for Data Center consolidation
As if the conference co-chairs aren't already super-busy, here they are presenting one of the breakout sessions. In the 1990s, consolidation was done purely to reduce total cost of ownership (TCO). Today, there are a variety of other reasons, including issues with power and cooling, service level agreements, and security.
Of the attendees polled, 25 percent plan to have more data centers in three years, and 47 percent plan to consolidate to fewer. The benefits of consolidation include economies of scale, staff reduction, reduced hardware and facilities costs, and application retirement. Challenges include dealing with politics, building new facilities to replace the old ones, and bandwidth. Here were some of the primary reasons why data center consolidation projects fail:
Human Resources (HR) issues
Resources not freed up or made available
Lack of Project Management skills
No rationalization at consolidated site
Interactive Polling Results
The last keynote session was Thursday morning. The conference co-chairs present the highlights of the interactive polling that was done during the week at this conference.
The first topic was social media. There was a lot of Twitter activity with hashtag #GartnerDC that I followed throughout the week. Most of the tweets seem to be from people who were not actually at the conference.
Some 45 percent of the attendees have implemented social media initiatives at their companies. What tooling are they using to accomplish this? There are some provided by the major ITSM vendors, tools specific for corporate social media such as Yammer, collaboration tools like Microsoft SharePoint and IBM's Lotus Connections, and public sites like Facebook and Twitter. Here were the poll results:
The next topic was focused on Mobile devices and Cloud Computing. For example, do companies store data in public cloud, or plan to in the future, for mobile devices?
One third of the attendees allow employees to bring their own tablet to work with full IT support. Only 18 percent allow employees to bring their own PC or laptop. Over 40 percent felt that their IT department was not yet ready to support smartphones.
What are the main drivers to adopt private cloud? Some are deploying private clouds as a way to defend their IT jobs from going to the public cloud. Here were the poll results:
What problems are companies trying to solve with cloud computing? Here were the poll results:
A majority of attendees that use VMware are exploring alternatives such as Linux KVM (for example, Red Hat Enterprise Virtualization, or RHEV) or Microsoft Hyper-V. What storage protocol are attendees using for their server virtualization? Here were the poll results:
The next topic was the process for IT service management. The top three were ITIL, CMMI and DevOps, with the majority using ITIL or ITIL in combination with something else. These are needed for release management, change management, performance management, capacity management and incident management. How collaborative is the relationship between IT operations and application development? Here were the poll results:
How well does IT operations contribute to business innovation? This year 38 percent were satisfied, and 33 percent unsatisfied. This was a big improvement over last year, that found 19 percent satisfied, 64 percent unsatisfied.
Building a Private Storage Cloud: Is It a Science Experiment?
While everyone understands the benefits of private and public cloud computing, there seems to be hesitation about hosted cloud storage. Some people have already adopted some form of cloud storage, and others plan to within 12 months. Here were the poll results:
The top three reasons for considering public cloud storage was to adopt lower-cost storage tier, to benefit from off-site storage, and staff constraints. The top concerns were security and performance.
The IT department will need to start thinking like a cloud provider, and perhaps adopt a hybrid cloud approach. What IT equipment can be re-used? What will the new IT operations look like in a Cloud environment? What were the primary use cases for cloud storage? Here were the poll results:
In addition to the major cloud providers (IBM, Amazon, etc.) there are a variety of new cloud storage startups to address these business needs.
So that wraps up my coverage of this conference. In addition to attending great keynote and breakout sessions, I was able to have great one-on-one discussions with clients at the Solution Showcase booth, during breaks and at meals. IBM's focus on Big Data, Workload-optimized Systems, and Cloud seems to resonate well with the analysts and attendees. I want to give special thanks to Lynda, Dana, Peggy, Hugo, David, Rick, Cris, Richard, Denise, Chloe, and all my colleagues, friends and family from Arizona for their support!
Continuing my coverage of the 30th annual [Data Center Conference], here is a recap of Wednesday morning sessions.
A Data Center Perspective on MegaVendors
The morning started with a keynote session. The analyst felt that the most strategic or disruptive companies in the past few decades were IBM, HP, Cisco, SAP, Oracle, Apple and Google. Of these, he focused on the first three, which he termed the "Megavendors", presented in alphabetical order.
Cisco enjoys high-margins and a loyal customer base with Ethernet switch gear. Their new strategy to sell UP and ACROSS the stack moves them into lower-margin business like servers. Their strong agenda with NetApp is not in sync with their partnership with EMC. They recently had senior management turn-over.
HP enjoys a large customer base and is recognized for good design and manufacturing capabilities. Their challenges are mostly organizational, distracted by changes at the top and an untested and ever-changing vision, shifting gears and messages too often. Concerns over the Itanium have not helped them lately.
IBM defies simple description. One can easily recognize Cisco as an "Ethernet Switch" company, HP as a "Printer Company", and Oracle as a "Database Company", but you can't say that IBM is an "XYZ" company, as it has re-invented itself successfully over its past 100 years, with a strong focus on client relationships. IBM enjoys high margins, a sustainable cost structure, huge resources, and a proficient sales team, and is recognized for its innovation with a strong IBM Research division. Their "Smarter Planet" vision has been effective in supporting their individual brands and unlocking new opportunities. IBM's focus on growth markets takes advantage of their global reach.
His final advice was to look for "good enough" solutions that are "built for change" rather than "built to last".
Chris works in the Data Center Management and Optimization Services team. IBM owns and/or manages over 425 data centers, representing over 8 million square feet of floorspace. This includes managing 13 million desktops, 325,000 x86 and UNIX server images, and 1,235 mainframes. IBM is able to pool resources and segment the complexity for flexible resource balancing.
Chris gave an example of a company that selected a Cloud Compute service provider on the East coast and a Cloud Storage provider on the West coast, both for their low rates, but was disappointed in the latency between the two.
Chris asked "How did 5 percent utilization on x86 servers ever become acceptable?" When IBM is brought in to manage a data center, it takes a "No Server Left Behind" approach to reduce risk and allow for a strong focus on end-user transition. Each server is evaluated for its current utilization:
0 percent: Amazingly, many servers are unused. These are recycled properly.
1 to 19 percent: The workload is virtualized and moved to a new server.
20 to 39 percent: Use IBM's Active Energy Manager to monitor the server.
40 to 59 percent: Add more VMs to this virtualized server.
Over 60 percent: Manage the workload balance on this server.
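The triage above can be sketched as a simple classification. This is my own illustration of the bucket boundaries from the talk; the function name and action wording are hypothetical:

```python
# A sketch of the "No Server Left Behind" triage: map each server's
# utilization percentage to the recommended action. Bucket boundaries
# come from the session; phrasing is my own.

def triage(utilization_pct):
    """Return the recommended action for a server at this utilization."""
    if utilization_pct == 0:
        return "Unused: decommission and recycle the hardware"
    elif utilization_pct < 20:
        return "Virtualize the workload and move it to a shared server"
    elif utilization_pct < 40:
        return "Monitor the server with Active Energy Manager"
    elif utilization_pct < 60:
        return "Add more VMs to this virtualized server"
    else:
        return "Manage the workload balance on this server"

for pct in (0, 12, 35, 55, 75):
    print(pct, "->", triage(pct))
```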
This approach allows IBM to achieve a 60 to 70 percent utilization average on x86 machines, with an ROI payback period of 6 to 18 months, and 2x-3x increase of servers-managed-per-FTE.
Storage is classified using Information Lifecycle Management (ILM) best practices, using automation with pre-defined data placement and movement policies. This allows only 5 percent of data to be on Tier-1, 15 percent on Tier-2, 15 percent on Tier-3, and 65 percent on Tier-4 storage.
Chris recommends adopting IT Service Management, and to shift away from one-off builds, stand-alone apps, and siloed cost management structures, and over to standardization and shared resources.
You may have heard of "Follow-the-sun" but have you heard of "Follow-the-moon"? Global companies often establish "follow-the-sun" for customer service, re-directing phone calls to be handled by people in countries during their respective daytime hours. In the same manner, server and storage virtualization allows workloads to be moved to data centers during night-time hours, following the moon, to take advantage of "free cooling" using outside air instead of computer room air conditioning (CRAC).
Since 2007, IBM has been able to double computer processing capability without increasing energy consumption or carbon gas emissions.
It's Wednesday, Day 3, and I can tell already that the attendees are suffering from "information overload".
Continuing my coverage of the 30th annual [Data Center Conference]. Here is a recap of more of the Tuesday afternoon sessions:
IBM CIOs and Storage
Barry Becker, IBM Manager of Global Strategic Outsourcing Enablement for Data Center Services, presented this session on Storage Infrastructure Optimization (SIO).
A bit of context might help. I started my career in DFHSM, which moved data from disk to tape to reduce storage costs. Over the years, I would visit clients, analyze their disk and tape environment, and provide a set of recommendations on how to run their operations better. In 2004, this was formalized into week-long "Information Lifecycle Management (ILM) Assessments", and I spent 18 months in the field training a group of folks on how to perform them. The IBM Global Technology Services team has taken a cross-brand approach, expanding this ILM approach to include evaluations of the application workloads and data types. These SIO studies take 3-4 weeks to complete.
Over the next decade, there will only be 50 percent more IT professionals than we have today, so new approaches will be needed for governance and automation to deal with the explosive growth of information.
SIO deals with both the demand and supply of data growth in five specific areas:
Data reclamation, rationalization and planning
Virtualization and tiering
Backup, business continuity and disaster recovery
Storage process and governance
Archive, Retention and Compliance
The process involves gathering data and interviewing business, financial and technical stakeholders, such as storage administrators and application owners. The interviews take less than one hour per person.
Over the past two years, the SIO team has uncovered disturbing trends. A big part of the problem is that 70 percent of data stored on disk has not been accessed in the past 90 days, and is unlikely to be accessed at all in the near future, so it would probably be better stored on lower-cost storage tiers.
Storage Resource Management (SRM) is also a mess, with over 85 percent of clients having serious reporting issues. Even rudimentary "showback" systems, which report back what each individual, group or department is using, resulted in significant improvement.
Archive is not universally implemented mostly because retention requirements are often misunderstood. Barry attributed this to lack of collaboration between storage IT personnel, compliance officers, and application owners. A "service catalog" that identifies specific storage and data types can help address many of these concerns.
The results were impressive. Clients that follow SIO recommendations save on average 20 to 25 percent after one year, and 50 percent after three to five years. Implementing storage virtualization averaged 22 percent lower CAPEX costs. Those that implemented a "service catalog" saved on average $1.9 million US dollars. Internally, IBM's own operations have saved $13 million dollars implementing these recommendations over the past three years.
Reshaping Storage for Virtualization and Big Data
The two analysts presenting this topic acknowledged there is no downturn on the demand for storage. To address this, they recommend companies identify storage inefficiencies, develop better forecasting methodologies, implement ILM, and follow vendor management best practices during acquisition and outsourcing.
To deal with new challenges like virtualization and Big Data, companies must decide to keep, replace or supplement their SRM tools, and build a scalable infrastructure.
One suggestion for getting upper management to accept new technologies like data deduplication, thin provisioning, and compression is to refer to them as "Green" technologies, as they help reduce energy costs as well. Thin provisioning can drive storage utilization as high as you dare; typically, 60 to 70 percent is what most people are comfortable with.
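The utilization gain from thin provisioning is simple arithmetic: utilization is the share of physical capacity actually written. The numbers below are purely illustrative:

```python
# Back-of-envelope arithmetic for thin provisioning utilization.
# Utilization = data actually written / physical capacity deployed.

def utilization(written_tb, physical_tb):
    """Percentage of physical capacity holding real data."""
    return 100.0 * written_tb / physical_tb

# Fat provisioning: 100 TB allocated up front, but only 30 TB ever written.
fat = utilization(30, 100)

# Thin provisioning: the physical pool is sized closer to what is written,
# and grows on demand as applications consume their virtual allocations.
thin = utilization(30, 45)

print(fat, thin)
```

With these assumed figures, fat provisioning sits at 30 percent utilization while the thin pool lands in the 60 to 70 percent comfort zone the analysts mentioned.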
A poll of the audience found that top three initiatives for 2012 are to implement data deduplication, 10Gb Ethernet, and Solid-State drives (SSD).
The analysts explained that there are two different types of cloud storage. The first kind is storage "for" the cloud, used for cloud compute instances (aka Virtual Machines), such as Amazon EBS for EC2. The second kind is storage "as" the cloud, storage as a data service, such as Amazon S3, Azure Blob and AT&T Synaptic.
The analysts feel that cloud storage deployments will be mostly private clouds, bursting as needed to public cloud storage. This creates the need for a concept called "Cloud Storage Gateways" that manage this hybrid of some local storage and some remote storage. IBM's SONAS Active Cloud Engine provides long-distance caching in this manner. Other smaller startups include cTera, Nasuni, Panzura, Riverbed, StorSimple, and TwinStrata.
A variation of this is the "storage gateway" for backup and archive providers, used as a staging area for data to be subsequently sent on to the remote location.
New projects like virtualization, Cloud computing and Big Data are giving companies a new opportunity to re-evaluate their strategies for storage, process and governance.
Continuing my coverage of the 30th annual [Data Center Conference]. Here is a recap of some of the Tuesday afternoon sessions:
Brocade: Maximizing Your Cloud: How Data Centers Must Evolve
This was a session sponsored by Brocade to promote their concept of the "Ethernet Fabric". The first speaker, John McHugh, was from Brocade, and the second speaker was a client testimonial, Jamie Shepard, EVP for International Computerware, Inc.
John had an interesting take on today's network challenges. He feels that most LANs are organized for "North-South" traffic, referring to upload/downloads between clients and servers. However, the networks of tomorrow will need to focus on "East-West" traffic, referring to servers talking to other servers.
John was also opposed to integrated stacks that combine servers, storage and networking into a single appliance, as this prevents independent scaling of resources.
The Future of Backup is Not Backup
Primary data is growing at a 40 to 60 percent compound annual growth rate (CAGR), but backup data is growing faster. Why? Because data that was not backed up before is now being backed up, including test data, development data, and mobile application data.
Backup costs are 19x more expensive than production software costs. There is an enormous gap in data protection because companies fail to factor this into their budgets. It is not uncommon for IT departments to use multiple backup tools, for example one tool for VMs, and another tool for servers, and a third product for desktops.
Part of the problem is identifying who "buys" the backup software. The server team might focus on the operating systems supported. The storage team focuses on the disk and tape media supported. The application owners focus on the features and capabilities for backup that minimize impact to their application.
The analyst organized these issues into three "C's" of backup concerns: Cost, Capability and Complexity. Cost is not just the software license fee for the backup software, but the cost of backup media, courier fees, and transmission bandwidth. Capability refers to the features and functions, and IT folks are tired of having to augment their backup solution with additional tools and scripts to compensate for lack of capability. Complexity refers to the challenges of getting existing backup software to tackle new sources like Virtual Machines, Mobile apps, and so on.
Has everyone moved to a tape-less backup system? Polling results found that people are shifting back to tape, either in a tape-only environment, or to supplement their disk or disk-based virtual tape library (VTL). Here are the polling results:
The poll also showed the top three backup software vendors were Symantec, IBM and Commvault, which is consistent with marketshare. However, the analyst feels that by 2014, an estimated 30 percent of companies will change their backup software vendor out of frustration over cost, capability and/or complexity.
There are a lot of new backup software products specific to dealing with Virtual Machines. Some are focused exclusively on VMware. When asked what tool people used to back up their VMs, the polling results showed the following. Note that the 20 percent for "Other" includes products from major vendors, like IBM Tivoli Storage Manager for Virtual Environments, as the analyst was more interested in the uptake of backup software from startups.
Some companies are considering Cloud Computing for backup. This is one area where having the cloud service provider at a distance is an actual advantage for added protection. A poll asking whether some or most data is backed up to the Cloud, either already today, or plans for the near future within the next 12 or 24 months, showed the following:
In addition to backup service providers, there are now several startups that offer file sharing, and some are adding "versioning" to this that can serve as an alternative to backup. These include DropBox, SugarSync, iCloud, SpiderOak and ShareFile.
The final topic was Snapshot and Disk Replication. These tend to be hardware-based, so they may not have the versioning, scheduling, or application-aware capabilities normally associated with backup software. Space-efficient snapshots, which point unchanged data back to the original source, may not provide the full data protection that disparate backup copies would. Here were polling results on whether snapshot/replication was used to augment or replace some or most of their backups:
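Why a space-efficient snapshot is not a full backup copy follows from how it works: the snapshot preserves only blocks that change after it is taken, and reads everything else from the live volume. Here is a toy copy-on-write illustration of my own; real arrays implement this very differently:

```python
# Toy copy-on-write snapshot: the snapshot stores only pre-change copies of
# blocks; unchanged blocks are read from the live volume. This is why losing
# the source volume also invalidates the snapshot, unlike a separate backup.

class Volume:
    def __init__(self, blocks):
        self.blocks = dict(blocks)   # live data: block_id -> contents
        self.snapshots = []

    def snapshot(self):
        snap = {}                    # holds old copies of changed blocks only
        self.snapshots.append(snap)
        return snap

    def write(self, block_id, data):
        for snap in self.snapshots:
            if block_id not in snap:                      # first change since
                snap[block_id] = self.blocks.get(block_id)  # snapshot: preserve
        self.blocks[block_id] = data

    def read_snapshot(self, snap, block_id):
        # Changed blocks come from the snapshot; unchanged ones point back
        # to the original source volume.
        return snap.get(block_id, self.blocks.get(block_id))
```

If the `blocks` dictionary (the source volume) were lost, the snapshot alone could not reconstruct the unchanged data, which is the limitation the analyst warned about.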
Some of his observations and recommendations:
Maintenance is more expensive than the acquisition cost, so don't focus on just the tip of the iceberg. Some backup software is more efficient with bandwidth and media, which will save a lot of money in the long run.
Try to optimize what you have. He calls this the "Starbucks effect". If you just need one coffee, then paying $4.50 for a cup makes sense. But if you need 100 coffees, you might be better off buying the beans.
Design backups to meet service level agreements (SLAs). In the past, backup was treated as one-size-fits-all, but today you can now focus on a workload by workload basis.
Be conservative in adopting new technologies until you have your backup procedures in place to handle data protection.
Backup is for operational recovery, not long-term retention of data. A poll showed two-thirds of the audience kept backup versions for longer than 60 days! Re-evaluate how long you keep backups, and how many versions you keep. If you need long-term retention, use an archive process instead.
Recovery testing is a dying art. Practice recovery procedures so that you can do it safely and correctly when it matters most.
The analyst had a series of awesome pictures of large structures, the pyramids of Giza, the Chrysler building, and so on, and how they would look without their foundations in place. Backup is a foundation and should be treated as such in all IT planning purposes.
IT is evolving, but some basic needs like networking and backup procedures don't change. As companies re-evaluate their IT operations for Big Data, Cloud Computing and other new technologies, it is best to remember that some basic needs must be met as part of those evaluations.
Continuing my coverage of the 30th annual [Data Center Conference]. Here is a recap of the Monday afternoon sessions:
IBM Watson and your Data Center
Steve Sams, IBM VP of Site and Facilities Services, cleverly used IBM Watson as a way to explain how analytics can be used to help manage your data center. Sadly, most of the people at my table missed the connection between IBM Watson and Analytics. How does answering a single trivia question in under three seconds relate to the ongoing operations of a data center? If you were similarly confused, take a peek at my series of IBM Watson blog posts:
The analyst who presented this topic was probably the fastest-speaking Texan I have met. He covered various aspects of Cloud Computing that people need to consider. Why hasn't Cloud taken off sooner? The analyst feels that Cloud Computing wasn't ready for us, and we weren't ready for Cloud Computing. The fundamentals of Cloud Computing have not changed, but we as a society have. Now that many end users are comfortable consuming public cloud resources, from Facebook to Twitter to Gmail, they are beginning to ask for similar from their corporate IT.
Legal issues - see this hour-long video, [Cloud Law & Order], which discusses legal issues related to Cloud Computing.
Employee staffing - need to re-tool and re-train IT employees to start thinking of their IT as a service provider internally.
Hybrid Cloud - rather than struggle choosing between private and public cloud methodologies, consider a combination of both.
University of Rochester Medical Center (URMC) Cracks Code on Data Growth
Often times, the hour is split, 30 minutes of the sponsor talking about various products, followed by 30 minutes of the client giving a user experience. Instead, I decided to let the client speak for 45 minutes, and then I moderated the Q&A for the remaining 15 minutes. This revised format seemed to be well-received!
University of Rochester is in New York, about 60 miles east of Buffalo, and 90 miles from Toronto across Lake Ontario. Six years ago, Rick Haverty joined URMC as the Director of Infrastructure services, managing 130 of the 300 IT personnel at the Medical Center. I met Rick back in May, when he presented at the IBM [Storage Innovation Executive Summit] in New York City.
URMC has DS8000, DS5000, XIV, SONAS, Storwize V7000 and is in the process of deploying Storwize V7000 Unified. He presented how he has used these for continuous operations and high availability, while controlling storage growth and costs.
The Q&A was lively, focusing on how his team manages 1PB of disk storage with just four storage administrators, his choice of a "Vendor Neutral Archive" (VNA), and his experiences with integration.
This was a great afternoon, and I was glad to get all my speaking gigs done early in the week. I would like to thank Rick Haverty of URMC for doing a great job presenting this afternoon!
Wrapping up my coverage of the [IBM System Storage Technical University 2011], I attended a few sessions on Friday morning. The last session was Glenn Anderson's "IT Game Changers: the IT Professional's Guide to Becoming a Technology Trailblazer." Glenn used to run the Storage University events, but now is the conference manager for the System z mainframe events.
Glenn organized this talk from lessons from the following books:
Glenn suggested that IT professionals should understand the dissatisfaction with IT that is driving companies to switch over to Cloud Computing. IT professionals should adopt a service-oriented approach, realize the full potential of new disruptive technologies, and know when to "jump the curve" to the next generation of technology. For example, IT professionals should lead the movement to Cloud. If you build your own private cloud, or purchase some time for instances on a public cloud, you will be in a better position to be the "trusted advisor" to IT management.
CIOs should encourage IT to be part of the corporate strategy, but may have to fix the broken IT funding model. The IT department should be a "value center", not a "cost center" as it has been traditionally treated. When treated as a "cost center", IT departments focus only on cost reductions, rather than looking at ways the IT department can help drive revenues, improve customer service, or enhance employee productivity. A well-organized IT department can be a competitive advantage.
Taking a "service-oriented" approach allows IT and Business Process to come together. Oftentimes, IT and business professionals don't communicate well, and this new service-oriented approach can bridge the gap. Service Oriented Architecture [SOA] can help connect existing legacy applications to the new Cloud Computing environment.
IT budgets should consist of two parts: strategic funding for new IT projects, and an operational budget for keeping current applications running. Roughly 45 percent of capital investment in the USA goes toward IT. Too often, the IT department is focused on itself, on technology and reducing costs, and not enough on aligning IT with business transformation. When IT is used in conjunction with a sound business strategy, there can be a significant payoff.
After 550 years, the printing press and printed materials are being pushed from center stage. While other electronic media like radio and television have been around for a while, the internet and digital publishing are constantly available, and represent a shift from traditional printed materials.
When evaluating new technologies, IT professionals should ask themselves a few questions. Is it easy to use? Does it enable people to connect in new ways? Is it more cost-effective, or tap new sources of revenue? Does it shift power from one player to another? A new intellectual ethic is taking hold. Becoming an IT Game Changer can help stay one step ahead as Cloud Computing and other new IT platforms are adopted.
IBM Storage Strategy for the Smarter Computing Era
I presented this session on Thursday morning. It is a session I give frequently at the IBM Tucson Executive Briefing Center (EBC). IBM launched [Smarter Computing initiative at IBM Pulse conference]. My presentation covered the role of storage in Business Analytics, Workload Optimized Systems, and Cloud Computing.
Layer 8: Cloud Computing and the new IT Delivery Model
Ed Batewell, IBM Field Technical Support Specialist, presented this overview on Cloud Computing. The "Layer 8" is a subtle reference to the [7-layer OSI Model] for networking protocols. Ed cites insights from the [2011 IBM Global CIO Survey]. Of the 3000 companies surveyed, 60 percent plan to use or deploy clouds. In the USA, 70 percent of CIOs have significant plans for cloud within the next 3-5 years. These numbers are double the statistics gleaned from the 2009 Global CIO survey. Clouds are one of IBM's big four initiatives, expected to generate $7 Billion USD in annual revenues by 2015.
IBM is recognized in the industry as one of the "Big 5" vendors (Google, Yahoo, Microsoft, and Amazon round out the rest). As such, IBM has contributed to the industry a set of best practices known as the [Cloud Computing Reference Architecture (36-page document)]. As is typical for IBM, this architecture is end-to-end complete, covering the three main participants for successful cloud deployments:
Consumers: the people and systems that use cloud computing services
Providers: the people, infrastructure and business operations needed to deliver IT services to consumers
Developers: the people and their development tools that create apps and platforms for cloud computing
IBM is working hard to eliminate all barriers to adoption for Cloud Computing. [Mirage image management] can patch VM images offline to address "Day 0" viruses. [Hybrid Cloud Integrator] can help integrate new Cloud technologies with legacy applications. [IBM Systems Director VMcontrol] can manage VM images from z/VM on the mainframe, to PowerVM on UNIX servers, to VMware, Microsoft, Xen and KVM for x86 servers. IBM's [Cloud Service Provider Platform (CSP2)] is designed for Telecoms to offer Cloud Computing services. IBM CloudBurst is a "Cloud-in-a-Can" optimized stack of servers, storage and switches that can be installed in five days and comes in various "tee-shirt sizes" (Small, Medium, Large and Extra Large), depending on how many VMs you want to run.
Ed mentioned that companies trying to build their own traditional IT applications and environments, in an effort to compete against the cost-effective Clouds, reminded him of Thomas Thwaites' project of building a toaster from scratch. You can watch the [TED video, 11 minutes]:
An interesting project is [Reservoir], in which IBM is working with other industry leaders to develop a way to seamlessly migrate VMs from one location to another, globally, without requiring shared storage, SAN zones or Ethernet subnets. This is similar to how energy companies buy and sell electricity to each other as needed, or the way telecommunications companies allow roaming across each other's networks.
IBM System Networking - Convergence
Jeff Currier, IBM Executive Consultant for the new IBM System Networking group, presented this session on Network Convergence. Storage is expected to grow 44x, from 0.8 [Zettabytes] in 2009 to 35 Zettabytes by the year 2020. The role of the network is growing in importance. IBM refers to this converged, loss-less Ethernet network as "Convergence Enhanced Ethernet" (CEE), while Cisco uses the term "Data Center Ethernet" (DCE), and the rest of the industry uses "Data Center Bridging" (DCB).
To make this happen, we need to replace the Spanning Tree Protocol [STP], which keeps traffic from walking in circles in a multi-hop network configuration, with a new Layer 2 Multipathing (L2MP) protocol. The two contenders for the title are Shortest Path Bridging (IEEE 802.1aq) and Transparent Interconnect of Lots of Links (IETF TRILL).
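To see why multipathing matters, here is a toy Python sketch (the four-switch topology and function names are invented for illustration, not any actual protocol implementation). STP avoids loops by blocking links until only one tree remains, so redundant links sit idle; an L2MP protocol like SPB or TRILL instead computes all equal-cost shortest paths so every link can carry traffic.

```python
from collections import deque

# Toy "diamond" network of four switches: A connects to B and C,
# which both connect to D. Four links total, one potential loop.
links = {
    "A": {"B", "C"},
    "B": {"A", "D"},
    "C": {"A", "D"},
    "D": {"B", "C"},
}

def spanning_tree(root, adj):
    """STP-style outcome: a single loop-free tree from the root.
    Any link not in the tree is blocked, even though it is healthy."""
    tree, visited, queue = set(), {root}, deque([root])
    while queue:
        node = queue.popleft()
        for neighbor in sorted(adj[node]):
            if neighbor not in visited:
                visited.add(neighbor)
                tree.add(frozenset((node, neighbor)))
                queue.append(neighbor)
    return tree

def equal_cost_paths(src, dst, adj):
    """L2MP-style outcome: enumerate all shortest paths between two
    switches, so traffic can be spread across every available link."""
    paths, queue, best = [], deque([[src]]), None
    while queue:
        path = queue.popleft()
        if best is not None and len(path) > best:
            continue  # longer than the shortest path found; prune
        node = path[-1]
        if node == dst:
            best = len(path)
            paths.append(path)
            continue
        for neighbor in sorted(adj[node]):
            if neighbor not in path:
                queue.append(path + [neighbor])
    return paths

# STP keeps only 3 of the 4 links active (one link is blocked)...
print(len(spanning_tree("A", links)))
# ...while multipathing finds both equal-cost routes from A to D.
print(equal_cost_paths("A", "D", links))
```

With STP rooted at A, the C-D link is blocked and all A-to-D traffic funnels through one path; the multipath computation finds both two-hop routes (A-B-D and A-C-D), which is the load-spreading behavior SPB and TRILL are designed to deliver.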
All roads lead to Ethernet. While FCoE has not caught on as fast as everyone hoped, iSCSI has benefited from all the enhancements to the Ethernet standard. iSCSI works in both lossy and lossless versions of Ethernet, and seems to be the preferred choice for new greenfield deployments for Small and Medium sized Businesses (SMB). Larger enterprises continue to use Fibre Channel (FCP and FICON), but might use single-hop FCoE from the servers to top-of-rack switches. Both iSCSI and FCoE scale well, but FCoE is considered more efficient.
IBM has a strategy, and is investing heavily in these standards, technologies, and core competencies.
Clod Barrera is an IBM Distinguished Engineer and Chief Technical Strategist for IBM System Storage. He predicts that by 2015, 10 percent of the servers and storage purchases, as well as 25 percent of the network gear purchases, will be related to Cloud deployments. Cloud Storage is expected to grow at a compound annual growth rate (CAGR) of 32 percent through 2015, compared to only 3.8 percent growth for non-Cloud storage.
Cloud Computing is allowing companies to rethink their IT infrastructure, and reinvent their business. Clod presented an interesting chart on the "Taxonomy" of storage in Cloud environments. On the left he had examples of Storage that was part of a Cloud Compute application. On the right he had storage that was accessed directly through protocols or APIs. Under each he had several examples for transactional data, stream data, backups and archives.
Clod feels the only difference between Private and Public clouds is a matter of ownership. Private clouds are owned by the company that uses them, and accessed via its private Intranet. Public clouds are owned by Cloud Service Providers and are accessed over the public Internet. Clod presented IBM's strategy to deliver Cloud at five levels:
Private Cloud: on-site equipment, behind company firewall, managed by IT staff
Managed Private Cloud: on-site equipment, behind company firewall, managed by IBM or other Cloud Service provider
Hosted Private Cloud: dedicated, off-premises equipment, located and managed by IBM or other Cloud Service Provider, and access through VPN
Shared Cloud Services: shared, off-premises equipment, located at IBM or other Cloud Service Provider, managed by IBM or Cloud Service provider, and access through VPN. The facility is intended for enterprises only, on a contractual basis, and will be auditable for compliance to government regulations, etc.
Public Cloud: shared, off-premises equipment, located and managed by IBM or other Cloud Service provider, targeted to offer cloud compute and storage resources, with standardized platforms of operating systems and middleware, for individuals, small and medium sized businesses.
As with storage in traditional data center deployments, storage in clouds will be tiered, with Tier 0 being the fastest tier, down to Tier 4 for "deep and cheap" archive storage. IBM SONAS is an example of Cloud-ready storage that can help make these tiers accessible through standard Ethernet protocols. Cloud Service providers will use metering and Service Level Agreements (SLAs) to offer different rates for different tiers of storage in the cloud.
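The metering idea can be sketched in a few lines of Python. The tier names and per-GB rates below are invented for illustration only; they are not actual IBM, SONAS, or any provider's pricing.

```python
# Hypothetical tier rates in dollars per GB-month, fastest to cheapest.
# These numbers are made up for this sketch.
RATES = {
    "tier0_ssd":     1.00,   # fastest tier
    "tier1_fc":      0.50,
    "tier2_sata":    0.20,
    "tier4_archive": 0.02,   # "deep and cheap" archive
}

def monthly_charge(usage_gb):
    """Meter each tier separately and sum the charges, the way a
    Cloud Service provider might bill against per-tier SLAs."""
    return sum(RATES[tier] * gb for tier, gb in usage_gb.items())

# A workload keeping hot data on SSD and cold data in the archive tier
# pays far less than if everything lived on the fast tier.
bill = monthly_charge({"tier0_ssd": 100,
                       "tier2_sata": 5000,
                       "tier4_archive": 20000})
print(bill)
```

The point of the sketch is the incentive structure: by metering each tier at its own rate, the provider makes it cheaper for consumers to let cold data migrate down-tier, which is exactly what tiered Cloud storage with SLAs is meant to encourage.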
Clod wrapped up his session explaining IBM's Cloud Computing Reference Architecture (CCRA). This is an all-encompassing diagram that shows how all of IBM's hardware, software and services fit into Cloud deployments.
It's that time again. Every year, IBM hosts the "System Storage Technical University". I have been going to these since they first started in the 1990s. This time we are at the lovely [Hilton Orlando] in Orlando, Florida.
For those who want to relive past events, here are my blog posts from this event in 2010:
As was the case last year, IBM once again will run this conference alongside the [IBM System x Technical University] the same week, in the same hotel. This allows attendees to cross over to the other side to see a few sessions of the other conference. I took advantage of this last year, and plan to do so again this year as well!
For those on Twitter, you can follow my tweets at [@az990tony] or search on the hash tag #ibmtechu.