Safe Harbor Statement: The information on IBM products is intended to outline IBM's general product direction and it should not be relied on in making a purchasing decision. The information on the new products is for informational purposes only and may not be incorporated into any contract. The information on IBM products is not a commitment, promise, or legal obligation to deliver any material, code, or functionality. The development, release, and timing of any features or functionality described for IBM products remains at IBM's sole discretion.
Tony Pearson is an active participant in local, regional, and industry-specific interests, and does not receive any special payments to mention them on this blog.
Tony Pearson receives part of the revenue proceeds from sales of books he has authored listed in the side panel.
Tony Pearson is a Master Inventor and Senior IT Specialist for the IBM System Storage product line at the
IBM Executive Briefing Center in Tucson Arizona, and featured contributor
to IBM's developerWorks. In 2011, Tony celebrated his 25th anniversary with IBM Storage on the same day as IBM's Centennial. He is
author of the Inside System Storage series of books. This blog is for the open exchange of ideas relating to storage and storage networking hardware, software and services. You can also follow him on Twitter @az990tony.
(Short URL for this blog: ibm.co/Pearson)
Continuing my coverage of the 30th annual [Data Center Conference], here is a recap of the Wednesday morning sessions.
A Data Center Perspective on MegaVendors
The morning started with a keynote session. The analyst felt that the most strategic or disruptive companies of the past few decades were IBM, HP, Cisco, SAP, Oracle, Apple and Google. Of these, he focused on the first three, which he termed the "Megavendors", presented in alphabetical order.
Cisco enjoys high margins and a loyal customer base with Ethernet switch gear. Their new strategy to sell UP and ACROSS the stack moves them into lower-margin businesses like servers. Their strong agenda with NetApp is not in sync with their partnership with EMC, and they recently had senior management turnover.
HP enjoys a large customer base and is recognized for good design and manufacturing capabilities. Their challenges are mostly organizational, distracted by changes at the top and an untested and ever-changing vision, shifting gears and messages too often. Concerns over the Itanium have not helped them lately.
IBM defies simple description. One can easily recognize Cisco as an "Ethernet Switch" company, HP as a "Printer" company, and Oracle as a "Database" company, but you can't say that IBM is an "XYZ" company, as it has re-invented itself successfully over its past 100 years, with a strong focus on client relationships. IBM enjoys high margins, a sustainable cost structure, huge resources, a proficient sales team, and is recognized for its innovation with a strong IBM Research division. Their "Smarter Planet" vision has been effective in supporting their individual brands and unlocking new opportunities. IBM's focus on growth markets takes advantage of their global reach.
His final advice was to look for "good enough" solutions that are "built for change" rather than "built to last".
Chris works in the Data Center Management and Optimization Services team. IBM owns and/or manages over 425 data centers, representing over 8 million square feet of floorspace. This includes managing 13 million desktops, and 325,000 x86 and UNIX server images, and 1,235 mainframes. IBM is able to pool resources and segment the complexity for flexible resource balancing.
Chris gave an example of a company that selected a Cloud Compute service provider on the East coast and a Cloud Storage provider on the West coast, both for their low rates, but was disappointed by the latency between the two.
Chris asked "How did 5 percent utilization on x86 servers ever become acceptable?" When IBM is brought in to manage a data center, it takes a "No Server Left Behind" approach to reduce risk and allow for a strong focus on end-user transition. Each server is evaluated for its current utilization:
0 percent: Amazingly, many servers are unused. These are recycled properly.
1 to 19 percent: Workload is virtualized and moved to a new server.
20 to 39 percent: Use IBM's Active Energy Manager to monitor the server.
40 to 59 percent: Add more VMs to this virtualized server.
Over 60 percent: Manage the workload balance on this server.
This approach allows IBM to achieve a 60 to 70 percent utilization average on x86 machines, with an ROI payback period of 6 to 18 months, and 2x-3x increase of servers-managed-per-FTE.
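The triage rules above can be sketched as a simple classifier. The utilization bands come from the session; the function name and the fleet of sample values are my own invention:

```python
def triage_server(cpu_utilization_pct):
    """Map a server's average CPU utilization to a consolidation action,
    following the 'No Server Left Behind' bands described above."""
    if cpu_utilization_pct == 0:
        return "Unused: decommission and recycle the hardware"
    elif cpu_utilization_pct < 20:
        return "Virtualize the workload and move it to a new server"
    elif cpu_utilization_pct < 40:
        return "Monitor the server with Active Energy Manager"
    elif cpu_utilization_pct < 60:
        return "Add more VMs to this virtualized server"
    else:
        return "Manage the workload balance on this server"

# Example: triage a small (hypothetical) fleet
for util in (0, 7, 25, 45, 80):
    print(f"{util:3d}% -> {triage_server(util)}")
```

Running every server in an inventory through a rule set like this is what lets the approach scale to hundreds of thousands of images.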
Storage is classified using Information Lifecycle Management (ILM) best practices, using automation with pre-defined data placement and movement policies. This allows only 5 percent of data to be on Tier-1, 15 percent on Tier-2, 15 percent on Tier-3, and 65 percent on Tier-4 storage.
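One way to see the payoff of this tiering mix is to compute a blended cost per GB. The tier percentages are from the session; the per-GB prices below are illustrative placeholders, not IBM figures:

```python
# ILM tier mix from the session; per-GB costs are made-up examples
tier_mix = {"Tier-1": 0.05, "Tier-2": 0.15, "Tier-3": 0.15, "Tier-4": 0.65}
cost_per_gb = {"Tier-1": 10.0, "Tier-2": 5.0, "Tier-3": 2.0, "Tier-4": 0.5}

# Weighted average cost across the four tiers
blended = sum(tier_mix[t] * cost_per_gb[t] for t in tier_mix)
all_tier1 = cost_per_gb["Tier-1"]

print(f"Blended cost: ${blended:.2f}/GB vs ${all_tier1:.2f}/GB all on Tier-1")
```

With these example prices the blended cost works out to under a fifth of keeping everything on Tier-1, which is the economic argument behind automated data placement policies.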
Chris recommends adopting IT Service Management, and to shift away from one-off builds, stand-alone apps, and siloed cost management structures, and over to standardization and shared resources.
You may have heard of "Follow-the-sun" but have you heard of "Follow-the-moon"? Global companies often establish "follow-the-sun" for customer service, re-directing phone calls to be handled by people in countries during their respective daytime hours. In the same manner, server and storage virtualization allows workloads to be moved to data centers during night-time hours, following the moon, to take advantage of "free cooling" using outside air instead of computer room air conditioning (CRAC).
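A "follow-the-moon" scheduler boils down to asking, for each site, whether it is currently night-time there. Here is a minimal sketch; the site names, UTC offsets and night-time window are all assumptions for illustration:

```python
from datetime import datetime, timedelta, timezone

# Hypothetical data centers and their UTC offsets (illustrative only)
DATA_CENTERS = {"Tucson": -7, "Dublin": 0, "Singapore": 8}

def night_time_sites(now_utc, night_start=22, night_end=6):
    """Return the data centers where it is currently night-time,
    i.e. candidates for 'follow-the-moon' workload placement."""
    sites = []
    for name, offset in DATA_CENTERS.items():
        local_hour = (now_utc + timedelta(hours=offset)).hour
        if local_hour >= night_start or local_hour < night_end:
            sites.append(name)
    return sites

# At 12:00 UTC it is 05:00 in Tucson, so Tucson qualifies
print(night_time_sites(datetime(2011, 12, 7, 12, 0, tzinfo=timezone.utc)))
```

A real implementation would of course also weigh data locality, bandwidth and migration cost before moving a workload, not just the clock.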
Since 2007, IBM has been able to double computer processing capability without increasing energy consumption or carbon gas emissions.
It's Wednesday, Day 3, and I can tell already that the attendees are suffering from "information overload".
Webcast: How to Diagnose and Cure What Ails Your Storage Infrastructure
Wednesday, March 23, 2011 at 11:00 AM PDT / 11:00 AM Arizona MST / 2:00 PM EDT
Storage is the most poorly utilized infrastructure element -- and the most costly part of hardware budgets -- in most IT shops today. And it's getting worse. Storage management typically involves a nightmarish mash-up of tools for capacity management, performance management and data protection management, unique to each array deployed in heterogeneous fabrics. Server and desktop virtualization seem to have made management issues worse, and coming on the heels of changing workloads and data proliferation is the requirement to add data management to the set of responsibilities shouldered by fewer and fewer storage professionals. Forecast for storage in 2012: more pain, as the long-delayed storage infrastructure refresh becomes mandatory.
In this webcast, fellow blogger Jon Toigo, CEO of Toigo Partners International, of [DrunkenData] fame, and I will take turns assessing the challenges and suggesting real-world solutions to the many issues that confound storage efficiency in contemporary IT. Integrating real-world case studies and technology insights, we will deliver a must-see webcast that sets down a strategy for fixing storage... before it fixes you.
Don't miss this event, unless you like the stress of knowing that your next disaster may be a data disaster.
Clod Barrera is an IBM Distinguished Engineer and Chief Technical Strategist for IBM System Storage. He predicts that by 2015, 10 percent of the servers and storage purchases, as well as 25 percent of the network gear purchases, will be related to Cloud deployments. Cloud Storage is expected to grow at a compound annual growth rate (CAGR) of 32 percent through 2015, compared to only 3.8 percent growth for non-Cloud storage.
Cloud Computing is allowing companies to rethink their IT infrastructure, and reinvent their business. Clod presented an interesting chart on the "Taxonomy" of storage in Cloud environments. On the left he had examples of Storage that was part of a Cloud Compute application. On the right he had storage that was accessed directly through protocols or APIs. Under each he had several examples for transactional data, stream data, backups and archives.
Clod feels the only difference between private and public clouds is a matter of ownership. Private clouds are owned by the company that uses them, accessed via their private intranet. Public clouds are owned by Cloud Service providers and are accessed over the public Internet. Clod presented IBM's strategy to deliver Cloud at five levels:
Private Cloud: on-site equipment, behind company firewall, managed by IT staff
Managed Private Cloud: on-site equipment, behind company firewall, managed by IBM or other Cloud Service provider
Hosted Private Cloud: dedicated, off-premises equipment, located and managed by IBM or other Cloud Service Provider, and accessed through VPN
Shared Cloud Services: shared, off-premises equipment, located at IBM or other Cloud Service Provider, managed by IBM or Cloud Service provider, and accessed through VPN. The facility is intended for enterprises only, on a contractual basis, and will be auditable for compliance to government regulations, etc.
Public Cloud: shared, off-premises equipment, located and managed by IBM or other Cloud Service provider, targeted to offer cloud compute and storage resources, with standardized platforms of operating systems and middleware, for individuals, small and medium sized businesses.
As with storage in traditional data center deployments, storage in clouds will be tiered, with Tier 0 being the fastest tier, to Tier 4 for "deep and cheap" archive storage. IBM SONAS is an example of Cloud-ready storage that can help make these tiers accessible through standard Ethernet protocols. Cloud Service providers will use metering and Service Level Agreements (SLAs) to offer different rates for different tiers of storage in the cloud.
Clod wrapped up his session explaining IBM's Cloud Computing Reference Architecture (CCRA). This is an all-encompassing diagram that shows how all of IBM's hardware, software and services fit into Cloud deployments.
Continuing my coverage of the Data Center 2010 conference: on Monday I attended four keynote sessions.
The first keynote speaker started out with an [English proverb]: Turbulent waters make for skillful mariners.
He covered the state of the global economy and how CIOs should address the challenge. We are on the flat end of an "L-shaped" recovery in the United States. GDP growth is expected to be only 4.7 percent in Latin America, 2.3 percent in North America, and 1.5 percent in Europe. Top growth areas include India at 8.0 percent and China at 8.6 percent, with an average of 4.7 percent growth for the entire Asia Pacific region.
On the technical side, the top technologies that CIOs are pursuing for 2011 are Cloud Computing, Virtualization, Mobility, and Business Intelligence/Analytics. He asked the audience if the "Stack Wars" for integrated systems are hurting or helping innovation in these areas.
Move over "conflict diamonds", companies now need to worry about [conflict minerals].
He proposed an alternative approach called Fabric-Based Infrastructure. In this new model, a shared pool of servers is connected to a shared pool of storage over an any-to-any network. In this approach, IT staff spend all of their time just stocking up the vending machine, allowing end-users to get the resources they need.
Crucial Trends You Need to Watch
The second speaker covered ten trends to watch, but these were not limited to just technology trends.
Virtualization is just beginning - even though IBM has had server virtualization since 1967 and storage virtualization since 1974, the speaker felt that adoption of virtualization is still in its infancy. Ten years ago, average CPU utilization for x86 servers was only 5-7 percent. Thanks to server virtualization like VMware and Hyper-V, companies have increased this to 25 percent, but many virtualization projects have stalled.
Big Data is the elephant in the room - storage is expected to grow 800 percent over the next five years.
Green IT - Datacenters consume 40 to 100 times more energy than the offices they support. Six months ago, Energy Star had announced [standards for datacenters] and energy efficiency initiatives.
Unified Communications - Voice over IP (VoIP) technologies, collaboration with email and instant messages, and focus on Mobile smartphones and other devices combines many overlapping areas of communication.
Staff retention and retraining - According to US Labor statistics, the average worker will have 10 to 14 different jobs by the time they reach 38 years of age. People need to broaden their scope and not be so vertically focused on specific areas.
Social Networks and Web 2.0 - the keynote speaker feels this is happening, and companies that try to restrict usage at work are fighting an uphill battle. Better to get ready for it and adopt appropriate policies.
Legacy Migrations - companies are stuck on old technology like Microsoft Windows XP, Internet Explorer 6, and older levels of Office applications. Time is running out, but migration to later releases or alternatives like Red Hat Linux with Firefox browser are not trivial tasks.
Compute Density - Moore's Law, which says compute capability will double every 18 months, is still going strong. We are now getting more cores per socket, forcing applications to be re-written for parallel processing, or to use virtualization technologies.
Cloud Computing - every session this week will mention Cloud Computing.
Converged Fabrics - some new approaches are taking shape for datacenter design. Fabric-based infrastructure would benefit from converging SAN and LAN fabrics to allow pools of servers to communicate freely to pools of storage.
He sprinkled fun factoids about our world to keep things entertaining.
50 percent of today's 21-year-olds have produced content for the web. 70 percent of four-year-olds have used a computer. The average teenager writes 2,282 text messages on their cell phone per month.
This year, Google averaged 31 billion searches per month, compared to 2.6 billion searches per month in 2007.
More video has been uploaded to YouTube in the last two months than the three major US networks (ABC, NBC, CBS) have aired since 1948.
Wikipedia averages 4300 new articles per day, and now has over 13 million articles.
This year, Facebook reached 500 million users. If it were a country, it would rank third. Twitter would rank seventh, with 69 percent of its growth coming from people 32 to 50 years old.
In 1997, a GB of flash memory cost nearly $8,000 to manufacture; today it costs only $1.25.
The computer in today's cell phone is a million times cheaper, and a thousand times more powerful, than a single computer installed at MIT back in 1965. In 25 years, the compute capacity of today's cell phones could fit inside a blood cell.
See [interview of Ray Kurzweil] on the Singularity for more details.
The Virtualization Scenario: 2010 to 2015
The third keynote covered virtualization. While server virtualization has helped reduce server costs, as well as power and cooling energy consumption, it has had a negative effect on other areas. Companies that have adopted server virtualization have discovered increased costs for storage, software and test/development efforts.
The result is a gap between expectations and reality. Many virtualization projects have stalled because of a lack of long-term planning. The analysts recommend deploying virtualization in stages: tackle the first third, the so-called "low-hanging fruit", then proceed with the next third, and then wait and evaluate results before completing the last third, the most difficult applications.
Virtualization of storage and desktop clients are completely different projects than server virtualization and should be handled accordingly.
Cloud Computing: Riding the Storm Out
The fourth keynote focused on the pros and cons of Cloud Computing. The speaker started by defining the five key attributes of Cloud: self-service, scalable elasticity, a shared pool of resources, metered and paid per use, delivered over open standard networking technologies.
In addition to IaaS, PaaS and SaaS classifications, the keynote speaker mentioned a fourth one: Business Process as a Service (BPaaS), such as processing Payroll or printing invoices.
While the debate rages over the benefits of private versus public cloud approaches, the keynote speaker brought up the opportunities for hybrid and community clouds. In fact, he felt there is a business model for a "cloud broker" that acts as a go-between for companies and cloud service providers.
A poll of the audience found the top concerns inhibiting cloud adoption were security, privacy, regulatory compliance and immaturity. Some 66 percent indicated they plan to spend more on private cloud in 2011, and 20 percent plan to spend more on public cloud options. He suggested several focus areas:
Test and Development
Prototyping / Proof-of-Concept efforts
Web Application serving
SaaS like email and business analytics
Select workloads that lend themselves to parallelization
The session wrapped up with some stunning results reported by companies: server provisioning accomplished in 3-5 minutes instead of 7-12 weeks; the cost of email reduced by 70 percent; four-hour batch jobs now completed in 20 minutes; a 50 percent increase in compute capacity on a flat IT budget. With these kinds of results, the speaker suggests that CIOs should at least start experimenting with cloud technologies and start profiling their workloads and IT services to develop a strategy.
That was just Monday morning, this is going to be an interesting week!
This is my final post on my coverage of the 30th annual [Data Center Conference]. IBM was a Platinum sponsor, and there were over 2,600 attendees, of which 27 percent were IT Directors or higher. Two thirds of the companies have 5000 employees or more. Here is a recap of the last few sessions I attended.
Best Practices for Data Center consolidation
As if the conference co-chairs aren't already super-busy, here they are presenting one of the breakout sessions. In the 1990s, consolidation was done purely to reduce total cost of ownership (TCO). Today, there are a variety of other reasons, including issues with power and cooling, service level agreements, and security.
Of the attendees polled, 25 percent plan to have more data centers in three years, and 47 percent plan to consolidate to fewer. The benefits of consolidation include economies of scale, staff reduction, reduced hardware and facilities costs, and application retirement. Challenges include dealing with politics, building new facilities to replace the old ones, and bandwidth. Here were some of the primary reasons why data center consolidation projects fail:
Human Resources (HR) issues
Resources not freed up
Lack of Project Management skills
No rationalization at consolidated site
Interactive Polling Results
The last keynote session was Thursday morning. The conference co-chairs present the highlights of the interactive polling that was done during the week at this conference.
The first topic was social media. There was a lot of Twitter activity with hashtag #GartnerDC that I followed throughout the week. Most of the tweets seem to be from people who were not actually at the conference.
Some 45 percent of the attendees have implemented social media initiatives at their companies. What tooling are they using to accomplish this? There are some provided by the major ITSM vendors, tools specific for corporate social media such as Yammer, collaboration tools like Microsoft SharePoint and IBM's Lotus Connections, and public sites like Facebook and Twitter. Here were the poll results:
The next topic was focused on Mobile devices and Cloud Computing. For example, do companies store data in public cloud, or plan to in the future, for mobile devices?
One third of the attendees allow employees to bring their own tablet to work with full IT support. Only 18 percent allow employees to bring their own PC or laptop. Over 40 percent felt that their IT department was not yet ready to support smartphones.
What are the main drivers to adopt private cloud? Some are deploying private clouds as a way to defend their IT jobs from going to the public cloud. Here were the poll results:
What problems are companies trying to solve with cloud computing? Here were the poll results:
A majority of attendees that use VMware are exploring alternatives such as Linux KVM (for example, Red Hat Enterprise Virtualization, RHEV) or Microsoft Hyper-V. What storage protocol are attendees using for their server virtualization? Here were the poll results:
The next topic was the process for IT service management. The top three were ITIL, CMMI and DevOps, with the majority using ITIL or ITIL in combination with something else. These are needed for release management, change management, performance management, capacity management and incident management. How collaborative is the relationship between IT operations and application development? Here were the poll results:
How well does IT operations contribute to business innovation? This year 38 percent were satisfied, and 33 percent unsatisfied. This was a big improvement over last year, which found 19 percent satisfied and 64 percent unsatisfied.
Building a Private Storage Cloud: Is It a Science Experiment?
While everyone understands the benefits of private and public cloud computing, there seems to be hesitation about hosted cloud storage. Some people have already adopted some form of cloud storage, and others plan to within 12 months. Here were the poll results:
The top three reasons for considering public cloud storage were to adopt a lower-cost storage tier, to benefit from off-site storage, and staff constraints. The top concerns were security and performance.
The IT department will need to start thinking like a cloud provider, and perhaps adopt a hybrid cloud approach. What IT equipment can be re-used? What will the new IT operations look like in a Cloud environment? What were the primary use cases for cloud storage? Here were the poll results:
In addition to the major cloud providers (IBM, Amazon, etc.) there are a variety of new cloud storage startups to address these business needs.
So that wraps up my coverage of this conference. In addition to attending great keynote and breakout sessions, I was able to have great one-on-one discussions with clients at the Solution Showcase booth, during breaks and at meals. IBM's focus on Big Data, Workload-optimized Systems, and Cloud seems to resonate well with the analysts and attendees. I want to give special thanks to Lynda, Dana, Peggy, Hugo, David, Rick, Cris, Richard, Denise, Chloe, and all my colleagues, friends and family from Arizona for their support!
It's that time again. Every year, IBM hosts the "System Storage Technical University". I have been going to these since they first started in the 1990s. This time we are at the lovely [Hilton Orlando] in Orlando, Florida.
For those who want to relive past events, here are my blog posts from this event in 2010:
As was the case last year, IBM once again will run this conference alongside the [IBM System x Technical University] the same week, in the same hotel. This allows attendees to cross over to the other side to see a few sessions of the other conference. I took advantage of this last year, and plan to do so again this year as well!
For those on Twitter, you can follow my tweets at [@az990tony] or search on the hash tag #ibmtechu.
Continuing my coverage of the 30th annual [Data Center Conference]. Here is a recap of more of the Tuesday afternoon sessions:
IBM CIOs and Storage
Barry Becker, IBM Manager of Global Strategic Outsourcing Enablement for Data Center Services, presented this session on Storage Infrastructure Optimization (SIO).
A bit of context might help. I started my career in DFHSM, which moved data from disk to tape to reduce storage costs. Over the years, I would visit clients, analyze their disk and tape environment, and provide a set of recommendations on how to run their operations better. In 2004, this was formalized into week-long "Information Lifecycle Management (ILM) Assessments", and I spent 18 months in the field training a group of folks on how to perform them. The IBM Global Technology Services team has taken a cross-brand approach, expanding this ILM approach to include evaluations of the application workloads and data types. These SIO studies take 3-4 weeks to complete.
Over the next decade, there will only be 50 percent more IT professionals than we have today, so new approaches will be needed for governance and automation to deal with the explosive growth of information.
SIO deals with both the demand and supply of data growth in five specific areas:
Data reclamation, rationalization and planning
Virtualization and tiering
Backup, business continuity and disaster recovery
Storage process and governance
Archive, Retention and Compliance
The process involves gathering data and interviewing business, financial and technical stakeholders, such as storage administrators and application owners. The interviews take less than one hour per person.
Over the past two years, the SIO team has uncovered disturbing trends. A big part of the problem is that 70 percent of data stored on disk has not been accessed in the past 90 days, and is unlikely to be accessed at all in the near future, so it would probably be better stored on lower-cost storage tiers.
Storage Resource Management (SRM) is also a mess, with over 85 percent of clients having serious reporting issues. Even rudimentary "showback" systems that report what each individual, group or department is using resulted in significant improvements.
Archive is not universally implemented mostly because retention requirements are often misunderstood. Barry attributed this to lack of collaboration between storage IT personnel, compliance officers, and application owners. A "service catalog" that identifies specific storage and data types can help address many of these concerns.
The results were impressive. Clients that follow SIO recommendations save on average 20 to 25 percent after one year, and 50 percent after three to five years. Implementing storage virtualization averaged 22 percent lower CAPEX costs. Those that implemented a "service catalog" saved on average $1.9 million US dollars. Internally, IBM's own operations have saved $13 million dollars implementing these recommendations over the past three years.
Reshaping Storage for Virtualization and Big Data
The two analysts presenting this topic acknowledged there is no downturn on the demand for storage. To address this, they recommend companies identify storage inefficiencies, develop better forecasting methodologies, implement ILM, and follow vendor management best practices during acquisition and outsourcing.
To deal with new challenges like virtualization and Big Data, companies must decide to keep, replace or supplement their SRM tools, and build a scalable infrastructure.
One suggestion to get upper management to accept new technologies like data deduplication, thin provisioning, and compression is to refer to them as "Green" technologies, as they help reduce energy costs as well. Thin provisioning can help drive storage utilization up to rates as high as you dare; typically, 60 to 70 percent is what most people are comfortable with.
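The arithmetic behind thin provisioning is simple: volumes are presented at full size, but physical capacity is only consumed as data is written. A quick sketch, with invented volume sizes, shows how the pool can land in that 60 to 70 percent comfort zone while being oversubscribed:

```python
# Hypothetical thin-provisioned volumes: promised size vs. data written
volumes = [
    {"provisioned_gb": 500, "written_gb": 120},
    {"provisioned_gb": 1000, "written_gb": 430},
    {"provisioned_gb": 250, "written_gb": 200},
]
physical_pool_gb = 1200  # real disk behind the thin pool

provisioned = sum(v["provisioned_gb"] for v in volumes)  # capacity promised
written = sum(v["written_gb"] for v in volumes)          # capacity consumed

utilization = written / physical_pool_gb
oversubscription = provisioned / physical_pool_gb

print(f"Pool utilization: {utilization:.1%}, oversubscribed {oversubscription:.2f}x")
```

The catch, of course, is that an oversubscribed pool needs alerting well before writes approach the physical limit.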
A poll of the audience found that top three initiatives for 2012 are to implement data deduplication, 10Gb Ethernet, and Solid-State drives (SSD).
The analysts explained that there are two different types of cloud storage. The first kind is storage "for" the cloud, used for cloud compute instances (aka Virtual Machines), such as Amazon EBS for EC2. The second kind is storage "as" the cloud, storage as a data service, such as Amazon S3, Azure Blob and AT&T Synaptic.
The analysts feel that cloud storage deployments will be mostly private clouds, bursting as needed to public cloud storage. This creates the need for a concept called "Cloud Storage Gateways" that manage this hybrid of some local storage and some remote storage. IBM's SONAS Active Cloud Engine provides long-distance caching in this manner. Other smaller startups include cTera, Nasuni, Panzura, Riverbed, StorSimple, and TwinStrata.
A variation of this are "storage gateways" for backup and archive providers as a staging area for data to be subsequently sent on to the remote location.
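At its core, a cloud storage gateway is a cache: hot objects are served from local disk, cold ones are fetched from the remote cloud. This toy sketch (the class name is mine, and the remote object store is faked with a dict) illustrates the idea:

```python
from collections import OrderedDict

class CloudStorageGateway:
    """Toy sketch of a cloud storage gateway: keeps a small LRU cache of
    hot objects locally and fetches cold ones from the remote cloud."""

    def __init__(self, remote, cache_slots=2):
        self.remote = remote                 # stand-in for an S3-style store
        self.cache = OrderedDict()           # local cache, LRU order
        self.cache_slots = cache_slots
        self.remote_reads = 0                # counts trips over the WAN

    def read(self, key):
        if key in self.cache:                # cache hit: serve locally
            self.cache.move_to_end(key)
            return self.cache[key]
        self.remote_reads += 1               # cache miss: fetch remotely
        value = self.remote[key]
        self.cache[key] = value
        if len(self.cache) > self.cache_slots:
            self.cache.popitem(last=False)   # evict least-recently-used
        return value

gw = CloudStorageGateway({"a": "alpha", "b": "beta", "c": "gamma"})
gw.read("a"); gw.read("b"); gw.read("a")     # third read is a local hit
print("remote reads:", gw.remote_reads)
```

Products like SONAS Active Cloud Engine apply the same principle at much larger scale, with policies deciding what stays local.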
New projects like virtualization, Cloud computing and Big Data are giving companies a new opportunity to re-evaluate their strategies for storage, process and governance.
This week, Tuesday, Wednesday and Thursday, I am at the IBM Dynamic Infrastructure Executive Summit at the beautiful Fairmont Resort in Scottsdale, Arizona. This is a mix of indoor and outdoor meetings, one-on-ones with IBM executives, and main-tent sessions.
The Solutions Showcase will cover the following:
As the bar for performance gets higher and the need to manage, store and analyze massive amounts of information escalates, systems must scale to meet the needs of the business. On display are the latest server and storage technology innovations, including POWER7, eX5, XIV, ProtecTIER, SONAS, and System z Solution Editions.
Smarter Data Centers
Today’s data centers are under extreme power and cooling pressures and space constraints. How can you get more out of your existing facility, while planning for future requirements? IBM energy efficiency consultants will tell you how you can reduce both CAPEX and OPEX costs and plan for future growth with consolidation and virtualization, energy efficient (energy star) equipment and modular data center solutions. Be sure to check out the IBM Portable Modular Data Center (PMDC) that fits in a standard shipping crate!
IBM's Cloud Computing solutions provide you with flexible, dynamic, secure and cost-efficient delivery choices: pay-per-use (by the hour, week or year) at IBM cloud centers around the world, conditioning your infrastructure to build your own private cloud, or out-of-the-box cloud solutions that are quick and easy to deploy. Which workloads are the best fit for cloud computing? How do you decide which cloud computing approach is right for your organization? Cloud experts will talk about the options, give you recommendations based on your business objectives, and help you get started.
Continuing my coverage of the 30th annual [Data Center Conference]. Here is a recap of the Monday afternoon sessions:
IBM Watson and your Data Center
Steve Sams, IBM VP of Site and Facilities Services, cleverly used IBM Watson as a way to explain how analytics can be used to help manage your data center. Sadly, most of the people at my table missed the connection between IBM Watson and analytics. How does answering a single trivia question in under three seconds relate to the ongoing operations of a data center? If you were similarly confused, take a peek at my series of IBM Watson blog posts:
The analyst who presented this topic was probably the fastest-speaking Texan I have met. He covered various aspects of Cloud Computing that people need to consider. Why hasn't Cloud taken off sooner? The analyst feels that Cloud Computing wasn't ready for us, and we weren't ready for Cloud Computing. The fundamentals of Cloud Computing have not changed, but we as a society have. Now that many end users are comfortable consuming public cloud resources, from Facebook to Twitter to Gmail, they are beginning to ask for similar capabilities from their corporate IT.
Legal issues - see this hour-long video, [Cloud Law & Order], which discusses legal issues related to Cloud Computing.
Employee staffing - need to re-tool and re-train IT employees to start thinking of their IT as a service provider internally.
Hybrid Cloud - rather than struggle choosing between private and public cloud methodologies, consider a combination of both.
University of Rochester Medical Center (URMC) Cracks Code on Data Growth
Often, the hour is split: 30 minutes of the sponsor talking about various products, followed by 30 minutes of the client sharing a user experience. Instead, I decided to let the client speak for 45 minutes, and then I moderated the Q&A for the remaining 15 minutes. This revised format seemed to be well-received!
University of Rochester is in New York, about 60 miles east of Buffalo, and 90 miles from Toronto across Lake Ontario. Six years ago, Rick Haverty joined URMC as the Director of Infrastructure services, managing 130 of the 300 IT personnel at the Medical Center. I met Rick back in May, when he presented at the IBM [Storage Innovation Executive Summit] in New York City.
URMC has DS8000, DS5000, XIV, SONAS, Storwize V7000 and is in the process of deploying Storwize V7000 Unified. He presented how he has used these for continuous operations and high availability, while controlling storage growth and costs.
The Q&A was lively, focusing on how his team manages 1PB of disk storage with just four storage administrators, his choice of a "Vendor Neutral Archive" (VNA), and his experiences with integration.
This was a great afternoon, and I was glad to get all my speaking gigs done early in the week. I would like to thank Rick Haverty of URMC for doing a great job presenting this afternoon!
Wrapping up my coverage of the [IBM System Storage Technical University 2011], I attended a few sessions on Friday morning. The last session was Glenn Anderson's "IT Game Changers: the IT Professional's Guide to Becoming a Technology Trailblazer." Glenn used to run the Storage University events, but now is the conference manager for the System z mainframe events.
Glenn organized this talk from lessons from the following books:
Glenn suggested that IT professionals should understand the dissatisfaction with IT that is driving companies to switch over to Cloud Computing. IT professionals should adopt a service-oriented approach, realize the full potential of new disruptive technologies, and know when to "jump the curve" to the next generation of technology. For example, IT professionals should lead the movement to Cloud. If you build your own private cloud, or purchase some time for instances on a public cloud, you will be in a better position to be the "trusted advisor" to IT management.
CIOs should encourage IT to be part of the corporate strategy, but may have to fix the broken IT funding model. The IT department should be a "value center" not a "cost center" as it has been traditionally treated. When treated as a "cost center", IT departments focus only on cost reductions, and not on ways that the IT department can help drive revenue, improve customer service, or enhance employee productivity. A well-organized IT department can be a competitive advantage.
Taking a "service-oriented" approach allows IT and business processes to come together. Too often, IT and business professionals don't communicate well, and this new service-oriented approach can bridge the gap. Service Oriented Architecture [SOA] can help connect existing legacy applications to the new Cloud Computing environment.
IT budgets should consist of two parts: strategic funding for new IT projects, and an operational budget for keeping current applications running. Roughly 45 percent of capital investment in the USA goes toward IT. Too often, the IT department is focused on itself, on technology and reducing costs, and not enough on aligning IT with business transformation. When IT is used in conjunction with a sound business strategy, there can be a significant payoff.
After 550 years, the printing press and printed materials are being pushed from center stage. While other electronic media like radio and television have been around for a while, the internet and digital publishing are constantly available, and represent a shift away from traditional printed materials.
When evaluating new technologies, IT professionals should ask themselves a few questions. Is it easy to use? Does it enable people to connect in new ways? Is it more cost-effective, or tap new sources of revenue? Does it shift power from one player to another? A new intellectual ethic is taking hold. Becoming an IT Game Changer can help stay one step ahead as Cloud Computing and other new IT platforms are adopted.
This week I am at the Data Center Conference 2009 in Las Vegas. There are some 1700 people registered this year for this conference, representing a variety of industries like Public sector, Services, Finance, Healthcare and Manufacturing. A survey of the attendees found:
55 percent are at this conference for the first time.
18 percent have attended once before, like me.
15 percent, two or three times before.
12 percent, four or more times before.
Plans for 2010 IT budgets were split evenly, one third planning to spend more, one third planning to spend about the same, and the final third looking to cut their IT budgets even further than in 2009. The biggest challenges were Power/Cooling/Floorspace issues, aligning IT with Business goals, and modernizing applications. The top three areas of IT spend will be for Data Center facilities, modernizing infrastructure, and storage.
There are six keynote sessions scheduled, and 66 breakout sessions for the week. A "Hot Topic" was added on "Why the marketplace prefers one-stop shopping" which plays to the strengths of IT supermarkets like IBM, encourages HP to acquire EDS and 3Com, and forces specialty shops like Cisco and EMC to form alliances.
Day 2 began with a series of keynote sessions. Normally when I see "IO" or "I/O", I immediately think of input/output, but here "I&O" refers to Infrastructure and Operations.
Business Sensitivity Analysis leads to better I&O Solutions
The analyst gave examples from Alan Greenspan's biography to emphasize his point that what this financial meltdown has caused is a decline in trust. Nobody trusts anyone else. This is true between people, companies, and entire countries. While the GDP declined 2 percent in 2009 worldwide, it is expected to grow 2 percent in 2010, with some emerging markets expected to grow faster, such as India (7 percent) and China (10 percent). Industries like Healthcare, Utilities and Public sector are expected to lead the IT spend by 2011.
While IT spend is expected to grow only 1 to 5 percent in 2010, there is a significant shift from Capital Expenditures (CapEx) to Operational Expenses (OpEx). OpEx represented only 64 percent of the IT budget in 2004, but today represents 76 percent and growing. Many companies are keeping their aging IT hardware in service longer, beyond traditional depreciation schedules. The analyst estimated over 1 million servers were kept longer than planned in 2009, and another 2 million will be kept longer in 2010.
An example of hardware kept too long was the November 17 delay of some 2,000 flights in the United States, caused by a failed router card in Utah that was part of the air traffic control system. Modernizing this system is estimated to cost $40 billion US dollars.
Top 10 priorities for the CIO were Virtualization, Cloud Computing, Business Intelligence (BI), Networking, Web 2.0, ERP applications, Security, Data Management, Mobile, and Collaboration. There is a growth in context-aware computing, connecting operational technologies with sensors and monitors to feed back into IT, with an opportunity for pattern-based strategy. Borrowing a concept from the military, "OpTempo" allows a CIO to speed up or slow down various projects as needed. By seeking out patterns, developing models to understand those patterns, and then adapting the business to fit those patterns, a strategy can be developed to address new opportunities.
Infrastructure and Operations: Charting the course for the coming decade
This analyst felt that strategies should not just be focused looking forward, but also look left and right, what IBM calls "adjacent spaces". He covered a variety of hot topics:
65 percent of the energy used to run x86 servers accomplishes nothing; the average x86 server runs at only 7 to 12 percent CPU utilization.
Virtualization of servers, networks and storage is transforming IT into one big logical system image, which plays well with Green IT initiatives. He joked that this is what IBM offered 20 years ago with Mainframe "Single System Image" sysplexes, and that we have come full circle.
One area of virtualization is desktop images (VDI). This goes back to the benefits of green-screen 3270 terminals of the mainframe era, eliminating the headaches of managing thousands of PCs and instead having thin clients rely heavily on centralized services.
The deluge of data continues, as more convenient access drives demand for more data. The analyst estimates storage capacity will increase 650 percent over the next five years, with over 80 percent of this being unstructured data. Automated storage tiering, a la Hierarchical Storage Manager (HSM) from the mainframe era, is once again popular, along with newer technologies like thin provisioning and data deduplication.
IT is also being asked to do complex resource tracking, such as power consumption. In the past IT and Facilities were separate budgets, but that is beginning to change.
The fastest growing social network was Twitter, with 1,382 percent growth in 2009; 69 percent of the new users who joined this year were 39 to 51 years old. By comparison, Facebook grew only 249 percent. Social media is a big factor both inside and outside a company, and management should be aware of what Tweets, blogs, and others in the collective are saying about you and your company.
The average 18 to 25 year old sends out 4,000 text messages per month. In 24 hours, more text messages are sent out than there are people on the planet (6.7 billion). Unified Communications is also getting attention: the idea that all forms of communication, from email to texts to Voice over IP (VoIP), can be managed centrally.
Smart phones and other mobile devices are changing the way people view laptops. Many business tasks can be handled by these smaller devices.
It costs more in energy to run an x86 server for three years than it costs to buy it. The idea of blade servers and componentization can help address that.
Mashups and Portals are an unrecognized opportunity. An example of a Mashup is mapping a list of real estate listings to Google Maps so that you can see all the listings arranged geographically.
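For the curious, that kind of mashup can be sketched in a few lines of Python. This is only an illustration: the listing labels and coordinates are made up, and a real request to the Google Static Maps service would also need an API key appended. The point is simply that a mashup is little more than gluing one data source (listings) onto another service (maps).

```python
from urllib.parse import urlencode

def listings_map_url(listings, size="600x400"):
    """Build a Google Static Maps URL with one marker per real estate listing.

    `listings` is a list of (label, latitude, longitude) tuples.
    Each listing becomes a labeled map marker in the query string.
    """
    markers = [f"label:{label}|{lat},{lon}" for label, lat, lon in listings]
    query = [("size", size)] + [("markers", m) for m in markers]
    return "https://maps.googleapis.com/maps/api/staticmap?" + urlencode(query)

# Hypothetical listings near Rochester, NY
listings = [("A", 43.16, -77.61), ("B", 43.10, -77.63)]
url = listings_map_url(listings)
```

Embedding the resulting URL in an `<img>` tag on a web page would render all the listings arranged geographically, which is the whole appeal of the mashup.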
Lastly, Cloud Computing will change the way people deliver IT services. Amusingly, the conference was playing "Both Sides Now" by Joni Mitchell, which has the [lyrics about clouds].
Unlike other conferences that clump all the keynotes at the beginning, this one spreads the "Keynote" sessions out across several days, so I will cover the rest over separate posts.
Of course, EMC isn't the first, and won't be the last, vendor to [hear the sirens] of Cloud Computing and crash their ships on rocky shores. Just because you manufacture hardware or write software does not guarantee your success as a Cloud service provider.
(FTC disclaimer: I work for IBM. IBM is a successful public cloud service provider, as well as offering products that can be used to deploy a private, hybrid or community cloud, and provides technology to other cloud service providers.)
An amusing excerpt from Steve Duplessie's post:
"Side Note: There is no such thing as a private cloud. A private cloud is called IT. We don’t need more terms for the same stuff."
I have to agree that when vendors like EMC say "Journey to the Private Cloud", skeptics hear "How to keep your IT administrator job by sticking with a traditional IT approach". Butchers, bakers, candlestick makers and the specialty shop "arms dealers" of Cloud Computing IT equipment may not want to see their market shrink down to a dozen or so service providers, and drum up the fear that "Public Cloud" deployments will "disintermediate" the IT staff.
But does that mean the use of term "Private Cloud" should be discontinued? The US National Institute of Standards and Technology [NIST] offers their cloud model composed of five essential characteristics, three service models, and four deployment models. Here's an excerpt:
On-demand self-service
Broad network access
Resource pooling
Rapid elasticity
Measured service
Cloud Software as a Service (SaaS)
Cloud Platform as a Service (PaaS)
Cloud Infrastructure as a Service (IaaS)
Private cloud
Community cloud
Public cloud
Hybrid cloud
Like traditional IT, a private cloud infrastructure is operated solely for an organization, so I can see how many might consider the term unnecessary. However, unlike traditional IT, a private cloud may be managed by the organization or a third party and may exist on premise or off premise.
How many traditional IT departments meet the five essential characteristics above? Instead of "on-demand self-service", many IT departments have complicated and lengthy procurement and change control procedures. A few might have "measured service" with a charge-back scheme, and a few others prefer to use a "show-back" approach instead, showing end users or managers how much IT resource is being consumed without assigning a monetary figure or other penalty. Rapid elasticity? Giving back any resource you asked for can be just as painful, because re-purposing that equipment follows the same complicated and lengthy change control procedures.
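To make the "show-back" idea concrete, here is a minimal sketch in Python. The department names, consumption figures and per-GB rate are invented for illustration; the essence is that the report shows each group what its consumption would cost, without actually billing anyone.

```python
def show_back_report(usage_gb, rate_per_gb=0.10):
    """Summarize storage consumption per department.

    Reports what each department *would* pay at the given rate,
    without charging anyone -- the essence of show-back versus charge-back.
    """
    lines = []
    for dept, gb in sorted(usage_gb.items()):
        lines.append(f"{dept}: {gb} GB (notional cost ${gb * rate_per_gb:.2f})")
    return "\n".join(lines)

# Hypothetical monthly consumption figures
usage = {"Radiology": 400, "Cardiology": 250}
report = show_back_report(usage)
```

Switching this to charge-back would simply mean feeding the same notional figures into the billing system rather than a report.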
Just like the term "intranet" refers to a private network that employs Internet standards and technologies, I feel the term "private cloud" is useful, representing an infrastructure that meets the above criteria, employing Public Cloud standards and technologies, that can distinguish itself from traditional IT in key ways that provide business value.
What I do hope "vaporizes" is all the hype, and all the misuse of the Cloud terminology out there.
I use two Cloud-Computing based photo-sharing services, [KodakGallery.com] and [Flickr.com], which serve two completely different purposes.
KodakGallery from Kodak
Formerly, this was Ofoto, but it was acquired by Kodak. I started using this service back in 2002, and had uploaded over 12,000 photos over the past 8 years. I was able to share all my photos with my friends and family, and they could simply order whichever prints they wanted and have them shipped directly to them. Kodak also offered professional-quality photo-based products, like calendars and coffee table books, that you could produce from your own photos.
Sadly, the fine folks at Kodak Gallery decided they did not want my business anymore, and purged my 36GB of files from their system. To be fair, they had hinted that they were having financial problems, and offered an "Archive CD" service, which would have allowed me to get a set of CDs or DVDs holding the high-resolution versions of all my uploaded photos. This would have cost $150 or so, and since there was no option to get just the "delta" of photos uploaded since your last archive, it would have cost me $150 every year or so to get an updated "backup" of my files. It seemed expensive and unnecessary at the time, given that I was sure Kodak was not going out of business anytime soon, and that they took their own backups of all the photos that people put in their charge.
The problem is that Kodak Gallery was a free service, subsidized by people ordering physical prints and other products. As such, I got lots of email from Kodak every month, offering me free shipping, special promotions, and seasonal discounts. The volume was so high that I had all email from them automatically routed to a sub-folder that I would never look at, unless I was about to make a purchase and needed to find the best coupon code or free shipping option currently offered. This also had the unintended consequence that I missed the following series of notes:
Important: From the Gallery's General Manager (April 17)
Second notice: Our storage policy has changed (April 24)
Final notice: Your stored photos may be deleted (May 8)
We don't want to delete your photos (May 22)
All the notes mentioned the new "Storage Policy", here is a quick excerpt:
"The fact is, we store billions of photos for our 75 million members. The quality storage service the Gallery provides is significant in terms of our business costs.
So that we can provide the highest level of service, we're now asking all Gallery customers to make an annual nominal purchase in exchange for photo storage. We've modified our Terms of Service policy accordingly: if your Gallery photo storage equals 2 gigabytes or less, we're asking you to spend $4.99 annually; if more than 2 gigabytes, $19.99 annually.*
One last thought: We value and appreciate your business, and we want to continue our relationship with you in a spirit of mutual support and benefit. That's always been the Kodak way."
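The new policy quoted above is a simple two-tier rule, which can be expressed in a couple of lines. This is just my reading of the quoted terms, not anything Kodak published as code:

```python
def annual_storage_fee(stored_gb):
    """Annual purchase Kodak Gallery asked for, per the quoted policy:
    $4.99 if stored photos total 2 gigabytes or less, $19.99 if more."""
    return 4.99 if stored_gb <= 2 else 19.99

# My 36GB library put me squarely in the higher tier.
fee = annual_storage_fee(36)
```

In other words, keeping my eight years of photos alive would have cost $19.99 per year, which makes the outcome all the more frustrating.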
Since they had no response from me, nor saw any purchase activity, my 36GB of files were deleted on June 17. I discovered all of this when I contacted Kodak to find out where my files were last weekend during my "Spring Cleaning". I asked if I could at least get the final set of "Archive CDs", but they told me they were purged completely.
I understand the economy is in a recession, and many free cloud-based services are losing money and going under. Faced with tough choices, Kodak opted to switch from a free service to a fee-based one.
Albert Einstein defined Insanity as "doing the same thing over and over again and expecting different results." In general, if I am trying to get a hold of someone, and email isn't working, then I try something different, try them by phone, try them by snail mail, and so on. With the deluge of emails, people sometimes declare "email bankruptcy" by deleting everything in their inbox after coming back from vacation, or implement filters to automatically route mail to separate folders. I think it is unrealistic to expect that everybody reads every piece of email that you send them.
I would have liked for Kodak to have done at least one of the following, given that I had been such a long-time customer, and that over the years they had earned hundreds of dollars in revenue, not just from me directly, but from friends and family ordering the photos I uploaded to their website:
Send me a letter after receiving no response to the first three notices. They sent me promotional materials and offers for 20 percent discounts, so they had my current snail mail address on file. With 75 million users, it would have cost $33 million USD to send letters to everyone, but for the subset of power users with more than 2GB of files, a letter might have generated more of the $19.99 purchases they needed to stay in business.
Call me on the phone. Yes, they also had my phone number in their database.
Charge my credit card on file $19.99 without a purchase, and give me a credit toward a future purchase. Something like: "You have not purchased anything in the last 12 months, so we charged your credit card, per our Terms of Service, but you can use this as a credit towards something in the next 60 days."
On the plus side, my "Spring Cleaning" project was done. You can't organize what you don't have anymore.
Flickr from Yahoo
I started using Flickr back in 2008 to hold photos and graphics for this blog. Flickr holds various sizes of each photo that I can reference directly with HTML tags. Clicking on a photo in the blog will take you to Flickr's service and let you see the full-size resolution. The "Lotus Connections" space that I have on IBM developerWorks only offers 24MB of photo storage, so Flickr was a nice alternative.
Unfortunately, Flickr had adopted a new policy that only the most recent 200 pictures would be visible, and I had already reached 170 photos. Rather than start deleting photos from my older blog posts, I opted to upgrade to the "Flickr Pro" account, with a fee of only $24.99 per year.
Hopefully, by paying an annual fee and choosing a successful and profitable Cloud-Computing company, I won't experience another traumatic loss. However, it does remind me that it is my responsibility to keep my own copies of these photos, just in case.
Fortunately, many "photo product" providers are connected to Flickr. For example, my publisher [<a href="http://www.lulu.com/">Lulu.com</a>] can access my Flickr photos to make photo-based coffee table books. As for my last eight years of memories that were lost, I will just have to treat it as if my house burned down. Rebuild and move on.