Tony Pearson is a Master Inventor and Senior IT Architect for the IBM Storage product line at the
IBM Executive Briefing Center in Tucson, Arizona, and a featured contributor
to IBM's developerWorks. In 2016, Tony celebrates his 30th anniversary with IBM Storage. He is
the author of the Inside System Storage series of books. This blog is for the open exchange of ideas relating to storage and storage networking hardware, software and services.
This week I am at the Data Center Conference 2009 in Las Vegas. There are some 1,700 people registered this year for this conference, representing a variety of industries such as public sector, services, finance, healthcare and manufacturing. A survey of the attendees found:
55 percent are attending this conference for the first time
18 percent have attended once before, like me
15 percent have attended two or three times before
12 percent have attended four or more times before
Plans for 2010 IT budgets were split evenly, one third planning to spend more, one third planning to spend about the same, and the final third looking to cut their IT budgets even further than in 2009. The biggest challenges were Power/Cooling/Floorspace issues, aligning IT with Business goals, and modernizing applications. The top three areas of IT spend will be for Data Center facilities, modernizing infrastructure, and storage.
There are six keynote sessions scheduled, and 66 breakout sessions for the week. A "Hot Topic" was added on "Why the marketplace prefers one-stop shopping," which plays to the strengths of IT supermarkets like IBM, helps explain HP's acquisitions of EDS and 3Com, and pushes specialty shops like Cisco and EMC to form alliances.
Day 2 began with a series of keynote sessions. Normally when I see "IO" or "I/O", I immediately think of input/output, but here "I&O" refers to Infrastructure and Operations.
Business Sensitivity Analysis leads to better I&O Solutions
The analyst gave examples from Alan Greenspan's biography to emphasize his point that the financial meltdown has caused a decline in trust. Nobody trusts anyone else, whether between people, companies, or entire countries. While worldwide GDP declined 2 percent in 2009, it is expected to grow 2 percent in 2010, with some emerging markets expected to grow faster, such as India (7 percent) and China (10 percent). Industries like healthcare, utilities and the public sector are expected to lead IT spending by 2011.
While IT spending is expected to grow only 1 to 5 percent in 2010, there is a significant shift from Capital Expenditures (CapEx) to Operational Expenses (OpEx). OpEx represented only 64 percent of the IT budget in 2004, but today it represents 76 percent and is still growing. Many companies are keeping their aging IT hardware in service longer, beyond traditional depreciation schedules. The analyst estimated over 1 million servers were kept longer than planned in 2009, and another 2 million will be kept longer in 2010.
An example of hardware kept too long was the November 17 delay of some 2,000 flights in the United States, caused by a failed router card in Utah that was part of the air traffic control system. Modernizing this system is estimated to cost $40 billion US dollars.
Top 10 priorities for the CIO were Virtualization, Cloud Computing, Business Intelligence (BI), Networking, Web 2.0, ERP applications, Security, Data Management, Mobile, and Collaboration. There is a growth in context-aware computing, connecting operational technologies with sensors and monitors to feed back into IT, with an opportunity for pattern-based strategy. Borrowing a concept from the military, "OpTempo" allows a CIO to speed up or slow down various projects as needed. By seeking out patterns, developing models to understand those patterns, and then adapting the business to fit those patterns, a strategy can be developed to address new opportunities.
Infrastructure and Operations: Charting the course for the coming decade
This analyst felt that strategies should not just focus on looking forward, but also look left and right at what IBM calls "adjacent spaces". He covered a variety of hot topics:
65 percent of the energy used to run x86 servers accomplishes nothing; the average x86 server runs at only 7 to 12 percent CPU utilization.
Virtualization of servers, networks and storage is transforming IT into one big logical system image, which plays well with Green IT initiatives. He joked that this is what IBM offered 20 years ago with mainframe "Single System Image" sysplexes, and that we have come around full circle.
One area of virtualization is desktop images, or Virtual Desktop Infrastructure (VDI). This goes back to the benefits of green-screen 3270 terminals of the mainframe era, eliminating the headaches of managing thousands of PCs, and instead having thin clients rely heavily on centralized services.
The deluge of data continues, as more convenient access drives demand for more data. The analyst estimates storage capacity will increase 650 percent over the next five years, with over 80 percent of it unstructured data (see the quick growth-rate calculation after this list). Automated storage tiering, much like Hierarchical Storage Manager (HSM) from the mainframe era, is once again popular, along with newer technologies like thin provisioning and data deduplication.
IT is also being asked to do complex resource tracking, such as power consumption. In the past IT and Facilities were separate budgets, but that is beginning to change.
The fastest growing social network was Twitter, with 1,382 percent growth in 2009; 69 percent of the new users who joined this year were 39 to 51 years old. By comparison, Facebook grew by only 249 percent. Social media is a big factor both inside and outside a company, and management should be aware of what Tweets, blogs, and others in the collective are saying about you and your company.
The average 18 to 25 year old sends out 4,000 text messages per month. In any 24-hour period, more text messages are sent than there are people on the planet (6.7 billion). Unified Communications is also getting attention. This is the idea that all forms of communication, from email to texts to Voice over IP (VoIP), can be managed centrally.
Smart phones and other mobile devices are changing the way people view laptops. Many business tasks can be handled by these smaller devices.
It costs more in energy to run an x86 server for three years than it costs to buy it. The idea of blade servers and componentization can help address that.
Mashups and Portals are an unrecognized opportunity. An example of a Mashup is mapping a list of real estate listings to Google Maps so that you can see all the listings arranged geographically.
Lastly, Cloud Computing will change the way people deliver IT services. Amusingly, the conference was playing "Both Sides Now" by Joni Mitchell, which has fitting [lyrics about clouds].
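As a quick sanity check on that 650 percent storage growth figure, here is a small back-of-the-envelope calculation in Python. The interpretation that "650 percent over five years" means capacity reaches 6.5 times today's level, and the 100 TB starting point, are my own assumptions for illustration.

    # Rough arithmetic behind the analyst's projection that storage capacity
    # will grow 650 percent over five years (assumed here to mean 6.5x today).
    growth_factor = 6.5      # capacity in five years relative to today (assumption)
    years = 5

    # Implied compound annual growth rate (CAGR)
    cagr = growth_factor ** (1 / years) - 1
    print(f"Implied annual capacity growth: {cagr:.1%}")   # roughly 45% per year

    # Year-by-year capacity, starting from an illustrative 100 TB today
    capacity_tb = 100.0
    for year in range(1, years + 1):
        capacity_tb *= 1 + cagr
        print(f"Year {year}: {capacity_tb:,.0f} TB")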
Unlike other conferences that clump all the keynotes at the beginning, this one spreads the "Keynote" sessions out across several days, so I will cover the rest over separate posts.
Continuing my week in Las Vegas for the Data Center Conference 2009, I attended a keynote session on Service Management. There were two analysts that co-presented this session.
One analyst was the wife of a real CEO, and the other the wife of a real CIO, so the two explained from experience that there is a language gap between IT and business. They used the analogy of a clock: the business cares that the time shown on the face is correct and that the clock keeps ticking, while behind the scenes the gears of the clock represent IT, finance, supply chain and other operations.
Based on recent surveys, there is only a 45 percent "alignment" between the goals of the CEO and the goals of the CIO. CEOs are concerned about decision making, workforce productivity, and customer satisfaction. CIOs, on the other hand, are worried about costs, operations and change initiatives. Both CEOs and CIOs are focused on innovations that can improve business processes. Service management strives to close the language gap between business and IT by helping to drive operational excellence that benefits both CEO and CIO goals. Recent surveys found the key drivers for this are controlling costs, improving customer satisfaction, availability, agility and making better business decisions.
Unfortunately, in this economy, the idea of "transformation" is out, and "restructuring" is in. In much the same way that employees have abandoned career development in favor of simple job preservation, companies are focused on tactical solutions to get through this financial meltdown, rather than launching transformation projects like deploying Service Management tools.
How much influence does the CIO have on running the rest of the business? Various surveys have found the following, ranked from most influential to least:
5-9 percent, Enterprise Leader
15-18 percent, Trusted Ally
25-32 percent, Partner
27-35 percent, Transactional
7-20 percent, At Risk
The bottom rank not only has little or no influence, but is at risk of losing their jobs. Evaluations based on a maturity model find many I&O organizations in trouble: only 11 percent are taking proactive measures, 59 percent are committed to improvement, and 30 percent are merely aware of the problems.
IT Service Management tries to bring a discipline similar to Portfolio Management and Application Lifecycle Management. Why can't IT be treated like any other part of the business portfolio? What is the business value of IT? IT can help a business run, grow and even transform. IT can help consolidate and centralize shared services to leverage resources and offer cost optimizations, not just for itself, but for the business as a whole.
CIOs that can adopt IT Service Management can have a "Jacks or Better" chance for a seat at the executive table to help drive the business forward.
Jeff Garten, a professor of International Trade at the Yale School of Management, covered the Post-Crisis Global Economy. How well did the world's governments do? Here was his "scorecard" of the five "R's":
Jeff gives world governments an "A", pumping about $20 trillion US dollars onto the world stage to stave off the worst impacts.
Jeff gives an "I" (Incomplete). Not quite an "F", as government regulations have simply not yet been adopted to address situations like this.
Jeff gives this one an "I" also. The major imbalance is the US borrowing so much from China, and China keeping its currency artificially low.
Jeff gives this a "B". Banks and other financial services firms have changed the way they do business and have taken some corrective actions on their own, often because of strings attached to bailout funds.
Jeff gives this one a "C+", in that he is not hopeful for a quick recovery. Economists have five recovery models. A quick recovery has a "V" shape. A slower full recovery has a "U" shape. Some recoveries have premature upticks followed by a second crash, representing a "W" shape. Japan still has not recovered from its crash of the last decade, like an "L" shape. Jeff feels the United States will probably have a "reverse J": it will look like a slow "U" shaped recovery over the next three years, but we never get back to our original prominence.
Jeff did not give the impression that the worst was over. Rather, he felt there were still problems ahead: banks are still carrying a lot of bad debt, and the real estate industry may take a while to recover. He feels the era of a dollar-centric world, which started circa 1945, is over, and that the dollar will continue to decline for several decades. Replacing it will be a combination of the Euro, Japanese Yen and Chinese Yuan.
What can we look forward to? There is a definite shift to Asia and other large emerging markets like Brazil. The "Global Commons" like food and energy are under severe stress. Global rule-making will go into a sort of remission. A resurgence of national governments to protect their citizens is underway. Finally, there will be a return of industrial policy.
This week several IBM executives will present at the 28th Annual Data Center Conference here in Las Vegas. Here is a quick recap:
Steve Sams: Data Center Cost Saving Actions Your CFO Will Love
A startling 78 percent of today's data centers were built in the last century, before the "dot com" era and the adoption of high-density blade servers. IBM Vice President of Global Site and Facility Services, Steve Sams, presented actions that can help extend the life of existing data centers, help rationalize the infrastructure across the company, and design a new data center that is flexible and responsive to changing needs.
In one example, an 85,000 square foot data center in Lexington had reached 98 percent capacity based on power/cooling requirements. They estimated it would take $53 million US dollars to either upgrade the facility or build a new one to meet projected growth. Instead, IBM was able to consolidate servers six-to-one, an 85 percent reduction. IBM also made changes to the cooling equipment, redirected airflow, changed out floor tiles, re-oriented the servers for more optimal placement, and implemented measurement and management tools. The end result? The facility now has eight times the compute capability and enjoys 15 percent headroom for additional growth. All this for an investment of only $1.5 million US dollars, instead of $53 million.
IBM builds hundreds of data centers for clients large and small. In addition to the "Portable Modular Data Center" (PMDC) shipping container on display at the Solution Showcase, IBM offers the "Scalable Modular Data Center", a turn-key system with a small 500 to 2,500 square foot footprint for small customers. For larger deployments, the "Enterprise Modular Data Center" offers standardized deployments in 5,000 square foot increments. IBM also offers "High Density Zones", which can be a perfect way to avoid a full site retrofit.
Helene Armitage: IT-wide Virtualization
Helene is IBM General Manager of the newly formed IBM System Software division. A smarter planet will require more dynamic infrastructures, which is IBM's approach to helping clients through the virtualization journey. The virtualization of resources, workloads and business processes will require end-to-end management. To help, IBM offers IBM Systems Director.
Helene indicated that there are four stages of adoption:
Physical consolidation - VMware and Hyper-V are the latest examples of running many applications on fewer physical servers. Of course, IBM has been doing this for decades with mainframes, and has had virtualization on System i and System p POWER systems as well. A quick survey of the audience found that about 20 percent were doing server virtualization on non-x86 platforms (for example, PowerVM or System z mainframe z/VM)
Pools of resources - SAN Volume Controller is an example solution to manage storage as a pool of disparate storage resources. Supercomputers manage pools of servers.
Integrated Service Management - in the past, resources were managed by domain, resulting in islands of management. Now, with IBM Systems Director, you can manage AIX, IBM i, Linux and Windows servers, including non-IBM servers running Linux and Windows.
Service management can provide monitoring, provisioning, service catalog, self-service, and business-aligned processes.
Cloud computing - Helene agreed that not everyone will get to this stage. Some will adopt cloud computing, whether public, private or some kind of hybrid, and others may be fine at stage 3.
For those clients that want assistance, IBM offers three levels of help:
Help me decide what is best for me
Help me implement what I have decided to do
Help me manage and run my operations
With IBM's compelling vision for the future, best of breed solutions, leadership in management software, extensive experience in services, and solid business industry knowledge, it makes sense to tap IBM to help with your next IT transformation.
Continuing my coverage of the Data Center Conference, Dec 1-4, 2009 here in Las Vegas, this post focused on data protection strategies.
Two analysts co-presented this session which provided an overview of various data protection techniques. A quick survey of the audience found that 27 percent have only a single data center, 13 percent have load sharing of their mission critical applications across multiple data centers, and the rest use a failover approach to either development/test resources, standby resources or an outsourced facility.
There are basically five ways to replicate data to secondary locations:
Array-based replication. Many high-end disk arrays offer this feature. IBM's DS8000 and XIV both have synchronous and asynchronous mirroring. Data Deduplication can help in this regard to reduce the amount of data transmitted across locations.
NAS-based replication. I consider this just another variant of the first, but this can be file-based instead of block-based, and can often be done over the public internet rather than dark fiber.
Network-based replication. This is the manner that IBM SAN Volume Controller, EMC RecoverPoint, and others can replicate. The analysts liked this approach as it was storage vendor-independent.
Host-based replication. This is often done by the host's Operating System, such as through a Logical Volume Manager (LVM) component.
Application/Database replication. There are a variety of techniques, including log shipping of transactions, SQL replication, and active/active application-specific implementations.
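The log-shipping technique in that last item is easy to picture with a toy example. Here is a minimal Python sketch of the idea: the primary appends every change to a log, and the replica periodically replays the entries it has not yet seen. It is purely illustrative; real databases ship binary logs with ordering guarantees, checksums and failure handling that this omits.

    # Toy illustration of transaction log shipping: the primary records every
    # change in an append-only log, and the replica catches up by replaying
    # entries it has not yet applied. Real implementations add ordering
    # guarantees, checksums and recovery handling that are omitted here.

    class Primary:
        def __init__(self):
            self.data = {}
            self.log = []        # append-only transaction log

        def write(self, key, value):
            self.data[key] = value
            self.log.append((key, value))

    class Replica:
        def __init__(self):
            self.data = {}
            self.applied = 0     # how far into the log we have replayed

        def catch_up(self, log):
            for key, value in log[self.applied:]:
                self.data[key] = value
            self.applied = len(log)

    primary, replica = Primary(), Replica()
    primary.write("order-1001", "shipped")
    primary.write("order-1002", "pending")
    replica.catch_up(primary.log)           # "ship" the new log entries
    print(replica.data == primary.data)     # True once the replica has caught up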
The analysts felt that "DR Testing" has become a lost art. People are just not doing it as often as they should, or not doing it properly, resulting in surprises when a real disaster strikes.
A question came up about the confusion between "Disaster Recovery Tiers" and the Uptime Institute's "Data Center Facilities Tiers". I agree this is confusing. Many clients call their most mission-critical applications Tier 1, less critical Tier 2, and least critical Tier 3. In 1983, the IBM user group GUIDE came up with "Business Continuity Tiers", where Tier 1 was the slowest recovery from manual tape, and Tier 7 was the fastest recovery with completely automated site, network, server and storage failover. However, for data center facility tiers, Uptime has the simplest, least available (99.3 percent uptime) data center as Tier 1, and the most advanced, redundant, highest available (99.995 percent) data center as Tier 4. This just goes to show that when one person starts using "Tier 1" or "Tier 4" terminology, it can be misinterpreted by others.
Continuing my coverage of the Data Center Conference 2009, we had a keynote session on Wednesday, Dec 2 (Day 3) that focused on the key technologies to watch for the data center.
It seems like every session this week mentioned Cloud Computing. It is service-based, scalable and elastic both upwards and downwards, uses shared resources and internet standards, and can be metered by use. There are three focal points related to Cloud Computing:
Consuming Cloud Services offered by other providers
Developing cloud-enabled applications and solutions
Implementing an internal "Private Cloud"
The analyst used the term "service boundary" to distinguish between the IaaS, PaaS and SaaS cloud service models. For those still confused, here is how I explain Cloud Computing, using the analogy of transportation as an example.
You buy a car to get around town. You need to have a driver's license, carry liability insurance, and have a place to park your vehicle. You get to pick the make, model and color. You need to come up with thousands of dollars up front, or arrange some form of financing for monthly payments. It could take days or weeks to purchase, as you test drive different ones, research online, and check out feature comparisons between car dealers. You can drive wherever you want, whenever you want.
The same applies to the traditional data center: you buy servers, storage and network gear, build a data center floor to hold it all, and hire server, storage and network administrators to manage it.
Infrastructure as a Service (IaaS)
You rent a car from a local car rental agency. You still need a driver's license and liability insurance, but often you can get the insurance just for the days or weeks that you are renting the car. You have limited choices of make, model and color. You don't need thousands of dollars, just enough to cover the daily or weekly rate. The rental process can be done in minutes.
IaaS providers have their own data centers, so you don't need your own. They can rent you floorspace and equipment on a monthly basis. Your server, storage and network administrators manage these remotely. Your OS choices are limited to the types of hardware available, typically x86 servers, SAN and NAS storage.
Platform as a Service (PaaS)
You take a taxi. Since you are not driving, you need neither a driver's license nor liability insurance. The vehicle is typically a yellow four-door sedan. You don't need thousands of dollars, just enough to cover the ride, often metered by the distance traveled. Getting a taxi takes minutes, just a matter of calling the cab company or hailing one streetside. Depending on the cab company, you can tell the taxi driver where to go, how to get there, and that you are in a hurry.
PaaS providers have data centers with servers, storage and networking gear. Your options are often Linux or Windows with some middleware, web serving and database already running. You may still need some of your own server, storage and network admins to manage things remotely. Usage is metered; you pay for the bandwidth, CPU and storage used. A typical rate for cloud storage, for example, is 25 cents per GB per month.
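To make the metering concrete, here is a trivial Python calculation at that quoted 25 cents per GB per month rate. The 10 TB working set is just an assumed figure for illustration.

    # Illustrative cost of metered cloud storage at the quoted rate of
    # 25 cents per GB per month; the 10 TB working set is an assumption.
    rate_per_gb_month = 0.25
    capacity_gb = 10 * 1024               # 10 TB expressed in GB

    monthly_cost = capacity_gb * rate_per_gb_month
    print(f"Monthly: ${monthly_cost:,.2f}")       # about $2,560 per month
    print(f"Yearly:  ${monthly_cost * 12:,.2f}")  # about $30,720 per year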
Software as a Service (SaaS)
You take public transportation, like the subway. You are not driving, so no need for license or insurance. The vehicle holds hundreds of passengers, and you have no options on the make, model or color. You only need enough to cover the cost of the ticket, which is often based on the distance traveled. You have to get to the subway station nearest you, and it takes you to the subway station nearest your eventual destination, so other forms of transportation may be required if this does not completely meet your requirements.
SaaS providers offer you the application already running in their data center on their servers. You are charged per employee per month that uses this application. You won't need server, storage or network administrators, but you might need your own software developers to customize the application, or compensate for its lack of functionality with surrounding applications if it does not exactly meet your needs. Google Gmail and IBM LotusLive are two examples of this.
Virtualization for Availability and Business Continuity
No surprise here: virtualization has proven quite useful for improving both high availability and continuous operations within the data center, as well as in multi-site configurations for disaster recovery and business continuity. P-to-V refers to the concept of running applications on physical servers at the primary location, but bringing them up as virtual servers under VMware or Hyper-V at the secondary disaster recovery site, to minimize the cost of standby equipment.
Reshaping the Data Center
Data center facilities design is going modular, with designs built around server/storage/network "pods" and contained "power zones".
IT for Green
This is not about making the IT department itself more environmentally friendly, but about using IT to make the entire company more environmentally friendly, including using sensors to monitor inputs and outputs, reduce carbon footprint, and track energy consumption per employee.
Virtual Desktop Infrastructure (VDI) is changing the way employees use IT services. Rather than having to maintain a full OS and application stack on each employee's PC, using VDI and browser-based applications can help centralize and take back control, minimizing help desk costs.
Business Intelligence and Operational Analytics is taking off. In the past, decision support systems were limited to just the highest levels of executives and analysts that work for them, but now the technology is reaching a broader portion of the company, allowing knowledge workers to have more information to make better business decisions. We have seen this transition from employees working off fixed rules of thumb that apply to all situations, to decisions supported by market data, to now a more predictive analysis.
FLASH memory (Solid State Drives, SSD)
Solid State Drives and advances in memory will impact the storage world in the data center, much as it has in consumer electronics.
Reshaping the Server
This last prediction seemed far-fetched. The analyst felt that we will begin to see server components separated between CPU, memory and I/O support, so that you can seamlessly add or remove each from running servers. Some of this has happened with blade servers, with some components shared by multiple servers being hot-swappable.
Certainly, an interesting list of technologies to watch.
Continuing my coverage of the Data Center Conference, held Dec 1-4 in Las Vegas, an analyst presented the challenges of managing the rapid growth in storage capacity. Administrators' ability to manage storage is not keeping up with the growth. His recommendations:
Aim to just meet but not exceed service level agreements (SLAs)
Revisit past IT decisions. This includes evaluating your SAN to NAS ratio.
Embrace new technologies when they are effective; this includes cloud storage, solid state drives, and interconnect technologies like FCoCEE.
Follow vendor management best practices, update your vendor "short list".
A survey of the audience found:
20 percent have a single external storage vendor
39 percent have two external storage vendors
18 percent have three external storage vendors
23 percent have four or more external storage vendors
Throughout the industry, storage vendors are following IBM's example of using commodity hardware parts. This is because custom ASICs are expensive, and changes take a minimum of three months development time. Software-based implementations can be updated more quickly.
In terms of technologies deployed, including SAN, NAS, Compliance Archive (such as the IBM Information Archive), and Virtual Tape Library (VTL) such as the IBM TS7650 ProtecTIER data deduplication solution, here was the survey of the audience:
8 percent: SAN only
14 percent: SAN and NAS
23 percent: SAN, NAS and Compliance Archive
9 percent: SAN and VTL
14 percent: SAN, NAS and VTL
32 percent: SAN, NAS, Compliance Archive and VTL
Cost reduction techniques include thin provisioning, compression, data deduplication, Quality of Service tiers, and archiving. To reduce power and cooling requirements, switch from FC to SATA disk wherever possible, and move storage out of the data center, such as onto tape cartridges or cloud storage.
For emerging technologies, the survey found:
16 percent have already implemented a new emerging technology (IBM XIV, Pillar, 3PAR, etc.)
30 percent plan to do so in 12-24 months
4 percent plan to do so in 24-48 months
50 percent have no plans, and will continue to stick with traditional storage technologies
As for adopting Cloud storage, here was the survey:
14 percent already have
31 percent plan to use Cloud storage in 12-24 months
13 percent plan to use Cloud storage in 24-48 months
42 percent have no plans to adopt Cloud storage
My take-away from this is that many companies are still exploring the different options available to them. Fortunately, IBM offers a broad portfolio of complete end-to-end solutions, making it possible to acquire the right mix of technologies optimized for your workloads.
Continuing my coverage of the Data Center Conference 2009, held Dec 1-4 in Las Vegas, the title of this session refers to the mess of "management standards" for Cloud Computing.
The analyst quickly reviewed the concepts of IaaS (Amazon EC2, for example), PaaS (Microsoft Azure, for example), and SaaS (IBM LotusLive, for example). The problem is that each provider has developed their own set of APIs.
(One exception was [Eucalyptus], which adopts the Amazon EC2, S3 and EBS style of interfaces. Eucalyptus is an open-source infrastructure whose name stands for "Elastic Utility Computing Architecture Linking Your Programs To Useful Systems". You can build your own private cloud using the new Cloud APIs included in Ubuntu Linux 9.10 "Karmic Koala", termed Ubuntu Enterprise Cloud (UEC). See the instructions in the InformationWeek article [Roll Your Own Ubuntu Private Cloud].)
The analyst went into specific Virtual Infrastructure (VI) and public cloud providers.
For VMware, private clouds can be managed by VMware's own tools. For remote management of public IaaS clouds, there is [vCloud Express], and for SaaS, a new service called [VMware Go].
Citrix is the open-service champion. For private clouds based on XenServer, they have launched the [Xen Cloud Project] to help with management. For public clouds, they have [Citrix Cloud Center, C3], including an Amazon-based "Citrix C3 Labs" for developing and testing applications. For SaaS, they have [GoToMyPC] and [GoToAssist].
Amazon offers a set of Cloud computing capabilities called Amazon Web Services [AWS]. For virtual private clouds, use the AWS Management Console. For IaaS (Amazon EC2), use [CloudWatch] which includes Elastic Load Balancing.
If you prefer a common management system independent of cloud provider, or perhaps across multiple cloud providers, you may want to consider one of the "Big 4" instead. These are the top four system management software vendors: IBM, HP, BMC Software, and Computer Associates (CA).
A survey of the audience found the number one challenge was "integration": how to integrate new cloud services into an existing traditional data center. Which vendors do you have the most confidence in to deliver tools for remote management of external cloud services? The survey showed:
28 percent: VI Providers (VMware, Citrix, Microsoft)
19 percent: Big 4 System Management software vendors (IBM, HP, BMC, CA)
13 percent: Public cloud providers (Amazon, Google)
40 percent: Other/Don't Know
For internal private on-premises clouds, the results were different:
40 percent: VI Providers (VMware, Citrix, Microsoft)
21 percent: Big 4 System Management software vendors (IBM, HP, BMC, CA)
13 percent: Emerging players (Eucalyptus)
26 percent: Other/Don't Know
Some final thoughts offered by the analyst. First, nearly a third of all IT vendors disappear after two years, and cloud providers will probably have a similar, if not worse, track record. Traditional server, storage and network administrators should not consider cloud technologies a death knell for in-house, on-premises IT. Companies should probably explore a mix of private and public cloud options.
Continuing my coverage of last week's Data Center Conference 2009, held Dec 1-4 in Las Vegas, I attended an interesting session on the battles between Linux, UNIX, Windows and other operating systems. Of course, the battle is no longer just between general-purpose operating systems; there are also thin appliances and "Meta OS" such as cloud or Real Time Infrastructure (RTI).
One big development is "context awareness". For the most part, Operating Systems assume they are one-to-one with the hardware they are running on, and Hypervisors like PowerVM, VMware, Xen and Hyper-V have worked by giving OS guests the appearance that this is the case. However, there is growing technology for OS guests to be "aware" they are running as guests, and to be aware of other guests running on the same Hypervisor.
The analyst divided up Operating Systems into three categories:
Web/Infrastructure
Operating systems that are typically used to support other OSes by offering Web support or other infrastructure. Linux on POWER was an example given.
DBMS/Industry Vertical Applications
Operating systems that are strong for Data Base Management Systems (DBMS) and vertical industry applications. z/OS, AIX, HP-UX, HP NonStop, HP OpenVMS were given as examples.
General Purpose for a variety of applications
Operating systems that can run a range of applications, from Web/Infrastructure, DBMS/Vertical Apps, to others. Windows, Linux x86 and Solaris were offered as examples.
The analyst indicated that what really drove the acceptance or decline of Operating Systems were the applications available. When Software Development firms must choose which OS to support, they typically have to evaluate the different categories of marketplace acceptance:
For developing new applications: Windows-x86 and Linux-x86 are must-haves now
Declining but still valid are UNIX-RISC and UNIX-Itanium platforms
Viable niche are Non-x86 Windows (such as Windows-Itanium) and non-x86 Linux (Linux on POWER, Linux on System z)
Entrenched Legacy including z/OS and IBM i (formerly known as i5/OS or OS/400)
For the UNIX world, there is a three-legged stool. If any leg breaks, the entire system falls apart.
The CPU architecture: Itanium, SPARC and POWER based chipsets
Operating System: AIX, HP-UX and Solaris
Software stacks: SAP, Oracle, etc.
Of these, the analyst considered IBM POWER running AIX to be the safest investment. For those who prefer HP Integrity, consider waiting for the "Tukwila" project, which will introduce a new Itanium chipset in 2Q2010. For Sun SPARC, the European Union (EU) delay of Oracle's acquisition of Sun could impact user confidence in this platform. The future of SPARC now rests in the hands of Fujitsu and Oracle.
What platform will the audience invest in most over the next 5 years?
45 percent Windows
14 percent UNIX
37 percent Linux
4 percent z/OS
A survey of the audience about current comfort level of Solaris:
10 percent: still consider Solaris to be Strategic for their data center operations and will continue to use it
25 percent: will continue to use Solaris, but in more of a tactical way on a case-by-case basis
30 percent: have already begun migrating away
35 percent: Do not run Solaris
The analyst mentioned Microsoft's upcoming Windows Server 2008 R2, which will run only on 64-bit hardware but support both 32-bit and 64-bit applications. It will provide scalability up to 256 processor cores. Microsoft wants Windows to get into the High Performance Computing (HPC) marketplace, but this is currently dominated by Linux and AIX. The analyst's advice to Microsoft: System Center should manage both Windows and Linux.
Has Linux lost its popularity? The analyst indicated that companies are still running mission-critical applications on non-Linux platforms, primarily z/OS, Solaris and Windows. What helps Linux are old legacy UNIX applications, the existence of OpenSolaris x86, Oracle Enterprise Linux, VMware and Hyper-V support for Linux, Linux on the System z mainframe, and other legacy operating systems that are growing obsolete. One issue cited with Linux is scalability: performance on systems with more than 32 processor cores is unpredictable. More mature operating systems like z/OS and AIX have stronger support for high-core environments.
A survey of the audience of which Linux or UNIX OS were most strategic to their operations resulted in the following weighted scores:
140 points: Red Hat Linux
80 points: Solaris
71 points: AIX
41 points: Novell SUSE Linux
40 points: HP-UX
29 points: Other
19 points: Oracle Enterprise Linux
The analyst wrapped up with an incredibly useful chart that summarizes the key reasons companies migrate from one OS platform to another:
Reduce Costs, Adopt HPC
DBMS, Complex projects
Availability of Admin Skills
Performance, Mission Critical Applications
Availability of Apps, leave incumbent UNIX server vendor
Consolidation, Reduce Costs
Certainly, all three types of operating system have a place, but there are definite trends and shifts in this marketspace.
Continuing my coverage of last week's Data Center Conference 2009, held Dec 1-4 in Las Vegas, I find some of the best sessions are those "user experiences" by the CIO or IT directors that successfully completed a project and showed the benefits and pitfalls. Matt Merchant, CTO of General Electric (GE), gave an awesome presentation on tapping Cloud Storage to reduce their backup and archive costs.
They were concerned about their lack of e-Discovery tools, the high fixed cost and large administrative personnel load of their Veritas NetBackup environment, the possibility of corrupted tape media, new compliance and regulatory issues, and the risk of moving unencrypted cartridges to remote vaulting facilities like Iron Mountain. I found it interesting that in their backup/archive approach, backups are reclassified as archive after they are 35 days old.
GE's Disk-to-Disk-to-Tape (D2D2T) approach was costing them 50 cents per GB per month. Changing to D2D with remote replication addressed some of their concerns over tape, but was more costly at 79 cents per GB per month. Given that backup and archive represent 30 percent of their IT budget, the largest non-application expense, they reviewed their options:
Continue with their Traditional BU/Archive approach
Adopt Internal DAS using cheaper SATA disk drives
Implement an Internal Cloud
Use External Cloud services
General Electric had a long list of requirements:
99.99 percent Availability
99.999 percent reliability and integrity of the data
Location independent access
Meets HIPAA, SAS70, PCI compliance requirements
Secure 3rd party access
Eliminate GE operations management personnel
Large file size uploads and resumable uploads (GE owns NBC Universal and some files are very large, movies can be 1.5 TB in size)
Encryption at rest
Multi-node capable, in other words, GE uploads it once and the Cloud Storage provider ensures that it is stored in two or more designated locations.
Child-level billing/management. Here child relates to department, division or other sub-division for reporting and management purposes.
Data integrity verification, such as with MD5 hash codes
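That last requirement is simple to picture: compute a checksum before the upload and compare it against what the provider stores or returns. A minimal Python sketch using the standard hashlib module follows; the file name is just a placeholder.

    # Compute an MD5 checksum of a file before upload so its integrity can be
    # verified after it lands in cloud storage. The file name is a placeholder.
    import hashlib

    def md5_of_file(path, chunk_size=1024 * 1024):
        digest = hashlib.md5()
        with open(path, "rb") as f:
            for chunk in iter(lambda: f.read(chunk_size), b""):
                digest.update(chunk)      # hash the file in 1 MB chunks
        return digest.hexdigest()

    local_hash = md5_of_file("backup-2009-12-04.tar")
    # After the upload, compare local_hash against the hash reported by (or
    # recomputed from) the cloud copy; a mismatch means something corrupted
    # the data in transit or at rest.
    print(local_hash)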
GE evaluated Nirvanix, Amazon S3 and EMC, and chose Nirvanix. They found cloud storage worked best for backup, archive and large files, but was not a good fit for production/transactional data. However, they were not happy with proprietary APIs and vendor lock-in, so they wrote their own internal "Data Mover", called CloudStorage Manager, that works with five different cloud storage providers through an abstraction layer (a rough sketch of what such a layer could look like follows the list below). It can handle uploads of up to 8.8 GB per minute and has a policy engine that does encryption, compression and single-instance, file-level data deduplication. Some lessons learned include:
Challenge the skeptics
Run small pilot projects to get familiar with the technology and provider
Socialize (have a beer or coffee with) your Security and Legal teams early and often
Consider using multiple cloud providers
Test many different scenarios
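To give a feel for how a provider-independent layer like GE's "Data Mover" might be structured, here is a rough Python sketch of a provider abstraction. Every class and method name here is hypothetical; GE did not describe CloudStorage Manager's internals at this level of detail.

    # Hypothetical sketch of a provider-abstraction layer in the spirit of
    # GE's CloudStorage Manager: callers target one interface, and each cloud
    # provider is a pluggable backend. All names and methods are illustrative.
    from abc import ABC, abstractmethod

    class CloudProvider(ABC):
        @abstractmethod
        def upload(self, name: str, data: bytes) -> str:
            """Store an object and return a provider-side identifier."""

        @abstractmethod
        def download(self, object_id: str) -> bytes:
            """Retrieve an object by its identifier."""

    class InMemoryProvider(CloudProvider):
        """Stand-in backend so the sketch runs without a real cloud account."""
        def __init__(self):
            self.objects = {}

        def upload(self, name, data):
            self.objects[name] = data
            return name

        def download(self, object_id):
            return self.objects[object_id]

    class DataMover:
        """Policy layer: pick a provider by name, then hand off the transfer."""
        def __init__(self, providers):
            self.providers = providers

        def archive(self, provider_name, name, data):
            return self.providers[provider_name].upload(name, data)

    mover = DataMover({"demo": InMemoryProvider()})
    object_id = mover.archive("demo", "weekly-backup.tar", b"...payload...")
    print(object_id)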
The end result? They now have Cloud-based backups and archive for their GE Corp, NBC Universal and GE Asset Management divisions running at only 32 cents per GB/month, representing a 40-60 percent savings over their previous methods. This includes backups of their external Web sites, archives of their digital and production assets, RMAN backups including development/staging databases. They plan to add out-of-region compliance archive in 2010. They also plan to monetize their intellectual property by offering "CloudStorage Manager" as a software offering for others.
Continuing my coverage of last week's Data Center Conference 2009, I attended another "User Experience" session that was very well received. This time it was Henry Sienkiewicz of the Defense Information Systems Agency (DISA), presenting a real-world example of the business model behind a private cloud implementation. DISA is the US government agency that develops and runs software for the Army, Navy and Air Force.
Being part of the military presents its own unique set of challenges:
Acquisition of hardware to develop and test software is difficult
Budgets fluctuate so an elastic pay-for-use was desirable
End user access had to be secure and meet government regulations
It had to meet the technical criteria of being scalable, elastic, dynamic and multi-tenant, using shared resources
Using Cloud Computing simplifies provisioning, encourages the use of standards, and provides self-service. DISA has several solutions.
Rapid Access Computing Environment (RACE)
RACE is an internal private cloud with 24-hour provisioning for development and test requests, and 72-hour provisioning for production requests. Usage is billed on a month-to-month basis, and a self-service portal lets developers and administrators pick and choose what they need. The result is a hosted server, similar to what you get from 1and1.com or GoDaddy.
Global Content Delivery Service (GCDS)
This provides long-term storage of data. An internal version of "Cloud Storage" for archive and fixed content.
Forge.mil
This provides a place to maintain source code, basically an internal version of the "SourceForge" site used by Open Source projects.
In their traditional approach, a software project would take six months to procure the hardware, another 6-12 months to code and test, and then another 6 months in certification, for a total of 18-24 months. With the new Cloud Computing approach that DISA adopted, procurement is down to 24-72 hours with RACE, coding and testing take only 2-6 months with Forge.mil, and certification can be done in days on RACE, for a new total of only 3-6 months. Some challenges they found:
Service Level management and continuing the use of ITIL best practices
Balancing Military-level Security with Self-service Usability
Internal Funding and Chargeback, they had even adopted a way for developers to pay with their credit card
Cultural inertia, developers don't like to change or do things in a different way
Some lessons learned from this two-year experience:
It's a journey. Most of the user experiences for cloud adoption took two or more years to complete
Infrastructure Fundamentals continue to matter
Know your "marketplace", in this case, software development for military applications
Engage your end users early. In this case, Henry wished he had involved the software developers who would be using RACE, GCDS and Forge.mil earlier in the process.
Return on Value analysis; this is different from Return on Investment, as many of the benefits of cloud, like higher morale, are intangible at first
Avoid fixed costs in negotiations with vendors. For example, he cited they use a lot of IBM because of IBM's pay-for-use billing model. They pay for MIPS used on IBM mainframes, and their IBM Tivoli software pricing is usage-based.
Continuing my coverage of last week's Data Center Conference 2009, my last breakout session of the week was an analyst presentation on Solid State Drive (SSD) technology. There are two different classes of SSD: consumer-grade multi-level cell (MLC), currently running at $2 US dollars per GB, and enterprise-grade single-level cell (SLC), running at $4.50 US dollars per GB. Roughly 80 to 90 percent of SSD production goes to consumer use cases, such as digital cameras, cell phones, mobile devices, USB sticks, camcorders, media players, gaming devices and automotive.
While the two classes are different, the large R&D budgets spent on consumer grade MLC carry forward to help out enterprise grade SLC as well. SLC means there is a single level for each cell, so each cell can only hold a single bit of data, a one or a zero. MLC means the cell can hold multiple levels of charge, each representing a different value. Typically MLC can hold 3 to 4 bits of data per cell.
Back in 1997, SLC Enterprise Grade SSD cost roughly $7870 per GB. By 2013, Consumer Grade 4-bit MLC is expected to be only 24 cents per GB. Engineers are working on trade-offs between endurance cycles and retention periods. FLASH management software is the key differentiator, such as clever wear-leveling algorithms.
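Those two price points imply a steep annual decline. Here is a quick Python check using only the figures quoted above; note it mixes enterprise-grade SLC (1997) with consumer-grade MLC (2013), so treat it as a rough trend line rather than a like-for-like comparison.

    # Implied annual price decline from $7,870 per GB for SLC SSD in 1997 to a
    # projected 24 cents per GB for consumer MLC in 2013 (figures quoted above).
    price_1997 = 7870.0
    price_2013 = 0.24
    years = 2013 - 1997

    annual_factor = (price_2013 / price_1997) ** (1 / years)
    print(f"Price multiplied by {annual_factor:.2f} each year")      # about 0.52
    print(f"Roughly a {1 - annual_factor:.0%} price drop per year")  # about 48%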
SSD is 10-15 times more expensive than spinning hard disk drives (HDD), and this price difference is expected to continue for a while. This is because of production volumes: in 4Q09, manufacturers will produce 50 Exabytes of HDD, but only 2 Exabytes of SSD. The analyst thinks that SSD will be only roughly 2 percent of the total SAN storage deployed over the next few years.
How well did the audience know about SSD technology?
4 percent not at all
30 percent some awareness
30 percent enough to make purchase decision
21 percent able to quantify benefits and trade-offs
15 percent experts
SSD does not change the design objectives of disk systems. We want disk systems that are more scalable and have higher performance. We want to fully utilize our investment. We want intelligent self-management similar to caching algorithms. We want an extensible architecture.
What will happen to fast Fibre Channel drives? Take out your Mayan calendar. Already, 84mm 10K RPM drives are end of life (EOL) in 2009. The analyst expects 67mm and 70mm 10K drives to go EOL in 2010, and 15K drives to go EOL by 2012. A lot of this is because HDD performance has not kept up with CPU advancements, resulting in an I/O bottleneck. SSD is roughly 10x slower than DRAM, and some architectures use SSD as a cache extension; the IBM N series PAM II card and Sun 7000 series are two examples.
Let's take a look at a disk system with 120 drives, comparing 73 GB HDDs versus 32 GB SSDs.
[Table: per-drive comparison of the HDD and SSD configurations]
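On raw capacity alone, the comparison is easy to work out. Here is a tiny Python calculation using just the drive count and capacities from the example above.

    # Raw capacity of the example 120-drive system with 73 GB HDDs versus
    # 32 GB SSDs (drive count and sizes taken from the example above).
    drives = 120
    hdd_gb, ssd_gb = 73, 32

    hdd_total_tb = drives * hdd_gb / 1024
    ssd_total_tb = drives * ssd_gb / 1024
    print(f"HDD system: {hdd_total_tb:.2f} TB")   # about 8.55 TB
    print(f"SSD system: {ssd_total_tb:.2f} TB")   # about 3.75 TB
    print(f"Each SSD holds {ssd_gb / hdd_gb:.0%} of an HDD's capacity")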
There are various use cases for SSD. These include internal DAS, stand-alone Tier 0 storage, replace or complement HDD in disk arrays, and as an extension of read cache or write cache. The analyst believes there will be mixed MLC/SLC devices that will allow for mixed workloads. His recommendations:
Use SSD to eliminate performance and throughput bottlenecks
Consolidate workloads to maximize value
Use SLAs to identify workload candidates
Evaluate emerging technologies along with established vendors
Do not expect SSD to drastically reduce power/cooling
SSD will continue to complement HDD, primarily SATA disk
Trust but verify, check out customer references offered by storage vendors
Well, I'm back in Tucson, and thought I would close out my coverage of this year's Data Center Conference 2009 with some pictures. These first few are from the Solution Showcase.
There were four stations at the IBM booth. I had the "Information Infrastructure" station; you can see here I had my blook (blog-based book) "Inside System Storage: Volume I" on display, a solid-state drive (in clear plexiglas to show all the chips inside), and the GUI panel for the XIV.
What really stole the show was the IBM Portable Modular Data Center (PMDC), a shipping container with a fully running data center inside. In the one shown here, we had iDataPlex servers connected to an IBM XIV Storage System. Here is David Bricker striking a pose.
Inside, Monica Martinez shows off the iDataPlex servers. These are 1U servers that are only half as deep as regular servers, so you can pack 84 servers in the floorspace of 42 traditional 1U servers.
Two of these fit into a 2U chassis to share a common power supply and fan set. The trouble with traditional 1U servers is that the fans do not have enough radius, so using wider 2U fans shared between two servers gives you much better airflow.
Monica Martinez, Ruth Weinheimer, and Tamara Rice.
Wrapping up my coverage of the Data Center Conference 2009, the week ends with a celebration. This year we had six "Hospitality Suites" sponsored by various vendors. Each suite has its own theme, decorations and entertainment. The first suite was VMware's "Cloud 9 Ultra Lounge", which offered blue cotton candy martinis. IBM is the leading reseller of VMware.
When the red martini liquid was poured on top of the blue cotton candy, the result was a nasty, muddy brown-grey color. The guy on the left chose to get his martini without the blue cotton candy. We joked that this is perhaps a good metaphor for cloud computing in general: it looks good on paper, until you actually put it all together and realize it does not look as blue and puffy as you were expecting. However, it tasted good!
The next suite was sponsored by Cisco, one of IBM's storage networking partners. Cisco also decorated in blue, as Jake, the guy in the middle, demonstrates.
The next suite was sponsored by Brocade, our supplier for IBM-branded networking gear. They went with a red-and-black color scheme. Sadly, many of my pictures inside involved straitjackets and unicycles, so they are not appropriate for this blog. However, it was easy to remember that they were talking about their "extraordinary networks". Makes you want to help out Brocade by contacting your nearest IBM storage sales rep and buying yourself a SAN768B or two.
Somewhere along the way, we picked up Hawaiian leis at the "Margaritaville" Hospitality Suite, compliments of sponsor APC by Schneider Electric. We had the best "Filet Mignon" appetizers at "Club Dedupe" by our competitor DataDomain, and some fun with my friends over at Computer Associates' "Top Gun" suite. Pictured at right are Paula Koziol with Christian Barrera from Argentina. A good time was had by all.