Before we started, we asked the first survey question: "How is storage planning conducted in your shop?" Of the various responses, nearly four out of ten responded "Part of an overall IT infrastructure strategy".
Jon Toigo went first, and spent 20 minutes or so laying out the problem as he sees it. Jon travels all over visiting customers struggling with their storage infrastructures, so he gets to hear a lot of this first hand.
I then spent 20 minutes or so presenting IBM's vision, strategy and offerings to help solve these problems. I could speak for hours on this topic, but we kept it short for this one-hour webcast. To learn more, request a visit to the Tucson Executive Briefing Center.
At the end of my talk, we put out the second survey, asking the audience "What is your number one priority with respect to storage operations today?" Over one fourth of the attendees were focused on reducing storage infrastructure cost of ownership by any means possible.
I am glad we saved the last 15 minutes for Q&A, as there were a lot of questions.
The replay is now available. If you attended the event and want to hear it again, or want to share it with your colleagues, or you missed it and want to hear it, then [Register for the Replay].
Making true advances in any industry or field requires forward thinking, as well as industry insight and experience. It can't be done just by packaging a bag of piece parts and putting a new label on it. But forward thinkers are putting smarter, more powerful technology to uses that were once unimaginable, either in scale or in progress.
The graphics developed for the IBM Smarter Planet vision are interesting. This one for Infrastructure includes images relating to public utilities, like gas, water and electricity; clouds, representing cloud computing; green forests, representing the need for energy efficiency and a reduced carbon footprint to fight global warming; roads, representing the intricate transportation and traffic systems, highways and city streets that connect us all together; and a printed circuit board, representing the Information Technology that makes all of this possible.
Ironically, I didn't even know I made the final cut until I got three, yes three, separate requests for interviews about it. I had already reached the "million hits" milestone. Other people track these things for me, so it will be interesting to see how much additional traffic my latest [15 minutes of fame] will generate.
Infrastructure is just one of the 25 different areas that IBM's vision for a Smarter Planet is trying to address, including the need for smarter buildings, smarter cities, smarter transportation systems, smarter energy grids, smarter healthcare and public safety, and smarter governments.
This week I am at the Data Center Conference 2009 in Las Vegas. There are some 1700 people registered this year for this conference, representing a variety of industries like the public sector, services, finance, healthcare and manufacturing. A survey of the attendees found:
55 percent are at this conference for the first time.
18 percent once before, like me
15 percent two or three times before
12 percent four or more times before
Plans for 2010 IT budgets were split evenly, one third planning to spend more, one third planning to spend about the same, and the final third looking to cut their IT budgets even further than in 2009. The biggest challenges were Power/Cooling/Floorspace issues, aligning IT with Business goals, and modernizing applications. The top three areas of IT spend will be for Data Center facilities, modernizing infrastructure, and storage.
There are six keynote sessions scheduled, and 66 breakout sessions for the week. A "Hot Topic" was added on "Why the marketplace prefers one-stop shopping" which plays to the strengths of IT supermarkets like IBM, encourages HP to acquire EDS and 3Com, and forces specialty shops like Cisco and EMC to form alliances.
Day 2 began with a series of keynote sessions. Normally when I see "IO" or "I/O", I immediately think of input/output, but here "I&O" refers to Infrastructure and Operations.
Business Sensitivity Analysis leads to better I&O Solutions
The analyst gave examples from Alan Greenspan's biography to emphasize his point that the financial meltdown has caused a decline in trust. Nobody trusts anyone else. This is true between people, companies, and entire countries. While GDP declined 2 percent worldwide in 2009, it is expected to grow 2 percent in 2010, with some emerging markets expected to grow faster, such as India (7 percent) and China (10 percent). Industries like Healthcare, Utilities and the Public sector are expected to lead IT spend by 2011.
While IT spend is expected to grow only 1 to 5 percent in 2010, there is a significant shift from Capital Expenditures (CapEx) to Operational Expenses (OpEx). In 2004, OpEx represented only 64 percent of the IT budget, but today it represents 76 percent and growing. Many companies are keeping their aging IT hardware in service longer, beyond traditional depreciation schedules. The analyst estimated over 1 million servers were kept longer than planned in 2009, and another 2 million will be kept longer in 2010.
An example of hardware kept too long was the November 17 delay of some 2,000 flights in the United States, caused by a failed router card in Utah that was part of the air traffic control system. Modernizing this system is estimated to cost $40 billion US dollars.
Top 10 priorities for the CIO were Virtualization, Cloud Computing, Business Intelligence (BI), Networking, Web 2.0, ERP applications, Security, Data Management, Mobile, and Collaboration. There is a growth in context-aware computing, connecting operational technologies with sensors and monitors to feed back into IT, with an opportunity for pattern-based strategy. Borrowing a concept from the military, "OpTempo" allows a CIO to speed up or slow down various projects as needed. By seeking out patterns, developing models to understand those patterns, and then adapting the business to fit those patterns, a strategy can be developed to address new opportunities.
Infrastructure and Operations: Charting the course for the coming decade
This analyst felt that strategies should not focus just on looking forward, but should also look left and right, at what IBM calls "adjacent spaces". He covered a variety of hot topics:
65 percent of the energy consumed running x86 servers accomplishes nothing; the average x86 server runs at only 7 to 12 percent CPU utilization.
Virtualization of servers, networks and storage is transforming IT into one big logical system image, which plays well with Green IT initiatives. He joked that this is what IBM offered 20 years ago with Mainframe "Single System Image" sysplexes, and that we have come around full circle.
One area of virtualization is the desktop image (VDI). This goes back to the benefits of green-screen 3270 terminals of the mainframe era, eliminating the headaches of managing thousands of PCs, and instead having thin clients rely heavily on centralized services.
The deluge in data continues, as more convenient access drives demand for more data. The analyst estimates storage capacity will increase 650 percent over the next five years, with over 80 percent of this being unstructured data. Automated storage tiering, a la Hierarchical Storage Manager (HSM) from the mainframe era, is once again popular, along with new technologies like thin provisioning and data deduplication.
IT is also being asked to do complex resource tracking, such as power consumption. In the past IT and Facilities were separate budgets, but that is beginning to change.
The fastest growing social network was Twitter, with 1382 percent growth in 2009, and 69 percent of the new users who joined this year were 39 to 51 years old. By comparison, Facebook only grew by 249 percent. Social media is a big factor both inside and outside a company, and management should be aware of what Tweets, Blogs, and others in the collective are saying about you and your company.
The average 18 to 25 year old sends out 4000 text messages per month. In 24 hours, more text messages are sent than there are people on the planet (6.7 billion). Unified Communications is also getting attention. This is the idea that all forms of communication, from email to texts to voice over IP (VoIP), can be managed centrally.
Smart phones and other mobile devices are changing the way people view laptops. Many business tasks can be handled by these smaller devices.
It costs more in energy to run an x86 server for three years than it costs to buy it. The idea of blade servers and componentization can help address that.
Mashups and Portals are an unrecognized opportunity. An example of a Mashup is mapping a list of real estate listings to Google Maps so that you can see all the listings arranged geographically.
Lastly, Cloud Computing will change the way people deliver IT services. Amusingly, the conference was playing "Both Sides Now" by Joni Mitchell, which has the [lyrics about clouds]
Unlike other conferences that clump all the keynotes at the beginning, this one spreads the "Keynote" sessions out across several days, so I will cover the rest over separate posts.
Eventually, there comes a time to drop support for older, outdated programs that don't meet the latest standards. I had several readers complain that they could not read my last post on Internet Explorer 6. The post reads fine on more modern browsers like Firefox 3 and even Google's Chrome browser, but not on IE6.
Google confirms that warnings are appearing:
[Official: YouTube to stop IE6 support].
My choice is to either stop embedding YouTube videos, some of which are created by my own marketing team specifically on my behalf, or drop support for IE6. I choose the latter. If you are still using IE6, please consider switching to Firefox 3 or Google Chrome instead.
Over on his Backup Blog, fellow blogger Scott Waterhouse from EMC has a post titled
[Backup Sucks: Reason #38]. Here is an excerpt:
Unfortunately, we have not been able to successfully leverage economies of scale in the world of backup and recovery. If it costs you $5 to backup a given amount of data, it probably costs you $50 to back up 10 times that amount of data, and $500 to back up 100 times that amount of data.
If anybody can figure out how to get costs down to $40 for 10 times the amount of data, and $300 for 100 times the amount of data, they will have an irrefutable advantage over anybody that has not been able to leverage economies of scale.
I suspect that where Scott mentions we in the above excerpt, he is referring to EMC in general, with products like
Legato. Fortunately, IBM has scalable backup solutions, using either a hardware approach, or one purely with software.
The hardware approach involves using deduplication hardware technology as the storage pool for IBM Tivoli Storage Manager (TSM). Using this approach, IBM Tivoli Storage Manager would receive data from dozens, hundreds or even thousands
of client nodes, and the backup copies would be sent to an IBM TS7650 ProtecTIER data deduplication appliance, IBM TS7650G gateway, or IBM N series with A-SIS. In most cases, companies have standardized on the operating systems and applications used on these nodes, and multiple copies of data reside across employee laptops. As a result, as you have more nodes backing up, you are able to achieve benefits of scale.
Perhaps your budget isn't big enough to handle new hardware purchases at this time, in this economy. Have no fear,
IBM also offers deduplication built right into the IBM Tivoli Storage Manager v6 software itself. You can use a sequential-access disk storage pool for this. TSM scans backup copies, as well as archive and HSM data, identifies duplicate chunks, and reclaims the space when duplicates are found.
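The post doesn't go into TSM's internals, so here is only a conceptual sketch in C of the general technique: fingerprint each chunk with a hash, treat a matching fingerprint as a duplicate candidate, and confirm with a byte-for-byte comparison before counting the space as reclaimable. The chunk size, the FNV-1a hash, and the in-memory table are illustrative stand-ins, not TSM's actual design.

    #include <stdio.h>
    #include <stdint.h>
    #include <string.h>

    #define CHUNK     4     /* toy chunk size; real products use much larger chunks */
    #define MAXCHUNKS 64

    struct entry { uint64_t hash; const unsigned char *data; };
    static struct entry table[MAXCHUNKS];
    static int entries = 0;

    /* FNV-1a, an illustrative fingerprint; real dedup engines use stronger hashes */
    static uint64_t fnv1a(const unsigned char *p, size_t n)
    {
        uint64_t h = 1469598103934665603ULL;
        while (n--) { h ^= *p++; h *= 1099511628211ULL; }
        return h;
    }

    /* Returns 1 if the chunk duplicates one already stored (space reclaimable). */
    static int dedup_chunk(const unsigned char *chunk)
    {
        uint64_t h = fnv1a(chunk, CHUNK);
        for (int i = 0; i < entries; i++)
            if (table[i].hash == h &&
                memcmp(table[i].data, chunk, CHUNK) == 0)  /* byte-for-byte verify */
                return 1;
        if (entries < MAXCHUNKS) {
            table[entries].hash = h;
            table[entries].data = chunk;   /* a real pool would copy to its store */
            entries++;
        }
        return 0;
    }

    int main(void)
    {
        /* "ABCD" appears three times: two of the four chunks can be reclaimed */
        static const unsigned char pool[] = "ABCDWXYZABCDABCD";
        size_t total = sizeof(pool) - 1, reclaimed = 0;

        for (size_t off = 0; off + CHUNK <= total; off += CHUNK)
            if (dedup_chunk(pool + off))
                reclaimed += CHUNK;

        printf("reclaimed %zu of %zu bytes\n", reclaimed, total);
        return 0;
    }

The byte-for-byte check is the safeguard that keeps a rare hash collision from silently corrupting data.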
If your company is using a backup software product that doesn't scale well, perhaps now is a good time to switch over to IBM Tivoli Storage Manager. TSM is perhaps the most scalable backup software product in the marketplace, giving IBM an "irrefutable advantage" over the competition.
Continuing my week in Chicago for the IBM Storage Symposium 2008, I attended several sessions intended to answer the questions of the audience.
In an effort to be cute, the System x team has a "Meet the xPerts" session at their System x and BladeCenter Technical Conference, so the storage side decided to do the same. Traditionally, these have been called "Birds of a Feather", "Q&A Panel", or "Free-for-All". They allow anyone to throw out a question, and have the experts in the room, either
IBM, Business Partner or another client, answer the question from their experience.
Meet the Experts - Storage for z/OS environments
Here were some of the questions answered:
I've seen terms like "z/OS", "zSeries" and "System z" used interchangeably, can you help clarify what this particular session is about?
IBM's current mainframe servers are all named "System z", such as our System z9 or System z10. These replace the older zSeries models of hardware. z/OS is one of the six operating systems that run on this hardware platform. The other five are z/VM, z/VSE, z/TPF, Linux and OpenSolaris. The focus of this session will be storage attached and used for z/OS specifically, including discussions of Omegamon and DFSMS software products.
What can we do to reduce our MIPS-based software licensing costs from our third party vendors?
Consider using the IBM System z Integrated Information Processor (zIIP); work offloaded to these specialty engines generally does not count toward MIPS-based software charges.
What about 8 Gbps FICON?
IBM has already announced [FICON Express8] host bus adapter (HBA) cards, which will auto-negotiate down to 4 Gbps and 2 Gbps speeds. If you don't need full 8 Gbps speed now, you can still get the Express8 cards, but put in 4/2/1 Gbps SFP ports instead. Currently, LongWave (LW) is only supported to 4 km at 8 Gbps speed.
I want to use Global Mirror for my DS8100 to my remote DS8100, but also make test copies of my production data to
an older ESS 800 I have locally. Any suggestions? Yes, consider using FlashCopy to simplify this process.
I have Global Mirror (GM) running now successfully with DSCLI, and now want to deploy IBM Tivoli Storage Productivity Center for Replication. Is that possible? Yes, Productivity Center for Replication will detect existing GM relationships, and start managing them.
I have already deployed HyperPAV and zHPF, is there any value in getting Solid-State Drives as well?
HyperPAV and zHPF impact CONN time, but SSD impacts DISC time, so they are mutually complementary.
How should I size my FlashCopy SE pool? SE refers to "Space Efficient", which stores only the changes between the source and destination copies of each LUN or CKD volume involved. The general recommendation is to start with 20 percent of the source capacity and adjust accordingly; for example, for 10 TB of source volumes, start with a 2 TB repository and monitor how quickly it fills.
How many RAID ranks should I configure per DS8000 extent pool? IBM recommends 4 to 8 ranks per pool.
Meet the Experts: Storage for Linux, UNIX and Windows distributed systems
This session was focused on storage systems attached to distributed servers, as well as products from Tivoli used to manage them. Here were some of the questions answered:
When we migrated from Tivoli Storage Manager v5 to v6, we lost our favorite "Operational Reporting" tool. How can we get TOR back? You now get the new Tivoli Common Reporting tool.
How can we identify appropriate port distribution for multiple SVC node pairs for load balancing?
IBM Tivoli Storage Productivity Center v4.1 has hot-spot analysis with recommendations for Vdisk migrations.
We tried TotalStorage Productivity Center way back when, but the frequent upgrades were killing us. How has it been lately? It has been much more stable since v3.3, and was completely renamed to Tivoli Storage Productivity Center to avoid association with versions 1 and 2 of the predecessor product. The new "lightweight agents" feature of v4.1 resolves many of the problems you were experiencing.
We have over 1600 SVC virtual disks, how do we handle this in IBM Tivoli Storage Productivity Center? Use the Filter capability in combination with clever naming conventions for your virtual disks.
How can we be clever when we are limited to only 15 characters? Ok. We understand.
We are currently using an SSPC with Windows 2003 and 2GB memory, but we are only using the Productivity Center for Replication feature of it. Can we move the DB2 database over to a Windows 2008 server with 4GB of memory?
Consider using the IBM Tivoli Storage Productivity Center for Replication software instead of SSPC for special
circumstances like this.
We love the XIV GUI, how soon will all other IBM storage products have it also? As with every acquisition,
IBM evaluates if there are technologies from new products that can be carried back to existing products.
We are currently using 12 ports on our existing XIV, and love it so much we plan to buy a second frame, but are concerned about consuming another 12 ports on our SAN switch. Any suggestions? Yes, use only six ports per frame. Just because you have more ports, doesn't mean you are required to use them.
We have heard there are concerns from the legal community about using deduplication technology, any ideas how to address that?
Nobody here in the room is a lawyer, and you should consult legal counsel for any particular situation.
None of the IBM offerings intended for non-erasable, non-rewriteable (NENR) data retention records (DR550, WORM tape, N series SnapLock) support dedupe today, and none of IBM's deduplication offerings (TS7650, N series A-SIS, TSM) make any fitness-for-purpose claims for regulatory compliance storage. However, be assured that all of IBM's dedupe technology involves byte-for-byte comparisons, so that you never lose any data due to false hash collisions. For all IBM compliance storage, what you write will be read back in the correct sequence of ones and zeros.
Continuing my week in Chicago, I decided to attend some of the presentations from the System x side. This is the advantage of running both conferences in the same hotel: attendees can choose how many of each they want to participate in.
Wayne Wigley, IBM Advanced Technical Support (ATS), gave a series of presentations on the different server virtualization offerings available for System x and BladeCenter servers. I am very familiar with virtualization implemented on System z mainframes, as well as IBM's POWER systems, and have working knowledge of Linux KVM and Xen, so I was well prepared to hear the latest about Microsoft's Hyper-V and VMware's vSphere version 4.
Microsoft Hyper-V 2008
Hyper-V can run as part of Windows 2008, or standalone on its own. Different levels of Windows 2008 include licenses for different numbers of Windows virtual machines (VMs). Windows Server 2008 Standard includes 1 Windows VM, Enterprise includes 4 Windows VMs, and the Datacenter edition includes an unlimited number of Windows VMs. If you want to run more Windows VMs than come included, you pay extra for each additional one. For example, running 10 Windows VMs on a 2-socket server would cost about $9000 US dollars on Standard but only $6000 US dollars on the Datacenter edition (list prices from the Microsoft Web site).
Unlike VMware, which takes a monolithic approach to its hypervisor, Hyper-V is more like Xen, with a microkernelized approach. This means you need a "parent" guest OS image, and the rest of the guest OS images are then considered "child" images. These child images can be various levels of Windows, from Windows XP Pro to Windows Server 2008, Xen-enabled Linux, or even a non-hypervisor-aware OS. The "parent" guest OS image provides networking and storage I/O services to these "child" images. For the hypervisor-aware versions of Windows and Linux, Hyper-V allows optimized access to the hypervisor, "synthetic devices", and hypercalls. Synthetic devices present themselves as network devices, but only serve to pass data along the VMBus to other networking resources. This process does not require software emulation, and therefore offers higher performance for virtual machines and lower host system overhead. For non-hypervisor-aware OS images, Hyper-V provides device emulation through the "parent" image, which is slower.
Microsoft System Center Virtual Machine Manager (SCVMM) can manage both Hyper-V and VMware VI3 images. Wayne showed various screen shots of the GUI available to manage Hyper-V images. In standalone mode, you lose the nice GUI and management console.
Hyper-V supports external, internal and private virtual LANs (VLAN). External means that VMs can communicate with the outside world over standard ethernet connections. Internal means that VMs can communicate with "parent" and "child" guest images on the same server only. Private means that only "child" guests can communicate with other "child" images.
Hyper-V supports disk attached via IDE, SATA, SCSI, SAS, FC, iSCSI, NFS and CIFS. One mode is "Virtual Hard Disk" (VHD) similar to VMware VMDK files. The other is "pass through" mode, which are actual disk LUNs accessed natively. VHDs can be dynamic (thin provisioned), fixed (fully allocated), or differencing. The concept of differencing is interesting, as you start with a base read-only VHD volume image, and have a separate "delta" file that contains changes from the base image.
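Wayne described the behavior rather than the on-disk format, so here is just a toy model in C of the differencing idea: reads check a per-block delta first and fall back to the read-only base image, while writes only ever land in the delta, which is why one base image can back many child images. The block count and data values are made up for illustration.

    #include <stdio.h>

    #define BLOCKS 8

    /* Read-only base image, as shipped in the parent VHD */
    static const char base[BLOCKS] = {'A','B','C','D','E','F','G','H'};

    /* The differencing file: per-block "changed?" flag plus the new data */
    static char delta[BLOCKS];
    static int  changed[BLOCKS];

    static char vdisk_read(int blk)
    {
        return changed[blk] ? delta[blk] : base[blk];  /* delta wins over base */
    }

    static void vdisk_write(int blk, char v)
    {
        delta[blk] = v;       /* base stays untouched: it can back many children */
        changed[blk] = 1;
    }

    int main(void)
    {
        vdisk_write(2, 'x');                 /* child image modifies one block */
        for (int b = 0; b < BLOCKS; b++)
            putchar(vdisk_read(b));          /* prints ABxDEFGH */
        putchar('\n');
        return 0;
    }

In this model, merging a differencing disk back into its parent amounts to copying the changed blocks into a writable copy of the base.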
Some of the key features of Hyper-V 2008 are:
Being able to run concurrently 32-bit and 64-bit versions of Linux and Windows guest images
Support for 64 GB of memory and 4-way symmetric multiprocessing (SMP) per VM
Clustering for High Availability and Quick Migration of VM images
Live backup with integration with Microsoft's Volume Shadow Copy Services (VSS)
Virtual LAN (VLAN) support, and Virtual and Pass-through physical disk support
A clever VMbus, virtual service parent/client approach to sharing hardware
Optimized performance options for hypervisor-aware versions of Windows and Linux, and emulated support for non-hypervisor-aware OS images.
VMware vSphere v4.0
This was titled as an "Overview" session, but really was an "Update" session on the newest features of this release. The big change appears to be that VMware added "v" in front of everything.
Under vCompute, there are some new features in VMware's Distributed Resource Scheduler (DRS), which now includes recommended VM migrations. Distributed Power Management (DPM) will move VMs during periods of low usage to consolidate onto fewer physical servers and reduce energy consumption.
Under vStorage, vSphere introduces an enhanced Pluggable Storage Architecture (PSA), with support for Storage Array Type Plugins (SATP) and Path Selection Plugins (PSP). This vStorage API allows third-party plugins for improved fault tolerance and complex I/O load balancing algorithms. This release also has improved support for iSCSI, including Challenge-Handshake Authentication Protocol (CHAP) support. Similar to Hyper-V's dynamic VHD, VMware supports "thin provisioning" for its virtual disk VMDK files. A feature of Storage VMotion allows conversion between "thick" and "thin" provisioning formats.
The vStorage API for Data Protection provides all the features of VMware Consolidated Backup (VCB). The API provides full, incremental and differential file-level backups for Windows and Linux guests, including support for snapshots and Volume Shadow Copy Services (VSS) quiescing.
VMware introduces direct I/O pass-through for both NIC and HBA devices. While this allows direct access to SAN-attached LUNs similar to Hyper-V, you lose a lot of features like VMotion, High Availability and Fault Tolerance. Wayne felt that these restrictions are temporary, and that hopefully VMware will resolve them over the next 12 months.
Under vNetwork, VMware has virtual LAN switches called vSwitches. This includes support for IPv6 and VLAN offloading.
The vSphere server can now run with up to 1TB of RAM and 64 logical CPUs to support up to 320 VM guest images. Each VM can have up to 255GB RAM and up to 8-way SMP. vSphere ESX 4 introduces a new virtual hardware platform called VM Hardware v7. While vSphere 4.0 can run VMs from ESX 2 and ESX 3, the problem is that if you have new VMs based on this newer VM Hardware v7, you cannot run them on older ESX versions.
vSphere comes in four editions: Standard, Advanced, Enterprise, and Enterprise Plus, ranging in list price from $795 to $3495 US dollars.
While IBM is the #1 reseller of VMware, we are also proud to support Hyper-V, Xen, KVM and other similar products. Analysts expect most companies will have two or more server virtualization solutions in their data center, and it is good to see that IBM supports them all.
Continuing my week in Chicago for the IBM Storage and Storage Networking Symposium and System x and BladeCenter Technical Conference, I presented a variety of topics.
Hybrid Storage for a Green Data Center
The cost of power and cooling has risen to be a #1 concern among data centers. I presented the following hybrid storage solutions that combine disk with tape. These provide the best of both worlds, the high performance access time of disk with the lower costs and reduced energy consumption of tape.
IBM [System Storage DR550] - IBM's Non-erasable, Non-rewriteable (NENR) storage for archive and compliance data retention
IBM Grid Medical Archive Solution [GMAS] - IBM's multi-site grid storage for PACS applications and electronic medical records [EMR]
IBM Scale-out File Services [SoFS] - IBM's scalable NAS solution that combines a global name space with a clustered GPFS file system, serving as the ideal basis for IBM's own [Cloud Computing and Storage] offerings
Not only do these help reduce energy costs, they provide an overall lower total cost of ownership (TCO) than traditional WORM optical or disk-only storage configurations.
The Convergence of Networks - Understanding SAN, NAS and iSCSI in the Data Center Network
This turned out to be my most popular session. Many companies are at a crossroads in choosing data and storage networking solutions in light of recent announcements from IBM and others. In the span of 75 minutes, I covered:
Block storage concepts, storage virtualization and RAID levels
File system concepts, how file systems map files to block storage (see the sketch after this list)
Network Attach Storage, the history of the NFS and CIFS protocols, Pros and Cons of using NAS
Storage Area Networks, the history of SAN protocols including ESCON, FICON and FCP, Pros and Cons of using SAN
IP SAN technologies, iSCSI and Fibre Channel over Ethernet (FCoE), Pros and Cons of using this approach
Network Convergence with Infiniband and Fibre Channel over Convergence Enhanced Ethernet (FCoCEE), why Infiniband was not adopted historically in the marketplace as a storage protocol, and the features and enhancements of Convergence Enhanced Ethernet (CEE) needed to merge NAS, SAN and iSCSI traffic onto a single converged data center network [DCN]
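One item in the list above, how file systems map files to block storage, is the crux of the NAS-versus-SAN distinction, so it deserves a concrete illustration. Below is a minimal C sketch of an extent map that translates a byte offset within a file into a disk block address; the structure and numbers are hypothetical, not taken from any particular file system.

    #include <stdio.h>

    #define BLOCK_SIZE 4096L

    /* One contiguous run of blocks owned by the file */
    struct extent {
        long file_block;   /* first file-relative block covered */
        long disk_block;   /* where that run starts on disk     */
        long length;       /* run length, in blocks             */
    };

    /* A toy inode: the file's data lives in two separate runs on disk */
    static const struct extent extents[] = {
        { 0, 1000, 4 },    /* file blocks 0-3  -> disk blocks 1000-1003 */
        { 4, 5000, 8 },    /* file blocks 4-11 -> disk blocks 5000-5007 */
    };

    /* Translate a byte offset in the file to a disk block, or -1 for a hole */
    static long map_offset(long offset)
    {
        long fblk = offset / BLOCK_SIZE;
        for (size_t i = 0; i < sizeof extents / sizeof extents[0]; i++) {
            const struct extent *e = &extents[i];
            if (fblk >= e->file_block && fblk < e->file_block + e->length)
                return e->disk_block + (fblk - e->file_block);
        }
        return -1;
    }

    int main(void)
    {
        printf("byte 0     -> disk block %ld\n", map_offset(0));      /* 1000 */
        printf("byte 20000 -> disk block %ld\n", map_offset(20000));  /* 5000 */
        return 0;
    }

A NAS device performs this translation for you behind a file protocol such as NFS or CIFS; with a SAN, the host's own file system performs it and ships block addresses over FICON, FCP or iSCSI.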
Yes, it was a lot of information to cover, but I managed to get it done on time.
IBM Tivoli Storage Productivity Center version 4.1 Overview and Update
In conferences like these, there are two types of product-level presentations. An "Overview" explains how products work today to those who are not familiar with them. An "Update" explains what's new in this version of the product for those who are already familiar with previous releases. I decided to combine these into one session for IBM's new version of [Tivoli Storage Productivity Center]. I was one of the original lead architects of this product many years ago, and was able to share many personal experiences about its evolution in development and in the field at client facilities. Analysts have repeatedly rated IBM Productivity Center as one of the top Storage Resource Management (SRM) tools available in the marketplace.
Information Lifecycle Management (ILM) Overview
Can you believe I have been doing ILM since 1986? I was the lead architect for DFSMS, which provides ILM support for z/OS mainframes. In 2003-2005, I spent 18 months in the field performing ILM assessments for clients, and now there are dozens of IBM practitioners in Global Technology Services and STG Lab Services that do this full time. This is a topic I cover frequently at the IBM Executive Briefing Center [EBC], because it addresses several top business challenges:
Reducing costs and simplifying management
Improving efficiency of personnel and application workloads
Managing risks and regulatory compliance
IBM has a solution based on five "entry points". The advantage of this approach is that it allows our consultants to craft the right solution to meet the specific requirements of each client situation. These entry points are:
Tiered Information Infrastructure - we don't limit ourselves to just "Tiered Storage", as storage is only part of a complete [information infrastructure] of servers, networks and storage
Storage Optimization and Virtualization - including virtual disk, virtual tape and virtual file solutions
Process Enhancement and Automation - an important part of ILM are the policies and procedures, such as IT Infrastructure Library [ITIL] best practices
Archive and Retention - space management and data retention solutions for email, database and file systems
I did not get as many attendees as I had hoped for this last one, as I was competing head-to-head in the same time slot as Lee La Frese covering IBM's DS8000 performance with Solid State Disk (SSD) drives, John Sing covering Cloud Computing and Storage with SoFS, and Eric Kern covering IBM Cloudburst.
I am glad that I was able to make all of my presentations at the beginning of the week, so that I can then sit back and enjoy the rest of the sessions as a pure attendee.
This week I am in Chicago for the IBM Storage and Storage Networking Symposium, which coincides with the System x and BladeCenter Technical Conference. This allows the 800 attendees to attend either storage or server presentations at their convenience. There were hundreds of sessions across over 20 time slots, so for each time slot you have 15 or so topics to choose from. Mike Kuhn kicked off the series of keynote sessions. Here's my quick recap of each one:
Curtis Tearte, General Manager, IBM System Storage
Curtis replaced Andy Monshaw as General Manager for IBM System Storage. His presentation focused on how storage fits into IBM's Dynamic Infrastructure strategy. Some interesting points:
A billion camera-enabled cell phones were sold in 2007, compared to 450 million in 2006.
IBM expects that there will be 2 billion internet users by 2011, as well as trillions of "things".
In the US, there were 2.2 million pharmacy dispensing errors resulting from handwritten prescriptions.
Time wasted looking for parking spaces in Los Angeles consumed 47,000 gallons of gasoline, and generated 730 tons of carbon dioxide.
In the US, 4.2 billion hours are lost, and 2.9 billion gallons of gas consumed, due to traffic congestion.
Over the past decade, servers went from 8 watts to 100 watts per $1000 US dollars.
Data growth appears immune to the economic recession. The digital footprint per person is expected to grow from 1TB today to over 15TB by 2020.
10 hours of YouTube videos are uploaded every minute.
Bank of China manages 380 million bank accounts, processing over 10,000 transactions per second.
At the end of the session, Curtis transitioned from demonstrating his knowledge of and passion for storage to demonstrating his knowledge of and passion for his favorite sport: baseball. Chicago is home to both the Cubs and the White Sox.
Roland Hagan, Vice President Business Line Executive, System x
IBM sets the infrastructure agenda for the entire industry. The Dynamic Infrastructure initiative is not just IT, but a complete end-to-end view across all of the infrastructures in play, including transportation, manufacturing, services and facilities. Companies spent over $60 billion US dollars on servers last year. Of this, 53 percent went to x86-based servers, 9 percent to Itanium-based, 26 percent to RISC-based (POWER6, SPARC, etc.), and 11 percent to mainframes. The economic downturn has impacted revenues, but the percentages remain about the same.
The dominant deployment model remains one application per server. As a result, power, cooling and management costs have grown tremendously. There are system admins opposed to consolidating server images with VMware, Hyper-V, Xen or other server virtualization technologies. Roland referred to these admins as "server huggers". To help clients adopt cloud computing technologies, IBM introduced [Cloudburst] appliances. IBM plans to offer specialized versions for developers, for service providers, and for enterprises.
IBM's Enterprise X-Architecture is what differentiates IBM's x86-based servers from all the competitors, surrounding Intel and AMD processors with technology that provides distinct advantages. For example, to support server virtualization, IBM's eX4 provides support for more memory, which often is more critical than CPU resources when deploying large numbers of guest OS images. IBM System x servers have an integrated management module (IMM) and were the first to change over from BIOS to the new Unified Extensible Firmware Interface [UEFI] standard.
IBM servers offer double the performance, consume half the power, and cost a third less to manage than comparably priced servers from competitors. Of the top 20 most energy-efficient server deployments, 19 are from IBM. Roland cited customer reference SciNet, a 4000-server supercomputer with 30,000 cores based on IBM [iDataPlex] servers. At 350 TeraFLOPs, it is ranked the #16 fastest supercomputer in the world, and #1 in Canada. With a power usage effectiveness (PUE) of less than 1.2, it is also very energy efficient. This means that for every 12 watts of electricity going into the data center, 10 watts are used for servers, storage and networking gear, and only 2 watts are used for power distribution and cooling. Traditional data centers have a PUE around 2.5, consuming 25 watts total for every 10 watts used by servers, storage and networking gear.
Clod Barrera, Distinguished Engineer, Chief Technical Strategist for IBM System Storage
Clod presented trends and directions for disk and tape technology, disk and tape systems, and the direction towards cloud computing.
Ideally, every airline would use the most experienced, seasoned professional airline pilots money could buy, but some airlines, in an effort to compete on ticket price, may elect instead to have less experienced pilots. Here's a great excerpt:
Airline history lesson 101: It used to be, up until the mid 1980’s, that a young pilot would be hired on at a major carrier, become a flight engineer (FE), and then spend a few years managing the systems of the older-generation airplanes. But he or she was learning all the while. These new “pilots” sat in the FE seat and did their job, all the while observing the “pilots” doing the flying, day in and day out.
The FE’s learned from the seasoned pilots about the real world of flying into the Chicago O’Hares and New York LaGuardias. They learned decision making, delegation, and the reality of “captain’s final authority” as confirmed in the law. When they got the chance to upgrade, they became a copilot. The copilot’s duty was to assist the captain in flying; but even during their time as the new copilot, they had the luxury of the FE looking over their shoulders — i.e., more learning. This three-man-crew concept, now a fond memory in the domestic markets but used predominately in international flying, was considered one more layer of protection. But it’s gone.
To become the public speaker I am today, IBM put me through a variety of speaking classes. I taught high school and college classes to practice in front of groups. But most importantly, I traveled with seasoned colleagues and watched them in action from the front row. I learned how to handle tough questions, how to react to hecklers causing trouble, and how to deal with the unexpected before, during and after each presentation. In addition to speaking skills, I ended up having to learn travel skills, foreign language skills, and a variety of cultural social skills. All part of the job in my line of work.
Likewise, being a storage administrator is an important job, and for some data centers, not something to give lightly to a fresh college graduate. Unless they have had formal IT Infrastructure Library [ITIL] certification coursework, I doubt they would understand the processes and disciplines demanded by the typical data center. I have been to accounts where new hires are not allowed to touch production systems for the first two years. Instead they watch the seasoned professionals do their jobs, and are given access only to "sand box" systems that are used for application testing or Quality Assurance (QA). Sadly, I have also been to other accounts where people with no storage experience whatsoever were tossed into the admin pool and let loose with superuser passwords, all in an effort to save money during times of exponential data growth rates, only to pay the price later with outages or lost data.
The parallels between the airline industry and the IT industry are eerie.
As I mentioned in my post [Moving Over to MyDeveloperWorks], those of us bloggers on IBM's DeveloperWorks are moving over to a new system called "MyDeveloperWorks" which has a host of new features.
Fortunately for me, I missed the note asking for volunteers to be among the first bloggers on the block to move over. I was traveling and decided not to deal with it until I got back. However, fellow IBM Master Inventor Barry Whyte was not so lucky. It is safe to say he was stupid enough to volunteer, and is probably regretting the decision every day since. In case you lost his RSS feed, or can't find him anymore on Google or whatever search engine, here is his [new blog].
As for my blog, I have asked to postpone the move until all the problems that Barry has encountered are resolved. That might be a while, but if you lose access to mine sometime in the near future, at least you have been warned as to what might have happened.
Jon Toigo has a funny cartoon on his post, [As I Listen to EMC Brag on “New” Functionality…]. Basically, it pokes fun that many of us bloggers argue which vendor was first to introduce some technology or another. We all do this, myself included.
Recently, Claus Mikkelsen, currently with HDS, [brought up accurately some past history from the 1990s], which is before many storage bloggers hired on with their current employers. Claus and I worked together at IBM back then, so I recognized many of the events he mentions that I can't talk about either. In many cases, IBM or HDS delivered new features before EMC.
I've been reading with some amusement as fellow blogger Barry Burke asked Claus a series of questions about Hitachi's latest High Availability Manager (HAM) feature. Claus was too busy with his "day job" and chose to shut Barry down. Sadly, HDS set themselves up for ridicule this round, first by over-hyping a function before its announcement, and then by announcing a feature that IBM and EMC have offered for a while. The problem and confusion for many is that each vendor uses different terminology. Hitachi's HAM is similar to IBM's HyperSwap and EMC's AutoSwap. The implementations are different, of course, which is why vendors are often asked to compare and contrast one implementation to another.
In his latest response, [how to mind the future of a mission-critical world], Barry reports that several HDS bloggers now censor his comments. That's too bad. I don't censor comments, within reason, including Barry's inane questions about IBM's products, and am glad that he does not censor my inane questions to him about EMC products in return. The entire blogosphere benefits from these exchanges, even if they are a bit heated sometimes.
We all have day jobs, and often are just too busy, or too lazy, to read dozens or hundreds of pages of materials, if we can even find them in the first place. Not everyone has the luxury of a "competitive marketing" team to help do the research for you, so if we can get an accurate answer or clarification about a product that is generally available directly from a vendor's subject matter expert, I am all for that.
This week, I have been presenting how to do important things without travel. Of course, there are times where you need some boots on the ground, while your support team remains remote.
Last month, co-worker Liz Goodman reached out to me. She was part of a ten-person team that went to Tanzania as part of IBM's [Corporate Service Corps]. Other teams went to Brazil, China, Ghana, Romania, South Africa, the Philippines, Turkey and Vietnam. (I've been to half of these other countries, but the closest I have ever been to Tanzania was a safari I took in Kenya that included the Masai Mara national park, which runs along the border with Tanzania's Serengeti national park.)
Liz was one of the lucky [200 candidates chosen among over 5000 applications] IBM reviews each year for this program. IBM does business in over 170 countries, so learning to work in or with emerging growth markets requires a bit of "cultural intelligence". Liz and three others worked with the University of Dodoma [UDOM] to lead some students in adopting a [Moodle] infrastructure based on a Linux, Apache, MySQL and PHP [LAMP] platform. She noticed that I had experience with both Moodle and LAMP from [my work with OLPC], and reached out to me for help. I was able to provide some insight, things to watch out for, and how to tackle not just the technical challenges, but a few that many don't consider:
Educational content. Digitizing materials already available in hardcopy, or obtaining digital rights to existing content.
Business Process. Getting the teachers and students to adopt new process and procedures enabled by these new capabilities.
Project Management. Fortunately, Liz is already [PMP-certified], and knows well the importance of managing even a small 4-person, 4-week project like this.
How well did her team do? Liz blogged before, during and after her trip. Read all about it on her blog [Liz Goes To Tanzania]!
Jim Stallings, IBM General Manager for Global Markets, will explain why a smarter planet needs a dynamic infrastructure. I used to work for Jim, when he was in charge of the IBM Linux initiative and I was on the Linux for S/390 mainframe team.
Erich Clementi, IBM Vice President, Strategy & General Manager Enterprise Initiatives, will explain how to best leverage opportunities with cloud computing.
Steve Forbes, Chairman and CEO of Forbes Inc. and Editor-in-Chief of Forbes Magazine, will present Global Outlooks and the Challenge of Change.
Rich Lechner, IBM Vice President, Energy & Environment, will explain the importance of Building an Energy-Efficient Dynamic Infrastructure. I also worked for Rich, back when he was the VP of Marketing for IBM System Storage, and I was back then the "Technical Evangelist". See my post [The Art of Evangelism] to better understand why I don't carry that title anymore.
In addition to these presentations, you will be able to "walk" around to different booths, have on-line chats with subject matter experts and download resources. Don't worry, this is not based on [Second Life], but rather uses On24's much simpler visual interface. Of course, you can follow on [Twitter] or join the fan club at [Facebook].
This is a worldwide event, with translated resource materials and on-line subject matter experts in six different languages (English, French, Italian, German, Mandarin and Japanese). Those in North, Central and South Americas can participate June 23, and those in Europe, Asia and the rest of the world on June 24. [Register Today] and mark your calendars!
Continuing this week's theme of doing important things without leaving town, I present our results for an exciting project I started earlier this year.
For seven weeks, my coworker Mark Haye and I voluntarily led a class of students here in Tucson, Arizona in an after-school pilot project to teach the ["C" programming language] using [LEGO® Mindstorms® NXT robots]. The ten students, boys and girls ages 9 to 14 years old, were already part of the FIRST [For Inspiration and Recognition of Science and Technology] program, and participated in FIRST Lego League [FLL] robot competitions. The students were already familiar with building robots, and with programming them using a simple graphical system of connecting blocks that perform actions. However, to compete in the next level of robot competitions, FIRST Tech Challenge [FTC], we needed to leave this simple graphical programming behind, and upgrade to more precise "C" programming.
Mark is a software engineer for IBM Tivoli Storage Manager and has participated in FLL competitions over the past nine years. This week, he celebrates his 25th anniversary at IBM, and I celebrate my 23rd. The teacher, Ms. Ackerman, and the students referred to us as "Coach Mark" and "Coach Tony".
This was the first time I had worked with LEGO NXT robots. For those not familiar with these robots, you can purchase a kit at your local toy store. In addition to regular LEGO bricks, beams, and plates, there are motors, wheels, and sensors. A programmable NXT brick has three outputs (marked A, B, and C) to control three motors, and four inputs (marked 1, 2, 3, 4) to receive values from sensors. Programs are written and compiled on laptops and then downloaded to the NXT programmable brick through a USB cable, or wirelessly via Bluetooth.
In the picture shown, an image of the Mars planetary surface is divided into a grid with thick black lines.A light sensor between the front two wheels of the robot is over the black line.
We used the [RobotC programming firmware] and integrated development environment (IDE) from [Carnegie Mellon University]. The idea of this pilot was to see how well the students could learn "C". With only a few hours after class each Wednesday, could we teach young students "C" programming in just seven weeks?
My contribution? I have taught both high school and college classes, and spent over 15 years programming for IBM, so Mark asked me to help. We started with a basic lesson plan:
A brief history of the "C" language
Understanding statements and syntax
Setting motor speed and direction
Compiling and downloading your first program
Understanding the "while" loop
Retrieving input sensor values
Understanding the "if-then-else" statement
Defining variables with different data types
Manipulating string variables
Writing a program for the robot to track along a black line on a white background (see the sketch after this lesson plan).
Understanding local versus global scope variables
Writing a program for a robot to count black lines as it crosses them.
Performing left turns and right turns, and crossing a specific number of lines on a grid pattern to move the robot to a specific location.
Weeks 6 and 7
Mission Impossible: come up with a challenge to make the robot do something that would be difficult to accomplish using the previous NXT visual programming language.
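To give a flavor of what the students were writing, here is a minimal line-tracking sketch in RobotC-style C, along the lines of the line-tracking exercise above. The port assignments (light sensor on input 3, drive motors on outputs B and C) and the threshold value are assumptions for illustration, not the exact class solution.

    // Minimal RobotC-style line tracker: zig-zag along the edge of a black
    // line on a white mat. Assumes the light sensor is configured as an
    // active light sensor on S3 (via the Motors and Sensors Setup) and the
    // drive motors are on outputs B and C; tune THRESHOLD to your mat.
    #define THRESHOLD 45   // readings below this mean "dark" (over the line)

    task main()
    {
        while (true)                       // run until the program is stopped
        {
            if (SensorValue[S3] < THRESHOLD)
            {
                motor[motorB] = 30;        // over the line: curve one way
                motor[motorC] = 10;
            }
            else
            {
                motor[motorB] = 10;        // over white: curve back the other way
                motor[motorC] = 30;
            }
            wait1Msec(10);                 // small delay between sensor readings
        }
    }

Swapping the two motor power pairs reverses which edge of the line the robot follows.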
At the completion of these seven weeks, I sat down to interview "Coach Mark" on his thoughts about this pilot project.
Why teach kids the "C" language?
This is a practical programming skill. The "C" language is used throughout the world to program everything from embedded systems to operating systems, and even storage software. It would also allow the robots to handle more precise movements, more accurate turns, and more complicated missions.
Can kids learn "C" in only seven weeks?
Part of the pilot project was to see how well the students could understand the material. They were already familiar with building the robots, and understood the basics of programming sensors and motors, so we were hoping this was a good foundation to work from. Some kids managed very well, others struggled.
Did everything go according to plan?
The first two weeks went well, turning on motors and having robots move forward and backward were easy enough. We seemed to lose a few students on week 3, and things got worse from there. However, several of the students truly surprised us and managed to implement very complicated missions. We were quite pleased with the results.
What kind of problems did the kids encounter?
The touch sensor required polling loops to wait for a press. Motors did not necessarily turn as expected until more advanced methods were used. Making accurate 90-degree left and right turns was more difficult than expected.
Any funny surprises?
Yes, we had a Challenge Map representing the Mars planetary surface from a previous FLL competition that was dark red and divided into squares with thick black lines. An active light sensor returns a value from "0" (complete darkness) to "100" (bright white). However, the Mars surface had craters that were dark enough to be misinterpreted as a black line, causing some unusual results. This required some enhanced programming techniques to resolve (one possible approach is sketched below).
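The interview doesn't spell out the exact technique, but a classic fix is to debounce the sensor: count a line only when the reading stays dark for several consecutive samples, something a small crater passing under the sensor is less likely to produce. Here is a RobotC-style sketch in that spirit; the port number, thresholds and motor powers are assumed for illustration:

    // Count black grid lines while ignoring brief dark patches such as the
    // craters on the Mars mat. Assumes the light sensor is on input 3 and
    // the robot drives straight on motors B and C; tune the constants.
    #define DARK     40    // below this reading counts as "dark"
    #define MIN_DARK  5    // consecutive dark samples required for a real line

    task main()
    {
        int lines = 0;
        int darkSamples = 0;

        motor[motorB] = 25;                  // drive forward
        motor[motorC] = 25;

        while (lines < 3)                    // cross three lines, then stop
        {
            if (SensorValue[S3] < DARK)
            {
                darkSamples++;
                if (darkSamples == MIN_DARK) // sustained darkness: a line,
                    lines++;                 // not a crater; count it once
            }
            else
            {
                darkSamples = 0;             // back on white: reset debounce
            }
            wait1Msec(5);
        }
        motor[motorB] = 0;                   // arrived: stop both motors
        motor[motorC] = 0;
    }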
Did robots help or hurt the teaching process?
I think they helped. Rather than writing programs that just display "Hello World!" on a computer screen, the students can actually see robots move, and either do what they expect, or not!
And when the robots didn't do what they were expected to?
The students got into "debug" mode. They were already used to doing this from previous FLL competitions, but with RobotC, you can leave the USB cable connected (or use wireless Bluetooth) and actually gather debugging information while the robot is running, to see the value of sensors and other variables and help determine why things are not working properly.
Any applicability to the real world of storage?
We have robots in the IBM System Storage TS3500 tape library. These robots scan bar code labels, pull tapes out of shelves and mount them into drives. The programming skills are the same as those needed for storage software, such as IBM Tivoli Storage Manager or IBM Tivoli Storage Productivity Center.
The world is becoming smarter, instrumented with sensors, interconnected over a common network, and intelligent enough to react and respond correctly. The lessons of reading sensor values and moving motors can be considered the first step in solutions that help to make a smarter planet.
Spend twenty hours a week running a project for a non-profit.
Teach yourself Java, HTML, Flash, PHP and SQL. Not a little, but mastery. [Clarification: I know you can't become a master programmer of all these in a year. I used the word mastery to distinguish it from 'familiarity' which is what you get from one of those Dummies type books. I would hope you could write code that solves problems, works and is reasonably clear, not that you can program well enough to work for Joel Spolsky. Sorry if I ruffled feathers.]
Volunteer to coach or assistant coach a kids sports team.
Start, run and grow an online community.
Give a speech a week to local organizations.
Write a regular newsletter or blog about an industry you care about.
Learn a foreign language fluently.
Write three detailed business plans for projects in the industry you care about.
Self-publish a book.
Run a marathon.
In 2007, 51 percent of graduating college students could find jobs in their field, and this year it has dropped to only 20 percent. If you find yourself with some time on your hands, either recently graduated or recently unemployed, consider volunteerism. Last year, I chose to donate my time and money to an innovative project called "One Laptop per Child" [OLPC]. It was one of my [New Years Resolutions] for 2008. I was actually "recruited" by folks from the OLPC after they read my [series of blog posts] on things that can be done with their now famous green-and-white XO laptop.
The first half of the year, I spent helping "Open Learning Exchange Nepal" [OLE Nepal], a non-government organization (NGO) working to improve education in that country. XO laptops were provided to second and sixth graders at several schools, and my assignment was to help with the school "XS" server. This would be the server that all the laptops connect to. My blog posts on this included:
Rather than [Move to Nepal], I was able to help by building an identical XS server in Tucson, and provide support remotely. This included getting the "Mesh Antennas" to be properly recognized, having an internet filter using [DansGuardian] software, and working out backup procedures.
For the second half of the year, I was asked to mentor a college student in Hyderabad, India as part of the ["Google Summer of Code"] to develop an [Educational Blogger System] on the XS server. We called it "EduBlog" and based it on the popular [Moodle] educational software platform. This was going to be tested with kids from Uruguay, but sending a server down to this country proved politically challenging, so instead, I [built a server and shipped it] to a co-location facility in Pennsylvania that agreed to donate the cost and expenses needed to run the server there with a full internet connection. I acted as "system admin" for the box and was able to connect remotely via SSH, while Tarun, the college student I was mentoring, developed the EduBlog software. Twice the system was hacked, but I was able to restore it remotely thanks to a multi-boot configuration that allowed me to reboot to a read-only operating system image and restore the operating system and data.
The students and teachers in Uruguay were helped locally by [Proyecto Ceibal]. We were able to translate the system into Spanish, and the project was a big success, enough to convince the local government to provide XO laptops to their students to further the benefits.
I get a lot of suggestions for what to put on my blog. I realize that tweets are limited to 140 characters, so pointing to a video URL without much explanation or warning can be dangerous. An email can at least add appropriate warnings, such as NSFW (Not Safe For Work) or "sorry if this offends you". The only warning I got for a video posted to YouTube by "StorageNetworkDud" was this short email:
"Sorry about the language they have used in some translations, but not sure who put this. It was on twitter."
Fortunately, I have my browser set up not to automatically play YouTube videos. The title helped warn me of the content, which turned out to be a [fan-subbed] scene from a World War II movie with a brown-shirted tyrannical leader of an evil empire talking to his top generals. He dismisses all but three with "Hollis, Burke, and Twomey stay in here", followed by a lengthy recap of EMC's recent troubles in the marketplace. At least in the video, the fuhrer correctly follows Tim Sanders' advice: "if you have to tell someone bad news, say it in person."
While I understand that many people don't like EMC, the #3 storage vendor in the world, this type of "geek humor" hits a new low. The video was posted over a month ago, but in light of the recent [shooting in Washington DC], I felt it was just not appropriate to post it here.
Readers, I appreciate all the suggestions, but give me some better warning next time!
This week I am in Minneapolis, MN, so was hoping that the complicated process of moving this blog over to "MyDeveloperWorks" would happen while I was gone, but alas, that does not appear to be the case.
Meanwhile, my partner in crime, Barry Whyte, has moved his blog [Storage Virtualization] successfully over to the new server.
Perhaps next week. If all goes well, the URL links should redirect correctly, but those of you using feed readers might need to re-subscribe to get the right RSS feeds.
Continuing my blog coverage of the [Forrester IT Forum 2009 conference], I will group a bunch of topics related to Cloud Computing into one post. Cloud Computing was a big topic here at the IT Forum, and probably was also at the other two conferences IBM participated in this week in Las Vegas.
The CIOs and IT professionals at this Forrester IT Forum seemed to be IT decision makers with a broader view. There was a lot of interest in Cloud Computing. What is Cloud Computing? Basically, it is renting IT capability on an as-needed basis from a computing service provider. The different levels of cloud computing depend on what the computing service provider actually provides. How do these compare with traditional co-location facilities or your own in-house on-premises computing? Here's my handy-dandy quick-reference guide:
Cloud Software-as-a-Service [SaaS], such as SalesForce and Google Apps.
Cloud Platform-as-a-Service [PaaS], which provides the operating system and middleware to build and run your own applications on.
Cloud Infrastructure-as-a-Service [IaaS], such as Amazon EC2 and RackSpace.
Traditional co-location facility, where you park your own equipment on rented floorspace with power, cooling and bandwidth.
Traditional on-premises, what most people do today: build or buy your own data center, buy the hardware, write or buy the software, then install and manage it.
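To keep these levels straight, I find it helps to think about who manages each layer of the stack. Here is a minimal sketch in Python; the five-layer split is my own simplification for illustration, not a formal standard:

    # Who manages each layer, by sourcing model.
    # "provider" = the service provider, "you" = your own IT department.
    LAYERS = ["application", "middleware", "operating system", "hardware", "facility"]

    MODELS = {
        "SaaS":        ["provider", "provider", "provider", "provider", "provider"],
        "PaaS":        ["you",      "provider", "provider", "provider", "provider"],
        "IaaS":        ["you",      "you",      "you",      "provider", "provider"],
        "Co-location": ["you",      "you",      "you",      "you",      "provider"],
        "On-premises": ["you",      "you",      "you",      "you",      "you"],
    }

    for model, owners in MODELS.items():
        managed = [layer for layer, who in zip(LAYERS, owners) if who == "provider"]
        print(f"{model:12s} provider manages: {', '.join(managed) or 'nothing'}")

The further down the list you go, the more of the stack you manage yourself.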
A main tent session had a moderated Q&A panel of three Forrester Analysts titled "Saving, Making and Risking Cash with Cloud Computing." Here are some key points from this panel:
Is Cloud Computing just another tool in the IT toolbox, or does it represent a revolution? The panel gave arguments for both. As a set of technology, protocols and standards, it is an evolutionary progression of other standards already in place, and an extension of methods used in co-location and time-share facilities. However, from a business model perspective, Cloud Computing represents a revolutionary trend, eliminating in some cases huge up-front capital expenses and/or long-term outsourcing contracts. PaaS and IaaS offerings can be rented by the hour, for example.
An example of using Cloud Computing for a one-time batch job: The New York Times decided to build an archive of 11 million articles, but this meant having to convert them all from TIFF to PDF format. The IT person they put in charge of this rented 100 machines on [Amazon Elastic Compute Cloud (EC2)] for 24 hours and was able to convert all 4TB of data for only $240 US dollars (see the arithmetic sketch after this list).
Cloud Computing can make it easier for companies to share information with clients, suppliers and business partners, eliminating the need to punch holes through firewalls to provide access.
Since it is relatively cheap for companies to try out different cloud computing offerings with little or no capital investment, the spaghetti model applies--"throw it on the wall, and see what sticks!"
What application areas should you consider running in the cloud? Employee self-service portals: Yes. ERP: Mixed. One-time batch jobs: Mixed. Email: Yes. Access Control: No. Web 2.0: Mixed. Testing/QA: Mixed. Back Office Transactions: No. Disaster Recovery: Mixed.
Different IT roles will see varying benefits and risks with cloud computing. However, the panel predicted that by 2011, every new IT project will have to answer the question "Why not run in the cloud?"
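The New York Times example above is easy to verify. A quick back-of-envelope sketch, assuming the roughly $0.10 per machine-hour that Amazon advertised for small EC2 instances around that time (my assumption; the panel did not break down the rate):

    # Back-of-envelope cost of the NYT TIFF-to-PDF conversion job on EC2.
    machines = 100
    hours = 24
    rate_per_machine_hour = 0.10   # assumed EC2 small-instance rate, in US dollars

    total_cost = machines * hours * rate_per_machine_hour
    print(f"{machines} machines x {hours} hours x ${rate_per_machine_hour:.2f}/hr = ${total_cost:.2f}")
    # 100 x 24 x $0.10 = $240.00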
There were a variety of track sessions that explored different aspects of cloud computing:
Software-as-a-Service: When and Why
This session had three Forrester analysts in a Q&A panel format. SaaS can provide much-needed relief from application support, maintenance and upgrade chores, and the choice and depth of offerings from SaaS providers is improving. However, comparing TCO between SaaS and on-premises deployments can yield different results for different use cases. For example, a typical SaaS rate of $100 US dollars per user per month could, with discounts, come to $1000 per year, or $10,000 over a 10-year period. Compare that to the total 10-year costs of an on-premises deployment, and you have a good ball-park comparison. SaaS can provide faster time-to-value, and you can easily try-before-you-buy several alternative offerings before making a decision.
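Here is a minimal sketch of that ball-park comparison, using the panel's SaaS figures; the headcount and on-premises numbers below are placeholders you would replace with your own:

    # Ball-park 10-year TCO: SaaS subscription vs. on-premises deployment.
    users = 50                       # hypothetical headcount
    saas_per_user_per_year = 1000    # $100/user/month list, ~$1000/year with discounts
    years = 10

    saas_tco = users * saas_per_user_per_year * years

    # On-premises placeholders -- substitute your own license, hardware,
    # upgrade and support costs for a fair comparison.
    license_and_hardware = 250_000
    annual_support_and_admin = 40_000
    onprem_tco = license_and_hardware + annual_support_and_admin * years

    print(f"SaaS 10-year TCO:        ${saas_tco:,}")
    print(f"On-premises 10-year TCO: ${onprem_tco:,}")

Neither number means much in isolation; the point is to put both on the same 10-year horizon before deciding.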
The downside to SaaS is that you need to understand the provider's data center, where it is located, and how it is protected for backup and disaster recovery. Some SaaS providers have only a single data center, so service might be disrupted if that site experiences a regional disaster.
Cloud IT Services: The Next Big Thing or Just Marketing Vapor?
Economic pressures are forcing companies to explore alternatives, and Cloud IT services are providing additional options over traditional outsourcing. Only 70-80 percent of companies are satisfied with traditional outsourcing, so there is opportunity for Cloud IT services to address those not satisfied. Scalable, consumption-based billing with Web-based accessibility and flexibility is an attractive proposition. Ten years ago, you could not buy an hour on a mainframe with your credit card; now you can.
Cloud technologies are mature, and there is interest in using these services. About 10 percent of companies are piloting SaaS offerings, 16 percent piloting PaaS offerings, and 13 percent investing in deploying "private clouds" within their data center. This week Aneesh Chopra, who is Barack Obama's pick as the first CTO for the US Federal Government, [stated to congressional leaders]: “The federal government should be exploring greater use of cloud computing where appropriate.”
IBM is betting heavily on their Cloud Computing strategy, has already gone through the reorganizations needed to be positioned well, and claims to have thousands of clients already. HP has some cloud offerings focused on their enterprise customers. Dell is investing and reorganizing for cloud as well.
Network Strategic Planning for Challenging Times
While not limited to Cloud Computing, companies are seeing WAN traffic double every 18 months, but without the corresponding increases in budget to cover it. The Forrester analyst covered WAN optimization management services and hybrid Ethernet-MPLS offerings to help people transition from MPLS VPNs to Carrier-grade Ethernet.
Who should you hire for WAN optimization? Do you trust your own Telco that provides your bandwidth to help you figure out ways to use less of it? Alternatives include System Integrators and Service providers like IBM and EDS. Or, you could try to do it yourself, but this requires capital investment in gear and performance monitoring software.
New workloads like Voice over IP (VoIP) and digital surveillance can help cost-justify upgrading your MPLS VPNs to Carrier-grade Ethernet. Converging this with iSCSI and/or Fibre Channel (FC) over Ethernet (FCoE) can help reduce costs as well. Both MPLS and Ethernet will co-exist for a while, and hybrid offerings from Telcos will help ease the transition. In the meantime, switching some workloads to Cloud Computing can provide immediate relief to in-house networks now. Converging voice, video, LAN, WAN and SAN traffic may require IT departments to rethink how the role of "network administrator" is handled.
Navigating the Myriad New Sourcing Models
The landscape of outsourcing has changed with the introduction of new Cloud Computing offerings. However, adapting these new offerings to internal preferences may prove challenging. The Forrester analyst suggested that attendees be ready to influence their companies to adopt Cloud Computing as a new sourcing option.
Traditional outsourcing just manages your existing hardware and software, often referred to as "Your mess for less!" However, outsourcing contract law is mature, and many outsourcers are large, well-established providers. In contrast, some SaaS providers are small, and the few that are large may be fairly new to the outsourcing business. Here are some things to consider:
Where will the data physically be located? There are government regulations, such as the US Patriot Act, that can influence this decision. Many Canadian and European customers are avoiding providers where data is stored in the United States for this reason.
What is the service delivery chain? Some cloud providers in turn use other cloud providers. For example, a SaaS provider might develop the software and then rent the platform it runs on from a PaaS provider, which in turn might be using offshore or co-location facilities to actually house their equipment. Knowing the service delivery chain may prove important in contract negotiations. Clarify "cloud" terminology and avoid mixed metaphors.
What is their contingency plan? What is your contingency plan if the system is slow or inaccessible? What is their plan to protect against data loss during disasters? What if they go out of business? Source code escrow has proven impractical in many cases. SLAs should provide for performance, availability and other key metrics. However, service level penalties are not a cure-all for major disruptions, loss of revenues or reputation.
How will they handle security, compliance and audits? Heavy regulatory requirements may favor dedicated resources.
Who has "custodianship" of the data? Will you get the data back if you discontinue the contract? If so, what format will it be in, and will it make any sense if you are not running the same application as the cloud provider?
Will they provide transition assistance? Moving from on-premises to cloud may involve some effort, including re-training of end users.
Are the resources shared or dedicated? For shared resource environments, is the capacity "fenced off" in any way to prevent other clients from impacting your performance or availability?
I am glad to see so much interest in Cloud Computing. To learn more, here is IBM's [Cloud Computing] landing page.
Forrester analysts kicked off the keynote sessions for Day 1 of the Forrester IT Forum 2009 event. The theme for this conference is "Redefining IT's value to the Enterprise." Rather than focusing on blue-sky futures that are decades away, Forrester wants to present a blend of pragmatic information that is actionable in the next 90 days, along with some forward-looking trends.
If you ask CEOs how well their IT operations are doing, 75 percent will say they are doing great. However, if you dig down and ask how their companies are leveraging IT to help generate revenues, reduce costs, improve employee morale, drive profits, improve customer service, or manage risks, then the percentage drops down to 30 to 35 percent.
What are the root causes of this "perception gap" in value between business and IT? Several ideas come to mind:
Some CEOs still consider IT departments as "cost centers". Rather than exploiting technology to help drive the rest of the business, they are seen as a necessary evil, an extension of the accounting department, for example.
Some CEOs consider IT's role as basically "keeping the lights on". They only notice IT when the lights go out, or when business outages are caused by disruptions in IT.
IT departments measure themselves in technology terms, not business terms. CEOs and the rest of the senior management team may not be "tech savvy", and the CIO and IT directors may not be "business savvy", resulting in failure to communicate IT's role and value to the rest of the business.
This conference is focused on CIOs and IT professionals, and how they can bridge the tech/business gap. The first two executive keynote presentations emphasized this point.
Bob Moffat, Senior VP and Group Executive, IBM
Bob Moffat (my fifth-line manager, or if you prefer, my boss's boss's boss's boss's boss) is the Senior VP and Group Executive of IBM's Systems and Technology Group that manufactures storage and other hardware. He presented how IBM is helping our clients deploy smarter solutions. Globalization has changed world business markets, the reach of information technology, and our clients' needs. To support that, IBM is focused on making the world a smarter planet: instrumented with appropriate sensors, interconnected over converging networks, and intelligent to provide visibility, control and automation.
It's time to rethink IT in light of these new developments, to think about IT in client terms, with business metrics. Bob gave several internal and customer examples, here's one from the City of Stockholm:
Covering nine square miles of Stockholm, Sweden, IBM led [the largest project of its kind] in Europe to fight traffic congestion. To reduce congestion caused by 300,000 vehicles, the City of Stockholm enacted a "congestion fee" with real-time recognition of license plates and a Web infrastructure to collect payments. The analytics, metrics and incentives have paid off. Since August 2007, traffic is down 18 percent, travel time on inner streets has been reduced, and "green" vehicle use is up 9 percent.
In addition to smarter traffic, IBM has initiatives for smarter water, smarter energy, smarter healthcare, smarter supply chain, and smarter food supply.
Dave Barnes, Senior VP and CIO, United Parcel Service (UPS)
Dave Barnes must act as the "trusted advisor" to the rest of the senior management team. UPS delivers packages worldwide. They put sensors on all of their vehicles, not just to know how fast drivers were driving, but also how often they drove in reverse gear, plus sensors on the engines to determine maintenance schedules. Analytics found that driving in reverse was the most dangerous maneuver, and by providing this information to the drivers themselves, the drivers were able to come up with their own innovative ways to minimize accidents. This is one role of IT: to provide employees the information they need to be better at their own jobs.
Dave also mentioned the importance of collaborating across business units. Their "Information Technology Steering Committee (ITSC)" has 15 members, of which only three are from the IT department. This helped deploy social media initiatives within UPS. For example, Twitter has been adopted so that senior management can get unfiltered customer feedback. This is perhaps another key role of IT, to flatten an organization from cultural hierarchies that prevent top brass up in the ivory tower from hearing what is going wrong down on the street. Too often, a customer or client complains to the nearest employee, and this may or may not get passed up accurately along the chain of command. Twitter allowed executives to see what was going on for themselves.
Dave also covered the "Best Neighbor" approach. If you were going to build a deck in your back yard, you might ask your neighbors who have already done this, and learn from their experience. Sadly, this does not happen enough in IT. To address this, UPS has a "Tech Governance Group" focused on business processes across the organization. For example, they improved "package flow", cutting 100 million miles of driving in the past few years.
Lastly, he mentioned that many technologists are "loners". UPS has a few like that, but tries to hire techies who look to team across business units instead. Likewise, they try to hire business people who are somewhat tech savvy. For example, they have encouraged business employees to write their own reports, rather than requesting new reports from the IT department. The end result: business people get exactly the reports they want, faster than waiting for IT to do it. Another role for IT is to provide end users the tools to make their own reports.
(Dave didn't mention what tools these were, but it sounded like the Business Intelligence and Reporting Tools [BIRT] that IBM uses.)
These two sessions were a great one-two punch for the audience of 600 CIOs and IT professionals. First, IBM set the groundwork for what needs to be done. Then, UPS showed how they did exactly that, adopting a dynamic infrastructure and getting great results. This is going to be an interesting week!
Recently, IBM and the University of Texas Medical Branch (UTMB) [launched an effort] using IBM's World Community Grid "virtual supercomputer" to allow laboratory tests on drug candidates for drug-resistant influenza strains and new strains, such as H1N1 (aka "swine flu"), in less than a month.
Researchers at the University of Texas Medical Branch will use [World Community Grid] to identify the chemical compounds most likely to stop the spread of the influenza viruses and begin testing these under laboratory conditions. The computational work adds up to thousands of years of computer time which will be compressed into just months using World Community Grid. As many as 10 percent of the drug candidates identified by calculations on World Community Grid are likely to show antiviral activity in the laboratory and move to further testing.
According to the researchers, without access to World Community Grid's virtual super computing power, the search for drug candidates would take a prohibitive amount of time and laboratory testing.
This reminded me of an 18-minute video of Larry Brilliant at the 2006 Technology, Entertainment and Design [TED] conference. Back in 2006, Larry predicted a pandemic in the next three years, and here it is 2009 and we have the H1N1 virus.
His argument was to have "early detection" and "early response" to contain worldwide diseases like this.
A few months after Larry's "call to action" in 2006, IBM and over twenty major worldwide public health institutions, including the World Health Organization [WHO] and the Centers for Disease Control and Prevention [CDC], [announced the Global Pandemic Initiative], a collaborative effort to help stem the spread of infectious diseases.
One might think that, given our proximity to Mexico, the first cases would have been in the border states, such as Arizona, but instead there were cases as far away as New York and Florida. The NYT explains in an article [Predicting Flu With the Aid of (George) Washington] that two rival universities, Northwestern University and Indiana University, both predicted that there would be about 2500 cases in the United States, based on air traffic control flight patterns and the tracking data from a Web site called ["Where's George"], which tracks the movement of US dollar bills stamped with the Web site URL.
The estimates were fairly close. According to the Centers for Disease Control and Prevention [H1N1 Flu virus tracking page], there are currently 3009 cases of H1N1 in 45 states, as of this writing.
This is just another example of how an information infrastructure, used properly to provide insight, make predictions, and analyze potential cures, can help the world be a smarter planet. Fortunately, IBM is leading the way.
Wrapping up this week's theme on Cloud Computing, I finish with an IBM announcement for two new products to help clients build private cloud environments from their existing Service Oriented Architecture (SOA) deployments.
IBM WebSphere CloudBurst Appliance -- a new hardware appliance that provides access to software virtual images and patterns that can be used as is or easily customized, and then securely deployed, managed and maintained in a private cloud.
IBM WebSphere Application Server Hypervisor Edition -- a version of IBM WebSphere Application Server software optimized to run in virtualized server environments such as VMware, which comes preloaded in WebSphere CloudBurst.
With more than 7,000 customer implementations worldwide, IBM is the SOA market leader. Of course, both of these products above can be used with IBM System Storage solutions, including Cloud-Optimized Storage offerings like Grid Medical Archive Solution (GMAS), Grid Access Manager software, Scale-Out File Services (SoFS), and the IBM XIV disk system.
IBM is part of the "Cloud Computing 5" major vendors pushing the envelope (the other four are Google, Microsoft, Amazon and Yahoo). In fact, IBM has a number of initiatives that allow customers to leverage IBM software in a cloud. IBM is working in collaboration with Amazon Web Services (AWS), a subsidiary of Amazon.com, Inc. to make IBM software available in the Amazon Elastic Compute Cloud (Amazon EC2). WebSphere sMash, Informix Dynamic Server, DB2, and WebSphere Portal with Lotus Web Content Management Standard Edition are available today through a "pay as you go" model for both development and production instances. In addition to those products, IBM is also announcing the availability of IBM Mashup Center and Lotus Forms Turbo for development and test use in Amazon EC2, and intends to add WebSphere Application Server and WebSphere eXtreme Scale to these offerings.
For more about IBM's leadership in Cloud Computing, see the IBM [Press Release].
Continuing this week's theme on Cloud Computing, Dynamic Infrastructure and Data Center Networking, IBM unveiled details of an advanced computing system that will be able to compete with humans on Jeopardy!, America’s favorite quiz television show. Additionally, officials from Jeopardy! announced plans to produce a human vs. machine competition on the renowned show.
For nearly two years, IBM scientists have been working on a highly advanced Question Answering (QA) system, codenamed "Watson" after IBM's first president, [Thomas J. Watson]. The scientists believe that the computing system will be able to understand complex questions and answer with enough precision and speed to compete on Jeopardy! Produced by Sony Pictures Television, Jeopardy! poses trivia questions covering a broad range of topics, such as history, literature, politics, film, pop culture, and science. It presents a grand challenge for a computing system due to the variety of subject matter, the speed at which contestants must provide accurate responses, and because the clues given to contestants involve analyzing subtle meaning, irony, riddles, and other complexities at which humans excel and computers traditionally do not. Watson will incorporate massively parallel analytical capabilities and, just like human competitors, will not be connected to the Internet or have any other outside assistance.
If this all sounds familiar, you might remember some of the events that have led up to this:
In 1984, the movie ["The Terminator"] introduced the concept of [Skynet], a fictional computer system developed by the military that becomes self-aware from its advanced artificial intelligence.
In 1997, an IBM computer called Deep Blue defeated World Chess Champion [Garry Kasparov] in a famous battle of human versus machine. To compete at chess, IBM built an extremely fast computer that could calculate 200 million chess moves per second based on a fixed problem. IBM’s Watson system, on the other hand, is seeking to solve an open-ended problem that requires an entirely new approach – mainly through dynamic, intelligent software – to even come close to competing with the human mind. Despite their massive computational capabilities, today’s computers cannot consistently analyze and comprehend sentences, much less understand cryptic clues and find answers in the same way the human brain can.
In 2005, Ray Kurzweil wrote [The Singularity is Near] referring to the wonders that artificial intelligence will bring to humanity.
The research underlying Watson is expected to elevate computer intelligence and human-to-computer communication to unprecedented levels. IBM intends to apply the unique technological capabilities being developed for Watson to help clients across a wide variety of industries answer business questions quickly and accurately.
This week's theme is alternative sourcing through Cloud Computing.
I thought I would start off the week by interviewing an owner of a Small or Medium-sized Business [SMB] that recently adopted this approach.
Meet Fred, one of the new co-owners of my singles activities club, Tucson Fun and Adventures, known affectionately as [TFA]. TFA recently adopted a new "Software-as-a-Service" [SaaS] offering for the company's Web site.
While the experience is still fresh in his mind, I thought this would be a good opportunity to illustrate some of the concepts of alternative sourcing through Cloud Computing by using a local example.
Give me some background on the company. How long has it been around? How many employees?
TFA has been in business since 1997, and has six employees, including an office manager, event planners and event coordinators.
How critical is "Web presence" to the business?
It's very important in several ways. First, the TFA staff plans 25 events per month, and our hundreds of members register for these events mostly through the Web site. Second, we have it connected to our bank accounts, so that it can process credit cards to collect the funds for renewals and event registrations. Third, it serves as a way to communicate upcoming events to our members, especially trips, so they can save the date on their own calendars. And fourth, the Web site serves as a "landing page" for all of our radio spots, newspaper ads, and other marketing efforts.
TFA had a Web site before, and now you have helped launch this new Web site. What motivated this change?
Our members were complaining about our 1999-era Web site. The pages were written in HTML and ASP (Active Server Pages) with SQL (Structured Query Language) connected to a Microsoft SQL Server 2005 database. It was mostly text-based, with the only animation being text scrolling horizontally across the screen. The Web hosting provider offered reliable access, but was located in New York state on East Coast time. If a member signed up for an event after 9pm or 10pm here in Tucson, it was marked with the next day's date, which could change the price of the event or indicate the deadline was missed. If there were any changes needed to the pages or logic, or new columns required in the database, it got expensive. The TFA employees don't know how to program in ASP or SQL, so we had to hire outside professionals each time.
Does this new Software-as-a-Service (SaaS) Web site address these problems you were trying to solve?
Yes. The new Web site is hosted by [Memberize], which provides a hosted membership management application. The TFA staff can now manage its membership, plan events, and communicate them with graphics, videos, and links to maps. They don't need to know ASP or SQL programming, because a built-in [WYSIWYG] editor is simple enough for anyone with standard word-processing skills. The database gives us the option to add customized fields for each member we have in our club.
Was it difficult to switch over?
Not at all. Memberize gave us a 60-day free trial, and we needed all that time to transfer over our membership records, customize the style of the overall template for all pages, and then copy over the content from our old Web site. We had to transfer our e-commerce service over, and contact GoDaddy to transfer the domain. The employee training required was fairly minimal. Cost-wise, it was only a few hundred dollars for a one-time setup fee, and then we pay a monthly fee based on a tiered pricing structure tied to the count of our active members.
How has the reaction been from your membership?
I've gotten a lot of positive feedback. The learning curve was minimal. Our members found the new Web site intuitive and interactive. For example, the calendar of events can be shown in a single month-at-a-glance format, with green dots showing the events you are signed up for.
And from your perspective, Fred, is the new Web site easy to administer?
Yes, I can now easily generate standard reports, and create my own ad-hoc reports as needed. This wasn't possible with the old system unless I hired an ASP programmer.
Hopefully, this provides some insight on how even the smallest SMB shops can adopt a Dynamic Infrastructure through alternative sourcing. Cloud Computing takes many forms, including Software-as-a-Service managed offerings.
People are confused over various orders of magnitude. News of the economic meltdown often blurs the distinction between millions (10^6), billions (10^9), and trillions (10^12). To show how different these three numbers are, consider the following:
A million seconds ago - you might have received your last paycheck (12 days)
A billion seconds ago - you were born or just hired on your current job (31 years)
A trillion seconds ago - cavemen were walking around in Asia (31,000 years)
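You can check these conversions yourself; a quick sketch:

    # How long is a million, billion, or trillion seconds?
    SECONDS_PER_DAY = 60 * 60 * 24
    SECONDS_PER_YEAR = SECONDS_PER_DAY * 365.25

    for label, n in [("million", 10**6), ("billion", 10**9), ("trillion", 10**12)]:
        print(f"A {label} seconds is about {n / SECONDS_PER_DAY:,.0f} days,"
              f" or {n / SECONDS_PER_YEAR:,.0f} years")
    # million  -> ~12 days
    # billion  -> ~32 years
    # trillion -> ~31,689 years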
That these numbers confuse the average person is no surprise, but that they confuse marketing people in the storage industry is even more hilarious. I am often correcting people who misunderstand MB (million bytes), GB (billion bytes) and TB (trillion bytes) of information. Take this graph as an example from a recent presentation.
At first, it looks reasonable: back in 2004, black-and-white 2D X-Ray images were only 1MB in size when digitized, but by 2010 there will be fancy 4D images that take 1TB, which the chart labels a 1000x increase. What? When I pointed out this discrepancy, the person who put the chart together didn't know what to fix: were 4D images only 1GB in size, or was it really a 1,000,000x increase?
If a 2D image is 1000 by 1000 pixels, with each pixel a byte of information, then a 3D image might be either 1000 by 1000 by 1000 [voxels], or 1000 by 1000 at 1000 frames per second (fps). The first is 3D volumetric space; the latter is called 2D+time in the medical field (the rest of us just say "video"). 4D images are 3D+time, volumetric scans over time, so conceivably these could be quite large in size.
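Here is the arithmetic, assuming one byte per pixel or voxel as above:

    # Medical image sizes at one byte per pixel/voxel.
    MB, GB, TB = 10**6, 10**9, 10**12

    image_2d = 1000 * 1000            # 1,000,000 bytes = 1 MB
    image_3d = image_2d * 1000        # = 1 GB (volumetric, or 2D+time "video")
    image_4d = image_3d * 1000        # 3D+time at 1000 time steps = 1 TB

    print(f"2D: {image_2d / MB:.0f} MB")
    print(f"3D: {image_3d / GB:.0f} GB")
    print(f"4D: {image_4d / TB:.0f} TB")
    # So going from 1 MB to 1 TB is a 1,000,000x increase, not 1000x.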
The key point is that advances in medical equipment result in capturing more data, which can help provide better healthcare. This would be the place I normally plug an IBM product, like the Grid Medical Archive Solution [GMAS], a blended disk and tape storage solution designed specifically for this purpose.
So, as government agencies look to spend billions of dollars to provide millions of people with proper healthcare, choosing to spend some of this money on a smarter infrastructure can result in creating thousands of jobs and saving everyone a lot of money, but more importantly, saving lives.
A short 2-minute [video] argues the case for Smarter Healthcare.
For more on this, check out Adam Christensen's blog post on [Smarter Planet], which points to a podcast by Dr. Russ Robertson, chairman of the Council on Medical Education at Northwestern University's Feinberg School of Medicine, and Dan Pelino, general manager of IBM's Healthcare and Life Sciences Industry.
Wrapping up this week's theme on IBM's Dynamic Infrastructure® strategic initiative, we have a few more goodies in the goody bag.
First item: Dave Bricker shows off the XIV cloud-optimized storage at Pulse 2009
Second item: Rodney Dukes discusses the latest features of the DS8000 disk system at Pulse 2009
Third item: IBM launches the [Dynamic Infrastructure Journal]. You can read the February 2009 edition online, and if you find it useful and interesting, subscribe to learn from IBM's transformation experts how to reduce cost, manage risk and improve service.
Whether or not you attended the IBM Pulse 2009 conference, you might enjoy looking at the rest of the series of videos on [YouTube] and photographs on [Flickr].
It seems like [only yesterday] I was talking about IBM's strategic initiatives for the New Enterprise Data Center, including the launch of asset and service management at [Pulse 2008] in Orlando, Florida.
This week, my colleagues are at [Pulse 2009] in Las Vegas, Nevada. (I'm not there this time, so stop asking all my colleagues where I am!) Obviously, a lot has changed in the last 12 months: the world's financial economy has collapsed, our delicate environment continues to unravel, and a new US President was elected to fix all that was broken by the former occupant. As a result, IBM's strategy has evolved beyond just data centers for large enterprises.
I can't think of a better time to emphasize the need for a more dynamic infrastructure. And this is not just focused on IT operations, but smarter business infrastructure as well, as the two are now very much intertwined: everything from smarter healthcare, smarter telecom, smarter retail, smarter distribution, and smarter transportation, to smarter financial services. IBM's [Dynamic Infrastructure®] is one of four strategic initiatives to help build a smarter planet.
Let's take a quick look at the key benefits:
Do you remember back to the days that the IT department was like the accounting department in the back office, merely recording what happened in a series of transactions? Not anymore! Today, IT is front and center of most businesses, helping to generate revenue, drive innovation, and provide better customer service. We are finding a convergence between the physical world of running business with the digital world of IT. Intelligence is everywhere, embedded in systems and operations throughout, not just in a data center.
Imagine that only 10-15 years ago the primary concern for IT operations was the cost of hardware. Now, thanks to [Moore's law], hardware is cheaper, but other IT budget costs like labor, management software, power and cooling are growing faster and becoming more predominant factors. IBM recognizes that you must consider the total cost of ownership, not just the acquisition cost of new hardware. But again, this isn't just about reducing the costs of IT; it is about making more effective use of IT resources to reduce costs everywhere else, in scheduling transportation, in managing manufacturing assets, and so on.
While the world feels much safer now that Barack Obama has taken over, there are still risks and threats out there, and businesses large and small have to manage them. Economic swings like we have experienced lately help weed out those companies that had fixed costs and static infrastructures, in favor of those with more variable costs and dynamic infrastructures. When the marketplace slows down, can your business "dial down" its operations to match? And when the recession is over and business is booming again, can your business "ramp up" fast enough to take on new opportunity? With IBM's Cloud Computing, companies can minimize their fixed investments and use a variable amount of computing as business needs change dynamically.
To learn more about Dynamic Infrastructure, read the IBM [Press Release].
When I was a kid, I used to love old spy movies where they would hide a small microchip or microfiche behind the stamp on a letter or postcard. "Yeah right," I would think to myself, "how much information could that little thing possibly hold?" In their post [Bringing the "New Intelligence" Down to Earth: Intro to Semantic Web, Internet-of-Things], my fellow IBM bloggers Jack Mason and Adam Christensen pointed me to a crazy new product called "Mir:ror" that connects to your PC or laptop.
At first, I thought it was another product spoof, like Onion News Network's video of the [Apple MacBook Wheel] that eliminates the need for a keyboard. But no, this product is real, from a company called [Violet], which makes the Mir:ror reader, the internet-connected rabbits, and the tiny postage stamps called "ztamps" with embedded RFID chips that allow everything to be interconnected. I can see a lot of interesting uses for the ztamps. Squishing CD-ROMs or memory sticks inside presentation folders was always awkward, but these are small, flat and discreet. I don't know how many GBs of storage each ztamp holds, but they look cool, don't they?
Just another example of becoming a smarter planet!
IBM's emphasis on "Information Infrastructure" is to help organizations get the right information, to the right people at the right time. This helps them to have the right insights, make the right decisions, and develop the right innovations needed for the challenges at hand.
As the planet got smaller and flatter, IBM led the way. Now, as the planet needs to get smarter--with more efficient health care, energy distribution, financial institutions, and IT infrastructures--IBM will once again take the lead.
In the post [Flowing Workflow], the folks over at Eightbar point to the latest 3D work being done with IBM Lotus Sametime.
IBM Sametime is IBM's instant messaging facility, which has been extended to include Voice over IP (VOIP) capability similar to Skype, and now is being developed as a launch point for 3D impromptu meetings "in-world", similar to [Second Life].
With many companies facing hard times and considering travel restrictions for face-to-faceinternal meetings, an information infrastructure that adopts this technology might be a reasonable alternative.
Today we watched Barack Obama get inaugurated as the 44th President of the United States, and he reminded all Americans that the power and strength of this country comes through its diversity. To some extent, this is also what gives IBM its power and strength. While not quite the orator that President Obama is, IBM's own CFO, Mark Loughridge, gave a rousing speech about IBM's 4Q08 and year-end financial results.
In 2008, IBM was successful not just because it had a wide diversity of servers and storage hardware products, but also a diversity of software and a diversity of service offerings. And lastly, IBM sells to a diversity of clients in different industries, throughout a diversity of markets. While the current economic meltdown might have affected businesses focused on the US and other major markets, IBM did particularly well last year in growth markets, including the so-called BRIC countries (Brazil, Russia, India and China).
IBM's approach of investing in R&D and its nearly 400,000 employees for long-term success continues to pay off. In a world where "Cash is King", IBM can also afford all those acquisitions and strategic initiatives, positioning the company for a brighter future.
Where there are challenges, IBM finds opportunity.
Wrapping up this week's theme on ways to make the planet smarter, and less confusing, I present IBM's third annual [five in five]. These are five IBM innovations to watch over the next five years, all of which have implications on information storage. Here is a quick [3-minute video] that provides the highlights:
It's Thursday at the [Data Center Conference] here in Las Vegas. Trying to keep up with all the sessions and activities has been quite challenging. As is often the case, there are more sessions I want to attend than I am physically able to, so I have to pick and choose.
Making the Green Data Center a Reality
The sixth and final keynote was an expert panel session, with Mark Bramfitt from Pacific Gas and Electric [PG&E], and Mark Thiele from VMware.
Mark Bramfitt explained PG&E's incentive program to help data centers be more energy efficient. They have spent $7 million US dollars so far on this, and he has requested another $50 million US dollars over the next three years. One idea was to put "shells" around each pod of 28 or so cabinets to funnel the hot air up to the ceiling, rather than having the hot air warm up the rest of the cold air supply.
The fundamental disconnect for a "green" data center is that the Facilities team pays for the electricity, but it is the IT department that makes the decisions that impact its use. The PG&E rebates reward IT departments for making better decisions. The best metric available is "Power Usage Effectiveness" or [PUE], calculated by dividing the total energy consumed by the data center by the energy consumed by the IT equipment itself. Typical PUE runs around 3.0, which means for every Watt used for servers, storage or network switches, another 2 Watts are used for power, cooling, and facilities. Companies are trying to reduce their PUE down to 1.6 or so. The lower the better, and 1.0 is the ideal. The problem is that changing the data center infrastructure is as difficult as replacing the phone system or your primary ERP application.
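The PUE calculation itself is simple; a minimal sketch, with made-up meter readings for illustration:

    # Power Usage Effectiveness = total facility energy / IT equipment energy.
    total_facility_kwh = 3_000_000   # hypothetical annual reading, whole data center
    it_equipment_kwh = 1_000_000     # hypothetical annual reading, servers/storage/switches

    pue = total_facility_kwh / it_equipment_kwh
    print(f"PUE = {pue:.1f}")
    # 3.0 here: for every Watt of IT load, two more Watts go to power,
    # cooling and facilities. The goal is to drive this toward 1.6 or lower.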
While California has [Title 24], which sets energy efficiency standards for both residential and commercial buildings, it does not apply to data centers. PG&E is working to add data center standards to this legislation.
The two speakers also covered Data Center [bogeymen], unsubstantiated myths that prevent IT departments from doing the right thing. Here are a few examples:
Power cycles - some people believe that x86 servers can typically only handle up to 3000 shutdowns, and so equipment is often left running 24 hours a day to minimize these. Most equipment is kept less than 5 years (1826 days), so turning off non-essential equipment at night and powering it back on the next morning stays well below this 3000 limit and can greatly reduce kWh consumption (see the sketch after this list).
Dust - many are so concerned about dust that they run extra air filters, which impacts the efficiency of cooling system air flow. New IT equipment tolerates dust much better than older equipment.
Humidity - Mark had a great story on this one. He said their "de-humidifier" broke, they never got around to fixing it, and after years without it they realized they didn't need to de-humidify at all.
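Here is the power-cycle arithmetic from the first bogeyman, plus a rough nightly-shutdown savings estimate; the per-server wattage and electricity rate are my assumptions, not the speakers' figures:

    # Nightly shutdowns over a 5-year life vs. the feared 3000-cycle limit.
    service_life_days = 5 * 365 + 1           # 1826 days, one power cycle per day
    print(f"{service_life_days} cycles, under the 3000 limit: {service_life_days < 3000}")

    # Rough savings from powering one server off 12 hours a night.
    idle_watts = 200                           # assumed idle draw per server
    hours_off_per_year = 12 * 365
    kwh_saved = idle_watts * hours_off_per_year / 1000
    rate_per_kwh = 0.10                        # assumed rate, US dollars per kWh
    print(f"~{kwh_saved:,.0f} kWh per year saved, about ${kwh_saved * rate_per_kwh:,.0f} per server")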
The session wrapped up with some "low hanging fruit", items that can provide immediate benefit with little effort:
Cold-aisle containment--Why are so few data centers doing this?
Colocation providers need to meter individual clients' energy usage -- IBM offers the instrumentation and software to make this possible
Air flow management--Simply organizing cables under the floor tiles could help this.
Virtualization and Consolidation.
High-efficiency power supplies
Managing IT from a Business Service Perspective
The "other" future of the data center is to manage it as a set of integrated IT services,rather than a collection of servers, storage and switches.IT Infrastructure Library (ITIL) is widely-accepted as a set of best practices to accomplish this "service management" approach. The presenter from ASG Software Solutions presented their Configuration Management Data Base (CMDB) and application dependency dashboard. Theyhave some customers with as many as 200,000 configuration items (CIs) in their CMDB.
The solution looked similar to the IBM Tivoli software stack presented earlier this year at the [Pulse conference]. Both ASG and IBM "eat their own dog food", or perhaps more accurately "drink their own champagne", using these software products to run their own internal IT operations.
For many, the future of a "green" data center managed as a set of integrated services is years away, but the technologies and products are available today, and there is no reason to postpone these projects any longer than necessary. For more about IBM's approach to the green data center, see [Energy Efficiency Solutions]. You can also take IBM's [IT Service Management self-assessment] to help determine which IBM tools you need for your situation.
Continuing my coverage of the 27th annual [Data Center Conference], the weather here in Las Vegas has been partly cloudy, which leads me to discuss some of the "Cloud Computing" sessions that I attended on Wednesday.
The x86 Server Virtualization Storm 2008-2012
Along with IBM, Microsoft is recognized as one of the "Big 5" of Cloud Computing. With their recent announcements of Hyper-V and Azure, the speaker presented pros and cons of these new technologies versus established offerings from VMware. For example, Microsoft's Hyper-V is about three times cheaper than VMware and offers better management tools. That could be enough to justify some pilot projects. By contrast, VMware is more lightweight, only 32MB, versus Microsoft Hyper-V which takes up to 1.5GB. VMware has a 2-3 year lead over Microsoft, and offers some features that Microsoft does not yet offer.
Electronic surveys of the audience offered some insight. Today, 69 percent were using VMware only, and 8 percent had VMware plus others, including Xen-based offerings from Citrix, Virtual Iron and others. However, by 2010, the audience estimated that 39 percent would be VMware+Microsoft and another 23 percent VMware plus Xen, showing a shift away from VMware's current dominance. Today, there are 11 VMware implementations for every Microsoft Hyper-V implementation, and this ratio is expected to drop to 3-to-1 by 2010.
Of the Xen-based offerings, Citrix was the most popular supplier. Others included Novell/PlateSpin, Red Hat, Oracle, Sun and Virtual Iron. Red Hat is also experimenting with kernel-based KVM. However, the analyst estimated that Xen-based virtualization schemes would never get past 8 percent marketshare, and felt that VMware and Microsoft would be the two dominant players with the bulk of the marketshare.
For cloud computing deployments, the speaker suggested separating "static" VMs from "dynamic" ones. Centralize your external storage first, and implement data deduplication for the OS load images. Which x86 workloads are best for server virtualization? The speaker offered this guidance:
The "good" are CPU-bound workloads, small/peaky in nature.
The "bad" are IO-intensive, those that exploit the features of native hardware
The "ugly" refers to workloads based on software with restrictive licenses and those not fully supported on VMs. If you have problems, the software vendor may not help resolve them.
Moving to the Cloud: Transforming the Traditional Data Center
IBM VP Willie Chiu presented the various levels of cloud computing.
Software-as-a-Service (SaaS) provides the software application, operating system and hardware infrastructure, such as SalesForce.com or Google Apps. Either the software meets your needs or it doesn't, but it has the advantage that the SaaS provider takes care of all the maintenance.
Platform-as-a-Service (PaaS) provides operating system, perhaps some middleware like database or web application server, and the hardware infrastructure to run it on. The PaaS provider maintains the operating system patches, but you as the client must maintain your own applications. IBM has cloud computing centers deployed in nine different countries across the globe offering PaaS today.
Infrastructure-as-a-Service (IaaS) provides the hardware infrastructure only. The client must maintain and patch the operating system, middleware and software applications. This can be very useful if you have unique requirements.
In one case study, Willie indicated that moving a workload from a traditional data center to the cloud lowered the costs from $3.9 million to $0.6 million, an 84 percent savings!
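Checking the math on that claim:

    # Savings in Willie Chiu's case study.
    before, after = 3.9e6, 0.6e6
    print(f"Savings: {1 - after / before:.1%}")   # 84.6 percent, in line with the quoted figure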
We've Got a New World in Our View
Robert Rosier, CEO of iTricity, presented their "IaaS" offering. "iTricity" was coined from the concept of "IT as electricity". iTricity is the largest Cloud Computing company in continental Europe, hosting 2500 servers with 500TB of disk storage across three locations in the Netherlands and Germany.
Attendees I talked to who had been at this conference before commented that this year's focus on virtualization and cloud computing is noticeably greater than in previous years. For more on this, read the 12-page whitepaper [IBM Perspective on Cloud Computing].
Continuing this week's coverage of the 27th annual [Data Center Conference], I attended some break-out sessions on the "storage" track.
Effectively Deploying Disruptive Storage Architectures and Technologies
Two analysts co-presented this session. In this case, the speakers are using the term "disruptive" in the [positive sense] of the word, as originally used by Clayton Christensen in his book [The Innovator's Dilemma], and not in the negative sense of IT system outages. By a show of hands, they asked if anyone had more storage than they needed. No hands went up.
The session focused on the benefits versus risks of new storage architectures, and which vendors they felt would succeed in this new marketplace around the years 2012-2013.
By electronic survey, here were the number of storage vendors deployed by members of the audience:
14 percent - one vendor
33 percent - two vendors, often called a "dual vendor" strategy
24 percent - three vendors
29 percent - four or more storage vendors
For those who have deployed a storage area network (SAN), 84 percent also have NAS, 61 percent also have some form of archive storage such as the IBM System Storage DR550, and 18 percent also have a virtual tape library (VTL).
The speaker credited IBM's leadership in the now popular "storage server" movement to the IBM Versatile Storage Server [VSS] from the 1990s, the predecessor to IBM's popular Enterprise Storage Server (ESS). A "storage server" is merely a disk or tape system built using off-the-shelf server technology, rather than customized [ASIC] chips, lowering the barriers of entry for a slew of small start-up firms entering the IT storage market, and leading to new innovation.
How can a system designed for no single point of failure (SPOF) actually fail? The speaker conveniently ignored the two most obvious answers (multiple failures, microcode error) and focused instead on mis-configuration. She felt part of the blame falls on IT staff not having adequate skills to deal with the complexities of today's storage devices, and the other part falls on storage vendors for making such complicated devices in the first place.
Scale-out architectures, such as IBM XIV and EMC Atmos, represent a departure from traditional "scale-up" monolithic equipment. Whereas scale-up machines are traditionally limited in scalability by their packaging, scale-out systems are limited only by the software architecture and back-end interconnect.
To go with cloud computing, the analyst categorized storage into four groups: Outsourced, Hosted, Cloud, and Sky Drive. The difference depended on where servers, storage and support personnel were located.
How long are you willing to wait for your preferred storage vendor to provide a new feature before switching to another vendor? A shocking 51 percent said at most 12 months! 34 percent would be willing to wait up to 24 months, and only 7 percent were unwilling to change vendors. The results indicate more confidence in being able to change vendors, rather than pressures from upper management to meet budget or functional requirements.
Beyond the seven major storage vendors, there are now dozens of smaller emerging or privately-held start-ups offering new storage devices. How willing were the members of the audience to do business with these? 21 percent already have devices installed from them, 16 percent plan to in the next 12-24 months, and 63 percent have no plans at all.
The key value propositions of the new storage architectures were ease-of-use and lower total cost of ownership. The speaker recommended developing a strategy or "road map" for deploying new storage architectures, with a focus on quantifying the benefits and savings. Ask the new vendor for references, local support, and an acceptance test or "proof-of-concept" to try out the new system. Also, consider the impact to Disaster Recovery or other existing IT processes that this new storage architecture may affect.
Tame the Information Explosion with IBM Information Infrastructure
Susan Blocher, IBM VP of marketing for System Storage, presented this vendor-sponsored session, covering the IBM Information Infrastructure part of IBM's New Enterprise Data Center vision. This was followed by Brad Heaton, Senior Systems Admin from ProQuest, who gave his "User Experience" of the IBM TS7650G ProtecTIER virtual tape library and its state-of-the-art inline data deduplication capability.
Best Practices for Managing Data Growth and Reducing Storage Costs
The analyst explained why everyone should be looking at deploying a formal "data archiving" scheme. Not just for "mandatory preservation" resulting from government or industry regulations, but also the benefits of "optional preservation" to help corporations and individual employees be more productive and effective.
Before, there were only two tiers of storage: expensive disk and inexpensive tape. Now, with the advent of slower, less-expensive SATA disks, including storage systems that emulate virtual tape libraries and others that offer Non-Erasable, Non-Rewriteable (NENR) protection, IT administrators have a middle ground to keep their archive data.
New software innovation supports better data management. The speaker recalled when "storage management" was equated with "backup" only; now it includes all aspects of management, including HSM migration, compliance archive, and long-term data preservation. I had a smile on my face--IBM has used "storage management" to refer to these other aspects of storage since the 1980s!
The analyst felt the best tool to control growth is to "Delete" the data no longer needed, but also felt that nobody uses the Storage Resource Management (SRM) tools needed to make this viable. Until then, people will choose instead to archive emails and user files to less expensive media. The speaker also recommended looking into highly-scalable NAS offerings--such as IBM's Scale-Out File Services (SoFS), Exanet, Permabit, IBRIX, Isilon, and others--when fast access to files is worth the premium price over tape media. The speaker also made the distinction between "stub-based" archiving--such as IBM TSM Space Manager, Sun's SAM-FS, and EMC DiskXtender--and "stub-less" archiving accomplished through file virtualization that employs a global namespace--such as IBM Virtual File Manager (VFM), EMC RAINfinity or F5's ARX.
She made the distinction between archives and backups. If you are keeping backups longer than four weeks, they are not really backups, are they? These are really archives, but not as effective. Recent legal precedent no longer considers long-term backup tapes as valid archive tapes.
To deploy a new archive strategy, create a formal position of "e-archivist", choose the applications that will be archived, and focus on requirements first, rather than going out and buying compliance storage devices. Try to get users to pool their project data into one location, to make archiving easier. Have the storage admins offer a "menu" of options to Line-of-Business, Legal and Compliance teams that may not be familiar with subtle differences in storage technologies.
While I am familiar with many of these best practices already, I found it useful to see which competitive products line up with those we already have within IBM, and which new storage architectures others find most promising.
I did not register soon enough to get into the MGM Grand itself, so I am staying at a Hilton at the other end of the Las Vegas strip, but am able to hop on the "Monorail" to get to the MGM, just in time for the breakfast and first welcome session.
This conference has a familiar setup: six keynote sessions, 62 break-out sessions, and four town hall meetings. Thanks to electronic survey devices on the seats, speakers were able to gather real-time demographics. A large portion of attendees, including myself, are attending this conference for the first time. Here's my recap of the first three keynote sessions:
The Future of Infrastructure and Operations: The Engine of Cloud Computing
How much do companies spend just to keep current? As much as 70 percent! The speaker noted that the best companies can get this down to 10 to 30 percent, leaving the rest of the IT budget to facilitate transformation. He predicts that companies are transforming their data centers from sprawled servers to virtualization, towards a fully automated, service-oriented, real-time infrastructure.
Whereas the original motivation for IT virtualization was to reduce costs, companies now recognize that it greatly improves agility, the ability to rapidly provision resources for new workloads, and that this will then lead to opportunities for alternative sourcing, such as cloud computing.
The operating system is becoming commoditized, shifting attention to a new concept: the "Meta OS". VMware's Virtual Data Center and Microsoft's Azure Fabric Controller are just two examples. Currently, analysts estimate only about 12 percent of x86 workloads are running virtualized, but this could be over 50 percent by 2012. In the same time frame, storage terabytes are expected to increase 6.5x, with WAN bandwidth growing only 35 percent per year.
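If you take that 6.5x as spanning the four years from 2008 to 2012 (my assumption about the window), the implied annual growth rate works out like this:

    # Implied compound annual growth rate for a 6.5x increase over 4 years.
    growth_factor = 6.5
    years = 4                             # assumed 2008-2012 window
    cagr = growth_factor ** (1 / years) - 1
    print(f"Implied storage growth: {cagr:.0%} per year")   # about 60 percent
    # Compare with WAN bandwidth growing only 35 percent per year --
    # that widening gap is part of why WAN optimization keeps coming up.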
Virtualization is not just for business applications. There are opportunities to eliminate the most costly part of any business: the Personal Computer, poster child of the skyrocketing costs of the client/server movement. Remote hosting of applications, streaming of applications, software as a service (SaaS) and virtual machines for the desktop can greatly reduce the costs of customized PC images and help desk support.
Cloud computing not only reduces costs per use, but provides a lower barrier of entry and some much needed elasticity. Draw a line anywhere along the application-to-hardware stack, and you can define a cloud computing platform or service. About 65 percent of the attendees surveyed indicated that they were already doing something with Cloud Computing, or were planning to in the next four years.
To help get there, the speaker felt that Value-added Resellers (VAR) and System Integrators (SI) would evolve into "service brokers", providing Small and Medium sized Businesses (SMB) "one throat to choke" in mixed multisourced operations. The term "multisource" caught me a bit off-guard, referring to having some workloads run internally (insourced) while other workloads run out on the Cloud (outsourced). Larger enterprises might have a "Dynamic Sourcing Team", a set of key employees serving as decision makers, employing both business and IT skills to determine the best sourcing for each application workload.
What are the biggest obstacles to getting there? The speaker felt the first was the IT staff; people and culture are the most difficult to change. The second was a lack of appropriate metrics. Here are the survey results from the attendees:
41 percent had metrics for infrastructure economic attributes
49 percent had metrics for qualities of service (QoS)
12 percent had metrics to measure agility, speed of resource provisioning
The Data Center Scenario: Planning for the Future
This second keynote had two analyst "co-presenters". The focus was on the importance of having a documented Data Center strategy and architecture. Unfortunately, most Data Centers "happen on their own", with a major overhaul every 5 to 10 years. The speakers presented some "best practices" for driving this effort.
The first issue was to identify tiers of criticality, similar to those defined by the [Uptime Institute]. In their example, the most critical workloads would have recovery point objectives (RPO) of zero, and recovery time objectives (RTO) of less than 15 minutes. This is achievable using synchronous mirroring, which acknowledges each write only after both sites have committed it, combined with full automation to handle the failover.
The second issue was to recognize that many applications were designed for local area networks (LAN), but many companies have distributed processing over a wide area network (WAN). Latency over these longer distances can kill the performance of such distributed applications.
The third issue was that different countries offer different levels of security, privacy and law enforcement. Canada and Ireland, for example, had the lowest risk, countries like India had medium risk, and countries like China and Russia had the highest risk, based on these factors.
The speakers suggested the following best practices:
Get a better understanding of the costs involved in providing IT services
Centralize applications that are not affected by latency, but regionalize latency-sensitive applications to remote locations to minimize distance delays.
Work towards a "lights out" data center facility, with operations personnel physically separated from data center facilities.
For the unfortunate few that are trying to stretch more life out of their existing aging data centers, the speakers offered this advice:
Build only what you need
Decommission orphaned servers and storage, which can account for 1 to 12 percent of your operations
Target for replacement any hardware over five years old, not just to reduce maintenance costs, but also to get more energy-efficient equipment.
Consider moving test workloads, and as much as half of your web servers, off UPS and onto the native electricity grid. In the event of an outage, this reduces the load on the UPS.
Implement power-capping and load-shedding, especially during peak times.
Enacting these changes can significantly improve the bottom line. Archaic data centers, typically those over 10 years old with a power usage effectiveness (PUE) over 3.0, can cost over twice as much to run as a more efficient data center. To learn more about PUE as a metric, see the Green Grid's whitepaper [Data Center power efficiency metrics: PUE and DCiE].
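For those who want to run the numbers themselves, here is a minimal sketch of how PUE and its reciprocal metric DCiE are computed; the wattage figures below are invented placeholders, not measurements from any real data center.

```python
# PUE = total facility power / IT equipment power (Green Grid definition).
# DCiE = IT equipment power / total facility power, expressed as a percent.
# The kW figures below are invented, purely for illustration.
it_load_kw = 500.0    # servers, storage, network gear
facility_kw = 1600.0  # IT load plus cooling, UPS losses, lighting

pue = facility_kw / it_load_kw
dcie = 100.0 / pue
print(f"PUE  = {pue:.2f}")    # 3.20 -- over 2 watts of overhead per IT watt
print(f"DCiE = {dcie:.1f}%")  # 31.3% of the power actually reaches the IT gear
```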
While virtualization can help with these issues, it also introduces new problems, such as VM sprawl and dealing with antiquated licensing schemes of software companies.
The Four Traits of the World's Best-Performing Business Leaders
Best-selling author Jason Jennings presented his findings in researching his various books:
It's Not the Big That Eat the Small... It's the Fast That Eat the Slow : How to Use Speed as a Competitive Tool in Business
Less Is More : How Great Companies Use Productivity As a Competitive Tool in Business
Think Big, Act Small
Hit the Ground Running : A Manual for New Leaders
Jason identified the best companies and interviewed their leaders, at companies such as Koch Industries, Nucor Steel, and IKEA. The leaders he interviewed felt a calling to serve as stewards of their companies, not just write mission and vision statements, and were willing to let go of projects or people that aren't working out.
Jason cited a 2007 Gallup poll on the American workplace indicating that 70 percent of employees do not feel engaged in their jobs. The focus of these leaders is to hire people with the right attitudes, rather than the right aptitudes, and to give those people the knowledge and the authority to make business decisions. Done well, employees will think and act as owners, and hold themselves accountable for the economic results. Jason found cases where 25-year-olds were given responsibility to make billion-dollar decisions!
I found his talk inspiring! The audience felt motivated to do their jobs better, and to be more engaged in the success of their companies.
These keynote sessions set the mood for the rest of the week. I can tell already that the speakers will toss out a large salad of buzzwords and IT industry acronyms. I saw several people in the audience confused by some of the terminology; hopefully they will come over to IBM booth 20 at the Solutions Expo for straight talk and explanation.
This week is Thanksgiving holiday in the USA, so I thought a good theme would be things I am thankful for.
I'll start by saying that I am thankful EMC finally announced Atmos last week. This was the "Maui" part of the Hulk/Maui rumors we heard over a year ago. To quickly recap, Atmos is EMC's latest storage offering for global-scale storage intended for Web 2.0 and Digital Archive workloads. Atmos can be sold as just software, or combined with Infiniflex, EMC's bulk, high-density commodity disk storage system. Atmos supports traditional NFS/CIFS file-level access, as well as SOAP/REST object protocols.
I'm thankful for various reasons; here's a quick list:
It's hard to compete against "vaporware"
Back in the 1990s, IBM was trying to sell its actual disk systems against StorageTek's rumored "Iceberg" project. It took StorageTek some four years to get this project out, but in the meantime we were comparing an actual product against a possibility. The main feature is what we now call "Thin Provisioning". Ironically, StorageTek's offering was not commercially successful until IBM agreed to resell it as the IBM RAMAC Virtual Array (RVA).
Until last week, nobody knew the full extent of what EMC was going to deliver on the many Hulk/Maui theories. Several hinted at what it could have been, and I am glad to see that Atmos falls short of those rumored possibilities. This is not to say that Atmos can't reach its potential, and certainly some of the design is clever, such as offering native SOAP/REST access.
Instead, IBM can now compare Atmos/Infiniflex directly to the features and capabilities of IBM's Scale Out File Services [SoFS], which offers a global-scale multi-site namespace with policy-based data movement; IBM System Storage Multilevel Grid Access Manager [GAM], which manages geographically distributed information; and the IBM [XIV Storage System], which offers high-density bulk storage.
Web 2.0 and Digital Archive workloads justify new storage architectures
When I presented SoFS and XIV earlier this year, I mentioned they were designed for the fast-growing Web 2.0 and Digital Archive workloads that were unique enough to justify their own storage architectures. One criticism was that SoFS appeared to duplicate what could be achieved with dozens of IBM N series NAS boxes connected with Virtual File Manager (VFM). Why invent a new offering with a new architecture?
With the Atmos announcement, EMC now agrees with IBM that the Web 2.0 and Digital Archive workloads represent a unique enough "use case" to justify a new approach.
New offerings for new workloads will not impact existing offerings for existing workloads
I find it amusing that EMC is quick to defend that Atmos will not eat into its DMX business, which is exactly the FUD they threw out about IBM XIV versus DS8000 earlier this year. In reality, neither the DS8000 nor the DMX were used much for Web 2.0 and Digital Archive workloads in the past. Companies like Google, Amazon and others had to either build their own from piece parts, or use low-cost midrange disk systems.
Rather, the DS8000 and DMX can now focus on the workloads they were designed for, such as database applications on mainframe servers.
Cloud-Oriented Storage (COS)
Just when you thought we had enough terminology already, EMC introduces yet another three-letter acronym [TLA]. Kudos to EMC for coining phrases to help move new concepts forward.
Now, when an RFP asks for Cloud-oriented storage, I am thankful this phrase will help serve as a trigger for IBM to lead with SoFS and XIV storage offerings.
Digital archives are different from Compliance Archives
EMC was also quick to point out that the object-storage Atmos is different from their object-storage EMC Centera, the former being for "digital archives" and the latter for "compliance archives". Different workloads, different use cases, different offerings.
Ever since IBM introduced its [IBM System Storage DR550] several years ago, EMC Centera has been playing catch-up to match IBM's many features and capabilities. I am thankful the Centera team was probably too busy to incorporate Atmos capabilities, so it was easier to make Atmos a separate offering altogether. This allows the IBM DR550 to continue to compete against Centera's existing feature set.
Micro-RAID arrays, logical file and object-level replication
I am thankful that one of the Atmos policy-based features is replicating individual objects, rather than LUN-based replication and protection. SoFS supports this for logical files regardless of their LUN placement, GAM supports replication of files and medical images across geographical sites in the grid, and the XIV supports this for 1MB chunks regardless of their hard disk drive placement. The 1MB chunk size was based on the average object size from established Web 2.0 and Digital Archive workloads.
I tried to explain the RAID-X capability of the XIV back in January, under much criticism that replication should only be done at the LUN level. I am thankful that Marc Farley on StorageRap coined the phrase [Micro-RAID array] to help move this new concept further. Now, file-level, object-level and chunk-level replication can be considered mainstream.
Much larger minimum capacity increments
The original XIV in January offered 51TB of capacity per rack, and this went up to 79TB per rack for the most recent IBM XIV Release 2 model. Several complained that nobody would purchase disk systems in such increments. Certainly, small and medium sized businesses may not consider XIV for that reason.
I am thankful Atmos comes in 120TB, 240TB and 360TB sizes. The companies that purchase disk for Web 2.0 and Digital Archive workloads do purchase capacity in these large sizes. Service providers add capacity to the "Cloud" to support many of their end-clients, so purchasing disk capacity to rent back out represents a revenue-generating opportunity.
Renewed attention on SOAP and REST protocols
IBM and Microsoft have been pushing SOA and Web Services for quite some time now. REST, which stands for [Representational State Transfer], allows static and dynamic HTML message passing over standard HTTP. SOAP, which originally stood for [Simple Object Access Protocol] and was later re-interpreted as "Service Oriented Architecture Protocol", takes this one step further, allowing different applications to send "envelopes" containing messages and data between applications using HTTP, RPC, SMTP and a variety of other underlying protocols. Typically, these messages are simple text surrounded by XML tags, easily stored as files, or rows in databases, and served up by SOAP nodes as needed.
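To make the contrast concrete, here is a minimal sketch in Python using only the standard library; the URLs and the "GetQuote" operation are hypothetical, purely for illustration.

```python
# Sketch: the same request made REST-style and SOAP-style.
# example.com and GetQuote are made-up; substitute a real service.
import urllib.request

# REST: the resource is identified by the URL itself; a plain HTTP GET returns it.
rest_response = urllib.request.urlopen("http://example.com/quotes/IBM")
print(rest_response.read())

# SOAP: the request is an XML "envelope" POSTed to a service endpoint.
soap_envelope = """<?xml version="1.0"?>
<soap:Envelope xmlns:soap="http://schemas.xmlsoap.org/soap/envelope/">
  <soap:Body>
    <GetQuote xmlns="http://example.com/quoteservice">
      <Symbol>IBM</Symbol>
    </GetQuote>
  </soap:Body>
</soap:Envelope>"""
request = urllib.request.Request(
    "http://example.com/quoteservice",
    data=soap_envelope.encode("utf-8"),
    headers={"Content-Type": "text/xml; charset=utf-8",
             "SOAPAction": "http://example.com/quoteservice/GetQuote"},
)
print(urllib.request.urlopen(request).read())
```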
It's hard to show leadership until there are followers
IBM's leadership sometimes goes unnoticed until followers create "me, too!" offerings or establish similar business strategies. IBM's leadership in Cloud and Grid computing is no exception. Atmos is the latest me-too product offering in this space, trying pretty much to address the same challenges that SoFS and XIV were designed for.
So, perhaps EMC is thankful that IBM has already paved the way, breaking through the ice on their behalf. I am thankful that perhaps I won't have to deal with as much FUD about SoFS, GAM and XIV anymore.
Well, it's Tuesday, ["election day"] here in the USA, and again IBM has more announcements.
IBM announced [IBM Tivoli Key Lifecycle Manager v1.0] (TKLM), which provides a graphical interface to manage encryption keys, including retention criteria when sharing keys with other companies.
TKLM is supported on AIX, Solaris, Windows, Red Hat and SUSE Linux; IBM plans to offer TKLM for z/OS in 2009. TKLM can be used with the Firefox or Internet Explorer web browsers. It incorporates the Encryption Key Manager (EKM) that IBM initially offered to support encryption keys for the TS1120, TS1130, and LTO-4 drives.
While this is needed today for tape, IBM positions this software to also manage the encryption keys for "Full Drive Encryption" (FDE) disk drive modules (DDM) in IBM disk systems in 2009.
This is page 34 of Sequoia Capital's [56-slide presentation] about the current financial meltdown. In the past, IT spending tracked closely to the rest of the economy, but the latest downturn is not yet reflected in IT spending.
The rest of the deck is worth going through, with interesting stats presented in a clear manner.
For a while now, IBM has been trying to explain to clients that focusing on just storage hardware acquisition costs is not enough. You need to consider the "Total Cost of Ownership" or TCO of a purchase decision. For active data, a 3-5 year TCO assessment can give you a better comparison of costs between IBM and competitive choices. For long-term archive retention, a 7-10 year TCO assessment may be necessary.
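As a back-of-the-envelope illustration of why acquisition price alone misleads, here is a minimal sketch; all of the dollar figures and cost categories are invented placeholders, and a real TCO assessment would include many more (floor space, software licensing, migration, outage risk, staff time).

```python
# Sketch of a multi-year TCO comparison with made-up numbers.
def tco(acquisition, annual_costs, years):
    """Total cost of ownership: purchase price plus recurring costs over time."""
    return acquisition + sum(annual_costs.values()) * years

system_a = tco(acquisition=250_000,
               annual_costs={"maintenance": 30_000, "energy": 12_000, "admin": 40_000},
               years=5)
system_b = tco(acquisition=180_000,
               annual_costs={"maintenance": 45_000, "energy": 20_000, "admin": 55_000},
               years=5)
print(f"System A 5-year TCO: ${system_a:,}")  # $660,000
print(f"System B 5-year TCO: ${system_b:,}")  # $780,000 -- cheaper to buy, costlier to own
```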
Now, IBM has a cute [2-minute video] that brings an appropriate analogy to help IT and non-IT executives understand.
While some might be familiar with mashups that combine public Web 2.0 sources of information, enterprise mashups go one step further, integrating with the "information infrastructure" of your data center. It's not enough to deliver the right information to the right person at the right time; it has to be in the right format, in a manner that can be readily understood and acted upon. Enterprise mashups can help.
Last month, HP and Oracle jointly announced their new "Exadata Storage Server". This solution pairs HP servers and storage with Oracle software, designed for Data Warehouse and Business Intelligence (DW/BI) workloads.
I immediately recognized the Exadata Storage Server as a "me too" product, copying the idea from IBM's [InfoSphere Balanced Warehouse], which combines IBM servers, IBM storage and IBM's DB2 database software to accomplish this, but from a single vendor, rather than a collaboration of two vendors. The Balanced Warehouse has been around for a while. I even blogged about this last year, in my post [IBM Combo trounces HP and Sun] when IBM announced its latest E7100 model. IBM offers three different sizes: C-class for smaller SMB workloads, D-class for moderate size workloads, and E-class for large enterprise workloads.
One would think that since IBM and Oracle are the top two database software vendors, and IBM and HP are the top two storage hardware vendors, IBM would be upset or nervous about this announcement. We're not. I would gladly recommend comparing IBM offerings with anything HP and Oracle have to offer. And with IBM's acquisition of Cognos, IBM has made a bold statement that it is serious about competing in the DW/BI market space.
But apparently, it struck a nerve over at EMC.
Fellow blogger Chuck Hollis from EMC went on the attack, and Oracle blogger Kevin Closson went on the defensive. For those readers who do not follow either, here is the latest chain of events:
When it comes to blog fights like these, there are no clear winners or losers; but if done respectfully, they can benefit everyone involved, giving readers insight into the products as well as the company cultures that produce them. Let's see how each side fared:
Chuck implies that HP doesn't understand databases and Oracle doesn't understand server and storage hardware, so cobbling together a solution based on this two-vendor collaboration doesn't make sense to him. The few people I know who work at HP and Oracle are smart, so I suspect this is more a claim about each company's "core strengths". Few would associate HP with database knowledge, or Oracle with hardware expertise, so I give Chuck a point on this one.
Of course, Chuck doesn't have deep, inside knowledge of this new offering, nor do I for that matter, and Kevin is patient enough to correct all of Chuck's mistaken assumptions and assertions. Kevin understands that EMC's "core strengths" aren't in servers or databases, so he explains things in simple enough terms that EMC employees can understand; I give Kevin a point on this one.
If two is bad, then three is worse! How much bubble gum and baling wire do you need in your data center? The better option is to go to the one company that offers it all and brings it together into a single solution: IBM InfoSphere Balanced Warehouse.
Continuing this week's theme of dealing with the global economic meltdown, recession and financial crisis, I found a great video that recaps IBM CEO Sam Palmisano's recommendations for being more competitive in this environment.
In a recent speech to business leaders, Sam outlined what he sees as the four most important steps to thriving in the global economy. The highlights can be seen in this [2-minute video] on IBM's "Forward View" eMagazine.
As financial firms focus on costs, their IT departments will have an opportunity to consolidate their servers, networks and storage equipment. Consolidating disk and tape resources, implementing storage virtualization, and reducing energy costs might get a boost from this crisis. Consolidating disparate storage resources to a big SoFS, XIV, or DS8000 disk system, or a TS3500 tape library, might greatly help reduce costs.
The mixed-vendor environments that result from such mergers and acquisitions can be complicated to manage. Thankfully, IBM TotalStorage Productivity Center manages both IBM and non-IBM equipment, based on open industry standards like SMI-S and WBEM. Merged companies might let go of IT people with only vendor-specific knowledge, but keep the ones with cross-vendor infrastructure management skills and ITIL certification.
Comparing different vendor equipment
It often seems that when there is a merger or acquisition, the two companies were using storage gear from different vendors. IBM has made some incredible improvements over the past three years, in both performance and energy efficiency, but many companies with non-IBM equipment may not be aware of them. If there was ever a time to perform a side-by-side comparison between IBM and non-IBM equipment, here is your chance.
For more on the impact of the financial meltdown on IT, see this InfoWorld [Special Report].
Wrapping up my week on successful uses of information, I thought I would discuss the visualization of data: not just bar charts and pie charts, but how effective multi-variable plots can be at conveying information.
IBM's [Many Eyes] project recognizes that 70 percent of the sensory input neurons in our brain are focused on visual inputs, so we might recognize patterns if only data were presented in more interesting visual representations.
In addition to the X and Y axes, variables can be represented by circle size and color. Here's an example plot of past US bailouts, with variables representing amount, year, company and industry. This plot does not include the 700 billion US dollar bailout currently under discussion.
This is part of IBM's Collaborative User Experience (CUE) research lab. The software is available Web 2.0-style at no charge: just upload your data set and choose one of 16 different presentation styles.
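If you want to experiment with this style of multi-variable "bubble" plot yourself, here is a minimal sketch using matplotlib; the bailout figures below are rough, illustrative numbers of my own, not the Many Eyes data set.

```python
# Sketch of a four-variable plot: X = year, Y = amount,
# circle size = scaled amount, color = industry. Data is illustrative only.
import matplotlib.pyplot as plt

years   = [1979, 1989, 2001, 2008]            # X axis
amounts = [1.5, 124.6, 18.6, 30.0]            # Y axis: billions of dollars
sizes   = [a * 20 for a in amounts]           # circle area scales with amount
colors  = ["green", "red", "blue", "orange"]  # one color per industry
labels  = ["Auto", "S&L", "Airlines", "Banking"]

fig, ax = plt.subplots()
ax.scatter(years, amounts, s=sizes, c=colors, alpha=0.5)
for x, y, label in zip(years, amounts, labels):
    ax.annotate(label, (x, y))                # name each bubble
ax.set_xlabel("Year")
ax.set_ylabel("Amount (billions USD)")
ax.set_title("US bailouts by amount, year and industry (illustrative data)")
plt.show()
```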
These plots get even more interesting when you animate them over time. In 2006, Hans Rosling gathered data from the United Nations and other publicly funded sources and presented his findings at the TED conference. Here is the 20-minute video of that presentation (click on play at right), titled ["Debunking third-world myths with the best stats you've ever seen"], in which he debunks the myth that all countries fall into two distinct categories: Industrialized and Developing.
Amazingly, the data--as well as the software to analyze it--is available at the [GapMinder.org] Web site.
For more information on how you can deploy an information infrastructure that allows you to search, visualize and leverage the most value from your information, contact your local IBM representative or IBM Business Partner.
This post will focus on Information Compliance, the fourth and final part of the four-part series this week. I have received a few queries on my choice of sequence for this series: Availability, Security, Retention and Compliance.
Why not have them in alphabetical order? IBM avoids alphabetizing in one language, because then it may not be alphabetized when translated to other languages.
Why not have them in a sequence that spells out an easy-to-remember mnemonic, like "CARS"? Again, when translated to other languages, those mnemonics no longer work.
Instead, I worked with our marketing team on a more appropriate sequence, based on psychology and the cognitive bias of [primacy and recency effects].
Here's another short 2-minute video, on Information Compliance
Full disclosure: I am not a lawyer. The following will delve into areas related to government and industry regulations. Consult your risk officer or legal counsel to make sure any IT solution is appropriate for your country, your industry, or your specific situation.
IBM estimates there are over 20,000 regulations worldwide related to information storage and transmission.
For information availability, some industry regulations mandate a secondary copy a minimum distance away to protect against regional disasters like hurricanes or tsunamis. IBM offers Metro Mirror (up to 300km) and Global Mirror (unlimited distance) disk mirroring to support these requirements.
For information security, some regulations relate to privacy and prevention of unauthorized access. Two prominent ones in the United States are:
Health Insurance Portability and Accountability Act (HIPAA) of 1996
HIPAA regulates health care providers, health plans, and health care clearinghouses in how they handle the privacy of patients' medical records. These regulations apply whether the information is on film, on paper, or stored electronically. Obviously, electronic medical records are easier to keep private. Here is an excerpt from an article on [WebMD]:
"There are very good ways to protect data electronically. Although it sounds scary, it makes data more protected than current paper records. For example, think about someone looking at your medical chart in the hospital. It has a record of all that is happening -- lab results, doctor consultations, nursing notes, orders, prescriptions, etc. Anybody who opens it for whatever reason can see all of this information. But if the chart is an electronic record, it's easy to limit access to any of that. So a physical therapist writing physical therapy notes can only see information related to physical therapy. There is an opportunity with electronic records to limit information to those who really need to see it. It could in many ways allow more privacy than current paper records."
Gramm-Leach-Bliley Act (GLBA) of 1999
GLBA regulates the handling of sensitive customer information by banks, securities firms, insurance companies, and other financial service providers. Financial companies use tape encryption to comply with GLBA when sending tapes from one firm to another. IBM was the first to deliver tape drive encryption, with the TS1120, and later with the LTO-4 and TS1130 tape drives.
For information retention, there are a lot of regulations that deal with how information is stored, in some cases immutable to protect against unethical tampering, and when it can be discarded. Two prominent regulations in the United States are:
U.S. Securities and Exchange Commission (SEC) 17a-4 of 1997
In the past, the IT industry used the acronym "WORM", which stands for the "Write Once, Read Many" nature of certain media, like CDs, DVDs, optical and tape cartridges. Unfortunately, WORM does not apply to disk-based solutions, so IBM adopted the language from SEC 17a-4 that calls for storage that is "Non-Erasable, Non-Rewriteable" or NENR. This new umbrella term applies to disk-based solutions, as well as tape and optical WORM media.
SEC 17a-4 indicates that broker/dealers and exchange members must preserve all electronic communications relating to the business of their firm for a specific period of time. During this time, the information must not be erased or re-written.
Sarbanes-Oxley (SOX) Act of 2002
SOX was born in the wake of [Enron and other corporate scandals]. It governs the way that financial information is stored, maintained and presented to investors, and disciplines those who break its rules. It applies only to public companies, i.e., those that offer their securities (stock shares, bonds, liabilities) for sale to the public through a listing on a U.S. exchange, such as NASDAQ or NYSE.
SOX focuses on preventing CEOs and other executives from tampering with the financial records. To meet compliance, companies are turning to the [IBM System Storage DR550], which provides Non-erasable, Non-rewriteable (NENR) storage for financial records. Unlike competitive products like EMC Centera, which function mostly as space-heaters on the data center floor once they fill up, the DR550 can be configured as a blended disk-and-tape storage system, so that the most recent, most likely to be accessed data remains on disk, while the older, least likely to be accessed data is moved automatically to less expensive, more environment-friendly "green" tape media.
Did SOX hurt the United States' competitiveness? Critics feared that these new regulations would discourage new companies from going public. Ernst & Young found these fears did not come true, and published a study, [U.S. Record IPO Activity from 2006 Continues in 2007]. In fact, the improved investor confidence that SOX provides has given rise to similar legislation in other parts of the world: Euro-SOX, the European Union Investor Protection Act, and J-SOX, the Financial Instruments and Exchange Law for Japan.
For those who only read the first and last paragraphs of each post, here is my recap: Information Compliance is ensuring that information is protected against regional disasters, unauthorized access, and unethical tampering, as required to meet industry and government regulations. Such regulations often apply whether the information is stored on traditional paper or film media or electronically, but compliance can often be handled more cost-effectively when the information is stored electronically. Appropriate IT governance can help maintain investor confidence.
In Monday's post, [IBM Information Infrastructure launches today], I explained how this strategic initiative fits into IBM's New Enterprise Data Center vision. The launch was presented at the IBM Storage and Storage Networking Symposium to over 400 attendees in Montpellier, France, with corresponding standing-room-only crowds in New York and Tokyo.
This post will focus on Information Retention, the third of the four-part series this week.
Here's another short 2-minute video, on Information Retention
Let's start with some interesting statistics. Fellow blogger Robin Harris has an interesting post on his StorageMojo blog, [Our changing file workloads], which discusses the findings of a study titled "Measurement and Analysis of Large-Scale Network File System Workloads" [14-page PDF]. This paper was a collaboration between researchers from the University of California Santa Cruz and our friends at NetApp. Here's an excerpt from the study:
Compared to Previous Studies:
Both of our workloads are more write-oriented. Read to write byte ratios have significantly decreased.
Read-write access patterns have increased 30-fold relative to read-only and write-only access patterns.
Most bytes are transferred in longer sequential runs. These runs are an order of magnitude larger.
Most bytes transferred are from larger files. File sizes are up to an order of magnitude larger.
Files live an order of magnitude longer. Fewer than 50 percent are deleted within a day of creation.
Files are rarely re-opened. Over 66 percent are re-opened only once, and 95 percent fewer than five times.
File re-opens are temporally related. Over 60 percent of re-opens occur within a minute of the first open.
A small fraction of clients account for a large fraction of file activity. Fewer than 1 percent of clients account for 50 percent of file requests.
Files are infrequently shared by more than one client. Over 76 percent of files are never opened by more than one client.
File sharing is rarely concurrent and sharing is usually read-only. Only 5 percent of files opened by multiple clients are concurrent and 90 percent of sharing is read-only.
Most file types do not have a common access pattern.
Why are files being kept ten times longer than before? Because the information still has value:
Provide historical context
Gain insight to specific situations, market segment demographics, or trends in the greater marketplace
Help innovate new ideas for products and services
Make better, smarter decisions
National Public Radio (NPR) had an interesting piece the other day. By analyzing old photos, a researcher for Cold War Analysis was able to identify an interesting [pattern for Russian presidents]. (Be sure to listen to the 3-minute audio to hear a hilarious song about the results!)
Which brings me to my own collection of "old photos". I bought my first digital camera in the year 2000, and have taken over 15,000 pictures since then. Before that, I used a 35mm film camera, getting the negatives developed and prints made. Some of these date back to my years in high school and college. I have a mix of sizes, from 3x5 to 4x6 and 5x7 inches, and sometimes I got double prints. Only a small portion are organized into scrapbooks. The rest are in envelopes, prints and negatives, in boxes taking up half of the linen closet in my house. Following the success of the [Library of Congress using flickr], I decided the best way to organize these was to have them digitized first. There are several ways to do this.
The first option, a flatbed scanner, is just too time-consuming: lift the lid, place one or a few prints face down on the glass, close the lid, press the button, and repeat. I estimate 70 percent of my photos are in [landscape orientation], and 30 percent in [portrait mode]. I can either spend extra time to orient each photo correctly on the glass, or rotate the digital image later.
I was pleased to learn that my Fujitsu ScanSnap S510 sheet-feed scanner can take in a short stack (a dozen or so) of photos and generate a JPEG file for each. I can select 150, 300 or 600dpi, and five levels of JPEG compression. All the photos feed in portrait mode, which I can then rotate later on the computer once digitized. A command line tool called [ImageMagick] can help automate the rotations. While I highly recommend the ScanSnap scanner, this is still a time-consuming process for thousands of photos.
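Here is a minimal sketch of how such a batch rotation might be scripted; it assumes ImageMagick's mogrify command is installed and on the PATH, and that you have already sorted into one folder the scans that all need the same 90-degree turn.

```python
# Sketch: batch-rotate scanned photos by driving ImageMagick from Python.
# The "scans/portrait" folder name is my own illustrative choice.
import glob
import subprocess

for jpeg in glob.glob("scans/portrait/*.jpg"):
    # mogrify edits the file in place; -rotate 90 turns it clockwise
    subprocess.run(["mogrify", "-rotate", "90", jpeg], check=True)
```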
"The best way to save your valuable photos may be by eliminating the paper altogether. Consider making digital images of all your photos."
Here's how it works: you ship your prints (or slides, or negatives) to the ScanMyPhotos.com facility in Irvine, California. They have a huge machine that scans them all at 300dpi, no compression, and they send back your photos and a DVD containing digitized versions in JPEG format, all for only 50 US dollars plus shipping and handling, per thousand photos. I don't think I could even hire someone locally to run my scanner for that!
The deal got better when I contacted them. For people like me with accounts on Facebook, flickr, MySpace or Blogger, they will [scan your first 1000 photos for free] (plus shipping and handling). I selected a thousand 4x6" photos from my vast collection, organized them into eight stacks with rubber bands, and sent them off in a shoe box. The photos get scanned in landscape mode, so I spent about four hours preparing what I sent them, making sure they were all face up, with the top of the picture oriented either to the top or left edge. For the envelopes that had double prints, I "deduplicated" them so that only one set got scanned.
The box weighed seven pounds, and cost about 10 US dollars to send from Tucson to Irvine via UPS on Tuesday. They came back the following Monday, all my photos plus the DVD, for 20 US dollars shipping and handling. Each digital image is about 1.5MB, roughly 1800x1200 pixels, so the full thousand (about 1.5GB) easily fit on a single 4.7GB DVD. The quality is the same as if I scanned them at 300dpi on my own scanner, and comparable to a 2-megapixel camera on most cell phones. Certainly not the high-res photos I take with my Canon PowerShot, but suitable enough for email or Web sites. So, for about 30 US dollars, I got my first batch of 1000 photos scanned.
ScanMyPhotos.com offers a variety of extra-priced options, like rotating each file to the correct landscape or portrait orientation, color correction, exact sequence order, hosting the photos on their Web site for 30 days to share with friends and family, and extra copies of the DVD. All of these represent a trade-off between having them do it for me for an additional fee, or me spending time doing it myself--either beforehand in the preparation, or afterwards managing the digital files--so I can appreciate that.
Perhaps the weirdest option was having your original box returned for an extra $9.95. If you don't have a huge collection of empty shoe boxes in your garage, you can buy a similarly sized cardboard box for only $3.49 at the local office supply store, so I don't understand this one. The box they return all your photos in can easily be used for the next batch.
I opted not to get any of these extras. The one option I think they should add would be to have them just discard the prints, and send back only the DVD itself. Or better yet, discard the prints, and email me an ISO file of the DVD that I can burn myself on my own computer. Why pay extra shipping to send back the entire box of prints, just so that I can dump them in the trash myself? I will keep the negatives, in case I ever need to re-print at high resolution.
Overall, I am thoroughly delighted with the service, and will now pursue sending the rest of my photos in for processing, and reclaim my linen closet for more important things. Now that I know that a thousand 4x6 prints weigh seven pounds, I can estimate how many photos I have left to do, and decide which bulk discount option to choose.
With my photos digitized, I will be able to do all the things that IBM talks about with Information Retention:
Place them on an appropriate storage tier. I can keep them on disk, tape or optical media.
Easily move them from one storage tier to another. Copying digital files in bulk is straightforward, and as new technologies develop, I can refresh the bits onto new media, to avoid the "obsolescence of CDs and DVDs" as discussed in this article in [PC World].
Share them with friends and family, either through email, on my Tivo (yes, my Tivo is networked to my Mac and PC and has the option to do this!), or upload them to a photo-oriented service like [Kodak Gallery or flickr].
Keep multiple copies in separate locations. I could easily burn another copy of the DVD myself and store it in my safe deposit box or my desk at work. With all of the regional disasters like hurricanes, an alternative might be to back up all your files, including your digitized photos, with an online backup service like [IBM Information Protection Services], from last year's acquisition of Arsenal Digital.
If the prospect of preserving my high school and college memories for the next few decades seems extreme, consider that the [Long Now Foundation] is focused on retaining information for centuries. They are even suggesting that we start representing years with five digits, e.g., 02008, to handle the deca-millennium bug which will come into effect 8,000 years from now. IBM researchers are also working on [long-term preservation technologies and open standards] to help in this area.
For those who only read the first and last paragraphs of each post, here is my recap: Information Retention is about managing [information throughout its lifecycle], using policy-based automation to help with placement, movement and expiration. An "active archive" of information serves to help gain insight, innovate, and make better decisions. Disk, tape, and blended disk-and-tape solutions can all play a part in a tiered information infrastructure for long-term retention of information.
In Monday's post, [IBM Information Infrastructure launches today], I explained how this strategic initiative fits into IBM's New Enterprise Data Center vision. For you podcast fans, IBM Vice Presidents Bob Cancilla (Disk Systems), Craig Smelser (Storage and Security Software), and Mike Riegel (Information Protection Services) highlight some of the new products and offerings in this 12-minute recording:
This post will focus on Information Security, the second of the four-part series this week.
Here's another short 2-minute video, on Information Security
Security protects information against both internal and external threats.
For internal threats, most focus on whether person A has a "need-to-know" about information B. Most of the time, this is fairly straightforward. However, sometimes production data is copied to support test and development efforts. Here is the typical scenario: the storage admin copies production data that contains sensitive or personal information to a new copy, and authorizes software engineers or testers full read/write access to this data. In some cases, the engineers or testers may be employees; other times they might be hired contractors from an outside firm. In either case, they may not be authorized to read this sensitive information. To solve this, IBM announced the [IBM Optim Data Privacy Solution] for a variety of environments, including Siebel and SAP enterprise resource planning (ERP) applications.
I found this solution quite clever. The challenge is that production data is interrelated and typically lives inside [relational databases]. For example, one record in one database might have a name and serial number, and that serial number is used to reference a corresponding record in another database. The IBM Optim Data Privacy Solution applies a range of "masks" to transform complex data elements such as credit card numbers, email addresses and national identifiers, while retaining their contextual meaning. The masked results are fictitious, but consistent and realistic, creating a "safe sandbox" for application testing. This method can mask data from multiple interrelated applications to create a "production-like" test environment that accurately reflects end-to-end business processes. The testers get data they can use to validate their changes, and the storage admins can rest assured they have not exposed anyone's sensitive information.
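This is not the Optim product itself, but the underlying idea of deterministic, consistent masking can be sketched in a few lines; the hashing scheme and the 9-digit output format are my own illustrative choices.

```python
# Sketch: deterministic masking yields fictitious but *consistent* values,
# so interrelated records in different tables still match up after masking.
import hashlib

def mask_id(value: str, secret: str = "per-project-secret") -> str:
    """Map a real identifier to a fictitious 9-digit one, consistently."""
    digest = hashlib.sha256((secret + value).encode()).hexdigest()
    return f"{int(digest, 16) % 1_000_000_000:09d}"

# The same input always masks to the same output, so a serial number
# masked in one table still joins to the same masked value in another.
print(mask_id("123-45-6789"))  # some fictitious 9-digit value
print(mask_id("123-45-6789"))  # identical to the line above -- deterministic
```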
Beyond just who has the "need-to-know", we might also be concerned with who is "qualified-to-act". Most systems today have both authentication and authorization support. Authentication determines that you are who you say you are, through knowledge of unique userid/password combinations, or other credentials. Fingerprints, eye retinal scans and other biometrics look great in spy movies, but they are not yet widely used. Instead, storage admins have to worry about dozens of different passwords on different systems. One of the many preview announcements made by Andy Monshaw at Monday's launch was that IBM is going to integrate the features of [Tivoli Access Manager for Enterprise Single Sign-On] into IBM's Productivity Center software, to be renamed "IBM Tivoli Storage Productivity Center". You enter one userid/password, and you will not have to enter the individual userid/password of each of the managed storage devices.
Once a storage admin is authenticated, they may or may not be authorized to read or act on certain information. Productivity Center offers role-based authorization, so that people can be identified by their roles (tape operator, storage administrator, DBA), which then determines what they are authorized to see, read, or act upon.
For external threats, you need to protect data both in-flight and at-rest. In-flight deals with data that travels over a wire, or wirelessly through the air, from source to destination. When companies have multiple buildings, the transmissions can be encrypted at the source and decrypted on arrival. The bigger threat is data at-rest: hackers and cyber-thieves looking to download specific content, like personally identifiable information, financial information, and other sensitive data.
IBM was the first to deliver an encrypting tape drive, the TS1120. The encryption process is handled right at the drive itself, eliminating the burden of encryption from the host processing cycles, and eliminating the need for specialized hardware sitting between server and storage system. Since then, we have delivered encryption on the LTO-4 and TS1130 drives as well.
When disk drives break or are decommissioned, the data on them may still be accessible. Customers have a tough decision to make when a disk drive module (DDM) stops working:
Send it back to the vendor or manufacturer to have it replaced, repaired or investigated, potentially exposing sensitive information.
Keep the broken drive, forfeit any refund or free replacement, and then physically destroy the drive. There are dozens of videos on [YouTube.com] on different ways to do this!
The launch previewed the [IBM partnership with LSI and Seagate] to deliver encryption technology for disk drives, known as "Full Drive Encryption" or FDE. Having all data encrypted on all drives, without impacting performance, eliminates having to decide which data gets encrypted and which doesn't. With data safely encrypted, companies can now send in their broken drives for problem determination and replacement. The more consistently a solution can be applied across everything, without human intervention and decision making, the less impact it has. This was the driving motivation in both disk and tape drive encryption.
(Early in my IBM career, some lawyers decided we needed to add a standard 'paragraph' to the copyright text in the upper comment section of our software modules, and so we had a team meeting on this. The lawyer who presented to us estimated that perhaps only 20 to 35 percent of the modules needed to be updated with this paragraph, and taught us what to look for to decide whether or not each module needed to be changed. My team argued how tedious this was going to be: opening each module, evaluating it, and making the decision would take time, and with thousands of modules involved the process could take weeks. The fact that this was going to take us weeks did not seem to concern our lawyer one bit; it was just the cost of doing business. Finally, I asked if it would be legal to just add the standard paragraph to ALL the modules without any analysis whatsoever. The lawyer was stunned. There was no harm adding this paragraph to all the modules, he said, but that would be 3-5x more work, so why would I even suggest it? Our team laughed, recognizing immediately that it was the fastest way to get it done. One quick program updated all modules that afternoon.)
To manage these keys, IBM previewed the Tivoli Key Lifecycle Manager (TKLM). This software helps automate the management of encryption keys throughout their lifecycle, to help ensure that encrypted data on storage devices cannot be compromised if lost or stolen. It will apply to both disk and tape encryption, so that one system will manage all of the encryption keys in your data center.
For those who only read the first and last paragraphs of each post, here is my recap: Information Security is intended as an end-to-end capability to protect against both internal and external threats, restricting access only to those who have a "need-to-know" or are "qualified-to-act". Security approaches like "single sign-on", and encryption that applies to all tapes and all disks in the data center, greatly simplify deployment.
In yesterday's post, [IBM Information Infrastructure launches today], I explained how this strategic initiative fits into IBM's New Enterprise Data Center vision. For those who prefer audio podcasts, here is Marissa Benekos interviewing Andy Monshaw, IBM General Manager of IBM System Storage.
This post will focus on Information Availability, the first of the four-part series this week.
Here's another short 2-minute video, on Information Availability
I am not in the marketing department anymore, so I have no idea how much IBM spent to get these videos made, but I would hate for the money to go to waste. I suspect the only way they will get viewed is if I include them in my blog. I hope you like them.
As with many IT terms, "availability" might conjure up different meanings for different people.
Some focus on the pure mechanics of delivering information. An information infrastructure involves all of the software, servers, networks and storage needed to bring information to the application or end user, so every link in the chain must be highly available: software should not crash, servers should have "five nines" (99.999%) uptime, networks should be redundant, and storage should handle the I/O requests with sufficient performance. For tape libraries, the tape cartridge must be available, robotics are needed to fetch the tape, and a drive must be available to read the cartridge. All of these factors represent the continuous operations and high availability aspects of business continuity.
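As a quick sanity check on what those "nines" actually permit, here is a minimal sketch computing the yearly downtime budget for each availability level.

```python
# Downtime budget per year for a given availability percentage.
MINUTES_PER_YEAR = 365.25 * 24 * 60  # 525,960 minutes

for nines in (99.9, 99.99, 99.999):
    downtime = (1 - nines / 100) * MINUTES_PER_YEAR
    print(f"{nines}% uptime allows about {downtime:.1f} minutes of downtime per year")
# 99.9%   -> ~526 minutes (almost 9 hours)
# 99.99%  -> ~53 minutes
# 99.999% -> ~5.3 minutes -- "five nines" leaves very little slack
```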
In addition to the IT equipment, you need to make sure the facilities that support that equipment, such as power and cooling, are also available. Independent IT analyst Mark Peters from the Enterprise Strategy Group (ESG) summarizes his shock about the findings of a recent [survey commissioned by Emerson Network Power] in his post [Backing Up Your Back Up]. Here is an excerpt:
"The net take-away is that the majority of SMBs in the US do not have back-up power systems. As regional power supplies get more stretched in many areas, the possibility of power outages increases and obviously many SMBs would be vulnerable. Indeed, while the small business decision makers questioned for the survey ranked such power outages ahead of other threats (fires, government regulation, weather, theft and employee turnover) only 39% had a back-up power system. Yeah, you could say, but anything actually going wrong is unlikely; but apparently not, as 79% of those surveyed had experienced at least one power outage during 2007. Yeah, you might say, but maybe the effects were minor; again, apparently not, since 42% of those who'd had outages had to actually close their businesses during the longest outages. The DoE says power outages cost $80 billion a year and businesses bear 98% of those costs."
Others might be more concerned about outages resulting from planned and unplanned downtime. Storage virtualization can help reduce planned downtime, by allowing data to be migrated from one storage device to another without disrupting the application's ability to read and write data. The latest "Virtual Disk Mirroring" (VDM) feature of the IBM System Storage SAN Volume Controller takes it one step further, providing high availability even for entry-level and midrange disk systems managed by the SVC. For unplanned downtime, IBM offers a complete range of support, from highly available clusters, to two-site and three-site disaster recovery support, to application-aware data protection through IBM Tivoli Storage Manager.
Many outages are caused by human error, and in many cases it is the human factor that prevents quick resolution. Storage admins are unable to isolate the failing component, identify the configuration, or provide the appropriate problem determination data to the technical team ready to offer support and assistance. For this, IBM TotalStorage Productivity Center software, and its hardware version the IBM System Storage Productivity Center, can help reduce outage time and increase information availability. It can also provide automation to predict, or provide early warning of, impending conditions that could get worse if not taken care of.
But perhaps yet another take on information availability is the ability to find and communicate the right information to the right people at the right time. Recently, Google announced a historic milestone: their search engine now indexes over [one trillion Web pages]! Google and other search engines have changed the level of expectations for finding information. People ask why they can find information on the internet so quickly, yet it takes weeks for companies to respond to a judge for an e-discovery request.
Lastly, the team at IBM's [Eightbar blog] pointed me to Mozilla Labs' Ubiquity project for their popular Firefox browser. This project aims to help people communicate information in a more natural way, rather than through unfriendly URL links in an email. It is still beta, of course, but helps show what "information availability" might be possible in the near future. Here is a 7-minute demonstration:
For those who only read the first and last paragraphs of each post, here is my recap: Information Availability includes Business Continuity and Data Protection to facilitate quick recovery, storage virtualization to maximize performance and minimize planned downtime, infrastructure management and automation to reduce human error, and the ability to find and communicate information to others.
Earlier this year, IBM launched its [New Enterprise Data Center vision]. The average data center was built 10-15 years ago, at a time when the World Wide Web was still in its infancy, some companies were deploying their first storage area network (SAN) and email system, and if you asked anyone what "Google" was, they might tell you it was ["a one followed by a hundred zeros"]!
Full disclosure: Google, the company, just celebrated its [10th anniversary] yesterday, and IBM has partnered with Google on a variety of exciting projects. I am employed by IBM, and own stock in both companies.
In just the last five years, we saw rapid growth in information, fueled by Web 2.0 social media, email, mobile hand-held devices, and the convergence of digital technologies that blurs the lines between communications, entertainment and business information. This explosion in information is not just "more of the same", but rather a dramatic shift from predominantly databases for online transaction processing to mostly unstructured content. IT departments are no longer just the "back office" recording financial transactions for accountants, but now also take on a more active "front office" role. For a growing number of industries, information technology plays a pivotal role in generating revenue, making smarter business decisions, and providing better customer service.
IBM felt a new IT model was needed to address this changing landscape, so IBM's New Enterprise Data Center vision has these five key strategic initiatives:
Highly virtualized resources
Business resiliency and security
Business-driven Service Management
Green, Efficient, Optimized facilities
Information Infrastructure
In February, IBM announced new products and features to support the first two initiatives, including the highly virtualized capability of the IBM z10 EC mainframe, and related business resiliency features of the [IBM System Storage DS8000 Turbo] disk system.
In May, IBM launched its Service Management strategic initiative at the Pulse 2008 conference. I was there in Orlando, Florida at the Swan and Dolphin resort to present to clients. You can read my three posts:[Day 1; Day 2 Main Tent; Day 2 Breakout sessions].
In June, IBM launched its fourth strategic initiative, "Green, Efficient and Optimized Facilities", with [Project Big Green 2.0], which included the Space-Efficient Volume (SEV) and Space-Efficient FlashCopy (SEFC) capabilities of the IBM System Storage SAN Volume Controller (SVC) 4.3 release. Fellow blogger and IBM Master Inventor Barry Whyte (BarryW) has three posts on his blog about this: [SVC 4.3.0 Overview; SEV and SEFC detail; Virtual Disk Mirroring and More]
Some have speculated that the IBM System Storage team seemed to be on vacation the past two months, with few press releases, little or no fanfare about our July and August announcements, and no direct responses to critics and FUD in the blogosphere. It was because we were holding them all for today's launch, taking our cue from a famous perfume commercial:
"If you want to capture someone's attention -- whisper."
My team and I were actually quite busy at the [IBM Tucson Executive Briefing Center]. In between doing our regular job talking to excited prospects and clients, we trained sales reps and IBM Business Partners, wrote certification exams, and updated marketing collateral. Fortunately, competitors stopped promoting their own products to discuss and demonstrate why they are so scared of what IBM is planning. The fear was well justified. Even a few journalists helped raise the word-of-mouth buzz and excitement level. A big kiss to Beth Pariseau for her article on [SearchStorage.com]!
(Last week, we broke radio silence to promote our technology demonstration of 1 million IOPS using Solid State Disk, just to get the huge IBM marketing machine oiled up and ready for today.)
Today, IBM General Manager Andy Monshaw launched the fifth strategic initiative, [IBM Information Infrastructure], at the [IBM Storage and Storage Networking Symposium] in Montpellier, France. Montpellier is one of the six New Enterprise Data Center Leadership Centers launched today. The other five are Poughkeepsie, Gaithersburg, Dallas, Mainz and Boeblingen, with more planned for 2009.
Although IBM has been using the term "information infrastructure" for more than 30 years, it might be helpful to define it for you readers:
“An information infrastructure comprises the storage, networks, software, and servers integrated and optimized to securely deliver information to the business.”
In other words, it's all the "stuff" that delivers information from the magnetic surface recording of the disk or tape media to the eyes and ears of the end user. Everybody has an information infrastructure already; some are just more effective than others. For those of you not happy with yours, IBM has the products, services and expertise to help with your data center transformation.
IBM wants to help its clients deliver the right information to the right people at the right time, to get the most benefit from information, while controlling costs and mitigating risks. There might be more than a dozen ways to address the challenges involved, but IBM's Information Infrastructure strategic initiative focuses on four key solution areas: Information Availability, Information Security, Information Retention, and Information Compliance.
Last, but not least, I would like to welcome to the blogosphere IBM's newest blogger, Moshe Yanai, formerly the father of the EMC Symmetrix and now leading the IBM XIV team. Already from his first post on his new [ThinkStorage blog], I can tell he is not going to pull any punches.