Safe Harbor Statement: The information on IBM products is intended to outline IBM's general product direction and it should not be relied on in making a purchasing decision. The information on the new products is for informational purposes only and may not be incorporated into any contract. The information on IBM products is not a commitment, promise, or legal obligation to deliver any material, code, or functionality. The development, release, and timing of any features or functionality described for IBM products remains at IBM's sole discretion.
Tony Pearson is an active participant in local, regional, and industry-specific interests, and does not receive any special payments to mention them on this blog.
Tony Pearson receives part of the revenue proceeds from sales of books he has authored listed in the side panel.
Tony Pearson is a Master Inventor and Senior IT Specialist for the IBM System Storage product line at the
IBM Executive Briefing Center in Tucson, Arizona, and a featured contributor
to IBM's developerWorks. In 2011, Tony celebrated his 25th anniversary with IBM Storage on the same day as IBM's Centennial. He is
author of the Inside System Storage series of books. This blog is for the open exchange of ideas relating to storage and storage networking hardware, software and services. You can also follow him on Twitter @az990tony.
(Short URL for this blog: ibm.co/Pearson)
Before we started, we asked the first survey question: "How is storage planning conducted in your shop?" Of the various responses, nearly four out of ten responded "Part of an overall IT infrastructure strategy".
Jon Toigo went first, and spent 20 minutes or so laying out the problem as he sees it. Jon travels all over visiting customers struggling with their storage infrastructures, so he gets to hear a lot of this first hand.
I then spent 20 minutes or so presenting IBM's vision, strategy and offerings to help solve these problems. I could speak for hours on this topic, but we kept it short for this one-hour webcast. To learn more, request a visit to the Tucson Executive Briefing Center.
At the end of my talk, we put out the second survey, asking the audience "What is your number one priority with respect to storage operations today?" Over one fourth of the attendees were focused on reducing storage infrastructure cost of ownership by any means possible.
I am glad we saved the last 15 minutes for Q&A, as there were a lot of questions.
The replay is now available. If you attended the event and want to hear it again, or want to share it with your colleagues, or you missed it and want to hear it, then [Register for the Replay].
To make true advances in any industry or field requires forward thinking—as well as industry insight and experience. It can't be done just by packaging a bag of piece parts and putting a new label on it. But forward thinkers are putting smarter, more powerful technology to uses that were once unimaginable -- either in scale or in progress.
The graphics developed for the IBM Smarter Planet vision are interesting. This one for Infrastructure includes images relating to public utilities, like gas, water and electricity; clouds, representing cloud computing; green forests, representing the need for energy efficiency and a reduced carbon footprint to fight global warming; roads, representing the intricate transportation and traffic systems, highways and city streets that connect us all together; and a printed circuit board, representing the Information Technology that makes all of this possible.
Ironically, I didn't even know I made the final cut until I got three, yes three, separate requests for interviews about it. I have already reached the "million hits" milestone. Other people track these things for me, so it will be interesting to see how much additional traffic my latest [15 minutes of fame] will generate.
Infrastructure is just one of the 25 different areas that IBM's vision for a Smarter Planet is trying to address, including the need for smarter buildings, smarter cities, smarter transportation systems, smarter energy grids, smarter healthcare and public safety, and smarter governments.
This week I am at the Data Center Conference 2009 in Las Vegas. There are some 1700 people registered this year for this conference, representing a variety of industries like public sector, services, finance, healthcare and manufacturing. A survey of the attendees found:
55 percent are attending this conference for the first time.
18 percent have attended once before, like me.
15 percent have attended two or three times before.
12 percent have attended four or more times before.
Plans for 2010 IT budgets were split evenly, one third planning to spend more, one third planning to spend about the same, and the final third looking to cut their IT budgets even further than in 2009. The biggest challenges were Power/Cooling/Floorspace issues, aligning IT with Business goals, and modernizing applications. The top three areas of IT spend will be for Data Center facilities, modernizing infrastructure, and storage.
There are six keynote sessions scheduled, and 66 breakout sessions for the week. A "Hot Topic" was added on "Why the marketplace prefers one-stop shopping" which plays to the strengths of IT supermarkets like IBM, encourages HP to acquire EDS and 3Com, and forces specialty shops like Cisco and EMC to form alliances.
Day 2 began with a series of keynote sessions. Normally when I see "IO" or "I/O", I immediately think of input/output, but here "I&O" refers to Infrastructure and Operations.
Business Sensitivity Analysis leads to better I&O Solutions
The analyst gave examples from Alan Greenspan's biography to emphasize his point that what this financial meltdown has caused is a decline in trust. Nobody trusts anyone else. This is true between people, companies, and entire countries. While the GDP declined 2 percent in 2009 worldwide, it is expected to grow 2 percent in 2010, with some emerging markets expected to grow faster, such as India (7 percent) and China (10 percent). Industries like Healthcare, Utilities and Public sector are expected to lead the IT spend by 2011.
While IT spend is expected to grow only 1 to 5 percent in 2010, there is a significant shift from Capital Expenditures (CapEx) to Operational Expenses (OpEx). OpEx represented only 64 percent of the IT budget five years ago, in 2004, but today represents 76 percent and is still growing. Many companies are keeping their aging IT hardware in service longer, beyond traditional depreciation schedules. The analyst estimated that over 1 million servers were kept longer than planned in 2009, and another 2 million will be kept longer in 2010.
An example of hardware kept too long was the November 17 delay of some 2,000 flights in the United States, caused by a failed router card in Utah that was part of the air traffic control system. Modernizing this system is estimated to cost $40 billion US dollars.
Top 10 priorities for the CIO were Virtualization, Cloud Computing, Business Intelligence (BI), Networking, Web 2.0, ERP applications, Security, Data Management, Mobile, and Collaboration. There is a growth in context-aware computing, connecting operational technologies with sensors and monitors to feed back into IT, with an opportunity for pattern-based strategy. Borrowing a concept from the military, "OpTempo" allows a CIO to speed up or slow down various projects as needed. By seeking out patterns, developing models to understand those patterns, and then adapting the business to fit those patterns, a strategy can be developed to address new opportunities.
Infrastructure and Operations: Charting the course for the coming decade
This analyst felt that strategies should not focus only on looking forward, but also look left and right, at what IBM calls "adjacent spaces". He covered a variety of hot topics:
65 percent of the energy used to run x86 servers accomplishes nothing, with the average x86 server running at only 7 to 12 percent CPU utilization.
Virtualization of servers, networks and storage is transforming IT into one big logical system image, which plays well with Green IT initiatives. He joked that this is what IBM offered 20 years ago with mainframe "Single System Image" sysplexes, and that we have come full circle.
One area of virtualization is desktop images (VDI). This goes back to the benefits of the green-screen 3270 terminals of the mainframe era, eliminating the headaches of managing thousands of PCs by instead having thin clients rely heavily on centralized services.
The deluge of data continues, as more convenient access drives demand for more data. The analyst estimates storage capacity will increase 650 percent over the next five years, with over 80 percent of this being unstructured data. Automated storage tiering, a la Hierarchical Storage Manager (HSM) from the mainframe era, is once again popular, along with new technologies like thin provisioning and data deduplication.
IT is also being asked to do complex resource tracking, such as power consumption. In the past IT and Facilities were separate budgets, but that is beginning to change.
The fastest growing social network was Twitter, with 1,382 percent growth in 2009; 69 percent of the new users who joined this year were 39 to 51 years old. By comparison, Facebook grew by only 249 percent. Social media is a big factor both inside and outside a company, and management should be aware of what Tweets, blogs, and others in the collective are saying about you and your company.
The average 18 to 25 year old sends out 4,000 text messages per month. In 24 hours, more text messages are sent out than there are people on the planet (6.7 billion). Unified Communications is also getting attention: the idea that all forms of communication, from email to texts to voice over IP (VoIP), can be managed centrally.
Smart phones and other mobile devices are changing the way people view laptops. Many business tasks can be handled by these smaller devices.
It costs more in energy to run an x86 server for three years than it costs to buy it. The idea of blade servers and componentization can help address that.
Mashups and Portals are an unrecognized opportunity. An example of a Mashup is mapping a list of real estate listings to Google Maps so that you can see all the listings arranged geographically.
Lastly, Cloud Computing will change the way people deliver IT services. Amusingly, the conference was playing "Both Sides Now" by Joni Mitchell, which has the [lyrics about clouds].
Unlike other conferences that clump all the keynotes at the beginning, this one spreads the "Keynote" sessions out across several days, so I will cover the rest over separate posts.
Eventually, there comes a time to drop support for older, outdated programs that don't meet the latest standards. I had several readers complain that they could not read my last post on Internet Explorer 6. The post reads fine on more modern browsers like Firefox 3 and even Google's Chrome browser, but not on IE6.
Google confirms that warnings are appearing:
[Official: YouTube to stop IE6 support].
My choice is to either stop embedding YouTube videos, some of which are created by my own marketing team specifically on my behalf, or drop support for IE6. I choose the latter. If you are still using IE6, please consider switching to Firefox 3 or Google Chrome instead.
Over on his Backup Blog, fellow blogger Scott Waterhouse from EMC has a post titled
[Backup Sucks: Reason #38]. Here is an excerpt:
Unfortunately, we have not been able to successfully leverage economies of scale in the world of backup and recovery. If it costs you $5 to backup a given amount of data, it probably costs you $50 to back up 10 times that amount of data, and $500 to back up 100 times that amount of data.
If anybody can figure out how to get costs down to $40 for 10 times the amount of data, and $300 for 100 times the amount of data, they will have an irrefutable advantage over anybody that has not been able to leverage economies of scale.
I suspect that where Scott mentions we in the above excerpt, he is referring to EMC in general, with products like
Legato. Fortunately, IBM has scalable backup solutions, using either a hardware approach, or one purely with software.
The hardware approach involves using deduplication hardware technology as the storage pool for IBM Tivoli Storage Manager (TSM). Using this approach, IBM Tivoli Storage Manager would receive data from dozens, hundreds or even thousands
of client nodes, and the backup copies would be sent to an IBM TS7650 ProtecTIER data deduplication appliance, IBM TS7650G gateway, or IBM N series with A-SIS. In most cases, companies have standardized on the operating systems and applications used on these nodes, and multiple copies of the same data reside across employee laptops. As a result, the more nodes you back up, the greater the economies of scale you can achieve.
Perhaps your budget isn't big enough to handle new hardware purchases at this time, in this economy. Have no fear,
IBM also offers deduplication built right into the IBM Tivoli Storage Manager v6 software itself, using a sequential-access disk storage pool. TSM scans and identifies duplicate chunks of data in the backup copies, as well as archive and HSM data, and reclaims the space when found.
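As a sketch of the idea (TSM's actual chunking, fingerprinting and storage-pool formats are internal details, so the function names and the fixed 4 KB chunk size here are illustrative):

```python
import hashlib

def dedup(data: bytes, chunk_size: int = 4096):
    """Split data into fixed-size chunks and keep only one copy of each
    unique chunk, the way a dedup-capable storage pool reclaims space."""
    store = {}    # fingerprint -> chunk bytes (each unique chunk stored once)
    recipe = []   # ordered fingerprints needed to reconstruct the data
    for i in range(0, len(data), chunk_size):
        chunk = data[i:i + chunk_size]
        fp = hashlib.sha256(chunk).hexdigest()
        if fp not in store:
            store[fp] = chunk
        recipe.append(fp)
    return store, recipe

def restore(store, recipe):
    # Reassemble the original byte stream from the stored unique chunks.
    return b"".join(store[fp] for fp in recipe)

# Two "backups" of identical data: only the unique chunks are stored.
data = b"A" * 8192 + b"B" * 4096
store, recipe = dedup(data + data)
print(len(recipe), len(store))  # 6 chunks referenced, but only 2 stored
```

The more client nodes share the same operating system and application files, the higher the ratio of referenced chunks to stored chunks, which is exactly where the economies of scale come from.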
If your company is using a backup software product that doesn't scale well, perhaps now is a good time to switch over to IBM Tivoli Storage Manager. TSM is perhaps the most scalable backup software product in the marketplace, giving IBM an "irrefutable advantage" over the competition.
Continuing my week in Chicago, for the IBM Storage Symposium 2008, I attended several sessions intended to answer the questions of the audience.
In an effort to be cute, the System x team has a "Meet the xPerts" session at their System x and BladeCenter Technical Conference, so the storage side decided to do the same. Traditionally, these have been called "Birds of a Feather", "Q&A Panel", or "Free-for-All". They allow anyone to throw out a question, and have the experts in the room, either
IBM, Business Partner or another client, answer the question from their experience.
Meet the Experts - Storage for z/OS environments
Here were some of the questions answered:
I've seen terms like "z/OS", "zSeries" and "System z" used interchangeably, can you help clarify what this particular session is about?
IBM's current mainframe servers are all named "System z", such as our System z9 or System z10. These replace the older zSeries models of hardware. z/OS is one of the six operating systems that run on this hardware platform. The other five are z/VM, z/VSE, z/TPF, Linux and OpenSolaris. The focus of this session will be storage attached and used for z/OS specifically, including discussions of Omegamon and DFSMS software products.
What can we do to reduce our MIPS-based software licensing costs from our third party vendors?
Consider using the IBM System z Integrated Information Processor (zIIP); work offloaded to these specialty engines does not count toward MIPS-based software charges.
What about 8 Gbps FICON?
IBM has already announced
[FICON Express8] host bus adapter (HBA) cards, which will auto-negotiate down to 4 Gbps and 2 Gbps speeds. If you don't need full 8 Gbps speed now, you can
still get the Express8 cards, but with 4/2/1 Gbps SFP ports instead. Currently, LongWave (LW) is only supported to 4 km at 8 Gbps speed.
I want to use Global Mirror for my DS8100 to my remote DS8100, but also make test copies of my production data to
an older ESS 800 I have locally. Any suggestions? Yes, consider using FlashCopy to simplify this process.
I have Global Mirror (GM) running now successfully with DSCLI, and now want to deploy IBM Tivoli Storage Productivity Center for Replication. Is that possible? Yes, Productivity Center for Replication will detect existing GM relationships, and start managing them.
I have already deployed HyperPAV and zHPF, is there any value in getting Solid-State Drives as well?
HyperPAV and zHPF impact CONN time, but SSD impacts DISC time, so they are mutually complementary.
How should I size my FlashCopy SE pool? SE refers to "Space Efficient", which stores only the changes
between the source and destination copies of each LUN or CKD volume involved. The general recommendation is to start with 20 percent and adjust accordingly.
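That rule of thumb is easy to capture in a one-line sizing helper (the 20 percent figure is only the recommended starting point from above; real change rates must be monitored and the repository adjusted):

```python
def flashcopy_se_repository_gb(source_gb, expected_change_rate=0.20):
    """Starting-point sizing for a FlashCopy SE repository, which holds
    only the data changed between source and target. The 20 percent
    default follows the general recommendation; adjust from observed
    change rates."""
    return source_gb * expected_change_rate

print(flashcopy_se_repository_gb(10_000))  # 2000.0 GB for a 10 TB source
```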
How many RAID ranks should I configure per DS8000 extent pool? IBM recommends 4 to 8 ranks per pool.
Meet the Experts: Storage for Linux, UNIX and Windows distributed systems
This session was focused on storage systems attached to distributed servers, as well as products from Tivoli used to manage them. Here were some of the questions answered:
When we migrated from Tivoli Storage Manager v5 to v6, we lost our favorite "Operational Reporting" tool. How can we get TOR back? You now get the new Tivoli Common Reporting tool.
How can we identify appropriate port distribution for multiple SVC node pairs for load balancing?
IBM Tivoli Storage Productivity Center v4.1 has hot-spot analysis with recommendations for Vdisk migrations.
We tried TotalStorage Productivity Center way back when, but the frequent upgrades were killing us. How has it been lately? It has been much more stable since v3.3, and completely renamed to Tivoli Storage Productivity Center to avoid association with versions 1 and 2 of the predecessor product. The new "lightweight agents" feature of v4.1 resolve many of the problems you were experiencing.
We have over 1600 SVC virtual disks, how do we handle this in IBM Tivoli Storage Productivity Center? Use the Filter capability in combination with clever naming conventions for your virtual disks.
How can we be clever when we are limited to only 15 characters? Ok. We understand.
We are currently using an SSPC with Windows 2003 and 2GB memory, but we are only using the Productivity Center for Replication feature of it. Can we move the DB2 database over to a Windows 2008 server with 4GB of memory?
Consider using the IBM Tivoli Storage Productivity Center for Replication software instead of SSPC for special
circumstances like this.
We love the XIV GUI, how soon will all other IBM storage products have it also? As with every acquisition,
IBM evaluates if there are technologies from new products that can be carried back to existing products.
We are currently using 12 ports on our existing XIV, and love it so much we plan to buy a second frame, but are concerned about consuming another 12 ports on our SAN switch. Any suggestions? Yes, use only six ports per frame. Just because you have more ports, doesn't mean you are required to use them.
We have heard there are concerns from the legal community about using deduplication technology, any ideas how to address that?
Nobody here in the room is a lawyer, and you should consult legal counsel for any particular situation.
None of the IBM offerings intended for non-erasable, non-rewriteable (NENR) data retention records (DR550, WORM tape, N series SnapLock) support dedupe today, and none of IBM's deduplication offerings (TS7650, N series A-SIS, TSM) make any claims of fitness for purpose for regulatory compliance storage. However, be assured that all of IBM's dedupe technology involves byte-for-byte comparisons so that you never lose any data due to false hash collisions. For all IBM compliance storage, what you write will be read back in the correct sequence of ones and zeros.
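The byte-for-byte safeguard is straightforward to illustrate. A sketch of the principle (not IBM's actual implementation): a hash match only nominates a candidate duplicate, and the bytes themselves make the final decision.

```python
import hashlib

def safe_is_duplicate(candidate: bytes, stored: bytes) -> bool:
    """Treat a chunk as a duplicate only if its bytes truly match.
    The hash comparison is the cheap first pass; the byte-for-byte
    comparison guarantees a false hash collision can never cause
    data loss."""
    if hashlib.sha256(candidate).digest() != hashlib.sha256(stored).digest():
        return False  # different fingerprints: certainly not a duplicate
    return candidate == stored  # verify bytes before discarding the new copy
```

Only when this returns True is it safe to discard the new copy and record a reference to the stored chunk instead.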
Continuing my week in Chicago, I decided to attend some of the presentations from the System x side. This is the advantage of running both conferences in the same hotel, attendees can choose how many of each they want to participate in.
Wayne Wigley, IBM Advanced Technical Support (ATS), gave a series of presentations on different server virtualization offerings available for System x and BladeCenter servers. I am very familiar with virtualization implemented on System z mainframes, as well as IBM's POWER systems, and have working knowledge of Linux KVM and Xen, so I was well prepared to hear the latest about Microsoft's Hyper-V and VMware's vSphere version 4.
Microsoft Hyper-V 2008
Hyper-V can run as part of Windows 2008, or standalone on its own. Different levels of Windows 2008 include licenses for different numbers of Windows virtual machines (VMs). Windows Server 2008 Standard includes 1 Windows VM, Enterprise includes 4 Windows VMs, and the Datacenter edition includes an unlimited number of Windows VMs. If you want to run more Windows VMs than come included, you need to pay extra for each additional one. For example, to run 10 Windows VMs on a 2-socket server would cost about $9000 US dollars on Standard but only $6000 US dollars on the Datacenter edition (list prices from the Microsoft Web site).
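The break-even arithmetic behind that comparison can be sketched as a rough model. The included-VM counts come from the talk; the per-license prices used here (about $1,000 per Standard license, about $3,000 per socket for Datacenter) are assumptions chosen only to approximate the quoted figures, not Microsoft's actual price list:

```python
def windows_vm_license_cost(vms: int, sockets: int, edition: str) -> int:
    """Rough licensing model for the Standard vs. Datacenter comparison.
    Prices are illustrative assumptions, not actual Microsoft list prices."""
    if edition == "standard":
        # Each ~$1,000 Standard license covers the host plus 1 Windows VM,
        # so every extra VM means buying another license.
        return 1_000 * max(1, vms)
    if edition == "datacenter":
        # Datacenter is priced per socket and includes unlimited Windows VMs.
        return 3_000 * sockets
    raise ValueError(f"unknown edition: {edition}")

print(windows_vm_license_cost(10, 2, "standard"))    # 10000, near the ~$9,000 quoted
print(windows_vm_license_cost(10, 2, "datacenter"))  # 6000
```

Once the VM count per server climbs past a handful, the per-socket Datacenter pricing wins, which is exactly the consolidation scenario virtualization encourages.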
Unlike VMware, which takes a monolithic approach as a hypervisor, Hyper-V is more like Xen with a microkernelized approach. This means you need a "parent" guest OS image, and the rest of the guest OS images are then considered "child" images. These child images can be various levels of Windows, from Windows XP Pro to Windows Server 2008, Xen-enabled Linux, or even a non-hypervisor-aware OS. The "parent" guest OS image provides networking and storage I/O services to these "child" images. For the hypervisor-aware versions of Windows and Linux, Hyper-V allows optimized access to the hypervisor, "synthetic devices", and hypercalls. Synthetic devices present themselves as network devices, but only serve to pass data along the VMBus to other networking resources. This process does not require software emulation, and therefore offers higher performance for virtual machines and lower host system overhead. For non-hypervisor-aware OS images, Hyper-V provides device emulation through the "parent" image, which is slower.
Microsoft System Center Virtual Machine Manager (SCVMM) can manage both Hyper-V and VMware VI3 images. Wayne showed various screen shots of the GUI available to manage Hyper-V images. In standalone mode, you lose the nice GUI and management console.
Hyper-V supports external, internal and private virtual LANs (VLAN). External means that VMs can communicate with the outside world over standard ethernet connections. Internal means that VMs can communicate with "parent" and "child" guest images on the same server only. Private means that only "child" guests can communicate with other "child" images.
Hyper-V supports disks attached via IDE, SATA, SCSI, SAS, FC, iSCSI, NFS and CIFS. One mode is "Virtual Hard Disk" (VHD), similar to VMware VMDK files. The other is "pass-through" mode, using actual disk LUNs accessed natively. VHDs can be dynamic (thin-provisioned), fixed (fully allocated), or differencing. The concept of differencing is interesting: you start with a base read-only VHD volume image, and a separate "delta" file contains the changes from the base image.
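The differencing idea can be sketched in a few lines (block granularity and the on-disk VHD format are simplified away here; this is the copy-on-write principle, not the actual format):

```python
class DifferencingDisk:
    """Sketch of a differencing VHD: reads come from a writable delta
    overlay when a block has been changed, otherwise from the
    read-only base image."""

    def __init__(self, base: dict):
        self.base = base    # read-only base image: block number -> data
        self.delta = {}     # this VM's changes, layered on top

    def write(self, block: int, data: bytes):
        self.delta[block] = data   # the base image is never modified

    def read(self, block: int) -> bytes:
        # Prefer the delta; fall back to the base; unwritten blocks are zeros.
        return self.delta.get(block, self.base.get(block, b"\x00"))

base = {0: b"boot", 1: b"os"}          # one golden image...
disk = DifferencingDisk(base)          # ...shared by many child VMs
disk.write(1, b"patched")
print(disk.read(0), disk.read(1))      # b'boot' b'patched'
```

Many child images can share one read-only base this way, with each paying storage only for its own deltas.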
Some of the key features of Hyper-V 2008 are:
Being able to run concurrently 32-bit and 64-bit versions of Linux and Windows guest images
Support for 64 GB of memory and 4-way symmetric multiprocessing (SMP) per VM
Clustering for High Availability and Quick Migration of VM images
Live backup with integration with Microsoft's Volume Shadow Copy Services (VSS)
Virtual LAN (VLAN) support, and Virtual and Pass-through physical disk support
A clever VMbus, virtual service parent/client approach to sharing hardware
Optimized performance options for hypervisor-aware versions of Windows and Linux, and emulated support for non-hypervisor-aware OS images.
VMware vSphere v4.0
This was titled as an "Overview" session, but really was an "Update" session on the newest features of this release. The big change appears to be that VMware added "v" in front of everything.
Under vCompute, there are some new features in VMware's Distributed Resource Scheduler (DRS), which includes recommended VM migrations. Dynamic Power Management (DPM) will move VMs during periods of low usage to consolidate onto fewer physical servers and reduce energy consumption.
Under vStorage, vSphere introduces an enhanced Pluggable Storage Architecture (PSA), with support for Storage Array Type Plugins (SATP) and Path Selection Plugins (PSP). This vStorage API allows for third-party plugins for improved fault tolerance and complex I/O load balancing algorithms. This release also has improved support for iSCSI, including Challenge-Handshake Authentication Protocol (CHAP) support. Similar to Hyper-V's dynamic VHD, VMware supports "thin provisioning" for their virtual disk VMDK files. A feature of "Storage VMotion" allows conversion between "thick" and "thin" provisioning formats.
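Thin provisioning itself is a simple idea, independent of VMware's actual VMDK format: the disk advertises its full virtual size up front, but consumes backing storage only for blocks that have actually been written. A sketch:

```python
class ThinDisk:
    """A thin-provisioned virtual disk: the full size is advertised,
    but backing blocks are allocated only on first write."""

    def __init__(self, virtual_blocks: int):
        self.virtual_blocks = virtual_blocks
        self.allocated = {}   # block number -> data, grown on demand

    def write(self, block: int, data: bytes):
        if not 0 <= block < self.virtual_blocks:
            raise IndexError(block)
        self.allocated[block] = data

    def read(self, block: int) -> bytes:
        # Unwritten blocks read back as zeros without consuming space.
        return self.allocated.get(block, b"\x00")

    def backing_blocks_used(self) -> int:
        return len(self.allocated)

disk = ThinDisk(1_000_000)         # a 1M-block virtual disk
disk.write(0, b"data")
print(disk.backing_blocks_used())  # 1 block backed out of 1,000,000 advertised
```

Converting "thin" to "thick" is then just materializing every unwritten block, which is conceptually what a Storage VMotion format conversion does.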
The vStorage API for Data Protection provides all the features of VMware Consolidated Backup (VCB). The API provides full, incremental and differential file-level backups for Windows and Linux guests, including support for snapshots and Volume Shadow Copy Services (VSS) quiescing.
VMware introduces direct I/O pass-through for both NIC and HBA devices. While this allows direct access to SAN-attached LUNs similar to Hyper-V, you lose a lot of features like VMotion, High Availability and Fault Tolerance. Wayne felt that these restrictions are temporary, and that hopefully VMware will resolve them over the next 12 months.
Under vNetwork, VMware has virtual LAN switches called vSwitches. This includes support for IPv6 and VLAN offloading.
The vSphere server can now run with up to 1TB of RAM and 64 logical CPUs to support up to 320 VM guest images. Each VM can have up to 255GB RAM and up to 8-way SMP. vSphere ESX 4 introduces a new virtual hardware platform called VM Hardware v7. While vSphere 4.0 can run VMs from ESX 2 and ESX 3, the problem is that if you have new VMs based on this newer VM Hardware v7, you cannot run them on older ESX versions.
vSphere comes in four editions: Standard, Advanced, Enterprise, and Enterprise Plus, ranging in list price from $795 US dollars to $3495 US dollars.
While IBM is the #1 reseller of VMware, we are also proud to support Hyper-V, Xen, KVM and other similar products. Analysts expect most companies will have two or more server virtualization solutions in their data center, and it is good to see that IBM supports them all.
Continuing my week in Chicago for the IBM Storage and Storage Networking Symposium and System x and BladeCenter Technical Conference, I presented a variety of topics.
Hybrid Storage for a Green Data Center
The cost of power and cooling has risen to become a #1 concern among data centers. I presented the following hybrid storage solutions that combine disk with tape. These provide the best of both worlds: the high-performance access time of disk with the lower costs and reduced energy consumption of tape.
IBM [System Storage DR550] - IBM's Non-erasable, Non-rewriteable (NENR) storage for archive and compliance data retention
IBM Grid Medical Archive Solution [GMAS] - IBM's multi-site grid storage for PACS applications and electronic medical records [EMR]
IBM Scale-out File Services [SoFS] - IBM's scalable NAS solution that combines a global name space with a clustered GPFS file system, serving as the ideal basis for IBM's own [Cloud Computing and Storage] offerings
Not only do these help reduce energy costs, they provide an overall lower total cost of ownership (TCO) than traditional WORM optical or disk-only storage configurations.
The Convergence of Networks - Understanding SAN, NAS and iSCSI in the Data Center Network
This turned out to be my most popular session. Many companies are at a crossroads in choosing data and storage networking solutions in light of recent announcements from IBM and others. In the span of 75 minutes, I covered:
Block storage concepts, storage virtualization and RAID levels
File system concepts, how file systems map files to block storage
Network Attached Storage, the history of the NFS and CIFS protocols, Pros and Cons of using NAS
Storage Area Networks, the history of SAN protocols including ESCON, FICON and FCP, Pros and Cons of using SAN
IP SAN technologies, iSCSI and Fibre Channel over Ethernet (FCoE), Pros and Cons of using this approach
Network Convergence with Infiniband and Fibre Channel over Convergence Enhanced Ethernet (FCoCEE), why Infiniband was not adopted historically in the marketplace as a storage protocol, and the features and enhancements of Convergence Enhanced Ethernet (CEE) needed to merge NAS, SAN and iSCSI traffic onto a single converged data center network [DCN]
Yes, it was a lot of information to cover, but I managed to get it done on time.
IBM Tivoli Storage Productivity Center version 4.1 Overview and Update
In conferences like these, there are two types of product-level presentations. An "Overview" explains how products work today to those who are not familiar with them. An "Update" explains what's new in this version of the product for those who are already familiar with previous releases. I decided to combine these into one session for IBM's new version of [Tivoli Storage Productivity Center]. I was one of the original lead architects of this product many years ago, and was able to share many personal experiences about its evolution in development and in the field at client facilities. Analysts have repeatedly rated IBM Productivity Center as one of the top Storage Resource Management (SRM) tools available in the marketplace.
Information Lifecycle Management (ILM) Overview
Can you believe I have been doing ILM since 1986? I was the lead architect for DFSMS, which provides ILM support for z/OS mainframes. In 2003-2005, I spent 18 months in the field performing ILM assessments for clients, and now there are dozens of IBM practitioners in Global Technology Services and STG Lab Services that do this full time. This is a topic I cover frequently at the IBM Executive Briefing Center [EBC], because it addresses several top business challenges:
Reducing costs and simplifying management
Improving efficiency of personnel and application workloads
Managing risks and regulatory compliance
IBM has a solution based on five "entry points". The advantage of this approach is that it allows our consultants to craft the right solution to meet the specific requirements of each client situation. These entry points are:
Tiered Information Infrastructure - we don't limit ourselves to just "Tiered Storage", as storage is only part of a complete [information infrastructure] of servers, networks and storage
Storage Optimization and Virtualization - including virtual disk, virtual tape and virtual file solutions
Process Enhancement and Automation - an important part of ILM is the policies and procedures, such as IT Infrastructure Library [ITIL] best practices
Archive and Retention - space management and data retention solutions for email, database and file systems
I did not get as many attendees as I had hoped for this last one, as I was competing head-to-head in the same time slot as Lee La Frese covering IBM's DS8000 performance with Solid State Disk (SSD) drives, John Sing covering Cloud Computing and Storage with SoFS, and Eric Kern covering IBM Cloudburst.
I am glad that I was able to make all of my presentations at the beginning of the week, so that I can then sit back and enjoy the rest of the sessions as a pure attendee.
This week I am in Chicago for the IBM Storage and Storage Networking Symposium, which coincides with the System x and BladeCenter Technical Conference. This allows the 800 attendees to attend either storage or server presentations at their convenience. There were hundreds of sessions across over 20 time slots, so for each time slot, you have 15 or so topics to choose from. Mike Kuhn kicked off the series of keynote sessions. Here's my quick recap of each one:
Curtis Tearte, General Manager, IBM System Storage
Curtis replaced Andy Monshaw as General Manager for IBM System Storage. His presentation focused on how storage fits into IBM's Dynamic Infrastructure strategy. Some interesting points:
A billion camera-enabled cell phones were sold in 2007, compared to 450 million in 2006.
IBM expects that there will be 2 billion internet users by 2011, as well as trillions of "things".
In the US, there were 2.2 million medical pharmacy dispensing errors resulting from handwritten prescriptions.
Time wasted looking for parking spaces in Los Angeles consumed 47,000 gallons of gasoline, and generated 730 tons of carbon dioxide.
In the US, 4.2 billion hours are lost, and 2.9 billion gallons of gas consumed, due to traffic congestion.
Over the past decade, servers went from 8 watts to 100 watts per $1000 US dollars.
Data growth appears immune to the economic recession. The digital footprint per person is expected to grow from 1TB today to over 15TB by 2020.
10 hours of YouTube videos are uploaded every minute.
Bank of China manages 380 million bank accounts, processing over 10,000 transactions per second.
At the end of the session, Curtis transitioned from demonstrating his knowledge of and passion for storage to his knowledge of and passion for his favorite sport: baseball. Chicago is home to both the Cubs and the White Sox.
Roland Hagan, Vice President Business Line Executive, System x
IBM sets the infrastructure agenda for the entire industry. The Dynamic Infrastructure initiative is not just IT, but a complete end-to-end view across all of the infrastructures in play, including transportation, manufacturing, services and facilities. Companies spent over $60 billion US dollars on servers last year. Of this, 53 percent went to x86-based servers, 9 percent to Itanium-based, 26 percent to RISC-based (POWER6, SPARC, etc.), and 11 percent to mainframes. The economic downturn has impacted revenues, but the percentages remain about the same.
The dominant deployment model remains one application per server. As a result, power, cooling and management costs have grown tremendously. There are system admins opposed to consolidating server images with VMware, Hyper-V, Xen or other server virtualization technologies. Roland referred to these admins as "server huggers". To help clients adopt cloud computing technologies, IBM introduced [Cloudburst] appliances. IBM plans to offer specialized versions for developers, for service providers, and for enterprises.
IBM's Enterprise-X Architecture is what differentiates IBM's x86-based servers from all the competitors, surrounding Intel and AMD processors with technology that provides distinct advantages. For example, to support server virtualization, IBM's eX4 provides support for more memory, which is often more critical than CPU resources when deploying large numbers of guest OS images. IBM System x servers have an integrated management module (IMM) and were the first to change over from BIOS to the new Unified Extensible Firmware Interface [UEFI] standard.
IBM servers offer double the performance, consume half the power, and cost a third less to manage than comparably priced servers from competitors. Of the top 20 most energy-efficient server deployments, 19 are from IBM. Roland cited customer reference SciNet, a 4,000-server supercomputer with 30,000 cores based on IBM [iDataPlex] servers. At 350 TeraFLOPs, it is ranked the #16 fastest supercomputer in the world, and #1 in Canada. With a power usage effectiveness (PUE) of less than 1.2, it also is very energy efficient. This means that for every 12 watts of electricity going into the data center, 10 watts are used for servers, storage and networking gear, and only 2 watts are used for power and cooling. Traditional data centers have a PUE around 2.5, consuming 25 watts total for every 10 watts used by servers, storage and networking gear.
Clod Barrera, Distinguished Engineer, Chief Technical Strategist for IBM System Storage
Clod presented trends and directions for disk and tape technology, disk and tape systems, and the direction towards cloud computing.
Ideally, every airline would use the most experienced professional pilots money could buy, but some airlines, in an effort to compete on ticket price, may elect instead to hire less experienced pilots. Here's a great excerpt:
Airline history lesson 101: It used to be, up until the mid 1980’s, that a young pilot would be hired on at a major carrier, become a flight engineer (FE), and then spend a few years managing the systems of the older-generation airplanes. But he or she was learning all the while. These new “pilots” sat in the FE seat and did their job, all the while observing the “pilots” doing the flying, day in and day out.
The FE’s learned from the seasoned pilots about the real world of flying into the Chicago O’Hares and New York LaGuardias. They learned decision making, delegation, and the reality of “captain’s final authority” as confirmed in the law. When they got the chance to upgrade, they became a copilot. The copilot’s duty was to assist the captain in flying; but even during their time as the new copilot, they had the luxury of the FE looking over their shoulders — i.e., more learning. This three-man-crew concept, now a fond memory in the domestic markets but used predominately in international flying, was considered one more layer of protection. But it’s gone.
To become the public speaker I am today, IBM put me through a variety of speaking classes. I taught high school and college classes to practice in front of groups. But most importantly, I traveled with seasoned colleagues and watched them in action from the front row. I learned how to handle tough questions, how to react to hecklers causing trouble, and how to deal with the unexpected before, during and after each presentation. In addition to speaking skills, I ended up having to learn travel skills, foreign language skills, and a variety of cultural and social skills. All part of the job in my line of work.
Likewise, being a storage administrator is an important job, and for some data centers, not something to give lightly to a fresh college graduate. Unless they have had formal IT Infrastructure Library [ITIL] certification coursework, I doubt they would understand the processes and disciplines demanded by the typical data center. I have been to accounts where new hires are not allowed to touch production systems for the first two years. Instead they watch the seasoned professionals do their jobs, and are given access only to "sand box" systems that are used for application testing or Quality Assurance (QA). Sadly, I have also been to other accounts where people with no storage experience whatsoever were tossed into the admin pool and let loose with superuser passwords, all in an effort to save money during times of exponential data growth rates, only to pay the price later with outages or lost data.
The parallels between the airline industry and the IT industry are eerie.
As I mentioned in my post [Moving Over to MyDeveloperWorks], those of us bloggers on IBM's DeveloperWorks are moving over to a new system called "MyDeveloperWorks" which has a host of new features.
Fortunately for me, I missed the note asking for volunteers to be among the first bloggers on the block to move over. I was traveling and decided not to deal with it until I got back. However, fellow IBM Master Inventor Barry Whyte was not so lucky. It is safe to say he was stupid enough to volunteer, and is probably regretting the decision every day since. In case you lost his RSS feed, or can't find him anymore on Google or whatever search engine, here is his [new blog].
As for my blog, I have asked to postpone the move until all the problems that Barry has encountered are resolved. That might be a while, but if you lose access to mine sometime in the near future, at least you have been warned as to what might have happened.
Jon Toigo has a funny cartoon on his post, [As I Listen to EMC Brag on “New” Functionality…]. Basically, it pokes fun that many of us bloggers argue which vendor was first to introduce some technology or another. We all do this, myself included.
Recently, Claus Mikkelsen, currently with HDS, [brought up accurately some past history from the 1990s], which predates many storage bloggers hiring on with their current employers. Claus and I worked together at IBM back then, so I recognized many of the events he mentions, including some that I can't talk about either. In many cases, IBM or HDS delivered new features before EMC.
I've been reading with some amusement as fellow blogger Barry Burke asked Claus a series of questions about Hitachi's latest High Availability Manager (HAM) feature. Claus was too busy with his "day job" and chose to shut Barry down. Sadly, HDS set themselves up for ridicule this round, first by over-hyping a function before its announcement, and then announcing a feature that IBM and EMC have offered for a while. The problem and confusion for many is that each vendor uses different terminology. Hitachi's HAM is similar to IBM's HyperSwap and EMC's AutoSwap. The implementations are different, of course, which is why vendors are often asked to compare and contrast one implementation to another.
In his latest response, [how to mind the future of a mission-critical world], Barry reports that several HDS bloggers now censor his comments. That's too bad. I don't censor comments, within reason, including Barry's inane questions about IBM's products, and am glad that he does not censor my inane questions to him about EMC products in return. The entire blogosphere benefits from these exchanges, even if they are a bit heated sometimes.
We all have day jobs, and often are just too busy, or too lazy, to read dozens or hundreds of pages of materials, if we can even find them in the first place. Not everyone has the luxury of a "competitive marketing" team to help do the research for you, so if we can get an accurate answer or clarification about a product that is generally available directly from a vendor's subject matter expert, I am all for that.
This week, I have been presenting how to do important things without travel. Of course, there are times where you need some boots on the ground, while your support team remains remote.
Last month, co-worker Liz Goodman reached out to me. She was part of a ten-person team that went to Tanzania as part of IBM's [Corporate Service Corps]. Other teams went to Brazil, China, Ghana, Romania, South Africa, The Philippines, Turkey and Vietnam. (I've been to half these other countries, but the closest I have ever been to Tanzania was a safari I took in Kenya that included the Masai Mara national park, which runs along the border with Tanzania's Serengeti national park.)
Liz was one of the lucky [200 candidates chosen from over 5,000 applications] IBM reviews each year for this program. IBM does business in over 170 countries, so learning to work in or with emerging growth markets requires a bit of "cultural intelligence". Liz and three others worked with the University of Dodoma [UDOM] to lead some students in adopting a [Moodle] infrastructure based on the Linux, Apache, PHP and MySQL [LAMP] platform. She noticed that I had experience with both Moodle and LAMP from [my work with OLPC], and reached out to me for help. I was able to provide some insight, things to watch out for, and advice on how to tackle not just the technical challenges, but a few that many don't consider:
Educational content. Digitizing materials already available in hardcopy, or obtaining digital rights to existing content.
Business Process. Getting the teachers and students to adopt new process and procedures enabled by these new capabilities.
Project Management. Fortunately, Liz is already [PMP-certified], and knows well the importance of managing even a small 4-person, 4-week project like this.
How well did her team do? Liz blogged before, during and after her trip. Read all about it on her blog [Liz Goes To Tanzania]!
Jim Stallings, IBM General Manager for Global Markets, will explain why a smarter planet needs a dynamic infrastructure. I used to work for Jim, when he was in charge of the IBM Linux initiative and I was on the Linux for S/390 mainframe team.
Erich Clementi, IBM Vice President, Strategy & General Manager Enterprise Initiatives, will explain how to best leverage opportunities with cloud computing.
Steve Forbes, Chairman and CEO of Forbes Inc. and Editor-in-Chief of Forbes Magazine, will present Global Outlooks and the Challenge of Change.
Rich Lechner, IBM Vice President, Energy & Environment, will explain the importance of Building an Energy-Efficient Dynamic Infrastructure. I also worked for Rich, back when he was the VP of Marketing for IBM System Storage, and I was back then the "Technical Evangelist". See my post [The Art of Evangelism] to better understand why I don't carry that title anymore.
In addition to these presentations, you will be able to "walk" around to different booths, have on-line chats with subject matter experts, and download resources. Don't worry, this is not based on [Second Life], but rather uses On24's much simpler visual interface. Of course, you can follow on [Twitter] or join the fan club at [Facebook].
This is a worldwide event, with translated resource materials and on-line subject matter experts in six different languages (English, French, Italian, German, Mandarin and Japanese). Those in North, Central and South America can participate June 23, and those in Europe, Asia and the rest of the world on June 24. [Register Today] and mark your calendars!
Continuing this week's theme of doing important things without leaving town, I present our results for an exciting project I started earlier this year.
For seven weeks, my coworker Mark Haye and I voluntarily led a class of students here in Tucson, Arizona in an after-school pilot project to teach the ["C" programming language] using [LEGO® Mindstorms® NXT robots]. The ten students, boys and girls ages 9 to 14, were already part of the FIRST [For Inspiration and Recognition of Science and Technology] program, and participated in FIRST Lego League [FLL] robot competitions. The students were already familiar with building robots and programming them with a simple graphical system of connecting blocks that perform actions. However, to compete in the next level of robot competitions, the FIRST Tech Challenge [FTC], we needed to leave this simple graphical programming behind and upgrade to more precise "C" programming.
Mark is a software engineer for IBM Tivoli Storage Manager and has participated in FLL competitions over the past nine years. This week, he celebrates his 25th anniversary at IBM, and I celebrate my 23rd. The teacher, Ms. Ackerman, and the students referred to us as "Coach Mark" and "Coach Tony".
This was the first time I had worked with LEGO NXT robots. For those not familiar with these robots, you can purchase a kit at your local toy store. In addition to regular LEGO bricks, beams, and plates, there are motors, wheels, and sensors. A programmable NXT brick has three outputs (marked A, B, and C) to control three motors, and four inputs (marked 1, 2, 3, 4) to receive values from sensors. Programs are written and compiled on laptops and then downloaded to the NXT programmable brick through a USB cable, or wirelessly via Bluetooth.
In the picture shown, an image of the Mars planetary surface is divided into a grid with thick black lines. A light sensor between the front two wheels of the robot is over the black line.
We used the [RobotC programming firmware] and integrated development environment (IDE) from [Carnegie Mellon University]. The idea of this pilot was to see how well the students could learn "C". With only a few hours after class each Wednesday, could we teach young students "C" programming in just seven weeks?
My contribution? I have taught both high school and college classes, and spent over 15 years programming for IBM, so Mark asked me to help. We started with a basic lesson plan:
A brief history of the "C" language
Understanding statements and syntax
Setting motor speed and direction
Compiling and downloading your first program
Understanding the "while" loop
Retrieving input sensor values
Understanding the "if-then-else" statement
Defining variables with different data types
Manipulating string variables
Writing a program for the robot to track along a black line on a white background.
Understanding local versus global scope variables
Writing a program for a robot to count black lines as it crosses them.
Performing left turns and right turns, and crossing a specific number of lines on a grid pattern to move the robot to a specific location.
Weeks 6 and 7
Mission Impossible: come up with a challenge to make the robot do something that would be difficult to accomplish using the previous NXT visual programming language.
At the completion of these seven weeks, I sat down to interview "Coach Mark" on his thoughts on this pilot project.
Why teach the "C" language?
This is a practical programming skill. The "C" language is used throughout the world to program everything from embedded systems to operating systems, and even storage software. It would also allow the robots to handle more precise movements, more accurate turns, and more complicated missions.
Can kids learn "C" in only seven weeks?
Part of the pilot project was to see how well the students could understand the material. They were already familiar with building the robots, and understood the basics of programming sensors and motors, so we were hoping this was a good foundation to work from. Some kids managed very well, others struggled.
Did everything go according to plan?
The first two weeks went well, turning on motors and having robots move forward and backward were easy enough. We seemed to lose a few students on week 3, and things got worse from there. However, several of the students truly surprised us and managed to implement very complicated missions. We were quite pleased with the results.
What kind of problems did the kids encounter?
The touch sensor required polling loops to wait for a press. Motors did not necessarily turn as expected until more advanced methods were used. Making accurate 90-degree left and right turns was more difficult than expected.
Any funny surprises?
Yes, we had a Challenge Map representing the Mars planetary surface from a previous FLL competition; it was dark red and divided into squares with thick black lines. An active light sensor returns a value from "0" (complete darkness) to "100" (bright white). However, the Mars surface had craters that were dark enough to be misinterpreted as a black line, causing some unusual results. This required some enhanced programming techniques to resolve.
Did robots help or hurt the teaching process?
I think they helped. Rather than writing programs that just display "Hello World!" on a computer screen, the students can actually see robots move, and either do what they expect, or not!
And when the robots didn't do what was expected?
The students got into "debug" mode. They were already used to doing this from previous FLL competitions, but with RobotC, you can leave the USB cable connected (or use wireless Bluetooth) and actually gather debugging information while the robot is running, to see the value of sensors and other variables and help determine why things are not working properly.
Any applicability to the real world of storage?
We have robots in the IBM System Storage TS3500 tape library. These robots scan bar code labels, pull tapes out of shelves, and mount them into drives. The programming skills are the same as those needed for storage software, such as IBM Tivoli Storage Manager or IBM Tivoli Storage Productivity Center.
The world is becoming smarter, instrumented with sensors, interconnected over a common network, and intelligent enough to react and respond correctly. The lessons of reading sensor values and moving motors can be considered the first step in solutions that help to make a smarter planet.
Spend twenty hours a week running a project for a non-profit.
Teach yourself Java, HTML, Flash, PHP and SQL. Not a little, but mastery. [Clarification: I know you can't become a master programmer of all these in a year. I used the word mastery to distinguish it from 'familiarity' which is what you get from one of those Dummies type books. I would hope you could write code that solves problems, works and is reasonably clear, not that you can program well enough to work for Joel Spolsky. Sorry if I ruffled feathers.]
Volunteer to coach or assistant coach a kids sports team.
Start, run and grow an online community.
Give a speech a week to local organizations.
Write a regular newsletter or blog about an industry you care about.
Learn a foreign language fluently.
Write three detailed business plans for projects in the industry you care about.
Self-publish a book.
Run a marathon.
In 2007, 51 percent of graduating college students could find jobs in their field, and this year it has dropped to only 20 percent. If you find yourself with some time on your hands, either recently graduated or recently unemployed, consider volunteerism. Last year, I chose to donate my time and money to an innovative project called "One Laptop per Child" [OLPC]. It was one of my [New Years Resolutions] for 2008. I was actually "recruited" by folks from the OLPC after they read my [series of blog posts] on things that can be done with their now famous green-and-white XO laptop.
The first half of the year, I spent helping "Open Learning Exchange Nepal" [OLE Nepal], a non-government organization (NGO) working to improve education in that country. XO laptops were provided to second and sixth graders at several schools, and my assignment was to help with the school "XS" server. This would be the server that all the laptops connect to. My blog posts on this included:
Rather than [Move to Nepal], I was able to help by building an identical XS server in Tucson and providing support remotely. This included getting the "Mesh Antennas" to be properly recognized, setting up an internet filter using [DansGuardian] software, and working out backup procedures.
For the second half of the year, I was asked to mentor a college student in Hyderabad, India as part of the ["Google Summer of Code"] to develop an [Educational Blogger System] on the XS server. We called it "EduBlog" and based it on the popular [Moodle] educational software platform. This was going to be tested with kids from Uruguay, but sending a server down to this country proved politically challenging, so instead, I [built a server and shipped it] to a co-location facility in Pennsylvania that agreed to donate the cost and expenses needed to run the server there with a full internet connection. I acted as "system admin" for the box and was able to connect remotely via SSH, while Tarun, the college student I was mentoring, developed the EduBlog software. Twice the system was hacked, but I was able to restore it remotely thanks to a multi-boot configuration that allowed me to reboot to a read-only operating system image and restore the operating system and data.
The students and teachers in Uruguay were helped locally by [Proyecto Ceibal]. We were able to translate the system into Spanish, and the project was a big success, enough to convince the local government to provide XO laptops to their students to further the benefits.
I get a lot of suggestions for what to put on my blog. I realize that tweets are limited to 140 characters, so pointing to a video URL without much explanation or warning can be dangerous. An email can at least add appropriate warnings, such as NSFW (Not Safe For Work) or "sorry if this offends you". The only warning I got for a video posted to YouTube by "StorageNetworkDud" was this short email:
"Sorry about the language they have used in some translations, but not sure who put this. It was on twitter."
Fortunately, I have my browser set up not to automatically play YouTube videos. The title helped warn me of the content, which turned out to be a [fan-subbed] scene from a World War II movie with the brown-shirted tyrannical leader of an evil empire talking to his top generals. He dismisses all but three with "Hollis, Burke, and Twomey stay in here", followed by a lengthy recap of EMC's recent troubles in the marketplace. At least in the video, the fuhrer correctly follows Tim Sanders' advice: "if you have to tell someone bad news, say it in person."
While I understand that many people don't like EMC, the #3 storage vendor in the world, this type of "geek humor" hits a new low. The video was posted over a month ago, but in light of the recent [shooting in Washington DC], I felt it was just not appropriate to post it here.
Readers, I appreciate all the suggestions, but give me some better warning next time!
This week I am in Minneapolis, MN, so was hoping that the complicated process of moving this blog over to "MyDeveloperWorks" would happen while I was gone, but alas, that does not appear to be the case.
Meanwhile, my partner in crime, Barry Whyte, has successfully moved his blog [Storage Virtualization] over to the new server.
Perhaps next week. If all goes well, the URL links should redirect correctly, but those of you using feed readers might need to re-subscribe to get the right RSS feeds.
Continuing my blog coverage of the [Forrester IT Forum 2009 conference], I will group a bunch of topics related to Cloud Computing into one post. Cloud Computing was a big topic here at the IT Forum, and probably was also at the other two conferences IBM participated in this week in Las Vegas.
The CIOs and IT professionals at this Forrester IT Forum seemed to be IT decision makers with a broader view. There was a lot of interest in Cloud Computing. What is Cloud Computing? Basically, it is renting IT capability on an as-needed basis from a computing service provider. The different levels of cloud computing depend on what the computing service provider actually provides. How do these compare with traditional co-location facilities or your own in-house on-premises computing? Here's my handy-dandy quick-reference guide:
Cloud Software-as-a-Service [SaaS], Examples: SalesForce and Google Apps.
Cloud Infrastructure-as-a-Service [IaaS], such as Amazon EC2, RackSpace.
Traditional Co-Location facility, where you park your equipment on rented floor space, with power, cooling and bandwidth provided.
Traditional On-Premises, what most people do today, build or buy your own data center, buy the hardware, write or buy the software, then install and manage it.
A main tent session had a moderated Q&A panel of three Forrester Analysts titled "Saving, Making and Risking Cash with Cloud Computing." Here are some key points from this panel:
Is Cloud Computing just another tool in the IT toolbox, or does it represent a revolution? The panel gave arguments for both. As a set of technology, protocols and standards, it is an evolutionary progression of other standards already in place, and an extension of methods used in co-location and time-share facilities. However, from a business model perspective, Cloud Computing represents a revolutionary trend, eliminating in some cases huge up-front capital expenses and/or long-term outsourcing contracts. PaaS and IaaS offerings can be rented by the hour, for example.
An example of using Cloud Computing for a one-time batch job: The New York Times decided to build an archive of 11 million articles, but this meant having to convert them all from TIFF to PDF format. The IT person they put in charge of this rented 100 machines on [Amazon Elastic Compute Cloud (EC2)] for 24 hours and was able to convert all 4TB of data for only $240 US dollars.
Cloud Computing can make it easier for companies to share information with clients, suppliers and business partners, eliminating the need to punch holes through firewalls to provide access.
Since it is relatively cheap for companies to try out different cloud computing offerings with little or no capital investment, the spaghetti model applies--"throw it on the wall, and see what sticks!"
What application areas should you consider running in the cloud? Employee self-service portals: Yes. ERP: Mixed. One-time batch jobs: Mixed. Email: Yes. Access Control: No. Web 2.0: Mixed. Testing/QA: Mixed. Back Office Transactions: No. Disaster Recovery: Mixed.
Different IT roles will see varying benefits and risks with cloud computing. However, by 2011, every new IT project must answer the question "Why not run in the cloud?"
There were a variety of track sessions that explored different aspects of cloud computing:
Software-as-a-Service: When and Why
This session had three Forrester analysts in a Q&A panel format. SaaS can provide much-needed relief from application support, maintenance and upgrade chores. The choice and depth of offerings from SaaS providers is improving. However, comparing TCO between SaaS and on-premises deployments can yield different results for different use cases. For example, a typical SaaS rate of $100 US dollars per user per month might, with discounts, come to $1,000 per year, or $10,000 over a 10-year period. Compare that to the total 10-year costs of an on-premises deployment, and you have a good ball-park comparison. SaaS can provide faster time-to-value, and you can easily try-before-you-buy several alternative offerings before making a decision.
The downside to SaaS is that you need to understand the provider's data center: where it is located, and how it is protected for backup and disaster recovery. Some SaaS providers have only a single data center, so it might be disruptive if it experiences a regional disaster.
Cloud IT Services: The Next Big Thing or Just Marketing Vapor?
Economic pressures are forcing companies to explore alternatives, and Cloud IT services are providing additional options over traditional outsourcing. Only 70-80 percent of companies are satisfied with traditional outsourcing, so there is opportunity for Cloud IT services to address those not satisfied. Scalable, consumption-based billing with Web-based accessibility and flexibility is an attractive proposition. Ten years ago, you could not buy an hour on a mainframe with your credit card; now you can.
Cloud technologies are mature, and there is interest in using these services. About 10 percent of companies are piloting SaaS offerings, 16 percent piloting PaaS offerings, and 13 percent investing in deploying "private clouds" within their data center. This week Aneesh Chopra, who is Barack Obama's pick as the first CTO for the US Federal Government, [stated to congressional leaders]: “The federal government should be exploring greater use of cloud computing where appropriate.”
IBM is betting heavily on their Cloud Computing strategy, has already gone through the reorganizations needed to be positioned well, and claims to have thousands of clients already. HP has some cloud offerings focused on their enterprise customers. Dell is investing and reorganizing for cloud as well.
Network Strategic Planning for Challenging Times
While not limited to Cloud Computing, companies are seeing WAN traffic double every 18 months, but without corresponding increases in budget to cover it. The Forrester analyst covered WAN optimization, managed services, and hybrid Ethernet-MPLS offerings to help people transition from MPLS VPNs to Carrier-grade Ethernet.
Who should you hire for WAN optimization? Do you trust the Telco that provides your bandwidth to help you figure out ways to use less of it? Alternatives include system integrators and service providers like IBM and EDS. Or, you could try to do it yourself, but this requires capital investment in gear and performance-monitoring software.
New workloads like Voice over IP (VoIP) and digital surveillance can help cost-justify upgrading your MPLS VPNs to Carrier-grade Ethernet. Converging this with iSCSI and/or Fibre Channel over Ethernet (FCoE) traffic can help reduce costs as well. Both MPLS and Ethernet will co-exist for a while, and hybrid offerings from Telcos will help ease the transition. In the meantime, switching some workloads to Cloud Computing can provide immediate relief to in-house networks now. Converging voice, video, LAN, WAN and SAN traffic may require IT departments to rethink how the role of "network administrator" is handled.
Navigating the Myriad New Sourcing Models
The landscape of outsourcing has changed with the introduction of new Cloud Computing offerings. However, adapting these new offerings to internal preferences may prove challenging. The Forrester analyst suggested being ready to influence your company to adopt Cloud Computing as a new sourcing option.
Traditional outsourcing just manages your existing hardware and software, often referred to as "Your mess for less!" However, outsourcing contract law is mature, and many outsource providers are large, well-established companies. In contrast, some SaaS providers are small, and the few that are large may be fairly new to the outsourcing business. Here are some things to consider:
Where will the data physically be located? There are government regulations, such as the US Patriot Act, that can influence this decision. Many Canadian and European customers are avoiding providers where data is stored in the United States for this reason.
What is the service delivery chain? Some cloud providers in turn use other cloud providers. For example, a SaaS provider might develop the software and then rent the platform it runs on from a PaaS, which in turn might be using offshore or co-location facilities to actually house its equipment. Knowing the service delivery chain may prove important in contract negotiations. Clarify "cloud" terminology and avoid mixed metaphors.
What is their contingency plan? What is your contingency plan if the system is slow or inaccessible? What is their plan to protect against data loss during disasters? What if they go out of business? Source Code Escrow has proven impractical in many cases. SLAs should provide for performance, availability and other key metrics. However, service level penalties are not a cure-all for major disruptions, loss of revenues or damage to reputation.
How will they handle security, compliance and audits? Heavy regulatory requirements may favor dedicated resources to be used.
Who has "custodianship" of the data? Will you get the data back if you discontinue the contract? If so, what format will it be in, and will it make any sense if you are not running the same application as the cloud provider?
Will they provide transition assistance? Moving from on-premises to cloud may involve some effort, including re-training of end users.
Are the resources shared or dedicated? For shared resource environments, is the capacity "fenced off" in any way to prevent other clients from impacting your performance or availability?
I am glad to see so much interest in Cloud Computing. To learn more, here is IBM's [Cloud Computing] landing page.
Forrester analysts kicked off the keynote sessions for Day 1 of the Forrester IT Forum 2009 event. The theme for this conference is "Redefining IT's value to the Enterprise." Rather than focusing on blue-sky futures that are decades away, Forrester wants to present a blend of pragmatic information that is actionable in the next 90 days along with some forward-looking trends.
If you ask CEOs how well their IT operations are doing, 75 percent will say they are doing great. However, if you dig down and ask how their companies are leveraging IT to help generate revenues, reduce costs, improve employee morale, drive profits, improve customer service, or manage risks, then the percentage drops to 30 to 35 percent.
What are the root causes of this "perception gap" in value between business and IT? Several ideas come to mind:
Some CEOs still consider IT departments as "cost centers". Rather than exploiting technology to help drive the rest of the business, they are seen as a necessary evil, an extension of the accounting department, for example.
Some CEOs consider IT's role as basically "keeping the lights on". They only notice IT when the lights go out, or when business outages are caused by disruptions in IT.
IT departments measure themselves in technology terms, not business terms. CEOs and the rest of the senior management team may not be "tech savvy", and the CIO and IT directors may not be "business savvy", resulting in failure to communicate IT's role and value to the rest of the business.
This conference is focused on CIOs and IT professionals, and how they can bridge the tech/business gap. The first two executive keynote presentations emphasized this point.
Bob Moffat, Senior VP and Group Executive, IBM
Bob Moffat (my fifth-line manager, or if you prefer, my boss's boss's boss's boss's boss) is the Senior VP and Group Executive of IBM's Systems and Technology Group that manufactures storage and other hardware. He presented how IBM is helping our clients deploy smarter solutions. Globalization has changed world business markets, has changed the reach of information technology, and has changed our clients' needs. To support that, IBM is focused on making the world a smarter planet, instrumented with appropriate sensors, interconnected over converging networks, and intelligent to provide visibility, control and automation.
It's time to rethink IT in light of these new developments, to think about IT in client terms, with business metrics. Bob gave several internal and customer examples, here's one from the City of Stockholm:
Covering nine square miles of Stockholm, Sweden, IBM led [the largest project of its kind] in Europe to reduce traffic congestion. To reduce congestion caused by 300,000 vehicles, the City of Stockholm enacted a "congestion fee" with real-time recognition of license plates and a Web infrastructure to collect payments. The analytics, metrics and incentives have paid off. Since August 2007, traffic has been reduced 18 percent, travel time on inner streets has dropped, and use of "green" vehicles has increased 9 percent.
In addition to smarter traffic, IBM has initiatives for smarter water, smarter energy, smarter healthcare, smarter supply chain, and smarter food supply.
Dave Barnes, Senior VP and CIO, United Parcel Service (UPS)
Dave Barnes must act as the "trusted advisor" to the rest of the senior management team. UPS delivers packages worldwide. They put sensors on all of the vehicles, not just to know how fast they were driving, but also how often they drove in reverse gear, and sensors on the engines to determine maintenance schedules. Analytics found that driving in reverse was the most dangerous, and by providing this information to the drivers themselves, the drivers were able to come up with their own innovative ways to minimize accidents. This is one role of IT: to provide employees the information they need to be better at their own jobs.
Dave also mentioned the importance of collaborating across business units. Their "Information Technology Steering Committee (ITSC)" has 15 members, of which only three are from the IT department. This helped deploy social media initiatives within UPS. For example, Twitter has been adopted so that senior management can get unfiltered customer feedback. This is perhaps another key role of IT, to flatten an organization from cultural hierarchies that prevent top brass up in the ivory tower from hearing what is going wrong down on the street. Too often, a customer or client complains to the nearest employee, and this may or may not get passed up accurately along the chain of command. Twitter allowed executives to see what was going on for themselves.
Dave also covered the "Best Neighbor" approach. If you were going to build a deck in your back yard, you might ask neighbors who have already done this, and learn from their experience. Sadly, this does not happen enough in IT. To address this, UPS has a "Tech Governance Group" focused on business process across the organization. For example, they improved "package flow", reducing miles driven by 100 million over the past few years.
Lastly, he mentioned that many technologists are "loners". UPS has a few like that, but tries to hire techies who look to team across business units instead. Likewise, they try to hire business people who are somewhat tech savvy. For example, they have encouraged business employees to write their own reports, rather than requesting new reports to be developed by the IT department. The end result: business people get exactly the reports they want, faster than waiting for IT to do it. Another role for IT is to provide end-users the tools to make their own reports.
(Dave didn't mention what tools these were, but it sounded like the Business Intelligence and Reporting Tools [BIRT] that IBM uses.)
These two sessions were a great one-two punch to the audience of 600 CIOs and IT professionals. First, IBM sets the groundwork for what needs to be done. Then, UPS shows how they did exactly that, adopting a dynamic infrastructure and got great results. This is going to be an interesting week!
Recently, IBM and the University of Texas Medical Branch (UTMB) [launched an effort] using IBM's World Community Grid "virtual supercomputer" to allow laboratory tests on drug candidates for drug-resistant influenza strains and new strains, such as H1N1 (aka "swine flu"), in less than a month.
Researchers at the University of Texas Medical Branch will use [World Community Grid] to identify the chemical compounds most likely to stop the spread of the influenza viruses and begin testing these under laboratory conditions. The computational work adds up to thousands of years of computer time which will be compressed into just months using World Community Grid. As many as 10 percent of the drug candidates identified by calculations on World Community Grid are likely to show antiviral activity in the laboratory and move to further testing.
According to the researchers, without access to World Community Grid's virtual supercomputing power, the search for drug candidates would take a prohibitive amount of time and laboratory testing.
This reminded me of an 18-minute video of Larry Brilliant at the 2006 Technology, Entertainment and Design [TED] conference. Back in 2006, Larry predicted a pandemic in the next three years, and here it is 2009 and we have the H1N1 virus.
His argument was to have "early detection" and "early response" to contain worldwide diseases like this.
A few months after Larry's "call to action" in 2006, IBM and over twenty major worldwide public health institutions, including the World Health Organization [WHO] and the Centers for Disease Control and Prevention [CDC], [announced the Global Pandemic Initiative], a collaborative effort to help stem the spread of infectious diseases.
One might think that with our proximity to Mexico, the first cases would have been in the border states, such as Arizona, but instead there were cases as far away as New York and Florida. The NYT explains in an article, [Predicting Flu With the Aid of (George) Washington], that two rival universities, Northwestern University and Indiana University, both predicted there would be about 2500 cases in the United States, based on air traffic control flight patterns and tracking data from a Web site called ["Where's George"], which tracks the movement of US dollar bills stamped with the Web site URL.
The estimates were fairly close. According to the Centers for Disease Control and Prevention [H1N1 Flu virus tracking page], there are currently 3009 cases of H1N1 in 45 states, as of this writing.
This is just another example on how an information infrastructure, used properly to provide insight, make predictions, and analyze potential cures, can help the world be a smarter planet. Fortunately, IBM is leading the way.
Wrapping up this week's theme on Cloud Computing, I finish with an IBM announcement for two new products to help clients build private cloud environments from their existing Service Oriented Architecture (SOA) deployments.
IBM WebSphere CloudBurst Appliance -- a new hardware appliance that provides access to software virtual images and patterns that can be used as is or easily customized, and then securely deployed, managed and maintained in a private cloud.
IBM WebSphere Application Server Hypervisor Edition -- a version of IBM WebSphere Application Server software optimized to run in virtualized server environments such as VMware, which comes preloaded in WebSphere CloudBurst.
With more than 7,000 customer implementations worldwide, IBM is the SOA market leader. Of course, both of these products above can be used with IBM System Storage solutions, including Cloud-Optimized Storage offerings like Grid Medical Archive Solution (GMAS), Grid Access Manager software, Scale-Out File Services (SoFS), and the IBM XIV disk system.
IBM is part of the "Cloud Computing 5" major vendors pushing the envelope (the other four are Google, Microsoft, Amazon and Yahoo). In fact, IBM has a number of initiatives that allow customers to leverage IBM software in a cloud. IBM is working in collaboration with Amazon Web Services (AWS), a subsidiary of Amazon.com, Inc. to make IBM software available in the Amazon Elastic Compute Cloud (Amazon EC2). WebSphere sMash, Informix Dynamic Server, DB2, and WebSphere Portal with Lotus Web Content Management Standard Edition are available today through a "pay as you go" model for both development and production instances. In addition to those products, IBM is also announcing the availability of IBM Mashup Center and Lotus Forms Turbo for development and test use in Amazon EC2, and intends to add WebSphere Application Server and WebSphere eXtreme Scale to these offerings.
For more about IBM's leadership in Cloud Computing, see the IBM [Press Release].
Continuing this week's theme on Cloud Computing, Dynamic Infrastructure and Data Center Networking, IBM unveiled details of an advanced computing system that will be able to compete with humans on Jeopardy!, America’s favorite quiz television show. Additionally, officials from Jeopardy! announced plans to produce a human vs. machine competition on the renowned show.
For nearly two years, IBM scientists have been working on a highly advanced Question Answering (QA) system, codenamed "Watson" after IBM's first president, [Thomas J. Watson]. The scientists believe that the computing system will be able to understand complex questions and answer with enough precision and speed to compete on Jeopardy! Produced by Sony Pictures Television, the trivia questions on Jeopardy! cover a broad range of topics, such as history, literature, politics, film, pop culture, and science. It poses a grand challenge for a computing system due to the variety of subject matter, the speed at which contestants must provide accurate responses, and because the clues given to contestants involve analyzing subtle meaning, irony, riddles, and other complexities at which humans excel and computers traditionally do not. Watson will incorporate massively parallel analytical capabilities and, just like human competitors, Watson will not be connected to the Internet or have any other outside assistance.
If this all sounds familiar, you might remember some of the events that have led up to this:
In 1984, the movie ["The Terminator"] introduced the concept of [Skynet], a fictional computer system developed by the military that becomes self-aware from its advanced artificial intelligence.
In 1997, an IBM computer called Deep Blue defeated World Chess Champion [Garry Kasparov] in a famous battle of human versus machine. To compete at chess, IBM built an extremely fast computer that could calculate 200 million chess moves per second based on a fixed problem. IBM’s Watson system, on the other hand, is seeking to solve an open-ended problem that requires an entirely new approach – mainly through dynamic, intelligent software – to even come close to competing with the human mind. Despite their massive computational capabilities, today’s computers cannot consistently analyze and comprehend sentences, much less understand cryptic clues and find answers in the same way the human brain can.
In 2005, Ray Kurzweil wrote [The Singularity is Near] referring to the wonders that artificial intelligence will bring to humanity.
The research underlying Watson is expected to elevate computer intelligence and human-to-computer communication to unprecedented levels. IBM intends to apply the unique technological capabilities being developed for Watson to help clients across a wide variety of industries answer business questions quickly and accurately.
This week's theme is alternative sourcing through Cloud Computing.
I thought I would start off the week by interviewing an owner at a Small or Medium-sized Business [SMB] that recently adopted this approach.
Meet Fred, one of the new co-owners of my singles activities club, Tucson Fun and Adventures, known affectionately as [TFA]. TFA recently adopted a new "Software-as-a-Service" [SaaS] for the company's Web site.
While the experience is still fresh in his mind, I thought this would be a good opportunity to illustrate some of the concepts of alternative sourcing through Cloud Computing by using a local example.
Give me some background on the company. How long has it been around? How many employees?
TFA has been in business since 1997, and has six employees, including an office manager, event planners and event coordinators.
How critical is "Web presence" to the business?
It's very important in several ways. First, the TFA staff plans 25 events per month, and our hundreds of members register for these events mostly through the Web site. Second, we have it connected to our bank accounts, so that it can process credit cards to collect the funds for renewals and event registrations. Third, it serves as a way to communicate upcoming events to our members, especially trips, so they can save the date on their own calendars. And fourth, the Web site serves as a "landing page" for all of our radio spots, newspaper ads, and other marketing efforts.
TFA had a Web site before, and now you have helped launch this new Web site. What motivated this change?
Our members were complaining about our 1999-era Web site. The pages were written in HTML, ASP (Active Server Pages) and SQL (Structured Query Language) connected to a Microsoft SQL Server 2005 database. It was mostly text-based, with the only animation being text scrolling horizontally across the screen. The Web hosting provider offered reliable access, but was located in New York state on East Coast time. If a member signed up for an event after 9pm or 10pm here in Tucson, it was marked as the next date, which could change the price of the event, or indicate the deadline was missed. If there were any changes to the pages or logic needed, or new columns required in the database, it got expensive. The TFA employees don't know how to program in ASP or SQL, so we had to hire outside professionals each time.
Does this new Software-as-a-Service (SaaS) Web site address these problems you were trying to solve?
Yes. The new Web site is hosted by [Memberize], which provides a hosted membership management application. The TFA staff can now manage its membership, plan events, and communicate them with graphics, videos, and links to maps. They don't need to know ASP or SQL programming, because a built-in [WYSIWYG] editor is simple enough for anyone with standard word-processing skills. The database allowed the option to add customized fields for each member in our club.
Was it difficult to switch over?
Not at all. Memberize gave us a 60-day free trial, and we needed all that time to transfer over our membership records, customize the style of the overall template for all pages, and then copy over the content from our old Web site. We had to transfer our e-commerce service, and contact GoDaddy to transfer the domain. The employee training required was fairly minimal. Cost-wise, it was only a few hundred dollars for a one-time setup fee, and then we pay a monthly fee, based on a tiered pricing structure tied to the count of our active members.
How has the reaction been from your membership?
I've gotten a lot of positive feedback. The learning curve was minimal. Our members found the new Web site intuitive and interactive. For example, the calendar of events can be shown in a single month-at-a-glance format, with green dots showing the events you are signed up for.
And from your perspective, Fred, is the new Web site easy to administer?
Yes, I can now easily generate standard reports, and create my own ad-hoc reports as needed. This wasn't possible with the old system unless I hired an ASP programmer.
Hopefully, this provides some insight on how even the smallest SMB enterprises can adopt a Dynamic Infrastructure through alternative sourcing. Cloud Computing takes many forms, including Software-as-a-Service managed offerings.
People are confused over various orders of magnitude. News of the economic meltdown often blurs the distinction between millions (10^6), billions (10^9), and trillions (10^12). To show how different these three numbers are, consider the following:
A million seconds ago - you might have received your last paycheck (12 days)
A billion seconds ago - you were born or just hired on your current job (31 years)
A trillion seconds ago - cavemen were walking around in Asia (31,000 years)
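The three conversions above are easy to sanity-check. Here is a quick back-of-the-envelope sketch in Python (a year is taken as 365.25 days; the figures are rounded, just like the examples above):

```python
# Rough conversions of a million, billion, and trillion seconds
# into days and years, to show how far apart the three really are.
SECONDS_PER_DAY = 60 * 60 * 24            # 86,400 seconds
SECONDS_PER_YEAR = SECONDS_PER_DAY * 365.25

for label, seconds in [("million", 10**6), ("billion", 10**9), ("trillion", 10**12)]:
    days = seconds / SECONDS_PER_DAY
    years = seconds / SECONDS_PER_YEAR
    print(f"A {label} seconds is about {days:,.0f} days, or {years:,.1f} years")
```

A million seconds works out to about 12 days, a billion to about 31.7 years, and a trillion to about 31,700 years, matching the paycheck, birthday, and caveman examples.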
That these numbers confuse the average person is no surprise, but that they confuse marketing people in the storage industry is even more hilarious. I am often correcting people who misunderstand MB (million bytes), GB (billion bytes) and TB (trillion bytes) of information. Take this graph as an example from a recent presentation.
At first, it looks reasonable: back in 2004, black-and-white 2D X-Ray images were only 1MB in size when digitized, but by 2010 there will be fancy 4D images that take 1TB, which the chart labeled a 1000x increase. What? When I pointed out this discrepancy, the person who put the chart together didn't know what to fix. Were 4D images only 1GB in size, or was it really a 1,000,000x increase?
If a 2D image was 1000 by 1000 pixels, with each pixel a byte of information, then a 3D image might either be 1000 by 1000 by 1000 [voxels], or 1000 by 1000 at 1000 frames per second (fps). The first is 3D volumetric space; the latter is called 2D+time in the medical field (the rest of us just say "video"). 4D images are 3D+time, volumetric scans over time, so conceivably these could be quite large in size.
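Working from the 1000-by-1000 example above, a short Python sketch makes the MB/GB/TB jump concrete (assuming one byte per pixel or voxel, and decimal units, i.e. 1 MB = 10^6 bytes, as storage marketers use them):

```python
# Byte counts for the hypothetical 2D, 3D, and 4D medical images,
# assuming 1000 samples per dimension and one byte per sample.
def image_bytes(*dims):
    total = 1
    for d in dims:
        total *= d
    return total

two_d   = image_bytes(1000, 1000)              # 10^6  bytes = 1 MB
three_d = image_bytes(1000, 1000, 1000)        # 10^9  bytes = 1 GB
four_d  = image_bytes(1000, 1000, 1000, 1000)  # 10^12 bytes = 1 TB

print(four_d // two_d)  # the 2D-to-4D growth factor: 1,000,000x
```

So going from a 1MB 2D image to a 1TB 4D scan is a 1,000,000x increase, not 1000x; the chart's "1000x" would only hold if the 4D images were 1GB.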
The key point is that advances in medical equipment result in capturing more data, which can help provide better healthcare. This would be the place I normally plug an IBM product, like the Grid Medical Archive Solution [GMAS], a blended disk and tape storage solution designed specifically for this purpose.
So, as government agencies look to spend billions of dollars to provide millions of people with proper healthcare, choosing to spend some of this money on a smarter infrastructure can result in creating thousands of jobs and save everyone a lot of money, but more importantly, save lives.
This short 2-minute [video] argues the case for Smarter Healthcare.
For more on this, check out Adam Christensen's blog post on [Smarter Planet], which points to a podcast by Dr. Russ Robertson, chairman of the Council on Medical Education at Northwestern University's Feinberg School of Medicine, and Dan Pelino, general manager of IBM's Healthcare and Life Sciences Industry.